Tech News

CraftAssist: An open-source framework to enable interactive bots in Minecraft by Facebook researchers

Vincy Davis
19 Jul 2019
5 min read
Two days ago, researchers from Facebook AI Research published a paper titled “CraftAssist: A Framework for Dialogue-enabled Interactive Agents”. The authors are Facebook AI research engineers Jonathan Gray and Kavya Srinet, Facebook AI research scientists C. Lawrence Zitnick and Arthur Szlam, and Yacine Jernite, Haonan Yu, Zhuoyuan Chen, Demi Guo and Siddharth Goyal. The paper describes the implementation of an assistant bot called CraftAssist, which appears and interacts like another player in the open sandbox game Minecraft. The framework enables players to interact with the bot via in-game chat through various implemented tools and platforms, and to record these interactions. The main aim of the bot is to be a useful and entertaining assistant for all the tasks listed and evaluated by the human players.

Image Source: CraftAssist paper

To motivate the wider AI research community to use the CraftAssist platform in their own experiments, Facebook researchers have open-sourced the framework, the baseline assistant, the data and the models. The released data includes the functions which were used to build the 2,586 houses in Minecraft, the labeling data of the walls, roofs, etc. of the houses, human rephrasings of fixed commands, and the conversion of natural language commands to bot-interpretable logical forms. The technology that allows the recording of human and bot interactions on a Minecraft server has also been released, so that researchers can independently collect data.

Why is the Minecraft protocol used?

Minecraft is a popular multiplayer volumetric pixel (voxel) 3D game based on building and crafting, which allows multiplayer servers and players to collaborate and build, survive or compete with each other. It operates through a client and server architecture. The CraftAssist bot acts as a client and communicates with the Minecraft server using the Minecraft network protocol. The protocol allows the bot to connect to any Minecraft server without the need to install server-side mods, so the bot can easily join a multiplayer server along with human players or other bots. It also lets the bot join an alternative server which implements the server-side component of the Minecraft network protocol. The CraftAssist bot uses the third-party open-source Cuberite server, a fast and extensible game server for Minecraft.

Read More: Introducing Minecraft Earth, Minecraft’s AR-based game for Android and iOS users

How does CraftAssist function?

The block diagram below demonstrates how the bot interacts with incoming in-game chats and reaches the desired target.

Image Source: CraftAssist paper

First, the incoming text is transformed into a logical form called the action dictionary. The action dictionary is then translated by a dialogue object which interacts with the memory module of the bot. This produces an action or a chat response to the user. The bot’s memory uses a relational database which is structured to recognize the relations between stored items of information. The major advantage of this type of memory is that the semantic parser’s output can easily be converted into fully specified tasks.
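To make the pipeline concrete, here is a sketch of what a parsed command and its dispatch might look like. The key names are assumptions based on the paper’s description of the parser’s output, not taken from the released code, and the dispatch logic is a toy stand-in for the Dialogue Object:

```rust
// Illustrative shape of an action dictionary produced by the
// Text-to-Action-Dictionary (TTAD) model for the chat "build a house".
// Key names are assumptions, not copied from the released code.
use serde_json::json;

fn main() {
    let action_dict = json!({
        "dialogue_type": "HUMAN_GIVE_COMMAND",
        "action": {
            "action_type": "BUILD",
            "schematic": { "has_name": "house" }
        }
    });

    // Toy dispatch, mirroring how the Dialogue Object routes a parsed
    // command: a recognized action becomes a Task pushed onto the stack.
    match action_dict["action"]["action_type"].as_str() {
        Some("BUILD") => println!("push Build task onto the Task stack"),
        Some("MOVE") => println!("push Move task onto the Task stack"),
        _ => println!("reply in chat: command not understood"),
    }
}
```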
The bot responds to higher-level actions, called Tasks. A Task is an interruptible process that follows a clear objective through step-by-step actions. It can adjust to long pauses between steps and can also push other Tasks onto a stack, the way functions can call other functions in a standard programming language. Move, Build and Destroy are a few of the many basic Tasks assigned to the bot.

The Dialogue Manager checks for illegal or profane words, then queries the semantic parser. The semantic parser takes the chat as input and produces an action dictionary. The action dictionary indicates that the text is a command given by a human and then specifies the high-level action to be performed by the bot. Once a task such as ‘Move’ is created and pushed onto the Task stack, it is responsible for comparing the bot’s current location to the target location. The bot then undertakes a sequence of low-level step movements to reach the target.

The core of the bot’s understanding of natural language depends on a neural semantic parser called the Text-to-Action-Dictionary (TTAD) model. This model receives the incoming command/chat and classifies it into an action dictionary, which is interpreted by the Dialogue Object.

The CraftAssist framework thus enables bots in Minecraft to interact and play with players by understanding human interactions, using the implemented tools. The researchers hope that, with the CraftAssist dataset now open-sourced, more developers will be empowered to contribute to the framework by assisting or training the bots, which might eventually lead to bots learning from human dialogue interactions.

Developers have found the CraftAssist framework interesting.

https://twitter.com/zehavoc/status/1151944917859688448

A user on Hacker News comments, “Wow, this is some amazing stuff! Congratulations!”

Check out the paper CraftAssist: A Framework for Dialogue-enabled Interactive Agents for more details.

Epic Games grants Blender $1.2 million in cash to improve the quality of their software development projects
What to expect in Unreal Engine 4.23?
A study confirms that pre-bunk game reduces susceptibility to disinformation and increases resistance to fake news

Introducing Abscissa, a security-oriented Rust application framework by iqlusion

Bhagyashree R
19 Jul 2019
2 min read
Earlier this month, iqlusion, an infrastructure provider for next-generation cryptocurrency technologies, announced the release of Abscissa 0.1, a security-oriented microframework for building Rust applications. Yesterday, the team announced the release of Abscissa 0.2. Tony Arcieri, the co-founder of iqlusion, wrote in a blog post, “After releasing v0.1, we’ve spent the past few weeks further polishing it up in tandem with this blog post, and just released a follow-up v0.2.”

After developing many Rust applications, ranging from CLIs to network services, and maintaining a lot of the same copy/paste boilerplate, iqlusion decided to create the Abscissa framework. It aims to maximize functionality while minimizing the number of dependencies.

What features does Abscissa come with?

Command-line option parsing
Abscissa comes with a simple declarative option parser based on the gumdrop crate (a minimal sketch appears at the end of this article). The option parser includes several improvements that provide better UX and tighter integration with the other parts of the framework, for example, overriding configuration settings using command-line options.

Component architecture
Abscissa uses a component architecture for extensibility. The implementation is minimalist, yet it still offers features like calculating dependency ordering and providing hooks into the application lifecycle.

Configuration
Abscissa allows simple parsing of Tom’s Obvious, Minimal Language (TOML) configurations into serde-parsed configuration types that can be dynamically updated at runtime.

Error handling
Abscissa has a generic ‘Error’ type based on the ‘failure’ crate and a unified error-handling subsystem.

Logging
It uses the ‘log’ crate to provide application-level logging.

Secrets management
The optional ‘secrets’ module contains a ‘Secret’ type that derives serde’s Deserialize and can be used to represent secret values parsed from configuration files or elsewhere.

Terminal interactions
Abscissa supports colored terminal output and provides easy-to-use macros for Cargo-like status messages.

Read the official announcement for more details on Abscissa. You can also check out its GitHub repository.

Introducing Ballista, a distributed compute platform based on Kubernetes and Rust
Fastly CTO Tyler McMullen on Lucet and the future of WebAssembly and Rust [Interview]
Rust 1.36.0 releases with a stabilized ‘Future’ trait, NLL for Rust 2015, and more
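As promised above, here is a minimal sketch of declarative option parsing with the gumdrop crate that Abscissa builds on; the struct and its options are illustrative, not Abscissa's own API:

```rust
// Minimal declarative option parsing with gumdrop. The struct and its
// options are illustrative examples, not part of Abscissa itself.
use gumdrop::Options;

#[derive(Debug, Options)]
struct CliOptions {
    // gumdrop recognizes a bool field named `help` as the help flag.
    #[options(help = "print this help message")]
    help: bool,

    // Option<String> becomes an option taking a value: --config PATH
    #[options(help = "path to the TOML configuration file")]
    config: Option<String>,

    // bool fields become simple flags: --verbose
    #[options(help = "enable verbose output")]
    verbose: bool,
}

fn main() {
    // Reads std::env::args(), printing usage and exiting on error
    // or when --help is passed.
    let opts = CliOptions::parse_args_default_or_exit();
    println!("{:?}", opts);
}
```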

Google plans to remove XSS Auditor used for detecting XSS vulnerabilities from its Chrome web browser

Amrata Joshi
19 Jul 2019
3 min read
As per a recent report by Naked Security, Google is planning to remove XSS Auditor, its built-in function for detecting cross-site scripting (XSS) vulnerabilities, from the Chrome web browser.

In an XSS attack, an attacker injects their own code into a legitimate website. Attackers either add the malicious code to a legitimate URL (reflected XSS) or post content to a site that stores and displays what they’ve posted (persistent XSS). When a victim's browser renders the injected code, it executes there, which can result in stolen cookies or a malware infection. (A deliberately vulnerable toy example of the reflected pattern appears at the end of this article.)

XSS Auditor uses a blocklist to identify suspicious characters or HTML tags in request parameters and matches them against the page content to spot attackers injecting code into a page. Some developers take issue with it because, in their view, it doesn’t catch all XSS vulnerabilities in a site. XSS Auditor also fails to spot XSS payloads, known as bypasses, that are crafted to evade it and are common online. The Auditor has been widely criticized as well because attackers can abuse it to disable legitimate code on websites, and patching XSS Auditor bypasses has caused issues in Chrome itself.

Google’s engineers had adapted XSS Auditor to filter out troublesome XSS code instead of blocking access, but that apparently wasn’t enough, so they finally decided to remove it.

Last year, while discussing the plan to remove XSS Auditor, Google senior security engineer Eduardo Vela Nava said, “We haven’t found any evidence the XSSAuditor stops any XSS, and instead we have been experiencing difficulty explaining to developers at scale, why they should fix the bugs even when the browser says the attack was stopped. In the past 3 months we surveyed all internal XSS bugs that triggered the XSSAuditor and were able to find bypasses to all of them.”

In a Google Groups discussion, Google security engineer Thomas Sepez said, “Bypasses abound. It prevents some legit sites from working. Once detected, there’s nothing good to do. It introduces cross-site info leaks. Fixing all the info leaks has proven difficult.”

The question that arises is how web developers will check whether their sites are vulnerable without XSS Auditor. A feature that could act as a replacement for XSS Auditor is in development: an application programming interface (API) known as Trusted Types. It treats user input as untrustworthy by default and forces developers to take steps to sanitise it before it can be included in a web page.

A user commented on Hacker News, “I'm working on the Trusted Types project in Google. To clarify, Trusted Types are not a replacement for XSS auditor. They are both related to XSS, but are fundamentally different and even target different flavors of XSS.”

According to a few users, the XSS Auditor was not that useful. Another comment reads, “Whilst the XSS auditor was able to protect against quite a wide range of payloads for reflected vulns, I think it caused more harm than good.”

Google Cloud and Nvidia Tesla set new AI training records with MLPerf benchmark results
Google’s language experts are listening to some recordings from its AI assistant
Google Project Zero reveals an iMessage bug that bricks iPhone causing repetitive crash and respawn operations
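As promised above, here is a deliberately vulnerable sketch of the reflected XSS pattern the Auditor targeted. The page-rendering function is a made-up stand-in, not code from any real site:

```rust
// Deliberately vulnerable sketch of reflected XSS: a request parameter
// is echoed into HTML with no escaping. A URL like
// /search?q=<script>alert(1)</script> would make the injected script
// run in the victim's browser. Illustrative only.
fn render_results_page(query_param: &str) -> String {
    // The bug: user-controlled input is interpolated into markup verbatim.
    format!("<h1>Results for {}</h1>", query_param)
}

fn main() {
    let attacker_controlled = "<script>alert(1)</script>";
    // XSS Auditor compared request parameters like this one against the
    // response body; a match like the output below triggered a block.
    println!("{}", render_results_page(attacker_controlled));
}
```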

Fedora announces the first preview release of Fedora CoreOS as an automatically updating Linux OS for containerized workloads

Vincy Davis
19 Jul 2019
3 min read
Three days ago, Fedora announced the first preview release of the open-source project Fedora CoreOS, a secure and reliable host for computer clusters. It is specifically designed for running containerized workloads, with automatic updates to the latest OS improvements, bug fixes, and security updates. It is secure, minimal, monolithic and optimized for working with Kubernetes.

The main goal of Fedora CoreOS is to be a reliable container host that runs containerized workloads securely and at scale. It combines Ignition from Container Linux with rpm-ostree and SELinux hardening from Project Atomic Host.

Fedora CoreOS is expected to eventually become the successor to Container Linux. The Container Linux project will continue to be supported throughout 2019, leaving users with ample time to migrate and provide feedback, and Fedora has assured Container Linux users that continued support will be provided without any disruption. Fedora CoreOS will also become the successor to Fedora Atomic Host; the current plan is for Fedora Atomic Host to have at least a version 29 and six months of lifecycle.

Fedora CoreOS will support the AWS, Azure, DigitalOcean, GCP, OpenStack, Packet, QEMU, VirtualBox, VMware, and bare-metal platforms. The initial release will run only on bare metal, Quick Emulator (QEMU), VMware, and AWS, and only on the 64-bit version of the x86 instruction set (x86_64). It supports provisioning via Ignition spec 3.0.0 and the Fedora CoreOS Config Transpiler, provides automatic updates with Zincati and rpm-ostree, and runs containers with Podman and Moby.

Benjamin Gilbert of Red Hat, the primary sponsor of Fedora CoreOS, announced the preview in a post to the Fedora mailing list. Per Gilbert, in the coming months more platforms will be added to Fedora CoreOS and new functionality will be explored. He has also notified users that the Fedora CoreOS preview should not be used for production workloads, as it may change before the stable release.

Since Fedora CoreOS is freely available, it will embrace a variety of containerized use cases, while Red Hat CoreOS will continue to provide a focused immutable host for OpenShift, released and life-cycled in tandem with that platform.

Users are happy with the first preview of Fedora CoreOS.

https://twitter.com/datamattsson/status/1151963024175050758

A user on Reddit comments, “Wow looks awesome”.

For details on how to create Ignition configs, head over to the Fedora Project docs.

Fedora Workstation 31 to come with Wayland support, improved core features of PipeWire, and more
Fedora 30 releases with GCC 9.0, GNOME 3.32, performance improvements, and much more!
Fedora 30 Beta released with desktop environment options, GNOME 3.32, and much more

A universal bypass tricks Cylance AI antivirus into accepting all top 10 Malware revealing a new attack surface for machine learning based security

Sugandha Lahoti
19 Jul 2019
4 min read
Researchers from Skylight Cyber, an Australian cybersecurity enterprise, have tricked BlackBerry Cylance’s AI-based antivirus product. They identified a peculiar bias of the product towards a specific game engine and exploited it to trick the product into accepting malicious files. This discovery means companies working in the field of artificial-intelligence-driven cybersecurity need to rethink their approach to creating new products.

The bypass is not limited to Cylance; the researchers chose it because Cylance is a leading vendor in the field and its product is publicly available. The researchers, Adi Ashkenazy and Shahar Zini from Skylight Cyber, say they can reverse the model of any AI-based EPP (Endpoint Protection Platform) product and find a bias enabling a universal bypass. Essentially, if you can truly understand how a certain model works and the type of features it uses to reach a decision, you have the potential to fool it consistently.

How did the researchers trick Cylance into thinking bad is good?

Cylance’s machine-learning algorithm has been trained to favor a benign file, causing it to ignore malicious code if it sees strings from the benign file attached to a malicious file. The researchers took advantage of this and appended strings from a non-malicious file to a malicious one, tricking the system into thinking the malicious file is safe and avoiding detection. The trick works even if the Cylance engine previously concluded the same file was malicious before the benign strings were appended to it.

The Cylance engine keeps a scoring mechanism ranging from -1000 for the most malicious files to +1000 for the most benign. It also whitelists certain families of executable files to avoid triggering false positives on legitimate software. The researchers suspected that the machine learning would be biased toward code in those whitelisted files, so they extracted strings from an online gaming program that Cylance had whitelisted and appended them to malicious files. The Cylance engine tagged the files as benign, shifting scores from high negative numbers to high positive ones.

https://youtu.be/NE4kgGjhf1Y

The researchers tested against the WannaCry ransomware, SamSam ransomware, the popular Mimikatz hacking tool, and hundreds of other known malicious files. The method proved successful for 100% of the top 10 malware for May 2019, and close to 90% for a larger sample of 384 malware files.
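Mechanically, the reported bypass amounts to simple concatenation. A toy sketch of the idea follows; the file names are hypothetical and the researchers' actual tooling was not released:

```rust
// Toy illustration of the reported bypass: append strings taken from a
// whitelisted benign program to the end of a malicious file. The
// appended bytes sit outside the executable's declared sections, so the
// file still runs, but a string-feature model now sees mostly "benign"
// features. File names are hypothetical.
use std::fs;

fn append_benign_strings(malware: &str, benign: &str, out: &str) -> std::io::Result<()> {
    let mut bytes = fs::read(malware)?;
    bytes.extend(fs::read(benign)?);
    fs::write(out, bytes)
}

fn main() -> std::io::Result<()> {
    append_benign_strings("sample.exe", "game_strings.bin", "sample_padded.exe")
}
```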
“As far as I know, this is a world-first, proven global attack on the ML [machine learning] mechanism of a security company,” said Adi Ashkenazy, CEO of Skylight Cyber, to Motherboard, which first reported the news. “After around four years of super hype [about AI], I think this is a humbling example of how the approach provides a new attack surface that was not possible with legacy [antivirus software].”

Gregory Webb, chief executive officer of malware protection firm Bromium Inc., told SiliconAngle that the news raises doubts about the concept of categorizing code as “good” or “bad”. “This exposes the limitations of leaving machines to make decisions on what can and cannot be trusted,” Webb said. “Ultimately, AI is not a silver bullet.”

Martijn Grooten, a security researcher, also added his views to the Cylance bypass story. He states, “This is why we have good reasons to be concerned about the use of AI/ML in anything involving humans because it can easily reinforce and amplify existing biases.”

The Cylance team has now confirmed the global bypass issue and will release a hotfix in the next few days. “We are aware that a bypass has been publicly disclosed by security researchers. We have verified there is an issue which can be leveraged to bypass the anti-malware component of the product. Our research and development teams have identified a solution and will release a hotfix automatically to all customers running current versions in the next few days,” the team wrote in a blog post.

You can go through the blog post by the Skylight Cyber researchers for additional information.

Microsoft releases security updates: a “wormable” threat similar to WannaCry ransomware discovered
25 million Android devices infected with ‘Agent Smith’, a new mobile malware
FireEye reports infrastructure-crippling Triton malware linked to Russian government tech institute

NativeScript 6.0 releases with NativeScript AppSync, TabView, Dark theme and much more!

Amrata Joshi
19 Jul 2019
2 min read
Yesterday, the team behind NativeScript announced the release of NativeScript 6.0. This release features faster delivery of patches with the help of NativeScript AppSync, and it ships the NativeScript Core Theme, which works for all NativeScript components. It also comes with an improved TabView that enables common scenarios without custom development, as well as support for AndroidX and Angular 8.

https://twitter.com/ufsa/status/1151755519062958081

Introducing NativeScript AppSync

Yesterday, the team also introduced NativeScript AppSync, a beta service that enables users to deliver a new version of their application instantly. Users can have a look at the demo here:

https://youtu.be/XG-ucFqjG6c

Core Theme v2 and Dark Theme

The NativeScript Core Theme provides common UI infrastructure for building consistent and good-looking user interfaces. The team is also introducing a Dark Theme that comes with dark skins matching the Light Theme.

Kendo Themes

Users who use Kendo components in their web applications can now reuse their Kendo theme in NativeScript. They can also use the Kendo Theme Builder to build a new theme for their NativeScript application.

Plug and play

With this release, the NativeScript Core Theme is completely plug and play: users no longer need to manually set classes on their components and can simply install the theme.

TabView

All the components of the TabView are now styleable, and font icons are now supported. Users can now nest multiple TabView components, similar to having tabs and bottom navigation on the same page. These new capabilities are still in beta.

Bundle Workflow

With NativeScript 6.0, the NativeScript CLI supports the Bundle Workflow, a single unified way of building applications. Hot Module Replacement (HMR) is also enabled by default; users can disable it by passing the `--no-hmr` flag to the executed command.

To know more about this news, check out the official blog post.

NativeScript 5.0 released with code sharing, hot module replacement, and more!
JavaScript mobile frameworks comparison: React Native vs Ionic vs NativeScript
Nativescript 4.1 has been released

Microsoft Azure VP demonstrates Holoportation, a reconstructed transmittable 3D technology

Vincy Davis
18 Jul 2019
3 min read
One of the major highlights at the ongoing Microsoft Inspire 2019 in Las Vegas was the demonstration of Holoportation by Azure Corporate Vice President Julia White. Holoportation is a type of 3D capture technology that allows high-quality 3D models of people to be reconstructed, compressed and transmitted anywhere in the world in real time. Microsoft researchers have been working on this technology for several years, utilizing Mixed Reality (MR) devices such as HoloLens, a pair of mixed reality smart glasses. Individuals owning these devices can see each other virtually, giving the impression that they are in the same room at the same time.

Yesterday, on day 3 of the conference, White demonstrated this technology using Mixed Reality and Azure AI. White wore a HoloLens 2 headset, which generated a ‘mini-me’ version of herself that she could almost hold in her hand. After a little sparkle of green special effects, the miniature version was transformed into a full-size hologram of White. The hologram spoke in Japanese, even though the real White doesn’t speak the language. The hologram’s voice was an exact replica of the real White’s “unique voice signature”.

https://www.youtube.com/watch?time_continue=169&v=auJJrHgG9Mc

White said this “mind blowing” technology was made possible by using Mixed Reality technology to create her hologram and render it live. Next, Azure speech-to-text transcribed her English speech, and Azure translation was used to translate it into Japanese. Finally, neural text-to-speech technology was applied to make the result sound exactly like White, just speaking in Japanese.

This is not the first time that Microsoft has demonstrated its holographic technology. Last year, during the Microsoft Inspire 2018 event, the Microsoft team remotely collaborated in real time with a 3D hologram. The demo participants used advanced hand gestures and voice commands to collectively assess and dissect a 3D hologram of the Rigado Cascade IoT gateway.

Azure text-to-speech allows users to convert a custom voice into natural, human-like synthesized speech. This technology thus makes it possible to converse with anybody, anywhere in the world, in real time, without any language barrier and in one's own voice texture.

The audience present expressed their amazement during the demo. The seamless technology has also impressed many Twitterati.

https://twitter.com/tendaidongo/status/1151567203428384773
https://twitter.com/KamaraSwaby/status/1151528144198705158
https://twitter.com/bobbyschang/status/1151526620362002432
https://twitter.com/_dimpy_/status/1151526775404429312

With Microsoft showcasing its prowess in the field of virtual and augmented reality, it can be expected that devices like 3D cameras and HoloLens headsets might become the new norm in smartphones, video games, and many other applications.

Microsoft adds Telemetry files in a “security-only update” without prior notice to users
Microsoft introduces passwordless feature in its Windows 10 devices, replaces it with Windows Hello face authentication, fingerprints, or a PIN
Microsoft will not support Windows registry backup by default, to reduce disk footprint size from Windows 10 onwards
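The translation demo described above chains three services. A schematic of that pipeline is sketched below; every function is a hypothetical placeholder standing in for a cloud call, not the Azure SDK:

```rust
// Schematic of the demo pipeline as described on stage. Every function
// below is a hypothetical placeholder, not the Azure SDK.
fn speech_to_text(_audio: &[u8]) -> String {          // Azure speech-to-text
    String::from("welcome to Inspire")                // dummy transcription
}
fn translate(text: &str, target: &str) -> String {    // Azure translation
    format!("[{}] {}", target, text)                  // dummy translation
}
fn neural_tts(text: &str, voice: &str) -> Vec<u8> {   // neural text-to-speech
    format!("{} speaking: {}", voice, text).into_bytes() // dummy audio
}

fn holoportation_speech(audio: &[u8], voice_signature: &str) -> Vec<u8> {
    let english = speech_to_text(audio);       // 1. transcribe English speech
    let japanese = translate(&english, "ja");  // 2. translate into Japanese
    neural_tts(&japanese, voice_signature)     // 3. render in the speaker's own voice
}

fn main() {
    let out = holoportation_speech(&[0u8; 16], "speaker-voice-signature");
    println!("{} bytes of synthesized audio", out.len());
}
```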

Firefox 70 will bring new security measures to ensure user safety from cyberattacks

Savia Lobo
18 Jul 2019
4 min read
A few days ago, Mozilla announced that starting with Firefox 70, planned for release in October this year, the browser will make two new changes favoring users and keeping them secure. First, it will notify users if their saved logins were part of any data breach. Second, it will warn users if the web page they have landed on is not secure.

Notifying users of saved logins that were part of a data breach

Firefox has partnered with the popular data breach site Have I Been Pwned to notify users if their saved logins were found in data breaches. To start with, Firefox will scan the saved login credentials to see if they were exposed in a data breach listed on Have I Been Pwned. If one is found, the user will be alerted and prompted to change their password.

To support this, Mozilla will be integrating its independent Firefox Monitor service and the new Firefox Lockwise password manager directly into the Firefox browser. Mozilla will add an alert icon next to any account profile in Firefox Lockwise that is detected as being part of a breach. Clicking on the saved login will open a subpanel displaying an alert that the passwords “were leaked or stolen” as part of a data breach.

Compromised Password Notification in Firefox Lockwise

Users will also be provided a “protection report” highlighting the data breach instances their logins were involved in. The current Firefox 69 Nightly builds include a mockup of the ‘Protection Report’, which will list the type and amount of tracking and unwanted scripts that were blocked over the past 7 days. The report is a mockup and does not show actual data from your browser.

Mozilla to set up “not secure” indicators for all HTTP web pages

Mozilla also announced that it will show a “not secure” indicator for all HTTP websites in Firefox, starting with Firefox 70. Google already has this feature activated in its browser, starting with Chrome 68, which was released last year. Prior to this announcement, Mozilla used to indicate “not secure” only on HTTP pages that contained forms or login fields. “Mozilla argued that since more than 80% of all internet pages are now served via HTTPS, users don't need a positive indicator for HTTPS anymore, but a negative one for HTTP connections”, according to ZDNet.

Firefox developer Johann Hofmann said, "In desktop Firefox 70, we intend to show an icon in the 'identity block' (the left hand side of the URL bar which is used to display security / privacy information) that marks all sites served over HTTP (as well as FTP and certificate errors) as insecure".

Mozilla started working on these developments back in December 2017, when it added flags to the Firefox about:config section. These flags “are still present in the current stable version of Firefox, and users can enable them right now and preview how these indicators will look starting this fall,” according to ZDNet.

Sean Wright, an infosec researcher, told Forbes, “This is an excellent move by Mozilla and a step in the direction to have a secure by default web”. He also added that many do not realize the potential implications of using sites over HTTP. “Even publicly accessible sites, even as simple as a blog, could potentially allow attackers to inject their malicious payloads into the site served to the client. HTTPS can go a long way to prevent this, so any move to try to enforce it is a step in the right direction,” he further added.
Wright also warned users that browsing via an HTTPS site does not mean the site is fully authentic. Such sites may themselves be phishing sites, as hackers can purchase the certificates that mark a website as “secure”. Hence, users have to be cautious while sharing their credentials online. He warns: “You should still pay close attention to links in emails.”

A second zero-day found in Firefox was used to attack Coinbase employees; fix released in Firefox 67.0.4 and Firefox ESR 60.7.2
Mozilla is funding a project for bringing Julia to Firefox and the general browser environment
Mozilla launches Firefox Preview, an early version of a GeckoView-based Firefox for Android

Linux Mint 19.2 beta releases with Update Manager, improved menu and much more!

Amrata Joshi
18 Jul 2019
3 min read
This week the team behind Linux Mint announced the release of Linux Mint 19.2 beta, a desktop Linux distribution aimed at producing a modern operating system. The release is codenamed Tina and comes with updated software, refinements and new features to make the desktop more comfortable to use.

What’s new in Linux Mint 19.2 beta?

Update Manager

The Update Manager now shows how long kernels are supported, and users no longer need to install or remove kernels one by one. Users can queue installations and removals, and install and remove multiple kernels in one go. A new "Remove Kernels" button has been added to make removing obsolete kernels easier. There is also support for kernel flavors now: the Update Manager shows a combobox for users to switch between flavors.

Improved menu

mintMenu, the main application menu, has received many bug fixes and performance improvements. Even the search bar position and the tooltips are now configurable. In this release, the applet icon supports both icon files and themed icons.

Software Manager

A loading screen now shows up when the cache is being refreshed in the Software Manager. The cache used by the Software Manager has been moved to mint-common and turned into a Python module that can recognize manually installed software, so other tools can share the same cache. The Software Manager can now also list applications that were installed via other means (other than the Software Manager).

New buttons added in the Maintenance section

In this release, two new buttons are available in the "Maintenance" section of the "Software Sources" configuration tool:

Add Missing Keys: With the help of this button, users can scan their repositories and PPAs and download any key that might be missing.

Remove duplicate sources: With the help of this button, users can find and fix duplicated definitions in their sources configuration.

Read Also: Ubuntu free Linux Mint Project, LMDE 3 ‘Cindy’ Cinnamon, released

Announcing MATE 1.22

The Mint team also announced that Linux Mint 19.2 will ship with MATE 1.22, which comes with improved stability and bug fixes. MATE is the Linux desktop that started as a fork of GNOME 2 in 2011, following the poor reception of GNOME 3.

What’s new in MATE 1.22?

It comes with support for metacity-3 themes. This release features better-looking window and desktop switchers. MATE 1.22 adds systemd support in the session manager. It supports new compression formats and can easily pause/resume compression/decompression.

It seems users are happy with this news. A user commented on the official post, “Hi Mint Team. Great job so far. Looks very smooth – even for a beta. Menu is crazy fast!!!”

A few others complained about graphical glitches they faced. Another user commented, “Hi team and thanks for your latest offering, there is a LOT to like about this and I will provide as much useful feedback as I can, I have had an issue with graphical glitches from Linux Mint 19x Cinnamon.”

To know more about this news, check out the official blog post.

Ubuntu free Linux Mint Project, LMDE 3 ‘Cindy’ Cinnamon, released
Is Linux hard to learn?
Ubuntu 19.04 Disco Dingo Beta releases with support for Linux 5.0 and GNOME 3.32

Introducing Ballista, a distributed compute platform based on Kubernetes and Rust

Amrata Joshi
18 Jul 2019
3 min read
Andy Grove, a software engineer, has introduced Ballista, a distributed compute platform, and in a recent blog post he explained his journey on this project.

Roughly eighteen months ago, he started the DataFusion project, an in-memory query engine that uses Apache Arrow as its memory model. The aim was to build a distributed compute platform in Rust that could compete with Apache Spark, but this turned out to be difficult for him. Grove writes in the blog post, “Unsurprisingly, this turned out to be an overly ambitious goal at the time and I fell short of achieving that. However, some very good things came out of this effort. We now have a Rust implementation of Apache Arrow with a growing community of committers, and DataFusion was donated to the Apache Arrow project as an in-memory query execution engine and is now starting to see some early adoption.”

He then took a break from working on Arrow and DataFusion for a couple of months and focused on some deliverables at work. Afterwards, he started a new PoC (proof of concept) project, his second attempt at building a distributed platform with Rust, but this time with the advantage of already having Arrow and DataFusion at hand. The new project is called Ballista, a distributed compute platform based on Kubernetes and the Rust implementation of Apache Arrow.

A Ballista cluster currently comprises a number of individual pods within a Kubernetes cluster and can be created and destroyed via the Ballista CLI. Ballista applications can also be deployed to Kubernetes with the Ballista CLI, and they use Kubernetes service discovery to connect to the cluster. Since there is no distributed query planner yet, Ballista applications must manually build the query plans to be executed on the cluster (a toy illustration of such a plan appears at the end of this article).

To make the project practically useful and push it beyond a mere PoC, Grove listed some of the items on the roadmap for v1.0.0:

Implement a distributed query planner.
Support all DataFusion logical plans and expressions.
Support user code as part of distributed query execution.
Support interactive SQL queries against a cluster with gRPC.
Support the Arrow Flight protocol and Java bindings.

This PoC project will help drive the requirements for DataFusion, and it has already led to three DataFusion PRs being merged into the Apache Arrow codebase.

The initiative has received mixed reviews. A user commented on Hacker News, “Hang in there mate :) I really don't think you deserve a lot of the crap you've been given in this thread. Someone has to try something new.” Another user commented, “The fact people opposed to your idea/work means it is valuable enough for people to say something against and not ignore it.”

To know more about this news, check out the official announcement.

Fastly CTO Tyler McMullen on Lucet and the future of WebAssembly and Rust [Interview]
Rust 1.36.0 releases with a stabilized ‘Future’ trait, NLL for Rust 2015, and more
Introducing Vector, a high-performance data router, written in Rust
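As promised above, here is a toy logical query plan sketched as a Rust enum. It is a generic illustration of what "manually building a query plan" means, not Ballista's or DataFusion's actual plan type:

```rust
// A toy logical query plan as a tree of operators. Illustrative only;
// this is not Ballista's or DataFusion's real LogicalPlan type.
#[derive(Debug)]
enum LogicalPlan {
    Scan { path: String, columns: Vec<String> },
    Filter { predicate: String, input: Box<LogicalPlan> },
    Aggregate { group_by: Vec<String>, aggr_expr: String, input: Box<LogicalPlan> },
}

fn main() {
    // Roughly: SELECT city, MAX(temp) FROM 'weather.csv'
    //          WHERE temp > 20 GROUP BY city
    let plan = LogicalPlan::Aggregate {
        group_by: vec!["city".to_string()],
        aggr_expr: "MAX(temp)".to_string(),
        input: Box::new(LogicalPlan::Filter {
            predicate: "temp > 20".to_string(),
            input: Box::new(LogicalPlan::Scan {
                path: "weather.csv".to_string(),
                columns: vec!["city".to_string(), "temp".to_string()],
            }),
        }),
    };
    // An application would hand a plan like this to the cluster for
    // execution; here we just print the operator tree.
    println!("{:#?}", plan);
}
```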

Wasmer introduces WebAssembly Interfaces for validating the imports and exports of a Wasm module

Bhagyashree R
18 Jul 2019
2 min read
Yesterday, Syrus Akbary, the founder and CEO of Wasmer, introduced WebAssembly Interfaces. They provide a convenient s-expression (symbolic expression) text format that can be used to validate the imports and exports of a Wasm module.

Why are WebAssembly Interfaces needed?

The Wasmer runtime initially supported only running Emscripten-generated modules, and later added support for other ABIs, including WASI and Wascap. WebAssembly runtimes like Wasmer have to do a lot of checks before starting an instance. They do this to ensure a WebAssembly module is compliant with a certain Application Binary Interface (Emscripten or WASI): the runtime checks whether the module's imports and exports are what it expects, namely that the function signatures and global types match.

These checks are important for:

Making sure a module is going to work with a certain runtime.
Assuring a module is compatible with a certain ABI.
Creating a plugin ecosystem for any program that uses WebAssembly as part of its plugin system.

The team behind Wasmer introduced WebAssembly Interfaces to ease this process by providing a way to validate that imports and exports are as expected. The announcement shows an example of a WebAssembly Interface for WASI.

Image Source: Wasmer

WebAssembly Interfaces allow you to run various programs with each ABI, such as Nginx (Emscripten) and Cowsay (WASI). When used together with WAPM (the WebAssembly Package Manager), you will also be able to make use of the entire WAPM ecosystem to create, verify, and distribute plugins. The team has also proposed WebAssembly Interfaces as a standard for defining a specific set of imports and exports that a module must have, in a way that is statically analyzable.

Read the official announcement by Wasmer.

Fastly CTO Tyler McMullen on Lucet and the future of WebAssembly and Rust [Interview]
LLVM WebAssembly backend will soon become Emscripten’s default backend, V8 announces
Qt 5.13 releases with a fully-supported WebAssembly module, Chromium 73 support, and more
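The kind of check an interface makes statically analyzable can be sketched in a few lines. The following is a conceptual illustration only, not Wasmer's implementation, and the required import and export names are an assumed subset of what WASI expects:

```rust
// Conceptual sketch of an interface check (not Wasmer's code): a module
// satisfies an interface when it declares every import and export the
// ABI requires.
use std::collections::HashSet;

fn satisfies_interface(
    module_imports: &HashSet<(&str, &str)>, // (namespace, function name)
    module_exports: &HashSet<&str>,
    required_imports: &HashSet<(&str, &str)>,
    required_exports: &HashSet<&str>,
) -> bool {
    required_imports.is_subset(module_imports) && required_exports.is_subset(module_exports)
}

fn main() {
    // Assumed subset of what a WASI interface might require.
    let required_imports: HashSet<_> = [("wasi_unstable", "fd_write")].iter().cloned().collect();
    let required_exports: HashSet<_> = ["memory", "_start"].iter().cloned().collect();

    // Imports/exports that would be read out of a compiled .wasm module.
    let module_imports: HashSet<_> = [("wasi_unstable", "fd_write")].iter().cloned().collect();
    let module_exports: HashSet<_> = ["memory", "_start", "main"].iter().cloned().collect();

    println!(
        "module satisfies interface: {}",
        satisfies_interface(&module_imports, &module_exports, &required_imports, &required_exports)
    );
}
```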

Intel’s new brain inspired neuromorphic AI chip contains 8 million neurons, processes data 1K times faster

Fatema Patrawala
18 Jul 2019
5 min read
On Monday, Intel announced Pohoiki Beach, a neuromorphic system comprising 8 million neurons, multiple Nahuku boards and 64 Loihi research chips. The Intel team unveiled the new system at the DARPA Electronics Resurgence Initiative Summit held in Detroit.

Intel introduced Loihi, its first brain-inspired neuromorphic research chip, in 2017. Loihi applies the principles found in biological brains to computer architectures. It enables users to process information up to 1,000 times faster and 10,000 times more efficiently than CPUs for specialized applications like sparse coding, graph search and constraint-satisfaction problems. Pohoiki Beach is now available to the broader research community, which can experiment with Loihi.

“We are impressed with the early results demonstrated as we scale Loihi to create more powerful neuromorphic systems. Pohoiki Beach will now be available to more than 60 ecosystem partners, who will use this specialized system to solve complex, compute-intensive problems,” says Rich Uhlig, managing director of Intel Labs.

According to Intel, Pohoiki Beach will enable researchers to efficiently scale novel neural-inspired algorithms such as sparse coding, simultaneous localization and mapping (SLAM) and path planning. The Pohoiki Beach system is different because it demonstrates the benefits of a specialized architecture for emerging applications, including some of the computational problems hardest for the internet of things (IoT) and autonomous devices to support. By using this type of specialized system, as opposed to general-purpose computing technologies, Intel expects to realize orders-of-magnitude gains in speed and efficiency for a range of real-world applications, from autonomous vehicles to smart homes to cybersecurity.

Pohoiki Beach marks a major milestone in Intel’s neuromorphic research, as it lays the foundation for Intel Labs to scale the architecture to 100 million neurons later this year. Uhlig “predicts the company will produce a system capable of simulating 100 million neurons by the end of 2019. Researchers will then be able to apply it to a whole new set of applications, such as better control of robot arms.”

Ars Technica writes that Loihi, the underlying chip in Pohoiki Beach, consists of 130,000 neuron analogs—hardware-wise, this is roughly equivalent to half of the neural capacity of a fruit fly. Pohoiki Beach scales that up to 8 million neurons—about the neural capacity of a zebrafish. But what is perhaps more interesting than the raw computational power of the new neural network is how well it scales.

“With the Loihi chip we’ve been able to demonstrate 109 times lower power consumption running a real-time deep learning benchmark compared to a GPU, and 5 times lower power consumption compared to specialized IoT inference hardware. Even better, as we scale the network up by 50 times, Loihi maintains real-time performance results and uses only 30 percent more power, whereas the IoT hardware uses 500 percent more power and is no longer real-time,” says Chris Eliasmith, co-CEO of Applied Brain Research and professor at the University of Waterloo.

As per IEEE Spectrum, Intel and its research partners are just beginning to test what massive neural systems like Pohoiki Beach can do, but so far the evidence points to even greater performance and efficiency, says Mike Davies, director of neuromorphic research at Intel.
“We’re quickly accumulating results and data that there are definite benefits… mostly in the domain of efficiency. Virtually every one that we benchmark…we find significant gains in this architecture,” he says.

Going from a single Loihi to 64 of them is more of a software issue than a hardware one. “We designed scalability into the Loihi chip from the beginning,” says Davies. “The chip has a hierarchical routing interface…which allows us to scale to up to 16,000 chips. So 64 is just the next step.”

According to Davies, Loihi can run networks that are immune to catastrophic forgetting and can learn more like humans. He pointed to research by Thomas Cleland’s group at Cornell University as evidence that Loihi can achieve one-shot learning, that is, learning a new feature after being exposed to it only once.

Loihi can also run feature-extraction algorithms that are immune to the kinds of adversarial attacks that can confuse image recognition systems. Traditional neural networks don’t really understand the features they’re extracting from an image in the way our brains do. “They can be fooled with simplistic attacks like changing individual pixels or adding a screen of noise that wouldn’t fool a human in any way,” Davies explains. But the sparse-coding algorithms Loihi can run work more like the human visual system and so wouldn’t fall for such shenanigans.

The news has generated a lot of excitement in the community, which is now waiting to see a system containing 100 million neurons by the end of this year.

https://twitter.com/javiermendonca/status/1151131213576359937
https://twitter.com/DSakya/status/1150988779143880704

Intel discloses four new vulnerabilities labeled MDS attacks affecting Intel chips
Intel plans to exit from the 5G smartphone modem business, following the Apple Qualcomm dispute
Google researchers present Zanzibar, a global authorization system, it scales trillions of access control lists and millions of authorization requests per second

Mozilla’s MDN Web Docs gets new React-powered frontend, which is now in Beta

Bhagyashree R
17 Jul 2019
3 min read
On Monday, Kadir Topal, a Senior Product Manager at Mozilla, announced that the new React frontend of MDN Web Docs is now in beta. MDN Web Docs, formerly known as the Mozilla Developer Network, is the one-stop destination for web developer documentation. Mozilla has been working on replacing the jQuery library with React for months now to provide developers a customized MDN experience while still ensuring fast page loading times.

MDN has two modes: editing and viewing. While viewing is used by most developers visiting the site, only a small fraction of them use the editing mode. This is why the team split these two use cases across different domains. You can access the editing mode on wiki.developer.mozilla.org and the viewing mode on beta.developer.mozilla.org. The team plans to decommission beta.developer.mozilla.org after the testing phase is complete; the editing mode will continue to be served by the old frontend at wiki.developer.mozilla.org.

The discussion on this decision started earlier this year. While many praised the move, others felt that, as a promoter of web standards, Mozilla shouldn’t overlook web components in favor of a custom framework. A developer commented on MDN’s GitHub repository, “As a user, I would like to see Mozilla that uses web standards to promote web standards. As a developer, I would like to see Mozilla and their developers using web standards and promote them. I don't need to see the nth React project.”

Another developer commented, “The message that the No. 1 resource for Web development is ditching the same Web technologies it advocates, would be as disastrous as that, implicitly claiming a defeat for the Web, hence seppuku in the long term for the platform nobody would care much anymore.”

In its support, a developer remarked, “At the end of the day, none of us should care what MDN uses - we should care that the devs who have put so much effort into building a resource that has massively contributed to our own education and will continue to do so on a daily basis are productive and happy.”

David Flanagan, one of the developers behind the project, affirmed that the decision was purely pragmatic. Flanagan shared that the MDN team is very small and has only had occasional help from volunteer contributions; choosing React for MDN’s frontend may bring more contributors, he believed. He said, “Fundamentally, I'm asking you all to trust us. We're hoping to do great things this year with MDN, and I think that the vast majority of our users will love what we do. Thank you for reading this far, and thank you for your passion about web standards and MDN.”

The team is now seeking developers’ feedback on this release. In case of any issue, you can file a bug, reply on Discourse, or contact Topal on Twitter.

Mozilla announces a subscription-based service for providing ad-free content to users
Mozilla launches Firefox Preview, an early version of a GeckoView-based Firefox for Android
Mozilla makes Firefox 67 “faster than ever” by deprioritizing least commonly used features

LXD 3.15 releases with a switch to dqlite 1.0 branch, new hardware VLAN and MAC filtering on SR-IOV and more!

Vincy Davis
17 Jul 2019
5 min read
A few days ago, the LXD team announced the release of LXD 3.15. The major highlight of the release is the transition of LXD to the dqlite 1.0 branch, which will yield better performance and reliability for cluster users and standalone installations alike.

LXD is a next-generation system container manager built on Linux containers. It is free software, written in Go and developed under the Apache 2 license. LXD 3.15 brings new features including hardware VLAN and MAC filtering on SR-IOV, a new storage-size option for lxd-p2c, and a Ceph FS storage backend for custom volumes. It also includes major improvements to DHCP lease handling and cluster heartbeat handling, along with bug fixes.

What’s new in LXD 3.15?

Hardware VLAN and MAC filtering on SR-IOV

The security.mac_filtering and vlan properties are now available on SR-IOV devices. This prevents MAC spoofing from the container, as LXD directly controls the matching SR-IOV options on the virtual function; in the case of VLANs, the filtering is performed in hardware at the VF level.

New storage-size option for lxd-p2c

A new --storage-size option has been added in LXD 3.15. When used along with --storage, it allows specifying the desired volume size to use for the container.

Ceph FS storage backend for custom volumes

Ceph FS can now be used as a storage driver for LXD, with support limited to custom storage volumes. The support includes size restrictions and native snapshots when the server, server configuration, and client kernel support those features. Ceph FS also allows attaching the same custom volume to multiple containers at the same time, even if they’re located on different hosts.

IPv4 and IPv6 filtering

IPv4 and IPv6 filtering (spoof protection) enables multiple containers to share the same underlying bridge without the risk of a container spoofing another container's address, hijacking traffic or causing connectivity issues.

Read Also: Internet governance project (IGP) survey on IPV6 adoption, initial reports

Major improvements in LXD 3.15

Switch to dqlite 1.0

After a year of running all LXD servers on the original implementation of its distributed sqlite database, LXD 3.15 has switched to the dqlite 1.0 branch. The transition reduces the number of external dependencies, as well as CPU and memory usage for the database. It also makes it easier to debug issues and to integrate better with more complex database operations when running clusters.

Reworked DHCP lease handling

In previous versions, LXD’s handling of DHCP was pretty limited. With LXD 3.15, LXD can itself issue DHCP requests to the dnsmasq server based on what’s currently in the DHCP lease table. This allows a lease to be manually released when a container’s configuration is altered or a container is deleted, all without ever needing to restart dnsmasq.

Reworked cluster heartbeat handling

With LXD 3.15, the internal heartbeat (the list of database nodes) extends to include the most recent version information from the cluster as well as the status of all cluster members. This means that only the cluster leader has to retrieve the data, while the remaining members get a consistent view of everything within 10 seconds.

Some of the bug fixes in LXD 3.15

Linker flags have been updated.
The path to the host’s communication socket has been fixed: doc/devlxd
Basic install instructions have been added: doc/README
Translations from Weblate have been updated: i18n
An unused arg from setNetworkRoutes has been removed: lxd/containers
Unit tests have been updated: lxd/db

Developers are happy with the new features and improvements in LXD 3.15. A user on Reddit says, “The IPv4 and IPv6 spoof protection filters is going to make a few people very happy. As well as ceph FS support as RBD doesn't like sharing volumes with multiple host.”

Some users compared LXD with Docker, with most preferring the former. A Redditor gave a detailed comparison of the two platforms. The comment reads, “The high-level difference is that Docker is for "application containers" and LXD is for "system containers". For Docker that means things like, say, your application process being PID 1, and generally being forced to do things the "Docker way".

“LXD, on the other hand, provides flexibility to use containers the way you want to. This means containers end up being closer to your development environment, e.g. by using systemd if you want it; they can be ephemeral like Docker, but only if you want to”, the user further added.

“So, LXD provides containers that are closer in feel to a regular installation or VM, but with the performance benefit of containers. You can even use LXD containers as Docker hosts, which is what I often do.”

For the complete list of updates, head over to the LXD 3.15 release notes.

LXD 3.11 releases with configurable snapshot expiry, progress reporting, and more
LXD 3.8 released with automated container snapshots, ZFS compression support and more!
Debian 10 codenamed ‘buster’ released, along with Debian GNU/Hurd 2019 as a port

EU Commission opens an antitrust case against Amazon on grounds of violating EU competition rules

Fatema Patrawala
17 Jul 2019
3 min read
Today the European Commission opened a formal antitrust investigation to assess whether Amazon’s use of sensitive data from independent retailers who sell on its marketplace is in breach of EU competition rules.

https://twitter.com/EU_Competition/status/1151428097847287808

Commissioner Margrethe Vestager, in charge of competition policy, said: "European consumers are increasingly shopping online. E-commerce has boosted retail competition and brought more choice and better prices. We need to ensure that large online platforms don't eliminate these benefits through anti-competitive behaviour. I have therefore decided to take a very close look at Amazon's business practices and its dual role as marketplace and retailer, to assess its compliance with EU competition rules.”

The Commission has noted that Amazon, while providing a marketplace for competing sellers, collects data about the activity on its platform. Based on the preliminary fact-finding, Amazon appears to use competitively sensitive information about marketplace sellers, their products and transactions on the marketplace.

As part of its in-depth investigation, the Commission will look into:

the standard agreements between Amazon and marketplace sellers, which allow Amazon's retail business to analyse and use third-party seller data. In particular, the Commission will focus on whether and how the use of accumulated marketplace seller data by Amazon as a retailer affects competition.

the role of data in the selection of the winners of the “Buy Box” and the impact of Amazon's potential use of competitively sensitive marketplace seller information on that selection. The “Buy Box” is displayed prominently on Amazon and allows customers to add items from a specific retailer directly into their shopping carts. Winning the “Buy Box” seems key for marketplace sellers, as a vast majority of transactions are done through it.

If proven, the practices under investigation may breach the EU competition rules on anticompetitive agreements between companies under Article 101 of the Treaty on the Functioning of the European Union (TFEU).

Source: EU Commission

Commissioner Margrethe Vestager had hinted for months that she wanted to escalate a preliminary inquiry into how Amazon may be unfairly using sales data to undercut smaller shops on its Marketplace platform. By ramping up the probe, officials can start to build a case that could ultimately lead to fines or an order to change the way the Seattle-based company operates in the EU.

“If powerful platforms are found to use data they amass to get an edge over their competitors, both consumers and the market bear the cost,” said Johannes Kleis of BEUC, the European consumer organization in Brussels.

The Commission has already informed Amazon that it has opened the case proceedings. It will conduct the investigation as a matter of priority; there is no legal deadline for bringing the case to an end.

The current Chief Economist at the EU Commission approached Senator Elizabeth Warren, who wants to break up big tech, to umpire and build a team to lead this case.

https://twitter.com/TomValletti/status/1151430006209482752

To know more about this news, you can check out the official EU Commission page.

Elizabeth Warren wants to break up tech giants like Amazon, Google, Facebook, and Apple and build strong antitrust laws
Amazon is the next target on EU’s antitrust hitlist
Amazon workers protest on its Prime day, demand a safe work environment and fair wage