
Tech News - Data

Uber’s new family of AI algorithms sets records on Pitfall and solves the entire game of Montezuma’s Revenge

Natasha Mathur
28 Nov 2018
6 min read
Earlier this week, Uber’s AI research team introduced Go-Explore, a new family of algorithms capable of scoring over 2,000,000 on the Atari game Montezuma’s Revenge and averaging over 21,000 on the Atari 2600 game Pitfall. This is the first time any learning algorithm has scored above 0 on Pitfall. Go-Explore outperforms the previous state-of-the-art algorithms by two orders of magnitude on Montezuma’s Revenge and by over 21,000 points on Pitfall. Go-Explore can use human domain knowledge but isn’t dependent on it, and it scores well even with very little prior knowledge: with zero domain knowledge, it still managed over 35,000 points on Montezuma’s Revenge. As per the Uber team, “Go-Explore differs radically from other deep RL algorithms and could enable rapid progress in a variety of important, challenging problems, especially robotics. We, therefore, expect it to help teams at Uber and elsewhere increasingly harness the benefits of artificial intelligence.”

A common challenge

One of the most common and challenging problems in both Montezuma’s Revenge and Pitfall is the “sparse reward” problem faced during exploration: the games provide few reliable reward signals to guide a player toward completing a stage and advancing. To make things worse, the rewards that are offered are often “deceptive”, meaning they mislead AI agents into maximizing reward in the short term instead of working toward finishing the level (e.g. hitting an enemy nonstop instead of heading for the exit). The usual remedy is to give agents “intrinsic motivation” (IM), rewarding them for reaching new states within the game.
Adding IM has helped researchers tackle sparse-reward problems in many games, but not, so far, in Montezuma’s Revenge and Pitfall.

Uber’s solution: exploration and robustification

According to the Uber team, “a major weakness of current IM algorithms is detachment, wherein the algorithms forget about promising areas they have visited, meaning they do not return to them to see if they lead to new states. This problem would be remedied if the agent returned to previously discovered promising areas for exploration.” Uber’s researchers have therefore devised a method that splits learning into two phases: exploration and robustification.

Exploration

Go-Explore builds an archive of distinct game states, called “cells”, along with the trajectories that lead to them. It repeatedly selects a cell from the archive, returns to that cell, and then explores from it. For every cell visited (including new cells), if the new trajectory is better (e.g. a higher score), it replaces the stored trajectory for reaching that cell. This lets Go-Explore remember and return to promising areas (unlike intrinsic motivation alone), avoid over-exploring, and resist “deceptive” rewards, since it tries to cover all reachable states.

Results of the exploration phase

Montezuma’s Revenge: during the exploration phase, Go-Explore reaches an average of 37 rooms and solves level 1 (which comprises 24 rooms, not all of which need to be visited) 65 percent of the time. The previous state-of-the-art algorithms explored only 22 rooms on average.

Pitfall: Pitfall requires far more exploration and is much harder than Montezuma’s Revenge, since it offers only 32 positive rewards scattered over 255 rooms.
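The exploration loop described above (archive of cells, return, explore, keep the best trajectory per cell) can be sketched on a toy one-dimensional environment. Everything here is an illustrative assumption, not Uber's actual code: the environment, the cell function, and the hyperparameters are all made up for demonstration.

```python
import random

def go_explore(step, start_state, cell_of, score_of, iterations=2000, seed=0):
    """Minimal sketch of Go-Explore's exploration phase (simplified assumption:
    a deterministic environment, so returning to a cell is done by replaying
    the stored trajectory). archive maps cell -> (trajectory, best score)."""
    rng = random.Random(seed)
    archive = {cell_of(start_state): ([], score_of(start_state))}
    for _ in range(iterations):
        cell = rng.choice(sorted(archive))        # 1. select a cell from the archive
        trajectory, _ = archive[cell]
        state = start_state
        for action in trajectory:                 # 2. return to it by deterministic replay
            state = step(state, action)
        for _ in range(10):                       # 3. explore from that cell
            action = rng.choice([-1, 1])
            state = step(state, action)
            trajectory = trajectory + [action]
            c, s = cell_of(state), score_of(state)
            # 4. for every cell visited, remember the best trajectory reaching it
            if c not in archive or s > archive[c][1]:
                archive[c] = (trajectory, s)
    return archive
```

On a toy corridor where the state is a position in 0..100, the cell is `position // 5`, and the score is the position itself, the archive steadily accumulates cells ever further from the start, which is the "remember and return to promising areas" behavior the article describes.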
The game is so complex that no prior RL algorithm had collected even a single positive reward in it. During the exploration phase, Go-Explore visits all 255 rooms and collects over 60,000 points. With zero domain knowledge, Go-Explore still finds an impressive 22 rooms, but no reward.

https://www.youtube.com/watch?v=L_E3w_gHBOY&feature=youtu.be (Uber AI Labs)

Robustification

If the solutions found during exploration are not robust to noise, they can be robustified: domain knowledge is added using a deep neural network trained with an imitation learning algorithm, a class of algorithms that can learn a robust, model-free policy from demonstrations. The Uber researchers started from Salimans & Chen’s “backward” algorithm, although any imitation learning algorithm would do. “We found it somewhat unreliable in learning from a single demonstration. However, because Go-Explore can produce plenty of demonstrations, we modified the backward algorithm to simultaneously learn from multiple demonstrations,” writes the Uber team.

Results of robustification

Montezuma’s Revenge: by robustifying the trajectories discovered with the domain-knowledge version of Go-Explore, the agent solves the first three levels of Montezuma’s Revenge. Since all levels beyond level 3 are nearly identical, Go-Explore has effectively solved the entire game. “In fact, our agents generalize beyond their initial trajectories, on average solving 29 levels and achieving a score of 469,209! This shatters the state of the art on Montezuma’s Revenge both for traditional RL algorithms and imitation learning algorithms that were given the solution in the form of a human demonstration,” mentions the Uber team.

Pitfall: once trajectories had been collected in the exploration phase, the researchers were able to reliably robustify trajectories that collect more than 21,000 points.
As a result, Go-Explore outperforms both the state-of-the-art algorithms and average human performance, setting an AI record on Pitfall with a score of more than 21,000 points.

https://www.youtube.com/watch?v=mERr8xkPOAE&feature=youtu.be (Uber AI Labs)

“Some might object that, while the methods already work in the high-dimensional domain of Atari-from-pixels, it cannot scale to truly high-dimensional domains like simulations of the real world. We believe the methods could work there, but it will have to marry a more intelligent cell representation of interestingly different states (e.g. learned, compressed representations of the world) with intelligent (instead of random) exploration,” writes the Uber team. For more information, check out the official blog post.

Uber becomes a Gold member of the Linux Foundation

Uber announces the 2019 Uber AI Residency

Uber posted a billion dollar loss this quarter. Can Uber Eats revitalize the Uber growth story?

Ex-Facebook manager says Facebook has a “black people problem” and suggests ways to improve

Melisha Dsouza
28 Nov 2018
7 min read
On 8th November, Mark Luckie, a former strategic partner manager at Facebook, posted an internal memo to Facebook employees arguing that Facebook is “failing its black employees and its black users.” The memo was sent shortly before he left the company, and made public just days after the New York Times report that put Facebook’s leadership ethics under scrutiny.

Facebook and its ‘black people problem’

Luckie, whose job was to manage the company’s relationships with “influencers” focused on underrepresented voices, detailed a wide range of problems faced by the Black community at Facebook, both internally and externally. He pointed out that Black people are some of the most engaged and active members of Facebook’s 2.2-billion-member community: according to Facebook’s own research, 63 percent of African Americans use Facebook to communicate with their family and 60 percent use it to talk to their friends at least once a day, compared to 53 and 54 percent of the total U.S. population respectively. Yet many are unable to find a “safe space” for dialogue on the platform, have their accounts suspended indefinitely, and see their content removed without notice. Luckie’s memo states: “When determining where to allocate resources, ranking data such as followers, the greatest number of likes and shares, or yearly revenue are employed to scale features and products. The problem with this approach is Facebook teams are effectively giving more resources to the people who already have them. In doing so, Facebook is increasing the disparity of access between legacy individuals/brands and minority communities.”

“Facebook can’t engender the trust of its black users if it can’t maintain the trust of its black employees.”

In the memo, Luckie credited the tech giant for increasing the share of black employees from 2 percent to 4 percent in 2018.
That said, he went on to list the many issues employees face and criticized the firm’s human resources department for protecting managers instead of supporting employees when such incidents are reported. He said, “I’ve heard far too many stories from black employees of a colleague or manager calling them ‘hostile’ or ‘aggressive’ for simply sharing their thoughts in a manner not dissimilar from their non-black team members. A few black employees have reported being specifically dissuaded by their managers from becoming active in the [internal] Black@ group or doing ‘black stuff,’ even if it happens outside of work hours.” He pointed out the hypocrisy of a firm whose buildings are covered with ‘Black Lives Matter’ posters while it fails to hire more black employees, and where existing black employees are often hassled by security and viewed with suspicion by fellow employees. “To feel like an oddity at your own place of employment because of the color of your skin while passing posters reminding you to be your authentic self feels in itself inauthentic.” He claimed that black staffers at Facebook subdue their voices for fear of jeopardizing their professional relationships and career advancement.

After-effects of Mark’s memo

Mr Luckie’s comments created waves across social media. What followed was a pattern we are all familiar with: deny and deflect the blame. First came the public statement from Facebook spokesman Anthony Harrison: “Over the last few years, we’ve been working diligently to increase the range of perspectives among those who build our products and serve the people who use them throughout the world. The growth in the representation of people from more diverse groups, working in many different functions across the company, is a key driver of our ability to succeed. We want to fully support all employees when there are issues reported and when there may be micro-behaviors that add up.
We are going to keep doing all we can to be a truly inclusive company.” As reported by BBC News, the statement was followed by an internal leak: while Mr Luckie’s post was made public on Tuesday, it had been circulating at Facebook since 8th November. At that time, Ime Archibong, Facebook’s director of product partnerships, responded to the memo. On Tuesday, Mr Luckie posted his response on Twitter, suggesting that Facebook’s public tone did not necessarily match what was said to him internally.

https://twitter.com/marksluckie/status/1067494650259345408

Mr Luckie appeared to try to protect Mr Archibong’s identity, but missed redacting one ‘Ime’ in his tweet. Mr Archibong, who is also black, has confirmed he wrote the comments.

https://twitter.com/_ImeArchibong/status/1067520926114148352

Archibong was disappointed that the conversation was made public, and described Mr Luckie’s note as “pretty self-serving and disingenuous” while accusing him of having a “selfish agenda and not one that has the best intentions of the community and people you likely consider friends at heart”. The whole situation again suggests that Facebook is more concerned with not looking bad than with assessing whether it is doing bad, and what it can do to make its platform more approachable and safe for different communities.

Mark’s recommendations to “improve Facebook’s relationship with diverse communities”

Mark ends the memo with some recommendations for the company, including:

For any team that has one or more people dedicated specifically to diversity, require a strategic plan for how that work will be incorporated into the team’s larger goals. Create metrics for other team members to incorporate into their goals as well, so that representation is everyone’s responsibility.

Implement data-driven goals to ensure partnerships, product testing, and client support are reflective of the demographics of Facebook.
Level up cultural competency training for the Operations teams that review reported infractions on Facebook and Instagram. Whenever possible, avoid relying solely on algorithms or AI to triage these problems.

Create internal systems for employees to anonymously report microaggressions. This includes using coded language like “lowering the bar” or “hostile,” disproportionately giving lower performance review scores to women and people of color, or discouraging employees from engaging in cultural activities outside of their agreed-upon work schedule. If these reports surface a pattern, require the manager and/or team to attend sensitivity training to amend the behavior.

Support emerging talent and brands by creating a pipeline of communication and scaled support that allows them to further build with the platform.

Establish more regularly scheduled focus groups with underrepresented communities, particularly the Black and Latino users who are among the most engaged on Facebook and Instagram, and use these conversations to gain insight on how to grow the platform.

After Mark’s memo went viral, many black employees from big tech companies came forward with their own stories of harassment at the workplace, including former Twitter engineering manager Leslie Miley, who tweeted:

https://twitter.com/shaft/status/1067479669593726976

The memo’s publication came on the same day that a Facebook executive was grilled by parliamentary leaders from nine different countries at a special hearing on disinformation in the United Kingdom. You can head over to Facebook’s blog to read the memo in its entirety.

NYT Facebook exposé fallout: Board defends Zuckerberg and Sandberg; Media call and transparency report Highlights

Outage plagues Facebook, Instagram and Whatsapp ahead of Black Friday Sale, throwing users and businesses into panic

Facebook’s outgoing Head of communications and policy takes blame for hiring PR firm ‘Definers’ and reveals more

3 announcements about Amazon S3 from re:Invent 2018: Intelligent-Tiering, Object Lock, and Batch Operations

Bhagyashree R
28 Nov 2018
3 min read
On day 1 of re:Invent 2018, Amazon announced three additions to its Simple Storage Service (S3): Intelligent-Tiering, Object Lock, and Batch Operations. S3 Intelligent-Tiering automatically optimizes storage costs based on data access patterns. S3 Object Lock prevents the deletion or overwriting of an object for a specified amount of time. S3 Batch Operations makes managing billions of objects easier with a single API request.

Amazon S3, or Simple Storage Service, provides object storage through a web interface. It enables users to store and retrieve any amount of data safely, and its easy-to-use management features help you organize your data. The service is used in many scenarios, such as websites, mobile applications, backup and restore, archiving, enterprise applications, IoT devices, and big data analytics.

Amazon S3 Intelligent-Tiering for automatic cost optimization

Amazon S3 offers different storage classes designed for different use cases: Standard, Standard-IA, One Zone-IA, and Glacier. To these, Amazon has added the S3 Intelligent-Tiering storage class, which automatically optimizes storage costs when data access patterns change. It consists of two tiers: frequent access and infrequent access. It cuts costs by monitoring your access patterns and moving objects that have not been accessed for 30 consecutive days to the infrequent access tier; an object is moved back to the frequent access tier when it is accessed again. Read the official announcement for more details on Intelligent-Tiering.

Amazon S3 Object Lock to prevent object version deletion for a customer-defined retention period

Amazon S3 Object Lock is a new feature that lets customers store objects using a write-once-read-many (WORM) model. With this feature, you can prevent an object from being deleted or overwritten for a fixed amount of time, or indefinitely.
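The WORM rule behind Object Lock can be sketched as a simple eligibility check. The field names below (`legal_hold`, `retain_until`) are illustrative assumptions for the sketch, not the real S3 API; the actual service expresses these through the S3 object metadata and retention settings.

```python
from datetime import datetime, timezone

def can_delete(version, now):
    """Sketch of the WORM rule: an object version stays protected while a
    legal hold is active OR while its retention period has not yet expired."""
    if version.get("legal_hold", False):
        return False                  # legal hold: protected until explicitly removed
    retain_until = version.get("retain_until")
    if retain_until is not None and now < retain_until:
        return False                  # still inside the retention period
    return True
```

Note that the two mechanisms are independent: removing a legal hold does not shorten a retention period, and a retention period expiring does not lift a legal hold.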
The feature is available in all AWS Regions and AWS GovCloud (US) Regions. S3 Object Lock comes with two mechanisms to manage object retention: retention periods and legal holds. A retention period is a fixed period of time during which your object is WORM-protected and can’t be deleted or overwritten. A legal hold provides the same protection but has no expiration date. An object version can have both a retention period and a legal hold. Read the official announcement for more details on Object Lock.

Amazon S3 Batch Operations for object management

Amazon S3 Batch Operations makes managing billions of objects stored in Amazon S3 easier, with a single API request or a few clicks in the S3 Management Console. With this feature, you can copy objects between buckets, replace object tag sets, update access controls, restore objects from Amazon Glacier, and invoke AWS Lambda functions. The feature will be available in all AWS commercial and AWS GovCloud (US) Regions. Read the official announcement for more details on Batch Operations.

Amazon re:Invent 2018: AWS Snowball Edge comes with a GPU option and more computing power

AWS introduces ‘AWS DataSync’ for automated, simplified, and accelerated data transfer

Amazon rolls out AWS Amplify Console, a deployment and hosting service for mobile web apps, at re:Invent 2018

Facebook AI Research and NYU School of Medicine announce new open-source AI models and MRI dataset as part of their fastMRI project

Natasha Mathur
27 Nov 2018
3 min read
Facebook AI Research (FAIR) and the NYU School of Medicine announced yesterday that they are releasing new open-source AI research models and data as part of fastMRI, a collaborative research project between Facebook and the NYU School of Medicine that was announced back in August this year. fastMRI uses artificial intelligence (AI) to make MRI scans up to 10 times faster. By releasing these new AI models and MRI data, the fastMRI team aims to help improve diagnostic imaging technology, which in turn can increase patients’ access to powerful, life-saving technology. The latest release includes new AI models and the first large-scale MRI dataset for reconstructing MRI scans. Let’s have a look at the key releases.

First large-scale database for MRI scans

The fastMRI team has released baseline models for ML-based image reconstruction from k-space data subsampled at 4x and 8x scan accelerations. A common challenge for AI researchers working on MR reconstruction is consistency, as they train AI systems on a variety of datasets. The largest open-source MRI dataset to date helps tackle this problem by providing an industry-wide, benchmark-ready dataset. It comprises approximately 1.5 million MR images drawn from 10,000 scans, as well as raw measurement data from nearly 1,600 scans, and includes the k-space data collected during scanning. NYU fully anonymized the dataset, checking the metadata and image content manually. The NYU School of Medicine is offering researchers unprecedented access to data so they can easily train their models, validate their performance, and get a sense of how image reconstruction techniques would behave in real-world conditions. The k-space data in this dataset is derived from MR devices with multiple magnetic coils.
It also includes data simulating measurements from single-coil machines.

AI models, baselines, and results leaderboard

The fastMRI team focused on two tasks: single-coil reconstruction and multi-coil reconstruction. In both the single-coil and multi-coil deep learning baselines, the AI models are based on u-nets, a convolutional network architecture developed specifically for image segmentation in biomedical applications, with a proven track record in image-to-image prediction. A baseline for classical, non-AI reconstruction methods has also been developed, alongside a separate baseline of deep learning models. In addition, FAIR has created a leaderboard for consistently measuring MR reconstruction progress, already seeded with the baseline models. Researchers can add improved results as they begin generating and submitting results to conferences and journals using the fastMRI dataset; the leaderboard will also help them evaluate their results against consistent metrics and see how different approaches compare. “Our priority for the next phase of this collaboration is to use the experimental foundations we’ve established — the data and baselines — to further explore AI-based image reconstruction techniques. Additionally, any progress that we make at FAIR and NYU School of Medicine will be part of a larger collaboration that spans multiple research communities,” says the FAIR team. For more information, check out the official blog post.

Facebook AI researchers investigate how AI agents can develop their own conceptual shared language

Facebook plans to change its algorithm to demote “borderline content” that promotes misinformation and hate speech on the platform

Babysitters now must pass Predictim’s AI assessment to be “perfect” to get the job
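The 4x and 8x accelerations mentioned above come from keeping only a fraction of k-space columns. A common scheme for this kind of subsampling, and an assumption here rather than a statement about fastMRI's exact code, is to always keep a fully sampled band of low-frequency center columns and randomly sample the rest:

```python
import random

def subsample_mask(num_cols, acceleration=4, center_fraction=0.08, seed=0):
    """Sketch of a k-space column subsampling mask: keep a fully sampled
    low-frequency center, then sample remaining columns at random so that
    roughly 1/acceleration of all columns survive. Parameters are
    illustrative assumptions."""
    rng = random.Random(seed)
    num_center = max(1, round(num_cols * center_fraction))
    mask = [False] * num_cols
    start = (num_cols - num_center) // 2
    for i in range(start, start + num_center):
        mask[i] = True                         # always keep the low-frequency center
    # probability for the remaining columns to hit the overall sampling budget
    target = num_cols / acceleration
    prob = max(0.0, (target - num_center) / (num_cols - num_center))
    for i in range(num_cols):
        if not mask[i] and rng.random() < prob:
            mask[i] = True
    return mask
```

A reconstruction model is then trained to recover the full image from only the columns where the mask is True; higher accelerations simply keep fewer columns.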

Amazon re:Invent 2018: AWS Snowball Edge comes with a GPU option and more computing power

Bhagyashree R
27 Nov 2018
2 min read
Amazon re:Invent 2018 commenced yesterday in Las Vegas. The five-day event comprises sessions, chalk talks, and hackathons covering AWS core topics, and Amazon is using it to launch several new products and make some crucial announcements. Adding to this list, Amazon announced yesterday that AWS Snowball Edge now comes in two options: Snowball Edge Storage Optimized and Snowball Edge Compute Optimized. Snowball Edge Compute Optimized brings more computing power along with optional GPU support.

What is AWS Snowball Edge?

AWS Snowball Edge is a physical appliance used for data migration and edge computing. It supports specific Amazon EC2 instance types and AWS Lambda functions, so customers can develop and test in AWS and then deploy applications on remote devices to collect, pre-process, and return data. Common use cases include data migration, data transport, image collation, IoT sensor stream capture, and machine learning.

What is new in Snowball Edge?

Snowball Edge now comes in two options:

Snowball Edge Storage Optimized: provides 100 TB of capacity and 24 vCPUs, well suited to local storage and large-scale data transfer.

Snowball Edge Compute Optimized: available with or without a GPU. Both variants come with 42 TB of S3-compatible storage and 7.68 TB of NVMe SSD storage, and can run any combination of instances that consume up to 52 vCPUs and 208 GiB of memory. The main highlight is the optional GPU: with it, you can do things like real-time full-motion video analysis and processing, machine learning inference, and other highly parallel compute-intensive work. To gain access to the GPU, you need to launch an sbe-g instance.
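The "any combination of instances that consume up to 52 vCPUs and 208 GiB" envelope can be sketched as a simple capacity check. The instance sizes in the usage example are illustrative assumptions, not official sbe instance specs:

```python
def fits_compute_optimized(instances, vcpu_limit=52, mem_limit_gib=208):
    """Check whether a proposed mix of instances, given as (vcpus, mem_gib)
    pairs, fits within the Compute Optimized device's advertised envelope
    of 52 vCPUs and 208 GiB of memory."""
    total_vcpus = sum(vcpus for vcpus, _ in instances)
    total_mem = sum(mem for _, mem in instances)
    return total_vcpus <= vcpu_limit and total_mem <= mem_limit_gib
```

For example, three hypothetical 16-vCPU / 64-GiB instances fit (48 vCPUs, 192 GiB), while a fourth would exceed the vCPU limit.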
You can select the “with GPU” option in the console, and Amazon has published the full specifications of the instances (source: Amazon). You can read more about the re:Invent announcements regarding Snowball Edge on the AWS website.

AWS updates the face detection, analysis and recognition capabilities in Amazon Rekognition

AWS announces more flexibility in its Certification Exams, drops its exam prerequisites

Introducing Automatic Dashboards by Amazon CloudWatch for monitoring all AWS Resources

Babysitters now must pass Predictim’s AI assessment to be “perfect” to get the job

Natasha Mathur
26 Nov 2018
4 min read
AI is everywhere, and now it is helping parents decide whether a potential babysitter for their toddler is the right hire. Predictim is an online service that uses AI to analyze the risk level a babysitter poses. It produces an overall risk score, along with a detailed breakdown, by scanning the sitter’s social media profiles with language-processing algorithms. Predictim’s algorithms analyze “billions” of data points going back years in a person’s online profile, then deliver, within minutes, an evaluation of the babysitter’s predicted traits, behaviors, and areas of compatibility based on their digital history. It uses language processing and computer vision to assess babysitters’ Facebook, Twitter and Instagram posts for clues about their offline life. Predictim rates babysitters along four dimensions: bullying/harassment, bad attitude, explicit content, and drug abuse. This is what makes the service appealing to parents, since a standard background check cannot surface all these details about a potential babysitter. “The current background checks parents generally use don’t uncover everything that is available about a person. Interviews can’t give a complete picture. A seemingly competent and loving caregiver with a ‘clean’ background could still be abusive, aggressive, a bully, or worse. That’s where Predictim’s solution comes in,” said Sal Parsa, co-founder of Predictim.

Criticism towards Predictim

Although such services are radically transforming how companies approach hiring and reviewing workers, they also pose significant risks. As Washington Post reporter Drew Harwell writes, Predictim depends on black-box algorithms that are not only prone to biases about how an ideal babysitter should behave, look, or share online, but whose personality-scan results are also not always accurate.
The software can misread a person’s personality from their social media use. One example Harwell presents is a babysitter who was flagged for possible bullying behavior: the mother who had hired her could not figure out whether the software had based its analysis on an old movie quote or song lyric, or whether it had actually found instances of bullying language, and parents are given no phrases, links, or details to substantiate a sitter’s rating. Harwell also points out that hiring and recruiting algorithms have been “shown to hide the kinds of subtle biases that could derail a person’s career”; last month, for instance, Amazon scrapped its sexist AI recruiting algorithm, which unfairly penalized female candidates. Kate Crawford, co-founder of the AI Now Institute, tweeted out against Predictim, calling it a “bollocks AI system”:

https://twitter.com/katecrawford/status/1066450509782020098

https://twitter.com/katecrawford/status/1066359192301256706

But the Predictim team is set on expanding its capabilities. It is preparing for nationwide expansion, as Sittercity, a popular online babysitter marketplace, is planning a pilot program next year that adds Predictim’s automated ratings to the site’s sitter screenings and background checks. The team is also looking into deriving psychometric data from babysitters’ social media profiles to dig even deeper into their private lives. This has raised many privacy questions on the babysitters’ behalf: it could indirectly force a sitter to hand a parent personal details of her life that she might not otherwise be comfortable sharing. However, some people think differently and are more than comfortable asking babysitters for their personal data.
One example Harwell gives is a mother of two who believes that “babysitters should be willing to share their personal information to help with parents’ peace of mind. A background check is nice, but Predictim goes into depth, really dissecting a person — their social and mental status. 100 percent of the parents are going to want to use this. We all want the perfect babysitter.” Yet however much parents want the “perfect babysitter”, the truth is that Predictim’s AI algorithms are not “perfect”: they need to become far more reliable so that they do not project unfair biases onto the babysitters they rate. Predictim also needs to make sure its service caters not just to the parents but also takes the babysitters’ needs into consideration.

Google’s Pixel camera app introduces Night Sight to help click clear pictures with HDR+

Blackberry is acquiring AI & cybersecurity startup, Cylance, to expand its next-gen endpoint solutions like its autonomous cars’ software

Facebook AI researchers investigate how AI agents can develop their own conceptual shared language
SatPy 0.10.0, a Python library for manipulating meteorological remote sensing data, released

Amrata Joshi
26 Nov 2018
2 min read
SatPy is a python library for reading and manipulating meteorological remote sensing data and writing it to various image/data file formats. Last week, the team at Pytroll announced the release of SatPy 0.10.0. SatPy can build RGB composites directly from satellite instrument channel data or from higher-level processing output, and it makes data loading, manipulation, and analysis easy.

https://twitter.com/PyTrollOrg/status/1066865986953986050

Features of SatPy 0.10.0

This version comes with two luminance sharpening compositors, LuminanceSharpeningCompositor and SandwichCompositor. The LuminanceSharpeningCompositor replaces the luminance of an RGB composite, while the SandwichCompositor multiplies the RGB channels with the reflectance. SatPy 0.10.0 comes with a check_satpy function for finding missing dependencies. This version also allows writers to create output directories in case they don't exist. SatPy 0.10.0 improves the handling of dependency loading in case of multiple matches. This version also supports the new OLCI L2 datasets used by the olci l2 reader; OLCI is used for ocean and land processing. Since YAML is the new format for area definitions in SatPy 0.10.0, areas.def has been replaced with areas.yaml. In this version, file handlers use filenames as strings, and readers can also accept pathlib.Path instances as filenames. With this version, it is easier to configure in-line composites. A README document has been added to the setup.py description.

Resolved issues in SatPy 0.10.0

The issue with resampling a user-defined scene has been resolved. The native resampler now works with DataArrays. It is now possible to review subclasses of BaseFileHandler. Readthedocs builds are now working. A custom string formatter has been added in this version for lower/upper support. The inconsistent units of geostationary radiances have been resolved.

Major bug fixes

A discrete data type now gets preserved through resampling. Native resampling has been fixed. The slstr reader has been fixed for consistency. Masking in DayNightCompositor has been fixed. The problem with attributes not being preserved while adding overlays or decorations has been fixed.

To know more about this news, check out the official release notes.

Introducing ReX.js v1.0.0 a companion library for RegEx written in TypeScript
Spotify releases Chartify, a new data visualization library in python for easier chart creation
Google releases Magenta studio beta, an open source python machine learning library for music artists
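The idea behind the SandwichCompositor described above (multiplying each RGB channel by a reflectance band) can be sketched in plain Python. This is only an illustration of the technique, not SatPy's actual implementation:

```python
# Illustrative sketch of the "sandwich" idea: scale each RGB channel
# pixel-wise by a normalized reflectance band, so fine spatial detail
# from the high-resolution band sharpens the colour composite.
# NOT SatPy's implementation -- see satpy.composites for the real one.

def sandwich(rgb, reflectance):
    """rgb: three equal-length channel lists with values in [0, 1].
    reflectance: per-pixel reflectance values in [0, 1]."""
    return [[c * r for c, r in zip(channel, reflectance)]
            for channel in rgb]

rgb = [[0.2, 0.4], [0.6, 0.8], [1.0, 0.5]]  # two-pixel toy image
reflectance = [0.5, 1.0]
print(sandwich(rgb, reflectance))
```

Because the operation is a plain element-wise product, it preserves the hue of each pixel while modulating its brightness with the sharper band.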
LinkedIn used email addresses of 18M non-members to buy targeted ads on Facebook, reveals a report by DPC, Ireland

Bhagyashree R
26 Nov 2018
4 min read
A report published on Friday by Ireland's Data Protection Commissioner revealed that LinkedIn, aiming to get more people onto the platform, used the email addresses of almost 18 million non-members to buy targeted ads on Facebook. It has now stopped this practice as a result of the investigation and, as a remedy, has introduced a new setting that lets users control who can export their email addresses.

What was the DPC's investigation about?

The final report by Ms. Helen Dixon, the Data Protection Commissioner, presents the conclusions of the audit of LinkedIn's processing of personal data for the period 1 January to 24 May 2018. The audit was carried out after a non-LinkedIn user complained to the DPC that LinkedIn had obtained and used the complainant's email address for targeted advertising on the Facebook platform. The investigation revealed that LinkedIn had processed hashed email addresses of approximately 18 million non-LinkedIn members. LinkedIn implemented several actions to stop the processing of user data for the purposes that gave rise to this complaint. To make sure that LinkedIn was indeed taking the right measures to resolve the complaint, the DPC carried out the audit, which concluded:

"As a result of the findings of our audit, LinkedIn Corp was instructed by LinkedIn Ireland, as data controller of EU user data, to cease pre-compute processing and to delete all personal data associated with such processing prior to 25 May 2018."

One thing that the report does not reveal is the source of these emails. Other parts of the report cover cases such as the inquiry into Facebook's use of facial recognition, how WhatsApp and Facebook exchange user data, and the Yahoo security breach that affected 500 million users.

What was LinkedIn's response?
Denis Kelleher, Head of Privacy (EMEA) at LinkedIn, told TechCrunch that the company has taken appropriate action to address the issue:

"We appreciate the DPC's 2017 investigation of a complaint about an advertising campaign and fully cooperated. Unfortunately, the strong processes and procedures we have in place were not followed and for that we are sorry. We've taken appropriate action, and have improved the way we work to ensure that this will not happen again. During the audit, we also identified one further area where we could improve data privacy for non-members and we have voluntarily changed our practices as a result."

LinkedIn has also introduced a new privacy setting that by default blocks other users from exporting your email address. You can find this option under Settings & Privacy -> Privacy -> Who Can See My Email Address?

Source: TechCrunch

This step could prevent some spam and give users more control over whom they share their email address with. But, according to TechCrunch, the update could also upset some users: "But the launch of this new setting without warning or even a formal announcement could piss off users who'd invested tons of time into the professional networking site in hopes of contacting their connections outside of it."

LinkedIn confirmed to TechCrunch that this is a newly introduced setting to ensure better privacy for users: "This is a new setting that gives our members even more control of their email address on LinkedIn. If you take a look at the setting titled 'Who can download your email', you'll see we've added a more detailed setting that defaults to the strongest privacy option. Members can choose to change that setting based on their preference. This gives our members control over who can download their email address via a data export."

You can read the full report at TechCrunch's official website. Also, read the report published by the DPC for more details.
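The report notes that LinkedIn processed "hashed" email addresses: each raw address is replaced by a fixed-length digest before being used for ad matching. The sketch below illustrates the general idea; SHA-256 and the lowercase/trim normalization are assumptions for illustration, as the report does not disclose the actual scheme used:

```python
import hashlib

def hash_email(email):
    # Normalize so "Jane@Example.com" and " jane@example.com " produce
    # the same digest, then hash. SHA-256 and this normalization are
    # illustrative assumptions, not LinkedIn's disclosed scheme.
    normalized = email.strip().lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

print(hash_email("Jane.Doe@Example.com"))
```

Ad platforms typically match such digests against hashes of their own users' addresses, so the raw addresses themselves never need to change hands.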
Read Next Creator-Side Optimization: How LinkedIn’s new feed model helps small creators Email and names of Amazon customers exposed due to ‘technical error’; number of affected users unknown Facebook shares update on last week’s takedowns of accounts involved in “inauthentic behavior”
MLflow 0.8.0 released with improved UI experience and better support for deployment

Savia Lobo
22 Nov 2018
3 min read
Last week, the team at Databricks released MLflow 0.8.0. MLflow is an open source platform for managing the end-to-end machine learning lifecycle. It is used for tracking experiments and for managing and deploying models from a variety of ML libraries. It also packages ML code in a reusable and reproducible form so it can be shared with other data scientists.

MLflow 0.8.0 features

In MLflow 0.8.0, the SageMaker and pyfunc servers support the 'split' JSON format, which lets the client specify the order of columns. With MLflow 0.8.0, gunicorn options can now be passed to the server; because gunicorn can use threads instead of processes, this saves memory. This version also brings TensorFlow 1.12 support. With this version, the Keras module no longer needs to be loaded at predict time.

Major change

In MLflow 0.8.0, the [CLI] mlflow sklearn serve command has been removed in favor of mlflow pyfunc serve, which takes the same arguments but works against any pyfunc model.

Major improvements in MLflow 0.8.0

This version includes various new features, including an improved UI experience and support for deploying models directly to the Azure Machine Learning Service Workspace.

Improved MLflow UI experience

In this version, metrics and parameters are by default grouped into a single tabular column each, to avoid an explosion of columns. Users can customize their view by sorting the parameters and metrics, and can click on each parameter or metric to view it in a separate column. Runs that are nested inside other runs can now be grouped by their parent run and expanded or collapsed together. A run can be nested by calling mlflow.start_run or mlflow.run while another run is active. Though MLflow gives each run a UUID by default, one can now also assign a name to a run and edit it later.
This makes working with runs easier, as a name is easier to remember than a number. There is also no need to reconfigure the view each time, as the MLflow UI remembers the filters, sorting, and column setup in the browser's local storage.

Support for deployment of models to Azure ML Service

In this version, the Microsoft Azure Machine Learning deployment tooling has been revamped to deploy MLflow models packaged as Docker containers. One can use the mlflow.azureml module to package a python_function model into an Azure ML container image. This image can then be deployed to the Azure Kubernetes Service (AKS) and Azure Container Instances (ACI) platforms.

Major bug fixes

The server now handles corrupted environment and run files more gracefully. The Azure Blob Storage artifact repo now supports Windows paths. In the previous version, deleting the default experiment caused it to be recreated; MLflow 0.8.0 fixes this problem.

Read more about this news on Databricks' blog.

Introducing EuclidesDB, a multi-model machine learning feature database
Google releases Magenta studio beta, an open source python machine learning library for music artists
Technical and hidden debts in machine learning – Google engineers' give their perspective
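For context on the 'split' JSON format mentioned in the features above: it keeps column names and row values under separate keys (the orientation pandas calls orient='split'), which makes column order explicit. Below is a minimal stdlib-only sketch of such a scoring payload; the feature names are made up for illustration:

```python
import json

# A scoring request in "split" orientation: the column order is explicit,
# so the server knows which value maps to which feature. Feature names
# here are illustrative, not tied to any particular model.
payload = {
    "columns": ["sepal_length", "sepal_width", "petal_length", "petal_width"],
    "data": [[5.1, 3.5, 1.4, 0.2], [6.2, 2.9, 4.3, 1.3]],
}
body = json.dumps(payload)

# On the receiving side, rows can be rebuilt as ordered records:
decoded = json.loads(body)
records = [dict(zip(decoded["columns"], row)) for row in decoded["data"]]
print(records[0]["petal_width"])
```

Compared with the record-per-row orientation, this format also avoids repeating the column names for every row, which keeps large request bodies smaller.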
AWS updates the face detection, analysis and recognition capabilities in Amazon Rekognition

Natasha Mathur
22 Nov 2018
2 min read
The AWS team yesterday announced updates to the face detection, analysis, and recognition features in Amazon Rekognition, its deep learning-based service that makes it easy to add image and video analysis to your applications. The updates are available in Rekognition at no extra cost, and no machine learning experience is required.

The updates give customers an enhanced ability to detect more faces in images (even difficult ones), perform more accurate face matches, and obtain improved age, gender, and emotion attributes for faces in images. Amazon Rekognition can now detect 40% more faces, and the face recognition feature produces 30% more correct best matches. The rate of false detections has also dropped by 50%. Additionally, face matches now have more consistent similarity scores across lighting, pose, and appearance, letting customers use higher confidence thresholds, avoid false matches, and reduce human review in identity verification applications.

Face detection algorithms usually struggle with images that have challenging aspects: pose variations (caused by head or camera movement), difficult lighting (low contrast and shadows, washed-out faces), and blur or occlusion (faces covered by a hat, hair, or hands). The pose variation issue is generally encountered in faces captured from acute camera angles (shots taken from above or below a face), shots with a side-on view of a face, or when the subject is looking away; it is typically seen in social media photos, selfies, and fashion photoshoots. The lighting issue is common in stock photography and at event venues where there isn't enough contrast between facial features and the background in low light.
Occlusion is seen in photos with artistic effects (selfies or fashion photos, video motion blur), fashion photography, and photos taken from identity documents. With the latest update, AWS says, Rekognition has become much better at handling all of these challenging aspects of images captured in unconstrained environments.

For more information, check out the official blog post.

"We can sell dangerous surveillance systems to police or we can stand up for what's right. We can't do both," says a protesting Amazon employee
AWS makes Amazon Rekognition, its image recognition AI, available for Asia-Pacific developers
Amazon Rekognition can now 'recognize' faces in a crowd at real-time
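Applications typically act on detections like these by thresholding each face's confidence score, which is what makes the "higher confidence thresholds" mentioned above practical. The sketch below filters a fabricated response shaped like Rekognition's DetectFaces output (the FaceDetails and Confidence field names follow the real API; the numbers are invented sample data):

```python
# Filter face detections by confidence, as one might when raising the
# threshold to avoid false matches. The response dict is fabricated
# sample data shaped like Rekognition's DetectFaces output.
response = {
    "FaceDetails": [
        {"Confidence": 99.7,
         "BoundingBox": {"Left": 0.1, "Top": 0.2, "Width": 0.3, "Height": 0.4}},
        {"Confidence": 72.4,
         "BoundingBox": {"Left": 0.5, "Top": 0.1, "Width": 0.2, "Height": 0.3}},
    ]
}

def confident_faces(resp, threshold=90.0):
    # Keep only faces at or above the confidence threshold (percent).
    return [f for f in resp["FaceDetails"] if f["Confidence"] >= threshold]

print(len(confident_faces(response)))
```

Raising the threshold trades recall for precision: fewer marginal detections survive, which reduces the human review needed downstream.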
Apple has quietly acquired privacy-minded AI startup Silk Labs, reports Information

Sugandha Lahoti
22 Nov 2018
2 min read
According to a report by Information, Apple quietly acquired AI startup Silk Labs earlier this year; the report of the acquisition has only come out recently. According to PitchBook, a research firm that tracks startup financing, the deal was likely a small one for Apple, as Silk Labs had only about a dozen employees and had raised approximately $4 million in funding.

Google, Amazon, and other companies have been using cloud-based servers to handle most AI processing for mobile devices. This raises user privacy issues, as these companies can monitor users' requests as they come in. Apple, on the other hand, has always been vocal about "selling smartphones and hardware and not user privacy". What Apple has planned for Silk Labs is unknown, though both companies have in the past expressed interest in building AI systems that operate locally instead of in the cloud, which may have been the reason for the acquisition.

Silk Labs is based in San Mateo, California. It was founded by former Mozilla CTO Andreas Gal and former Mozilla platform engineer Chris Jones, along with Michael Vines, who served as Qualcomm Innovation Center's senior director of technology. Silk Labs mostly works in "video and audio intelligence", as well as edge-computing use cases ranging from home security to retail analytics and building surveillance.

Silk Labs' 2016 home monitoring camera, Sense, was capable of detecting people, faces, objects, and audio signals. It could also play music based on the user's taste and pair with third-party gadgets like Sonos speakers and smart light bulbs. The distinguishing factor, unlike other AI-based smart home products, was that Sense processed computations on-device and stored data locally to ensure user privacy. However, the product never shipped and was canceled. Apple may also release its own smart video cameras following the acquisition.
However, Apple will probably use Silk Labs' tech to upgrade its underlying software and research to build on-device AI for Apple's existing camera and mobile solutions.

Tim Cook talks about privacy, supports GDPR for USA at ICDPPC, ex-FB security chief calls him out.
Tim Cook criticizes Google for their user privacy scandals but admits to taking billions from Google Search.
Apple T2 security chip has Touch ID, Security Enclave, hardware to prevent microphone eavesdropping, amongst many other features!
Neo4j Enterprise Edition is now available under a commercial license

Amrata Joshi
21 Nov 2018
3 min read
Last week, the Neo4j community announced that Neo4j Enterprise Edition will be available under a commercial license, with source code available only for the Neo4j Community Edition. The Neo4j Community Edition will continue to be provided under an open source GPLv3 license.

According to the Neo4j community, this change won't affect any Neo4j open source projects, nor will it impact customers, partners, or OEM users operating under a Neo4j subscription license. Neo4j Desktop users using Neo4j Enterprise Edition under the free development license are also unaffected, as are members of the Neo4j Startup program.

The reason for choosing an open core licensing model

The idea behind moving Neo4j Enterprise Edition to a commercial license was to clarify and simplify the licensing model and remove ambiguity: to make clear what the company sells, what it open sources, and what options it offers. The Enterprise Edition source and object code were initially available under multiple licenses, which led to multiple interpretations and created confusion in the open source community, among buyers, and even in legal reviewers' minds.

According to the Neo4j blog, ">99% of Neo4j Enterprise Edition code was written by individuals on Neo4j's payroll – employed or contracted by Neo4j-the-company. As for the fractional <1%... that code is still available in older versions. We're not removing it. And we have reached out to the few who make up the fractional <1% to affirm their contributions are given proper due."

Developers can use the Enterprise Edition for free via Neo4j Desktop for desktop-based development. Startups can benefit from the startup license offered by Neo4j, which is now available to startups with up to 20 employees.
Data journalists, such as those at the ICIJ and NBC News, can use the Enterprise Edition for free via the Data Journalism Accelerator Program. Neo4j also offers a free license to universities for teaching and learning.

To know more about this news, check out Neo4j's blog.

Neo4j rewarded with $80M Series E, plans to expand company
Why Neo4j is the most popular graph database
Neo4j 3.4 aims to make connected data even more accessible
Twitter CEO, Jack Dorsey slammed by users after a photo of him holding 'smash Brahminical patriarchy' poster went viral

Natasha Mathur
21 Nov 2018
5 min read
Twitter CEO Jack Dorsey stirred up a social media hurricane after a picture of him holding a poster that said "Smash Brahminical patriarchy" went viral. The picture, first shared on Twitter by Anna MM Vetticad, an award-winning Indian journalist and author, was later retweeted by Twitter India.

https://twitter.com/annavetticad/status/1064084446909997056

Twitter India shared the picture, mentioning that it was taken at a "closed-door discussion" with a group of women journalists and changemakers from India. It also stated that "It is not a statement from Twitter or our CEO, but a tangible reflection of our company's efforts to see, hear and understand all sides of important public conversations that happen on our service around the world".

https://twitter.com/TwitterIndia/status/1064523207800119296

Soon after the picture was shared, it started to receive heavy backlash from Brahmin nationalists and other users over Dorsey appearing to slam Brahmins, members of the highest caste in Hinduism. Mohandas Pai (former Infosys CFO), Rajiv Malhotra (Indian-American author), and Chitra Subramaniam (Indian journalist and author) are some of the prominent names who have spoken out against the Twitter chief:

https://twitter.com/RajivMessage/status/1064500259714408454
https://twitter.com/chitraSD/status/1064550413599473664
https://twitter.com/TVMohandasPai/status/1064554153626734592
https://twitter.com/neha_ABVP/status/1064936591263592448

In fact, Sandeep Mittal, Joint Secretary, Parliament of India, went so far as to call the picture a "fit case for registration of a criminal case for attempt to destabilize the nation".

https://twitter.com/smittal_ips/status/1064762920016494592

This was Dorsey's first tour of India, one of Twitter's fastest growing markets. During his tour, he had already conducted a discussion on Twitter with students at IIT Delhi and met the Dalai Lama, actor Shahrukh Khan, and the Prime Minister of India, Narendra Modi.
The picture was taken last weekend during Dorsey's meeting with a group of journalists, writers, and activists in Delhi to hear about their experiences on Twitter in India. Vijaya Gadde, Twitter's legal, policy, and trust and safety lead, had accompanied Mr. Dorsey to India, and apologized over Twitter, saying that the poster was a "private gift" given to Twitter. No apology has been made by Dorsey so far.

https://twitter.com/vijaya/status/1064586313863618560

People stand up in defense of the picture

The apology by Vijaya Gadde further sparked anger among the female journalists who were part of the round-table discussion and many other users. Anna MM Vetticad, who appears in the picture, tweeted against Gadde's apology, saying that she is "sad to see a lack of awareness and concern about the caste issues" and that the picture was not a "private photo". Vetticad also mentioned that the photo was taken by a Twitter representative and sent out for distribution.

https://twitter.com/annavetticad/status/1064782090905010176

Another journalist, Rituparna Chatterjee, who was also present during the discussion, tweeted in defense of the picture, saying that the posters were brought and given to the Twitter team by Sanghapali Aruna, who raised some important points regarding her Dalit experience on Twitter. She also mentioned that there was no "targeting/specific selection of any group".

https://twitter.com/MasalaBai/status/1064474340392148992
https://twitter.com/MasalaBai/status/1064474709952212998

Sanghapali Aruna, who brought the posters with her, talked to ThePrint about how women have been among the major victims of Brahminical patriarchy, which controls all of us in more ways than one. "The 'Smash Brahminical patriarchy' poster which I gifted to Jack Dorsey was questioning precisely this hegemony and concentration of power in the hands of one community.
This wasn’t an attempt at hate speech against the Brahmins but was an attempt to challenge the dominance and sense of superiority that finds its origins in the caste system”. Aruna was also greatly disturbed by Gadde’s apology as she mentions that  “Americans do not know of the Indian caste history, they can’t tell one brown person from another. But as an Asian woman, Vijaya should’ve known better”. Public reaction to the photo largely varies. Some people slammed Dorsey and the photo, while others have stood up in support of it. They believe that the poster was a political art piece that represented India’s Dalit lower caste and other religious minorities’ demands to get rid of the gender and caste-based discrimination by the elite Brahmins. https://twitter.com/dalitdiva/status/1064767431061708800 https://twitter.com/GauravPandhi/status/1064790294321905664 https://twitter.com/sandygrains/status/1064753374313144320 https://twitter.com/mihirssharma/status/1064756234702725120 https://twitter.com/AdityaRajKaul/status/1064975045443940352 https://twitter.com/GitaSKapoor/status/1065121168393478145 https://twitter.com/DrSharmisthaDe/status/1064942023940218880 Twitter’s CEO, Jack Dorsey’s Senate Testimony: On Twitter algorithms, platform health, role in elections and more Jack Dorsey to testify explaining Twitter algorithms before the House Energy and Commerce Committee Twitter’s trying to shed its skin to combat fake news and data scandals, says Jack Dorsey
Outage plagues Facebook, Instagram and Whatsapp ahead of Black Friday Sale, throwing users and businesses into panic

Melisha Dsouza
21 Nov 2018
3 min read
Yesterday (November 20), Facebook and its subsidiary services Instagram and WhatsApp went down for users around the UK, Europe, and the US. This is the second time Facebook has faced an outage this month. According to Facebook's site for developers, the outage started around 6 a.m. Eastern time and lasted for 13 hours.

48% of Facebook users reported a total blackout, while 35% faced issues with login, and 16% had issues viewing pictures. In the case of Instagram, 53% of users had problems with their newsfeed, while 33% had issues with login, and 13% with the website. Facebook responded to the outage on Twitter, assuring users that it was working to resolve the issue.

https://twitter.com/facebook/status/1064905103755247621

Facebook's Ads Manager, the tool that lets users create advertisements on its social network, also crashed, just days before businesses use Facebook and Instagram to promote Black Friday sales. This left would-be advertisers unable to create new ad campaigns. One of Facebook's representatives confirmed the outage to Bloomberg, stating that people and companies couldn't create new ads or change their existing campaigns due to the issue, though advertisements previously launched through the system were still running on Facebook. Many advertisers that use Facebook to publish ads were left confused once the news was out.

https://twitter.com/KayaWhatley/status/1064934308220149760
https://twitter.com/stevekatasi/status/1064977427405901825

Once the service was up and running, Facebook tweeted:

https://twitter.com/facebook/status/1065036077486944256

In spite of the official announcement that 'Facebook was 100 percent up for everyone', many users complained that the site was not in a completely functional state.
Among the persisting issues were pictures not loading, GIFs not loading all the way, broken links in posts, the Messenger app not functioning properly, pages not loading, accounts not being restored, and more. Facebook has not yet responded to these complaints.

Speaking of Facebook's troubles, Zuckerberg is still on the defensive after a report in The New York Times put the company under scrutiny for security breaches involving Russian accounts ahead of the 2016 U.S. presidential election and for how it handles the controversies it faces. In an exclusive interview with CNN yesterday, Zuckerberg confirmed that Sheryl Sandberg will continue working for the company and that he won't be stepping down as chairman.

GitHub October 21st outage RCA: How prioritizing 'data integrity' launched a series of unfortunate events that led to a day-long outage
Why skepticism is important in computer security: Watch James Mickens at USENIX 2018 argue for thinking over blindly shipping code
Facebook GEneral Matrix Multiplication (FBGEMM), high performance kernel library, open sourced, to run deep learning models efficiently
Mozilla introduces LPCNet: A DSP and deep learning-powered speech synthesizer for lower-power devices

Bhagyashree R
21 Nov 2018
2 min read
Yesterday, Mozilla's Emerging Technologies group introduced a new project called LPCNet, a WaveRNN variant. LPCNet aims to improve the efficiency of speech synthesis by combining deep learning and digital signal processing (DSP) techniques. It can be used for text-to-speech (TTS), speech compression, time stretching, noise suppression, codec post-filtering, and packet loss concealment.

Why is LPCNet introduced?

Many recent neural speech synthesis algorithms have made it possible to synthesize high-quality speech and to code high-quality speech at very low bitrates. These algorithms, often based on models like WaveNet, give promising results in real time on a high-end GPU. But LPCNet aims to perform speech synthesis on end-user devices like mobile phones, which generally do not have powerful GPUs and have very limited battery capacity. Low-complexity parametric synthesis models such as low-bitrate vocoders do exist, but their quality is a concern: they are efficient at modeling the spectral envelope of the speech using linear prediction, but no equally simple model exists for the excitation. LPCNet aims to show that the efficiency of speaker-independent speech synthesis can be improved by combining newer neural synthesis techniques with linear prediction.

What mechanisms does LPCNet use?

In addition to linear prediction, it includes the following tricks:

Pre-emphasis/de-emphasis filters: These filters allow shaping the noise caused by μ-law quantization. LPCNet can shape the μ-law quantization noise so that it is mostly inaudible.

Sparse matrices: Like WaveRNN, LPCNet uses sparse matrices in the main RNN. These block-sparse matrices consist of 16x1 blocks to make it easier to vectorize the products. As a minor improvement, rather than forcing many non-zero blocks along the diagonal, all the weights on the diagonal of the matrices are kept.
Input embedding: Instead of feeding the inputs directly to the network, the developers use an embedding matrix. Embedding is generally used in natural language processing, but using it for μ-law values makes it possible to learn non-linear functions of the input.

You can read more details about LPCNet on Mozilla's official website.

Mozilla v. FCC: Mozilla challenges FCC's elimination of net neutrality protection rules
Mozilla shares why Firefox 63 supports Web Components
Mozilla shares how AV1, the new open source royalty-free video codec, works
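The pre-emphasis filtering and μ-law quantization described above can be sketched in plain Python. This illustrates the general DSP technique only, with a typical pre-emphasis coefficient, not LPCNet's exact code or parameters:

```python
import math

def pre_emphasis(samples, alpha=0.85):
    # First-order high-pass: y[n] = x[n] - alpha * x[n-1]. Boosting high
    # frequencies here shapes the mu-law quantization noise added later
    # so that de-emphasis on the output makes it less audible.
    # alpha=0.85 is a typical value, assumed for illustration.
    out, prev = [], 0.0
    for x in samples:
        out.append(x - alpha * prev)
        prev = x
    return out

def mulaw_encode(x, mu=255):
    # Standard mu-law companding of a sample in [-1, 1] to an integer
    # level in [0, mu]: fine resolution near zero, coarse at the extremes.
    y = math.copysign(math.log1p(mu * abs(x)) / math.log1p(mu), x)
    return int(round((y + 1) / 2 * mu))

emphasized = pre_emphasis([0.0, 0.5, 0.25, -0.1])
codes = [mulaw_encode(s) for s in emphasized]
print(codes)
```

On playback, the inverse steps run in the opposite order: μ-law decoding followed by the matching de-emphasis filter, which attenuates the high-frequency quantization noise.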