
Tech News - Data


NYT Facebook exposé fallout: Board defends Zuckerberg and Sandberg; media call and transparency report highlights

Melisha Dsouza
16 Nov 2018
6 min read
On Wednesday, The New York Times published a report on Facebook that raised questions about how the company deals with the controversies surrounding it, with disinformation, and with its competitors and critics. The report scathingly pointed out how Facebook denied and deflected blame time and again, listing a series of issues that have affected its users since 2015. In response, Facebook released a statement on Thursday pointing out inaccuracies in the Times' report. On a press call yesterday, Mark Zuckerberg had planned to discuss how the social network manages problematic posts and its community standards. He also released a "community standards" transparency report the same day, listing the actions proactively taken to take down illicit accounts and the struggles the company still faces. However, the almost 90-minute call ended up focusing mainly on the New York Times story and what Facebook intends to do in its aftermath.

Mark Zuckerberg's call with reporters

"The reality of running a company of more than 10,000 people is that you're not going to know everything that's going on." -Mark Zuckerberg, Facebook's chief executive and chairman

On Thursday, Mark Zuckerberg held a conference call with reporters from top media firms like USA Today, Bloomberg, ABC News, Wired, and others to discuss Facebook's latest transparency report, which lists how the company enforces the community standards that govern content on its platform. While addressing questions on how he and Facebook's COO, Sheryl Sandberg, dealt with the issues listed in the New York Times report, Mr. Zuckerberg defended the social network, Ms. Sandberg, and his own record. On Russian interference, he acknowledged that the company was slow to act but said it did not hinder investigations at any point: "I've said many times we were too slow to spot Russian interference; to suggest we weren't interested in knowing the truth or wanted to hide what we knew or wanted to prevent investigations is simply untrue." This aligned with Facebook's board statement on Thursday, in which the board acknowledged that the two executives responded slowly to Russian interference on Facebook and that directors had pushed them to act faster, but said that "to suggest they knew about Russian interference and either tried to ignore it or prevent investigations into what had happened was grossly unfair."

As for Definers, the PR firm that reportedly diverted attention from Facebook's problems to its rivals' issues, Zuckerberg repeatedly said that he had only learned of Facebook's work with Definers from the NYT report and that Sandberg was also previously unaware of the relationship. When asked who was aware, Zuckerberg simply said, "someone on our comms team must have hired them." "As soon as I read it, I looked into whether this is the type of firm we want to be working with, and we stopped working with them," he added. "We certainly never asked them to spread anything that wasn't true." However, as COO, Sandberg oversees Facebook's corporate communications team. In a statement on Facebook late Thursday, Ms. Sandberg wrote: "I did not know we hired them or about the work they were doing, but I should have."

During the call, Zuckerberg also mentioned that Facebook will soon create an independent oversight body to adjudicate appeals on content moderation issues.
This body, analogous to a supreme court, will be created sometime next year and will attempt to balance the right to free speech with keeping people safe around the world.

A Blueprint for Content Governance and Enforcement

On Thursday, Facebook released its second transparency report, listing its advances in proactively identifying hate speech and the first numbers for takedowns of bullying, harassment, and child sexual exploitation. The report emphasizes the company's efforts to remove bad content before users ever see it, while fielding an ever-growing number of requests from governments. On establishing an independent body to govern content moderation issues, Zuckerberg wrote: "I believe independence is important for a few reasons. First, it will prevent the concentration of too much decision-making within our teams. Second, it will create accountability and oversight. Third, it will provide assurance that these decisions are made in the best interests of our community and not for commercial reasons."

Some interesting statistics to note from this report:

- From July to September 2018, Facebook took down far more pieces of unacceptable content than before: 2.1 million pieces of content for bullying and harassment, and 8.7 million for child sexual exploitation and nudity.
- It removed 1.23 billion pieces of spam and closed 754 million fake accounts in the past quarter. Facebook says these are mostly spam, although it has periodically removed accounts linked to political propaganda campaigns.
- Facebook removed 15.4 million pieces of violent content between June and September 2018. It has also become better at removing this content before users report it, claiming to proactively find more than 96 percent of the material, compared to around 71 percent last year.
- Facebook is still fielding government requests for user data, which increased around 26 percent between the last half of 2017 and the first half of 2018.
- Facebook has made progress deploying thousands of newly hired reviewers and artificial intelligence tools to enforce its community standards more aggressively. It has managed to catch 95 percent of nudity, fake accounts, and graphic violence before users report it.

Public's reaction

The New York Times reported that, in Washington, Republicans and Democrats threatened to restrain Facebook through competition laws and plan to open investigations into possible campaign finance violations. Shareholders ramped up calls to oust Mr. Zuckerberg as Facebook's chairman, while activists filed a complaint with the Federal Trade Commission about the social network's privacy policies and condemned Ms. Sandberg, the chief operating officer, for overseeing a campaign to secretly attack opponents. Mr. Zuckerberg said on the conference call that he was not willing to step down as chairman.

Jessica Guynn, a reporter for USA Today, started an interesting thread on Twitter where she stresses that Mark Zuckerberg is denying the allegations in the Times story and instead emphasizing solutions to divert people's attention from the problems. https://twitter.com/jguynn/status/1063148779212169216 Jessica also prodded Mark on whether he is the right person to lead Facebook, to which he replied: "We are doing the right things to fix the issues. I am fully committed to getting this right." You can head over to the New York Times for complete coverage of this news.

What is Facebook hiding? New York Times reveals Facebook's insidious crisis management strategy
Facebook shares update on last week's takedowns of accounts involved in "inauthentic behavior"
Emmanuel Macron teams up with Facebook in a bid to fight hate speech on social media


Elasticsearch 6.5 is here with cross-cluster replication and JDK 11 support

Natasha Mathur
16 Nov 2018
4 min read
The Elastic team released version 6.5.0 of their open source distributed, RESTful search and analytics engine, Elasticsearch, earlier this week. Elasticsearch 6.5.0 introduces features such as cross-cluster replication, new source-only snapshots, SQL/ODBC changes, and new security features, among others. Elasticsearch is a search engine based on the Lucene library that provides a distributed, multitenant-capable full-text search engine with an HTTP web interface and schema-free JSON documents. Let's now discuss these features in Elasticsearch 6.5.0.

Cross-cluster replication

Elasticsearch 6.5.0 comes with cross-cluster replication, a Platinum-level feature. Cross-cluster replication allows you to create an index in a local cluster that follows an index in a remote cluster, or to automatically follow indices in a remote cluster that match a pattern.

New source-only snapshots

Elasticsearch 6.5 comes with a new source-only snapshot that stores a minimal amount of information (namely, the _source and index metadata). This enables the indices to be rebuilt through a reindex operation when necessary. What's great about this is that it yields up to a 50% reduction in the disk space of snapshots. However, snapshots can take longer to restore in full, as you'll need to reindex to make them searchable. A short sketch using the Python client appears at the end of this section.

SQL/ODBC changes

An initial (alpha status) ODBC driver has been added in Elasticsearch 6.5.0. Since ODBC is supported by many BI tools, this makes it easy to connect Elasticsearch to many of your favourite third-party tools, giving you the speed, flexibility, and power of full-text search and relevance. Beyond that, a few new functions and capabilities have been added to Elasticsearch's SQL support. These include ROUND, TRUNCATE, IN, MONTHNAME, DAYNAME, QUARTER, and CONVERT, as well as a number of string manipulation functions such as CONCAT, LEFT, RIGHT, REPEAT, POSITION, LOCATE, REPLACE, SUBSTRING, and INSERT. You can now also query across indices with different mappings, provided the mapping types are compatible.

New scriptable token filters

Elasticsearch 6.5 introduces new scriptable token filters, namely predicate and conditional. The predicate token filter allows you to remove tokens that don't match a script. The conditional token filter builds on the same idea but lets you apply other token filters to tokens matching a script. These let you manipulate the data you're indexing without having to write a Java plugin. Moreover, Elasticsearch 6.5 also comes with a new text type called annotated_text, which allows you to use markdown-like syntax to link to different entities in applications using natural language processing.

JDK 11 and G1GC

Elasticsearch 6.5 offers support for JDK 11 and also supports the G1 garbage collector on JDK 10+.

Security and audit logging

Elasticsearch 6.5 comes with two new security features, namely authorization realms and audit logging. Authorization realms enable an authenticating realm to delegate the task of pulling the user information (the username, the user's roles, and so on) to one or more other realms. Audit logging gains a new, completely structured format in which all attributes are named: each log entry is a one-line JSON document printed on a separate line, with attributes ordered as in a normal log entry.
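To make the source-only snapshot feature above concrete, here is a minimal sketch using the official elasticsearch-py client; the repository name, backup location, and index name are illustrative assumptions, not values from the release notes.

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# Register a source-only repository: type "source" wraps a delegate repository
# (here a shared filesystem) and stores only _source plus index metadata,
# trading restore time for up to ~50% less snapshot disk space.
es.snapshot.create_repository(
    repository="my_source_only_repo",  # hypothetical name
    body={
        "type": "source",
        "settings": {
            "delegate_type": "fs",
            "location": "/mnt/backups/es",  # must be whitelisted via path.repo
        },
    },
)

# Snapshots are then taken as usual; restored indices must be reindexed
# before they become searchable again.
es.snapshot.create(
    repository="my_source_only_repo",
    snapshot="snapshot_1",
    body={"indices": "my-index"},
)
```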
Multi-bucket analysis

A multi-metric machine learning job analyzes multiple time series together. Elasticsearch 6.5 introduces multi-bucket analysis for machine learning jobs, where features from multiple contiguous buckets are used for anomaly detection. The final anomaly score combines values from both the "standard" single-bucket analysis and the new multi-bucket analysis. Additionally, Elasticsearch 6.5 comes with an experimental find file structure API, which aims to help discover the structure of a text file. It attempts to read the file and, on succeeding, returns statistics about the common values of the detected fields, along with mappings that can be used for ingesting the file into Elasticsearch.

For more information, check out the official Elasticsearch 6.5 blog.

Dejavu 2.0, the open source browser by ElasticSearch, now lets you build search UIs visually
Search company Elastic goes public and doubles its value on day 1
How does Elasticsearch work? [Tutorial]


Microsoft amplifies focus on conversational AI: Acquires XOXCO; shares guide to developing responsible bots

Bhagyashree R
16 Nov 2018
5 min read
On Wednesday, Microsoft shared that it has signed an agreement to acquire XOXCO, an Austin-based software developer with a focus on bot design. In another announcement, it shared a set of guidelines formulated to help developers build responsible bots, or conversational AI.

Microsoft acquires conversational AI startup XOXCO

Microsoft has shared its intent to acquire XOXCO. The software product design and development company has been working on conversational AI since 2013. It has developed products like Botkit, which provides development tools, and the Howdy bot for Slack, which lets users schedule meetings. With this acquisition, Microsoft aims to democratize AI development. "The Microsoft Bot Framework, available as a service in Azure and on GitHub, supports over 360,000 developers today. With this acquisition, we are continuing to realize our approach of democratizing AI development, conversation, and dialog, and integrating conversational experiences where people communicate," reads the post. Throughout this year, the tech giant has acquired many companies that contribute to AI development: Semantic Machines in May, Bonsai in July, and Lobe in September. XOXCO is the latest addition to this list, bringing Microsoft closer to its goal of "making AI accessible and valuable to every individual and organization, amplifying human ingenuity with intelligent technology." Read more about the acquisition on Microsoft's official website.

Building responsible bots with Microsoft's guidelines

Nowadays, conversational AI is used to automate communication, resolve queries, and create personalized customer experiences at scale. With this increasing adoption, it is important to build conversational AI that is responsible and trustworthy. The 10 guidelines formulated by Microsoft aim to help developers do exactly that:

1. Articulate the purpose of your bot and take special care if your bot will support consequential use cases.
2. Be transparent about the fact that you use bots as part of your product or service.
3. Ensure a seamless hand-off to a human where the human-bot exchange leads to interactions that exceed the bot's competence.
4. Design your bot so that it respects relevant cultural norms and guards against misuse.
5. Ensure your bot is reliable.
6. Ensure your bot treats people fairly.
7. Ensure your bot respects user privacy.
8. Ensure your bot handles data securely.
9. Ensure your bot is accessible.
10. Accept responsibility.

Some of these are described below:

"Articulate the purpose of your bot and take special care if your bot will support consequential use cases." Before starting any design work, carefully analyze the benefits your bot will provide to the users or to the entity deploying it. Ensuring that your bot's design is ethical is especially important when the bot is likely to affect the well-being of the user, as in consequential use cases. These include access to services such as healthcare, education, employment, and financing.

"Be transparent about the fact that you use bots as part of your product or service." Users should be aware that they are interacting with a bot.
Nowadays, designers can equip their bots with "personality" and natural language capabilities. This is why it is important to convey to users that they are not interacting with another person and that some aspects of the interaction are performed by a bot. Users should also be able to easily find information about the bot's limitations, including the possibility of errors and the consequences of those errors.

"Ensure a seamless hand-off to a human where the human-bot exchange leads to interactions that exceed the bot's competence." In cases where human judgment is required, provide a means of ready access to a human moderator, particularly if your bot deals with consequential matters. Bots should be able to transfer a conversation to a human moderator as soon as the user asks. Users will quickly lose trust in the technology, and in the company that deployed it, if they feel trapped or alienated by a bot.

"Design your bot so that it respects relevant cultural norms and guards against misuse." Bots should have built-in safeguards and protocols to handle misuse and abuse. Since bots can now have a human-like persona, it is crucial that they interact respectfully and safely with users. Developers can use machine learning techniques and keyword filtering mechanisms to enable the bot to detect and respond appropriately to sensitive or offensive input from users (see the illustrative sketch at the end of this piece).

"Ensure your bot is reliable." A bot needs to be reliable for the function it aims to perform. As a developer, you should take into account that AI systems are probabilistic and will not always give the correct answer, so establish reliability metrics and review them periodically. The performance of AI-based systems may vary over time as a bot is rolled out to new users and in new contexts, so developers must continually monitor its reliability.

Read the full document: Responsible bots: 10 guidelines for developers of conversational AI

Microsoft announces container support for Azure Cognitive Services to build intelligent applications that span the cloud and the edge
Satya Nadella reflects on Microsoft's progress in areas of data, AI, business applications, trust, privacy and more
Microsoft Azure reportedly chooses Xilinx chips over Intel Altera for AI co-processors, says Bloomberg report
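As an illustration of the hand-off and misuse guidelines above, here is a minimal, hypothetical sketch in Python — not Microsoft's Bot Framework API — of routing sensitive input to a human moderator via simple keyword filtering.

```python
# Hypothetical keyword list; a production bot would use trained classifiers
# alongside (or instead of) simple keyword filters.
SENSITIVE_KEYWORDS = {"emergency", "lawsuit", "self-harm"}

def route_message(text: str) -> str:
    """Return which handler should take this message."""
    tokens = set(text.lower().split())
    if tokens & SENSITIVE_KEYWORDS:
        # Guideline 3: hand off to a human when the exchange may exceed
        # the bot's competence or touches consequential matters.
        return "human_moderator"
    return "bot"

print(route_message("I need help with an emergency claim"))  # human_moderator
print(route_message("What are your opening hours?"))         # bot
```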


Microsoft announces container support for Azure Cognitive Services to build intelligent applications that span the cloud and the edge

Bhagyashree R
16 Nov 2018
3 min read
On Wednesday, Microsoft announced the preview of Azure Cognitive Services containers, which will make it possible to build intelligent applications that span the cloud and the edge. Azure Cognitive Services allows developers to easily add cognitive features such as object detection, vision recognition, and language understanding to their applications. With containerization, developers can build large AI systems that are scalable, reliable, and consistent in a way that supports better data governance. Containerization is a way of distributing software in which an application or service is packaged so that it can be deployed in a container host with little or no modification. The advantages of container support for Azure Cognitive Services are as follows.

Build portable and scalable intelligent apps

Containerization allows customers to use Azure Cognitive Services capabilities wherever the data resides. Applications can perform functions like facial recognition, OCR, or text analytics without sending data to the cloud. Irrespective of where the apps are running (at the edge or in Azure), they remain portable and scalable with great consistency. A minimal sketch of calling a locally running container appears at the end of this piece.

Flexibility to deploy AI capabilities

Every day, huge volumes of data are generated across organizations, which demands a flexible way to deploy AI capabilities in a variety of environments. Deploying Cognitive Services in containers allows customers to analyze information close to the physical world where the data resides. This helps deliver real-time insights and immersive experiences that are highly responsive and contextually aware.

Build one app architecture optimized for both cloud and edge

Container support for Cognitive Services allows customers to build one application architecture optimized to take advantage of both robust cloud capabilities and edge locality. Customers can now choose when to upgrade the AI models deployed in their solutions, and they can test new model versions before deploying them in production in a consistent way, whether running at the edge or in Azure.

Andy Vargas, Intel VP of Software and Services, said: "Azure Cognitive Services containers give you more options on how you grow and deploy AI solutions, either on or off premises, with consistent performance. You can scale up as workload intensity increases or scale out to the edge."

Read more about container support for Azure Cognitive Services on Microsoft's website.

Machine Learning as a Service (MLaaS): How Google Cloud Platform, Microsoft Azure, and AWS are democratizing Artificial Intelligence
How Microsoft 365 is now using artificial intelligence to smartly enrich your content in OneDrive and SharePoint
Microsoft announces 'Decentralized Identity' in partnership with DIF and W3C Credentials Community Group
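To illustrate the edge scenario described above, here is a hedged sketch of calling a Cognitive Services container running locally in Docker. The port and API path below are assumptions for illustration rather than documented values; check the container's own documentation for the exact endpoint.

```python
import requests

# Assumed local endpoint for a Text Analytics container started with
# something like `docker run -p 5000:5000 ...`; the path may differ by
# container type and API version.
ENDPOINT = "http://localhost:5000/text/analytics/v2.0/sentiment"

payload = {
    "documents": [
        {"id": "1", "language": "en",
         "text": "Container support for Cognitive Services looks promising."}
    ]
}

# The request never leaves the local host: analysis happens at the edge,
# with no document text sent to the cloud.
response = requests.post(ENDPOINT, json=payload)
response.raise_for_status()
print(response.json())
```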


Uber posted a billion dollar loss this quarter. Can Uber Eats revitalize the Uber growth story?

Amrata Joshi
15 Nov 2018
4 min read
Uber's sales growth is slowing as the company spends more on its food delivery business. The quarterly loss surged to $1.1B. Uber's revenue in the third quarter rose 38 percent from the previous year, but that is almost half the growth rate of six months ago, though still a 5% increase over the previous quarter. Uber has withdrawn from foreign markets where it faced heavy losses, having already pulled its services from Austin, Denmark, Budapest, and Bulgaria. Expensive ventures like self-driving trucks were also responsible for losses, and Uber has shuttered them too. Uber has been investing heavily in becoming a major player in autonomous cars and has agreed to buy vehicles from Volvo to launch self-driving taxis. The series of missteps and scandals over executive misconduct could also be one of the reasons sales have been affected. At the Code Conference, Dara Khosrowshahi said, "We have to compete against the economy for drivers." He added, "We have three million driver partners around the world, and there are some that are disgruntled. All of them, I think, wanna make more money, but fundamentally, they get to be their own bosses, and they get to work on their own terms. In general, I think driver earnings are going up, and we have an increased time and distance in certain places."

Uber Eats' healthy appetite

Despite this quarter's massive losses, the company seems quite excited about its food delivery business, Uber Eats, which is turning out to be the fastest-growing meal delivery service in the U.S. According to data from Second Measure, in nine of the 22 most populous U.S. cities, people now spend more on Uber Eats than on any other food delivery service. CEO Dara Khosrowshahi said onstage at the Code Conference in Rancho Palos Verdes, California: "Eats is an exploding business in a good way. It's now at a $6 billion bookings run rate, growing over 200 percent. Eats is only in 250 cities on a global basis and it's got 350 cities to go, to catch up to our rides business." Uber Eats launched in several cities last year and is now profitable in 27 of 108 cities worldwide. Six months ago, Uber Eats dominated three Texas cities: Houston, Austin, and Dallas. It has beaten out DoorDash in Fort Worth, GrubHub in El Paso, and Postmates in Phoenix. It has even beaten Amazon Restaurants in Amazon's home city of Seattle!

Food delivery companies usually fall into one of two categories. The first is aggregators, which collect restaurant options and menus in an online portal for customers and usually require restaurants to handle delivery themselves; GrubHub is an example. The second is full delivery services, which take orders through an online portal and also deliver the food for restaurants; the restaurants fork out a fixed percentage of each order as a fee, and customers also pay a fee to the delivery service. Postmates and Uber Eats are examples of this category. Uber Eats has an advantage over GrubHub because it is not just an aggregator, and unlike Postmates, it benefits from its parent company's existing driver networks around the country.

There is some uncertainty about the profitability of Uber's core ride-hailing business: food is boosting Uber's gross revenue but shrinking the company's margins. Only time will tell if Uber Eats can really make a difference to the company's economics!

Read more about this news on the official website of Bloomberg.

Michelangelo PyML: Introducing Uber's platform for rapid machine learning development
Uber's Head of corporate development, Cameron Poetzscher, resigns following a report on a 2017 investigation into sexual misconduct
Why did Uber create Hudi, an open source incremental processing framework on Apache Hadoop?


Google’s Pixel camera app introduces Night Sight to help click clear pictures with HDR+

Amrata Joshi
15 Nov 2018
3 min read
Yesterday, Google's Pixel camera app launched a new feature, Night Sight, to help capture sharp, clean photographs in very low light. It works on both the main and selfie cameras of all three generations of Pixel phones, and it does not require a tripod or flash.

How HDR+ helps Night Sight

[Image source: Google AI Blog]

Night Sight works because of HDR+, which uses computational photography to produce clearer photographs. When the shutter button is pressed, HDR+ captures a rapid burst of pictures, then quickly combines them into one. This improves results in both high-dynamic-range and low-light situations, reducing the impact of read noise and shot noise and thereby improving SNR (signal-to-noise ratio) in dim lighting. (A toy sketch of this burst-merging idea follows the article.) To keep photographs sharp even if the hand shakes or the subject moves, the Pixel camera app uses short exposures, and pieces of frames that aren't well aligned are rejected. This lets HDR+ produce sharp images even with excessive light; the Pixel camera app works well in both dim light and excessive light exposure.

The default picture-taking mode on Pixel phones uses a zero-shutter-lag (ZSL) protocol, which limits exposure time. As soon as you open the camera app, it starts capturing image frames and storing them in a circular buffer, which constantly erases old frames to make room for new ones. When the shutter button is pressed, the camera sends the most recent 9 or 15 frames to the HDR+ or Super Res Zoom software; the image is captured at exactly the right moment, which is why it is called zero-shutter-lag. No matter how dim the scene is, HDR+ limits exposures to at most 66 ms, allowing a display rate of at least 15 frames per second. For dimmer scenes where longer exposures are needed, Night Sight uses positive-shutter-lag (PSL) instead, with motion metering to measure recent scene motion and choose an exposure time that minimizes blur.

How to use Night Sight?

The Night Sight feature can't operate in complete darkness; there should be at least some light, and it works better in uniform lighting than in harsh lighting. Users can tap on various objects and then move the exposure slider to increase exposure. If it's very dark and the camera can't focus, tap on the edge of a light source or on a high-contrast edge. Keep very bright light sources out of the field of view to avoid lens flare artifacts.

The Night Sight feature has already created some buzz, but its major drawback is that it can't work in complete darkness. Also, since the learning-based white balancer is trained for the Pixel 3, it will be less accurate on older phones.

Read more about this news on the Google AI Blog.

The DEA and ICE reportedly plan to turn streetlights to covert surveillance cameras, says Quartz report
Facebook is at it again. This time with Candidate Info where politicians can pitch on camera
'Peekaboo' Zero-Day Vulnerability allows hackers to access CCTV cameras, says Tenable Research
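Here is a toy NumPy sketch of the burst-merging idea described above — not Google's HDR+ code — showing how averaging many short, noisy exposures improves SNR roughly with the square root of the frame count.

```python
import numpy as np

rng = np.random.default_rng(0)

scene = np.full((4, 4), 0.2)   # dim "ground truth" scene, values in [0, 1]
# 15 short exposures of the same scene, each corrupted by read/shot-like noise.
burst = [scene + rng.normal(0.0, 0.1, scene.shape) for _ in range(15)]

single = burst[0]
merged = np.mean(burst, axis=0)  # naive merge; HDR+ also aligns frames first

def snr(image: np.ndarray) -> float:
    """Crude SNR estimate: mean signal over residual-noise standard deviation."""
    return float(scene.mean() / np.abs(image - scene).std())

print(f"SNR, single frame: {snr(single):.1f}")
print(f"SNR, merged burst: {snr(merged):.1f}")  # ~sqrt(15) = 3.9x higher
```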

DeepMasterPrints: ‘master key’ fingerprints made by a neural network can now fake fingerprints

Prasad Ramesh
15 Nov 2018
3 min read
New York University researchers have found a way to use a neural network to generate artificial fingerprints that can act as "master keys" for fingerprint recognition systems. They present their work in a paper titled DeepMasterPrints: Generating MasterPrints for Dictionary Attacks via Latent Variable Evolution.

The vulnerability in fingerprint sensors

Fingerprint recognition systems are vulnerable to dictionary attacks based on MasterPrints, which are like master keys that can match a large number of fingerprints. Such work was previously done at the feature level, but this work, dubbed DeepMasterPrints, has much higher attack accuracy along with the capacity to generate complete images. The method demonstrated in the paper is Latent Variable Evolution (LVE), which is based on training a Generative Adversarial Network (GAN) on a set of real fingerprint images. A stochastic search is then used to find latent input variables to the generator network that increase the rate of impostor matches as assessed by a fingerprint recognizer. (An illustrative sketch of this search follows the article.)

Small fingerprint sensors pose a risk

Aditi Roy, one of the authors of the paper, exploited an observation: smartphones have small areas for fingerprint recording and recognition, so the whole fingerprint is not recorded at once; fingerprints are partially recorded and authenticated. Also, some features among fingerprints are more common than others. She then demonstrated that MasterPrints can be obtained from real fingerprint images or synthesized. With this exploit, 23% of the subjects in the dataset used could be spoofed at a 0.1% false match rate, and the generated DeepMasterPrints were able to spoof 77% of the subjects at a 1% false match rate. This shows the danger of using small fingerprint sensors.

For a DeepMasterPrint, a synthetic fingerprint image needed to be created that can fool a fingerprint matcher: the matcher has to recognize the image as a fingerprint and also match it to many different identities. The paper presents a method for creating DeepMasterPrints using a neural network that learns to generate fingerprint images, with a Covariance Matrix Adaptation Evolution Strategy (CMA-ES) used to search the input space of the trained network for the ideal fingerprint image.

Conclusion

Partial fingerprint images can be generated that can be used for launching dictionary attacks against a fingerprint verification system. A GAN is trained over a dataset of fingerprints, then LVE searches the latent variables of the generator network for a fingerprint image that maximizes the chance of a match. This matching is only successful when a large number of different identities are involved, meaning attacks on specific individuals are less likely. The use of both inked images and sensor images shows that the system is robust and independent of artifacts and datasets.

For more details, read the research paper.

Tesla v9 to incorporate neural networks for autopilot
Alphabet's Waymo to launch the world's first commercial self driving cars next month
UK researchers have developed a new PyTorch framework for preserving privacy in deep learning
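The following is an illustrative sketch of the Latent Variable Evolution loop described above, using the pip-installable cma package. The generator and matcher below are stand-in functions for demonstration, not the paper's trained GAN or a real fingerprint recognizer.

```python
import numpy as np
import cma  # pip install cma

LATENT_DIM = 100  # assumed latent size, for illustration only

def generator(z: np.ndarray) -> np.ndarray:
    """Stand-in for a trained GAN generator mapping latent z to an image."""
    return np.tanh(z).reshape(10, 10)

def matcher_score(image: np.ndarray) -> float:
    """Stand-in for a matcher: fraction of enrolled identities matched."""
    return float((image > 0).mean())

# CMA-ES searches the generator's latent space for an image that maximizes
# the match rate (CMA-ES minimizes, so we negate the score).
es = cma.CMAEvolutionStrategy(LATENT_DIM * [0.0], 0.5,
                              {"maxiter": 50, "verbose": -9})
while not es.stop():
    candidates = es.ask()
    es.tell(candidates,
            [-matcher_score(generator(np.asarray(z))) for z in candidates])

master_print = generator(np.asarray(es.result.xbest))
print("best stand-in match rate:", matcher_score(master_print))
```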


What is Facebook hiding? New York Times reveals Facebook’s insidious crisis management strategy

Melisha Dsouza
15 Nov 2018
9 min read
Today has been Facebook's worst day in its history. As if the plummeting stock that closed Wednesday at just $144.22 were not enough, Facebook is now facing backlash over its leadership's morals. Yesterday, the New York Times published a scathing exposé on how Facebook wilfully downplayed its knowledge of the 2016 Russian meddling in the US elections via its platform. It also alleges that over the course of two years, Facebook adopted a "delay, deny and deflect" strategy, under the shrewd leadership of Sheryl Sandberg and a disconnected-from-reality CEO, Mark Zuckerberg, to continually maneuver through the chain of scandals the company has been plagued with. In the following sections, we dissect the NYT article and look at other related developments triggered in the wake of this news.

Facebook, with over 2.2 billion users globally, has accumulated one of the largest-ever repositories of personal data, including user photos, messages, and likes, that propelled the company into the Fortune 500. Its platform has been used to make or break political campaigns and advertising businesses and to reshape daily life around the world. Constant questions have been raised about the security of the platform, thanks to the various controversies surrounding Facebook for well over two years. While Facebook's responses to these scandals ("we should have done better") have not convinced many, Facebook was never considered knowingly evil and continued to enjoy the benefit of the doubt. The Times article now changes that.

Crisis management at Facebook: delay, deny, deflect

The New York Times report is based on anonymous interviews with more than 50 people, including current and former Facebook executives and other employees, lawmakers and government officials, lobbyists, and congressional staff members. As Facebook has grown over the past few years, so has the hate speech, bullying, and other toxic content on the platform. The company hasn't fully taken responsibility for what users post, turning a blind eye and carrying on as it is: a platform, not a publisher.

The report highlights the dilemma Facebook's leadership faced while deciding what to do about candidate Trump's statement on Facebook in 2015 calling for a "total and complete shutdown" on Muslims entering the United States. After a lengthy discussion, Mr. Schrage (a prosecutor whom Ms. Sandberg had recruited) concluded that Mr. Trump's language had "not violated Facebook's rules". Mr. Kaplan (Facebook's vice president of global public policy) argued that Mr. Trump was an important public figure and that shutting down his account or removing the statement would be perceived as obstructing free speech, leading to a conservative backlash. Sandberg decided to allow the post to stay on Facebook.

In the spring of 2016, Mr. Alex Stamos (Facebook's former security chief) and his team discovered Russian hackers probing Facebook accounts of people connected to the presidential campaign, along with Facebook accounts linked to Russian hackers who messaged journalists to share information from the stolen emails. Mr. Stamos directed a team to scrutinize the extent of Russian activity on Facebook. By January 2017, it was clear that there was more to the Russian activity on Facebook. Mr. Kaplan believed that if Facebook implicated Russia further, Republicans would "accuse the company of siding with Democrats", and that pulling down the Russians' fake pages would offend regular Facebook users who would feel they had been deceived.

To summarize their findings, Mr. Zuckerberg and Ms. Sandberg released a blog post on 6th September 2017. The post had little information on fake accounts or the organic posts created by Russian trolls that had gone viral on Facebook. You can head over to the New York Times to read in depth about what went on in the company after each reported scandal. What is also surprising is that instead of offering a clear explanation of the matters at hand, the company was more focused on taking a stab at those who speak out against Facebook. Take, for instance, Apple CEO Tim Cook, who criticized Facebook in an MSNBC interview and called it a service that traffics "in your personal life." According to the Times, Mark Zuckerberg reportedly told his employees to use only Android phones in light of this statement.

Over 70 human rights groups write to Zuckerberg

Fresh reports have now emerged that the Electronic Frontier Foundation, Human Rights Watch, and over 70 other groups have written an open letter to Mark Zuckerberg urging him to adopt a clearer "due process" system for content takedowns. "Civil society groups around the globe have criticized the way that Facebook's Community Standards exhibit bias and are unevenly applied across different languages and cultural contexts," the letter says. "Offering a remedy mechanism, as well as more transparency, will go a long way toward supporting user expression."

Zuckerberg rejects video call for answers from five parliaments

"The fact that he has continually declined to give evidence, not just to my committee, but now to an unprecedented international grand committee, makes him look like he's got something to hide." -DCMS chair Damian Collins

On October 31st, Zuckerberg was invited to give evidence before a UK parliamentary committee on 27th November, with politicians from Canada co-signing the invitation. The committee wanted answers about the platform's "malign use in world affairs and democratic process". Zuckerberg rejected the request on November 2nd. In yet another attempt to obtain answers, MPs from Argentina, Australia, Canada, Ireland, and the UK joined forces with the UK's Digital, Culture, Media and Sport committee last week, requesting a video call with Mark Zuckerberg. However, in a letter to DCMS, Facebook declined the request, stating: "Thank you for the invitation to appear before your Grand Committee. As we explained in our letter of November 2nd, Mr. Zuckerberg is not able to be in London on November 27th for your hearing and sends his apologies." The letter does not explain why Zuckerberg is unavailable to speak to the committee via a video call. It summarizes a list of Facebook activities and related research that intersect with the topics of election interference, political ads, disinformation, and security, and makes no mention of the company's controversial actions and their aftereffects.

Diverting scrutiny from the matter?

According to the NYT report, after a year of external criticism over its handling of Russian interference on its social network, Facebook expanded its relationship in October 2017 with a Washington-based public relations consultancy with Republican ties. The firm last year wrote dozens of articles criticizing Facebook's rivals Google and Apple while diverting focus from the impact of Russian interference on Facebook. It pushed the idea that liberal financier George Soros was behind a growing anti-Facebook movement, according to the New York Times. The PR team also reportedly pressed reporters to explore Soros' financial connections with groups that protested Facebook at congressional hearings in July.

How are employees and users reacting?

According to the Wall Street Journal, only 52 percent of employees say they're optimistic about Facebook's future, compared with 84 percent in 2017. Just under 29,000 workers (of more than 33,000 in total) participated in the biannual pulse survey. In the most recent poll, conducted in October, the numbers have fallen, much like the company's tumbling stock, compared to last year's survey. Just over half feel Facebook is making the world a better place, down 19 percentage points from last year. 70 percent said they were proud to work at Facebook, down from 87 percent, and overall favorability towards the company dropped from 73 to 70 percent since last October's poll. Around 12 percent apparently plan to leave within a year.

Hacker News has comments from users stating that "Facebook needs to get its act together" and is "in need for serious reform". Some also feel that "This Times piece should be taken seriously by FB, it's shareholders, employees, and users. With good sourcing, this paints a very immature picture of the company, from leadership on down to the users". Readers have pointed out that Facebook's integrity is questionable and that "employees are doing what they can to preserve their own integrity with their friends/family/community, and that this push is strong enough to shape the development of the platform for the better, instead of towards further addictive, attention-grabbing, echo chamber construction."

Facebook's reply to the New York Times report

Today, Facebook published a post in response to the Times' report, listing a number of alleged inaccuracies in it. Facebook asserts that it has been closely following the Russian investigation, and gives its reasons for not citing Russia's name in the April 2017 white paper. The company also addresses the backlash it faced for not taking down Trump's "Muslim ban" statement. Facebook says it strongly supports Mark and Sheryl in the fight against false news and information operations on Facebook, and explains Sheryl's championing of sex trafficking legislation. As for the controversy over advising employees to use only Android, it clarifies that this was because "it is the most popular operating system in the world". In response to hiring the PR firm Definers, Facebook says: "We ended our contract with Definers last night. The New York Times is wrong to suggest that we ever asked Definers to pay for or write articles on Facebook's behalf – or to spread misinformation."

We can't help but notice that, again, Facebook is defending itself against allegations without providing a proper explanation for why it finds itself in controversies time and again. It is also surprising that the contract with Definers abruptly came to an end just before the Times report went live. What Facebook has additionally done is emphasize improved security practices at the company, something it talks about every time it faces a controversy. It is time to stop delaying, denying, and deflecting. Instead: atone, accept, and act responsibly.
Facebook shares update on last week's takedowns of accounts involved in "inauthentic behavior"
Emmanuel Macron teams up with Facebook in a bid to fight hate speech on social media
Facebook GEneral Matrix Multiplication (FBGEMM), high-performance kernel library, open sourced, to run deep learning models efficiently


Uber announces the 2019 Uber AI Residency

Amrata Joshi
15 Nov 2018
3 min read
On Tuesday, Uber announced the 2019 Uber AI Residency. Established in 2018, the Uber AI Residency is a 12-month training program for recent college and master's graduates, professionals interested in reinforcing their AI skills, and those with quantitative skills who are interested in becoming AI researchers at Uber AI Labs or Uber Advanced Technologies Group (ATG). Artificial intelligence at Uber is a rapidly growing area across both research and applications, including self-driving vehicles: general AI and applied machine learning grow through Uber AI, and AI for self-driving cars through Uber ATG.

Uber AI

The teams at Uber AI are working to provide and improve services in the fields of computer vision, conversational AI, and sensing and inference from sensor data. Uber AI Labs, part of the Uber AI organization, is composed of two main wings that reinforce each other: foundational core research and the Connections group, which focuses on translating research into applications for the company in collaboration with platform and product teams.

AI Labs Core

AI Labs Core works on diverse topics spanning probabilistic programming, Bayesian inference, reinforcement learning, neuroevolution, safety, core deep learning research, and artificial intelligence.

AI Labs Connections

AI Labs Connections transformed Bayesian optimization from a research field into a service for the company. It collaborates with teams working on conversational AI, natural language processing, mapping, forecasting, fraud detection, Uber's Marketplace, and many more.

Uber Advanced Technologies Group (ATG)

The self-driving vehicle is one of the most ambitious AI applications at Uber. AI helps in perceiving the surrounding environment using multiple sensors and in predicting the motion and intent of actors in the near future. Creating high-definition maps, localizing self-driving vehicles, and providing critical data about the vehicle's environment are all important components of the self-driving technology.

The Residency program

The residency program selects Uber AI Residents across AI Labs in San Francisco and ATG in Toronto and San Francisco. Residents are given the opportunity to pursue interests across academic and applied research, meeting with researchers at AI Labs and ATG and working with Uber product and engineering teams to converge on initial project directions. The 2018 residency class is currently working on foundational research projects in deep learning, probabilistic modeling, reinforcement learning, and computer vision. Their results have been submitted to top scientific venues, and their contributions directly impact Uber's business in partnership with Uber's technology teams.

Applications are open from December 10, 2018 to January 13, 2019, at 11:59 p.m. EST. Apply here. Read more about this news on the official page of Uber Engineering.

Michelangelo PyML: Introducing Uber's platform for rapid machine learning development
Uber's Head of corporate development, Cameron Poetzscher, resigns following a report on a 2017 investigation into sexual misconduct
Why did Uber create Hudi, an open source incremental processing framework on Apache Hadoop?


Introducing krispNet DNN, a deep learning model for real-time noise suppression

Bhagyashree R
15 Nov 2018
3 min read
Last month, 2Hz introduced an app called krisp, which was featured on the Nvidia website. It uses deep learning for noise suppression and is powered by the krispNet Deep Neural Network. krispNet is trained to recognize and reduce background noise from real-time audio, yielding clear human speech. 2Hz is a company that builds AI-powered voice processing technologies to improve voice quality in communications.

What are the limitations of current noise suppression approaches?

Many edge devices, from phones and laptops to conferencing systems, come with noise suppression technologies. The latest mobile phones come equipped with multiple microphones to help suppress environmental noise when we talk. Generally, the first mic is placed on the front bottom of the phone to directly capture the user's voice, while the second mic is placed as far as possible from the first. After both mics capture the surrounding sounds, the software effectively subtracts them from each other, yielding an almost clean voice.

The multiple-mic design has limitations: since it requires a certain form factor, its application is limited to use cases such as phones or headsets with sticky mics; it makes the audio path complicated, requiring more hardware and code; and audio processing can only be done on the edge or device side, so the underlying algorithm cannot be very sophisticated due to low power and compute requirements. Traditional Digital Signal Processing (DSP) algorithms also work well only in certain use cases; their main drawback is that they do not scale to the variety and variability of noises in our everyday environment. This is why 2Hz has come up with a deep learning solution that uses a single-microphone design, with all post-processing handled by software. This allows hardware designs to be simpler and more efficient.

How can deep learning be used in noise suppression?

There are three steps involved in applying deep learning to noise suppression (image source: Nvidia):

Data collection: build a dataset to train the network by combining distinct noises and clean voices to produce synthetic noisy speech. (A toy sketch of this step follows the article.)
Training: feed the synthetic noisy speech dataset to the DNN as input and the clean speech as the output target.
Inference: produce a mask that filters out the noise, giving you a clear human voice.

What are the advantages of krispNet DNN?

krispNet is trained with a very large number of distinct background noises and clean human voices. It optimizes itself to recognize background noise and separate it from human speech, leaving only the latter. During inference, krispNet acts on real-time audio and removes background noise. krispNet can also perform packet loss concealment for audio, filling in missing voice chunks in voice calls and eliminating "chopping". It can even predict the higher frequencies of a human voice and produce much richer audio than the original lower-bitrate audio.

Read more in detail about how deep learning can be used in noise suppression on the Nvidia blog.

Samsung opens its AI based Bixby voice assistant to third-party developers
Voice, natural language, and conversations: Are they the next web UI?
How Deep Neural Networks can improve Speech Recognition and generation
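To make the data-collection step concrete, here is a toy sketch — not 2Hz's actual pipeline — of mixing clean speech with background noise at a chosen SNR to produce a synthetic noisy training pair.

```python
import numpy as np

def mix_at_snr(clean: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Scale `noise` so the mixture has the requested signal-to-noise ratio."""
    noise = noise[: len(clean)]
    clean_power = np.mean(clean ** 2)
    noise_power = np.mean(noise ** 2)
    scale = np.sqrt(clean_power / (noise_power * 10 ** (snr_db / 10)))
    return clean + scale * noise

rng = np.random.default_rng(0)
sr = 16000  # 1 second of audio at 16 kHz
clean = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)  # stand-in for clean speech
noise = rng.normal(0.0, 1.0, sr)                      # stand-in for background noise

noisy = mix_at_snr(clean, noise, snr_db=5.0)
# `noisy` is the network input; `clean` is the training target.
```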

Google makes major inroads into healthcare tech by absorbing DeepMind Health

Amrata Joshi
14 Nov 2018
3 min read
Yesterday, Google announced that it is absorbing DeepMind Health, a London-based AI lab. DeepMind was acquired by Google for £400 million in 2014; one of the reasons DeepMind joined hands with Google was the opportunity to use Google's scale and experience in building billion-user products.

Google and DeepMind Health working together on Streams

The team at DeepMind introduced Streams in 2017. It was first rolled out at the Royal Free Hospital, where it is primarily used to identify and treat acute kidney injury (AKI). The app provides real-time alerts and information, pushing the right information to the right clinician at the right time, and brings together important medical information like blood test results in one place. It helps clinicians at partner hospitals spot serious issues while they are on the move. Streams was developed to help the UK's National Health Service (NHS).

The need for artificial intelligence in Streams

The team at DeepMind was keen on using AI because of its potential to revolutionize the understanding of diseases: AI could possibly help uncover the root causes of diseases by understanding how they develop, which could in turn help scientists discover new ways of treatment. The team plans to work on a number of innovative research projects, such as using AI to spot eye disease in routine scans. DeepMind's goal is to make Streams an AI-powered assistant for nurses and doctors everywhere, combining the best algorithms with intuitive design, all backed up by rigorous evidence.

The future of Streams

Acute kidney injury (AKI) is responsible for 40,000 deaths in the UK every year. With Streams now powered by the combined intelligence of teams from DeepMind Health and Google, that might change!

Antitrust and privacy concerns

Last year, the Royal Free NHS Foundation Trust in London went against data protection rules and gave 1.6 million patient records to DeepMind for a trial. Tension is now rising among privacy advocates in the UK because Google is getting its hands on healthcare-related information, which could be misused in the future. Many have responded negatively to the news and oppose it. Since DeepMind had previously promised not to share personally identifiable health data with Google, this new move has many questioning DeepMind's intentions.

https://twitter.com/juliapowles/status/1062417183404445696
https://twitter.com/DeepMind_Health/status/1062389671576113155
https://twitter.com/TomValletti/status/1062457943382245378

Read more about this news on DeepMind's official blog post.

DeepMind open sources TRFL, a new library of reinforcement learning building blocks
Day 1 of Chrome Dev Summit 2018: new announcements and Google's initiative to close the gap between web and native
Worldwide Outage: YouTube, Facebook, and Google Cloud goes down affecting thousands of users


Google releases Magenta Studio beta, an open source Python machine learning library for music artists

Melisha Dsouza
14 Nov 2018
3 min read
On 11th November, the Google Brain team released Magenta Studio in beta, a suite of free music-making tools using their machine learning models. It is a collection of music plugins built on Magenta's open source tools and models, available both as standalone Electron applications and as plugins for Ableton Live.

What is Project Magenta?

Magenta is a research project started by researchers and engineers from the Google Brain team, with significant contributions from many other stakeholders. The project explores the role of machine learning in the process of creating art and music. It primarily involves developing new deep learning and reinforcement learning algorithms to generate songs, images, drawings, and other materials. It also explores the possibility of building smart tools and interfaces that allow artists and musicians to extend their processes using these models. Magenta is powered by TensorFlow and is distributed as an open source Python library (a minimal sketch of its use appears at the end of this piece). The library allows users to manipulate music and image data, which can then be used to train machine learning models and generate new content from those models. The project aims to demonstrate that machine learning can be utilized to enable and enhance the creative potential of all people.

If Magenta Studio is used via Ableton, the Ableton Live plugin reads and writes clips from Ableton's Session View. If a user chooses to run the studio as a standalone application, it reads and writes files from the user's file system without requiring Ableton.

Some of the demos include:

#1 Piano Scribe
Many of the generative models in Magenta.js require the input to be a symbolic representation like MIDI (Musical Instrument Digital Interface). But now, Magenta converts raw audio to MIDI using Onsets and Frames, a neural network trained for polyphonic piano transcription. This means that audio alone is enough to obtain MIDI output in the browser.

#2 Beat Blender
Beat Blender is built by Google Creative Lab using MusicVAE. Users can generate two-dimensional palettes of drum beats and draw paths through the latent space to create evolving beats.

#3 Tenori-of
Users can use Magenta.js to generate drum patterns when they hit the "Improvise" button. This is more like a take on an electronic sequencer.

#4 NSynth Super
This is a machine learning algorithm that uses a deep neural network to learn the characteristics of sounds and then create a completely new sound based on those characteristics. NSynth synthesizes an entirely new sound using the acoustic qualities of the original sounds; for instance, users can get a sound that's part flute and part sitar all at once.

You can head over to the Magenta blog for more exciting demos, or to magenta.tensorflow.org to read more about this announcement.

Worldwide Outage: YouTube, Facebook, and Google Cloud go down affecting thousands of users
Intel Optane DC Persistent Memory available first on Google Cloud
Google Cloud Storage Security gets an upgrade with Bucket Lock, Cloud KMS keys and more
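As a minimal sketch of the underlying Python library (assuming the magenta pip package as of late 2018; module paths may differ across versions), here is the classic "hello world": build a NoteSequence and write it out as a MIDI file that a DAW, or the Magenta Studio plugins, can read.

```python
import magenta.music as mm
from magenta.protobuf import music_pb2

seq = music_pb2.NoteSequence()

# A C-major arpeggio: MIDI pitch 60 is middle C; times are in seconds.
for i, pitch in enumerate([60, 64, 67, 72]):
    seq.notes.add(pitch=pitch, velocity=80,
                  start_time=0.5 * i, end_time=0.5 * (i + 1))
seq.total_time = 2.0
seq.tempos.add(qpm=120)

# Write a standard MIDI file to disk.
mm.sequence_proto_to_midi_file(seq, "arpeggio.mid")
```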


Alphabet’s Waymo to launch the world’s first commercial self-driving car service next month

Prasad Ramesh
14 Nov 2018
2 min read
Waymo plans to launch the world's first commercial driverless car service by December. What first began at Google was rebranded in 2016 and brought directly under the parent company, Alphabet. Waymo is already on the road for a small group of 400 families in the Phoenix area and is now expanding with a license in California, with plans to keep obtaining licenses in new areas as it grows. At the end of last month, Waymo acquired a license from the California Department of Motor Vehicles (DMV) to run driverless cars on public roads. Businesses are expected to be the main customers.

Waymo gets a permit from the California DMV

The permit allows Waymo to drive both day and night, with a speed limit of 65 mph. "Our vehicles can safely handle fog and light rain," the company states in a blog post. Waymo has collected data over millions of miles and years of driving to train the artificial intelligence system in use. When faced with a situation it does not understand, a self-driving car will wait until it knows how to proceed; Waymo also provides human fleet and rider support to resolve any issue the car cannot handle on its own. Waymo has deals with companies like Fiat and Jaguar to make thousands of vehicles driverless.

Waymo's systems have driven millions of real miles and billions of simulated ones

Beyond 10 million real-world miles, the Waymo system has been subjected to 7 billion simulated miles to make the self-driving tech an experienced driver. Some cars will also carry backup drivers who can take over if necessary, so riders can have peace of mind. John Krafcik, Waymo's CEO, told The Wall Street Journal on Tuesday that the service will be available to consumers as well as businesses. Notably, companies like Walmart, Avis Budget Group Inc., and AutoNation Inc. are also interested in the service and are willing to pay for their customers' rides.

For more details, read the Waymo blog post.

Lyft acquires computer vision startup Blue Vision Labs, in a bid to win the self-driving car race

This self-driving car can drive in its imagination using deep reinforcement learning

Tesla is building its own AI hardware for self-driving cars

Facebook shares update on last week’s takedowns of accounts involved in “inauthentic behavior”

Bhagyashree R
14 Nov 2018
3 min read
Yesterday, Facebook shared the findings and takedowns from last week's investigation into coordinated inauthentic behavior. To accomplish these takedowns, it worked closely with the government, the security community, and other tech companies. Coordinated inauthentic behavior refers to people or organizations working together to create networks of accounts and Pages that mislead others about who they are or what they are doing.

What are the findings of Facebook's investigation?

On November 4th, just a few days before the US midterm elections, US law enforcement informed Facebook about online activity it believed was linked to foreign entities. Facebook investigated further and found that around 30 Facebook accounts and 84 Instagram accounts were potentially engaged in coordinated inauthentic behavior. Facebook said in its Election Update that most of the Facebook Pages associated with these accounts were in French or Russian: "Almost all the Facebook Pages associated with these accounts appear to be in the French or Russian languages, while the Instagram accounts seem to have mostly been in English — some were focused on celebrities, others political debate."

Combined with last Monday's takedowns, Facebook has in total removed 36 Facebook accounts, 6 Pages, and 99 Instagram accounts for coordinated inauthentic behavior. These accounts were predominantly created after mid-2017 and had amassed a sizeable following: "We found a total of about 1.25 million people who followed at least one of these Instagram accounts, with just over 600,000 located in the US. By comparison, the recent set of accounts that we removed which originated from Iran had around 1 million followers."

On November 6, a website claiming to be associated with the Internet Research Agency, a Russia-based troll farm, published a list of Instagram accounts it said it had created. Facebook said it has now blocked these accounts.

To explain how it mitigates misuse of the platform, Facebook noted that it partners with external parties such as governments and security experts, and that these partnerships proved especially valuable in the lead-up to last week's midterm elections. Nathaniel Gleicher, the Head of Cybersecurity Policy, said in his post: "And while we can remove accounts and Pages and prohibit bad actors from using Facebook, governments have additional tools to deter or punish abuse. That's why we're actively engaged with the Department of Homeland Security, the FBI, including their Foreign Influence Task Force, Secretaries of State across the US — as well as other government and law enforcement agencies around the world — on our efforts to detect and stop information operations, including those that target elections."

Though removing misleading Pages and accounts is a step in the right direction toward ridding the platform of fake news and keeping it out of elections, it can also sweep up legitimate accounts. "Facebook took down the pages of a lot of legit people I know and follow," said one Hacker News user.

Head over to Facebook's newsroom to stay updated on Facebook's efforts to curb misuse of its platform.
Emmanuel Macron teams up with Facebook in a bid to fight hate speech on social media

Following Google, Facebook changes its forced arbitration policy for sexual harassment claims

A new data breach on Facebook due to malicious browser extensions allowed almost 81,000 users’ private data up for sale, reports BBC News


Emmanuel Macron teams up with Facebook in a bid to fight hate speech on social media

Savia Lobo
13 Nov 2018
2 min read
Yesterday, Emmanuel Macron announced in a speech at the Internet Governance Forum that the French government will establish a joint working group with Facebook, under which Facebook will allow French regulators inside the company to examine how it combats online hate speech. The collaboration grows out of Macron's trial project on "smart regulation", which he said at the Tech for Good Summit in May this year he intends to extend to other tech leaders such as Google, Apple, and Amazon.

The six-month experiment, starting in early 2019, will give representatives of the French authorities access to the tools, methods, and staff the social network uses to hunt down racist, anti-Semitic, homophobic, or sexist content, and will let them determine whether Facebook's checks on these issues could be improved. Mr. Macron said, "It's a first. And a very innovative experimental approach, which illustrates the cooperative method that I advocate."

According to TechCrunch, "the regulators will look at multiple steps such as how flagging works, how Facebook identifies problematic content, how Facebook decides if it's problematic or not and what happens when Facebook takes down a post, a video or an image". "It is unclear whether the group will have access to highly-sensitive material such as Facebook's algorithms or codes to remove hate speech", according to a Reuters report.

Nick Clegg, the former British deputy prime minister who is now head of Facebook's global affairs, said, "The best way to ensure that any regulation is smart and works for people is by governments, regulators and businesses working together to learn from each other and explore ideas." Regulators could have imposed sweeping rules without consulting the company; this cooperative process is instead expected to produce more fine-grained regulation.

To know more about this news in detail, head over to TechCrunch's and Reuters' full coverage.

Following Google, Facebook changes its forced arbitration policy for sexual harassment claims

Facebook GEneral Matrix Multiplication (FBGEMM), high performance kernel library, open sourced, to run deep learning models efficiently

A new data breach on Facebook due to malicious browser extensions allowed almost 81,000 users’ private data up for sale, reports BBC News