
Tech News - Data

1208 Articles
Rights groups pressure Google, Amazon, and Microsoft to stop selling facial surveillance tech to government

Natasha Mathur
16 Jan 2019
4 min read
A coalition of over 85 advocacy groups sent letters to Google, Amazon, and Microsoft yesterday, urging the companies not to sell their facial surveillance technology to the government. The letters, signed by the likes of the American Civil Liberties Union (ACLU), the Refugee and Immigrant Center for Education and Legal Services (RAICES), and the Electronic Frontier Foundation (EFF), among others, aim to make clear to the tech giants how deeply their decision can affect the safety and trust of their community members.

The letters describe the dangers of facial surveillance technology, which gives the government an “unprecedented ability to track who we are, where we go, what we do, and who we know”. They state that face recognition would not only give the government the power to target immigrants, religious minorities, and people of colour, but would also instill in the public a constant fear of being watched. The groups asked the companies to take responsibility and commit to not selling face surveillance to the government. “Systems built on face surveillance will amplify and exacerbate historical and existing bias that harms these and other over-policed and over-surveilled communities”, states the letter.

In the letter to Microsoft, the groups note that Brad Smith, Microsoft's president, acknowledged the dangers of face surveillance in a speech and a blog post published in December 2018. Despite that acknowledgement, the letter says, he went on to propose “wholly inadequate safeguards” for face surveillance in his blog post. The groups do not approve of these safeguards, which they believe are not enough to stop the government from widespread monitoring and tracking of the public. “Microsoft has a responsibility to do more than speak about ethical principles; it must also act in accordance with those principles”, states the letter.
In the letter to Google, the groups acknowledge that in December 2018 Google announced it would not sell a facial recognition product until the dangers associated with the technology are addressed. “By finalizing its commitment not to sell a face surveillance product, Google would also be safeguarding the trust of its workers, shareholders, and customers. It’s time for Google to fully commit to not releasing a face recognition product that could be used by governments”, reads the letter.

When it comes to Amazon, however, the groups note that the company has repeatedly ignored protests and warnings from consumers, employees, and members of Congress over its facial recognition product. The letter points out that over 150,000 consumers have signed petitions asking Amazon to stop providing Rekognition, its facial recognition service, to governments. Back in October 2018, an anonymous Amazon employee spoke out in a letter against Amazon selling Rekognition to police departments around the world. Similarly, a group of seven House Democrats wrote to Amazon CEO Jeff Bezos in November 2018, raising concerns and questions about Rekognition’s possible impact on citizens. Moreover, emails obtained by The Daily Beast in October 2018 showed that Amazon officials met with ICE in June 2018 to pitch the company's facial recognition technology. “By continuing to sell your face surveillance product to government entities, Amazon is gravely threatening the safety of community members, ignoring the protests of its own workers, and undermining public trust in its business”, states the letter. The letter also notes that Amazon’s inaction on face surveillance stands in sharp contrast to the steps taken by its competitors, Google and Microsoft.
“We are at a crossroads with face surveillance, and the choices made by these companies now will determine whether the next generation will have to fear being tracked by the government for attending a protest, going to their place of worship, or simply living their lives”, states the letter. Check out the letters written to Google, Amazon, and Microsoft.

Related reads:
  • Australia’s Facial recognition and identity system can have “chilling effect on freedoms of political discussion, the right to protest and the right to dissent”: The Guardian report
  • AWS updates the face detection, analysis and recognition capabilities in Amazon Rekognition
  • ACLU files lawsuit against 11 federal criminal and immigration enforcement agencies for disclosure of information on government hacking

Facebook plans to invest $300 million to support local journalism

Sugandha Lahoti
16 Jan 2019
4 min read
On Tuesday, Facebook announced that it will invest $300 million over three years to support local journalism, including news programs, partnerships, and content. Facebook has faced criticism for spreading fake news and misinformation on its platform and for its poor decisions on data and privacy controls; investing a large amount in a news-based initiative may be a way to redeem its image.

"People want more local news, and local newsrooms are looking for more support," Campbell Brown, Facebook's vice president in charge of global news partnerships, said in a statement. "That's why today we're announcing an expanded effort around local news in the years ahead. We’re going to continue fighting fake news, misinformation, and low-quality news on Facebook," she added.

Facebook says the project is an expansion of its previously launched accelerator program, which helps metro newspapers with their digital subscription business. The $300 million in funding will support local journalists in news-gathering and in building sustainable long-term business models. Per a report by Axios, one-third of the money has already been committed to local news non-profits and programs, and to Facebook's own local news initiatives. A grant of almost $5 million will go to the Pulitzer Center to launch a fund supporting 12 local newsrooms with in-depth, multimedia reporting projects, along with an additional $5 million matching gift. Another $2 million will go to Report for America to help place 1,000 journalists in local newsrooms across America over the next five years. Other recipients include the Knight-Lenfest Local News Transformation Fund, the Local Media Association and Local Media Consortium, the American Journalism Project, and the Community News Project.
Last year, Google started its own journalism initiative, the Google News Initiative (GNI), likewise investing over $300 million to support the news industry’s biggest needs.

Reactions to Facebook's initiative have been mixed. Jim Friedlich, executive director and CEO of The Lenfest Institute for Journalism, says Facebook and local news are "co-dependent" and calls the investment "a sincere effort to help the local news business”. Fran Wills, CEO of the Local Media Consortium, supported the initiative, saying, “Facebook is making this investment to help support local media companies, open up new revenue streams that will support local journalism.”

Criticism, however, was plentiful. Nikki Usher, a George Washington University professor of media studies, said the effort “is a bit of smoke and mirrors because it’s hard to tell what’s really local for Facebook”. Facebook’s effort is “a lot of money in one sense but in another sense it’s not that much, the equivalent of revenues of one large newspaper”, said Dan Kennedy, a journalism professor at Northeastern University. A Hacker News user said, “This play to 'local news' is simply a tool to advance their own agenda, they want to own local news and help you feel that FB is all warm and 'local' to your needs.” Another said, “As someone who has been running a local newspaper for the last four years in a town, my trust in Facebook is exactly zero. They have practically monopolized news distribution, helped to destroy the business model of important social service and now they would like to make up for it by giving a fraction of the money back?” https://twitter.com/AnandWrites/status/1085198548717760512

Only time will tell whether news companies welcome these contributions, or whether it will remain difficult for the social media platform to repair its relationships with publishers and tech companies.
Related reads:
  • Is Anti-trust regulation coming to Facebook following fake news inquiry made by a global panel in the House of Commons, UK?
  • Facebook is reportedly rating users on how trustworthy they are at flagging fake news
  • Four 2018 Facebook patents to battle fake news and improve news feed

ITIF along with Google, Amazon, and Facebook bargain on Data Privacy rules in the U.S.

Savia Lobo
15 Jan 2019
2 min read
Yesterday, the Information Technology and Innovation Foundation (ITIF), supported by Google, Amazon, and Facebook, proposed ‘A Grand Bargain on Data Privacy Legislation for America’ to lawmakers. According to The Verge, the proposal states that “any new federal data privacy bill should preempt state privacy laws and repeal the sector-specific federal ones entirely.” The proposal highlights a few basic requirements, such as more transparency, data interoperability, and having users opt into the collection of sensitive personal data.

“All 50 states have their own laws when it comes to notifying users after a data breach, and ITIF asks for a single breach standard in order to simplify compliance. It also calls to expand the Federal Trade Commission’s authority to fine companies that violate the data privacy law, something industry leaders have asked for in the past”, The Verge reports. The proposal would additionally preempt state laws like California’s new privacy act while revoking other federal privacy laws, including the Health Insurance Portability and Accountability Act (HIPAA) and the Family Educational Rights and Privacy Act (FERPA).

Alan McQuinn, an ITIF senior policy analyst, said, “Privacy regulations aren’t free—they create costs for consumers and businesses, and if done badly, they could undermine the thriving U.S. digital economy. Any data privacy regulations should create rules that facilitate data collection, use, and sharing while also empowering consumers to make informed choices about their data privacy”.

Sen. Richard Blumenthal (D-CT) said, “This proposal would protect no one – it is only a grand bargain for the companies who regularly exploit consumer data for private gain and seek to evade transparency and accountability.” He added that the proposal simply underscores that Big Tech cannot be trusted to write its own rules. To know more about this in detail, visit The Verge.
Related reads:
  • ACLU files lawsuit against 11 federal criminal and immigration enforcement agencies for disclosure of information on government hacking
  • The US to invest over $1B in quantum computing, President Trump signs a law
  • The district of Columbia files a lawsuit against Facebook for the Cambridge Analytica scandal

TensorFlow 2.0 to be released soon with eager execution, removal of redundant APIs, tf.function and more

Amrata Joshi
15 Jan 2019
3 min read
Just two months ago, Google’s TensorFlow, one of the most popular machine learning platforms, celebrated its third birthday. Last August, Martin Wicke, an engineer at Google, posted a list of what to expect in TensorFlow 2.0, the open source machine learning framework, on the Google group. The key features he listed:

  • The release will come with eager execution.
  • It will support more platforms and languages, with improved compatibility.
  • Deprecated APIs will be removed.
  • Duplication will be reduced.

https://twitter.com/aureliengeron/status/1030091835098771457

An early preview of TensorFlow 2.0 is expected soon. TensorFlow 2.0 is expected to bring high-level APIs, robust model deployment, powerful experimentation for research, and a simplified API.

Easy model building with Keras
This release adopts Keras, a user-friendly API standard for machine learning, for building and training models. As Keras provides several model-building APIs (sequential, functional, and subclassing), users can choose the right level of abstraction for their project.

Eager execution and tf.function
TensorFlow 2.0 will also feature eager execution, which is useful for immediate iteration and debugging. tf.function will translate Python programs into TensorFlow graphs, keeping the performance optimizations while adding the flexibility of expressing programs in straightforward Python. Further, tf.data will be used for building scalable input pipelines.

Transfer learning with TensorFlow Hub
The TensorFlow team has made things much easier for those who do not want to build a model from scratch. Users will soon be able to use models from TensorFlow Hub, a library of reusable parts of machine learning models, to train a Keras or Estimator model.
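The "trace once, then reuse" behaviour that tf.function brings can be illustrated with a small pure-Python sketch. This is a conceptual stand-in only: `graph_function` and `trace_log` are made-up names, and real tracing compiles the Python body into a TensorFlow graph rather than merely caching the function.

```python
def graph_function(fn):
    """Toy sketch of tf.function's behaviour: the Python body is
    'traced' once per input signature, and later calls with the same
    signature reuse the cached result of that tracing."""
    cache = {}
    trace_log = []

    def wrapper(*args):
        sig = tuple(type(a).__name__ for a in args)
        if sig not in cache:
            trace_log.append(sig)   # tracing happens here, once per signature
            cache[sig] = fn         # a real tf.function would store a graph
        return cache[sig](*args)

    wrapper.trace_log = trace_log
    return wrapper

@graph_function
def square_sum(x, y):
    return x * x + y * y

square_sum(2, 3)      # traced for (int, int)
square_sum(4, 5)      # same signature, no new trace
square_sum(2.0, 3.0)  # new (float, float) signature, traced again
```

After the three calls, `trace_log` holds two entries, mirroring how tf.function retraces only when the input signature changes.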
API Cleanup
Many APIs are removed in this release, among them tf.app, tf.flags, and tf.logging. The main tf.* namespace is cleaned up by moving lesser-used functions into subpackages such as tf.math. A few APIs have been replaced with their 2.0 equivalents, like tf.keras.metrics, tf.summary, and tf.keras.optimizers. The v2 upgrade script can be used to apply these renames automatically.

Major Improvements
  • Queue runners will be removed in this release.
  • Graph collections will also be removed.
  • APIs will be renamed for better usability; for example, name_scope can be accessed as tf.name_scope or tf.keras.backend.name_scope.

To ease migration to TensorFlow 2.0, the TensorFlow team will provide a conversion tool that updates TensorFlow 1.x Python code to use TensorFlow 2.0-compatible APIs, flagging the cases where code cannot be converted automatically. In this release, stored GraphDefs and SavedModels remain backward compatible. Contributions to tf.contrib will no longer be accepted: some of the existing contrib modules will be integrated into the core project or moved to a separate repository, and the rest will be removed.

To know more, check out the post by the TensorFlow team on Medium.

Related reads:
  • Building your own Snapchat-like AR filter on Android using TensorFlow Lite [Tutorial]
  • Google expands its machine learning hardware portfolio with Cloud TPU Pods (alpha) to effectively train and deploy TensorFlow machine learning models on GCP
  • Google researchers introduce JAX: A TensorFlow-like framework for generating high-performance code from Python and NumPy machine learning programs
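As a rough illustration of the mechanical renaming such a conversion tool performs, here is a toy, regex-based sketch. The `upgrade_line` helper and the specific mappings are illustrative assumptions, not the actual tool (TensorFlow ships a far more sophisticated script, tf_upgrade_v2, which also flags code it cannot convert).

```python
import re

# Illustrative subset of v1 -> v2 renames; not the real tool's table.
RENAMES = {
    "tf.metrics": "tf.keras.metrics",
    "tf.losses": "tf.keras.losses",
}
# APIs removed outright in 2.0, per the release notes above.
REMOVED = {"tf.app", "tf.flags", "tf.logging"}

def upgrade_line(line):
    """Return (converted_line, warnings) for one line of 1.x code.

    Removed APIs cannot be renamed automatically, so they are reported
    as warnings instead, mimicking how an upgrade tool flags such code.
    """
    warnings = [api for api in sorted(REMOVED) if api + "." in line]
    for old, new in RENAMES.items():
        line = re.sub(re.escape(old) + r"\b", new, line)
    return line, warnings
```

For example, `upgrade_line("acc = tf.metrics.accuracy(y, p)")` rewrites the call to `tf.keras.metrics.accuracy`, while a line using `tf.logging` is left alone but flagged.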

FoundationDB open-sources FoundationDB Record Layer with schema management, indexing facilities and more

Natasha Mathur
15 Jan 2019
2 min read
The FoundationDB team yesterday announced the open-source release of the FoundationDB Record Layer, a relational database technology used by CloudKit. The Record Layer stores structured data in a similar fashion to a relational database, and comes with schema management, indexing facilities, and a rich set of query capabilities. It is used in production at Apple, where it supports apps and services.

Because the Record Layer is built on top of FoundationDB, it inherits FoundationDB's strong ACID (atomicity, consistency, isolation, durability) semantics and its performance in a distributed setting. The Record Layer also uses FoundationDB's transactional semantics, which lets it provide features similar to those found in a traditional relational database, but in a distributed setting. The layer was designed from the start to scale to millions of concurrent users, a diverse ecosystem of client applications, and varied query access patterns, and it can balance resource consumption across users in a predictable way. Together, the Record Layer and FoundationDB form the backbone of CloudKit, Apple's framework for moving data between apps and iCloud containers.

Other highlights of the Record Layer include:
  • support for transactional secondary indexing that takes full advantage of the Protocol Buffer data model.
  • a declarative query API for retrieving data, along with a query planner that turns those queries into concrete database operations.
  • a large number of deep extension points that help users build custom index maintainers and query planning features, allowing them to seamlessly “plug in” new index types.

For more information, check out the official FoundationDB announcement.
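The core idea of maintaining secondary indexes alongside records in a key-value store can be sketched in a few lines. This is a conceptual toy in Python, not the actual Record Layer (which is a Java library with a Protocol Buffer data model and real transactions); `TinyRecordStore` and its methods are invented names.

```python
class TinyRecordStore:
    """Toy record layer over a plain dict 'key-value store': every save
    updates both the record and its secondary index entries together,
    standing in for the transactional index maintenance described above."""

    def __init__(self):
        self.records = {}   # primary_key -> record (a dict of fields)
        self.index = {}     # (field, value) -> set of primary keys

    def save(self, pk, record):
        # Drop stale index entries for any previous version of the record.
        old = self.records.get(pk, {})
        for field, value in old.items():
            self.index.get((field, value), set()).discard(pk)
        # Write the record and its fresh index entries "atomically".
        self.records[pk] = record
        for field, value in record.items():
            self.index.setdefault((field, value), set()).add(pk)

    def query(self, field, value):
        """Answer a query from the secondary index, not by scanning."""
        return sorted(self.index.get((field, value), set()))
```

A query planner in the real system chooses between such index lookups and scans; here `query` always uses the index.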
Related reads:
  • FoundationDB open sources FoundationDB Document Layer with easy scaling, no sharding, consistency and more
  • FoundationDB 6.0.15 releases with multi-region support and seamless failover management
  • Introducing EuclidesDB, a multi-model machine learning feature database

Googlers for ending forced arbitration: a public awareness social media campaign for tech workers launches today

Natasha Mathur
15 Jan 2019
4 min read
There has been a running battle between Google and its employees for quite some time now. A group of Google employees announced yesterday that they are launching a public awareness social media campaign from 9 AM to 6 PM EST today. The group, called ‘Googlers for ending forced arbitration’, aims to educate people about the forced arbitration policy via Instagram and Twitter, where members will also share their own experiences with it.

https://twitter.com/endforcedarb/status/1084813222505410560

As part of its efforts, the group has consulted fellow tech employees, academic institutions, labour attorneys, and advocacy groups, and has researched the contracts of around 30 major tech companies. The group also published a Medium post yesterday, stating that “ending forced arbitration is the gateway change needed to transparently address inequity in the workplace”. According to the National Association of Consumer Advocates, “In forced arbitration, a company requires a consumer or employee to submit any dispute that may arise to binding arbitration as a condition of employment or buying a product or service. The employee or consumer is required to waive their right to sue, to participate in a class action lawsuit, or to appeal”.

https://twitter.com/ODwyerLaw/status/1084893776429178881

Demands for more transparency around Google’s sexual assault policies have become a bone of contention. For instance, shareholders, namely James Martin and two pension funds, sued Alphabet’s board members last week for protecting top executives accused of sexual harassment; the lawsuit seeks major changes to Google’s corporate governance and more clarity around Google’s policies. Similarly, Liz Fong-Jones, a developer advocate on Google Cloud Platform, revealed earlier this month that she plans to leave the firm over Google’s lack of leadership on the demands made by employees during the Google walkout.
Back in November 2018, over 20,000 Google employees, along with temps and contractors, organized the Google “walkout for real change” and walked out of their offices to protest the discrimination, racism, and sexual harassment encountered within Google. The employees made five demands as part of the walkout, including an end to forced arbitration for all employees (including temps) in cases of sexual harassment and other forms of discrimination.

Although Google announced in response to the walkout that it is ending its forced arbitration policy (a move soon followed by Facebook), Google employees are not convinced. They argue that the announcement made for strong headlines but did not actually do enough for employees. The employees said there were “no meaningful gains for worker equity … nor any actual change in employee contracts or future offer letters (as of this publication, we have confirmed Google is still sending out offer letters with the old arbitration policy)”.

Moreover, forced arbitration still applies at Google to other forms of workplace harassment and discrimination that are non-sexual in nature. Google has made arbitration optional only for individual cases of sexual assault involving full-time employees; it remains in place for class-action lawsuits and for the thousands of contractors who work for the company. Additionally, employee contracts in the US still carry the arbitration waiver. “Our leadership team responded to our five original demands with a handful of partial policy changes. The other ‘changes’ they announced simply re-stated our current, ineffective practices or introduced extraneous measures that are irrelevant to bringing equity to the workplace”, the group writes in its Medium post.

Follow the public awareness campaign on the group’s Instagram and Twitter accounts.
Related reads:
  • Recode Decode #GoogleWalkout interview shows why data and evidence don’t always lead to right decisions in even the world’s most data-driven company
  • Tech Workers Coalition volunteers talk unionization and solidarity in Silicon Valley
  • BuzzFeed Report: Google’s sexual misconduct policy “does not apply retroactively to claims already compelled to arbitration”

Ahead of EU's vote on new copyright rules, EFF releases 5 key principles to guide copyright policy

Sugandha Lahoti
15 Jan 2019
3 min read
The Electronic Frontier Foundation is taking part in Copyright Week under the motto, “Copyright should encourage more speech, not act as a legal cudgel to silence it.” According to the EFF, copyright law is shaped largely by the media and entertainment industries, with little input from other domains. The EFF has therefore teamed up with other organizations for Copyright Week to discuss five copyright issues that can form a set of principles for copyright law. Participating organizations this year include the Association of Research Libraries, Authors Alliance, Copyright for Creativity, DisCo, iFixit, R Street, Techdirt, and Wikimedia. Throughout the week they will publish blog posts and actions on these issues on their blogs and on Twitter.

EFF’s copyright issues for 2019:

  • Copyright as a Tool of Censorship: Freedom of expression is a fundamental human right essential to a functioning democracy. Copyright should encourage more speech, not act as a legal cudgel to silence it.
  • Device and Digital Ownership: As the things we buy increasingly exist either in digital form or as devices with software, we also find ourselves subject to onerous licensing agreements and technological restrictions. If you buy something, you should be able to truly own it, meaning you can learn how it works, repair it, remove unwanted features, or tinker with it to make it work in a new way.
  • Public Domain and Creativity: Copyright policy should encourage creativity, not hamper it. Excessive copyright terms inhibit our ability to comment, criticize, and rework our common culture.
  • Safe Harbors: Safe harbor protections allow online intermediaries to foster public discourse and creativity. Safe harbor status should be easy for intermediaries of all sizes to attain and maintain.
  • Filters: Whether as a result of corporate pressure or regulation, over-reliance on automated filters to patrol copyright infringement presents a danger to free expression on the Internet.

This month the EU is set to vote on new copyright rules, which have drawn major opposition from Europeans. Per the EFF, Articles 11 and 13, also known respectively as the “link tax” rule and the “censorship machines” rule, have the power to crush small European tech startups and expose half a billion Europeans to mass, unaccountable algorithmic censorship. Under Article 13, online platforms would be required to use algorithmic filters to unilaterally determine whether any content users upload, from social media posts to videos, infringes copyright; the law would penalize companies that allow a user to infringe copyright, but not companies that overblock and censor their users. The outcome would be censorship of massive proportions. The Directive is now in the hands of the European member states. The EFF urges people in Sweden, Germany, Luxembourg, and Poland to contact their ministers to convey their concerns about Articles 13 and 11.

Related reads:
  • Reddit takes stands against the EU copyright directives; greets EU redditors with ‘warning box’
  • GitHub updates developers and policymakers on EU copyright Directive at Brussels
  • What the EU Copyright Directive means for developers – and what you can do

AT&T and other telcos to suspend selling customer location data after Motherboard’s investigation, reports WSJ

Bhagyashree R
11 Jan 2019
4 min read
On Thursday, AT&T said in a statement to the WSJ that it will stop selling customers’ location data to third-party services, following a report published by Motherboard. Motherboard's investigation disclosed how location information sold by telecom companies can end up in the wrong hands, putting customers’ privacy and safety in danger.

What Motherboard’s investigation revealed

Motherboard’s investigation revealed that telecommunication companies like T-Mobile, AT&T, and Sprint have been selling users’ real-time location data that can ultimately reach the wrong hands, such as stalkers or criminals. The investigation showed that mobile networks, and the data they generate, are not as secure as we would like. Telecom companies in the US sell user location data to companies called location aggregators, which can in turn sell the information to their own clients and industries. This forms a complex supply chain that shares users’ most sensitive data, and in some cases the telecom companies at its origin may have no idea how the data is used by the eventual end user.

A similar scenario played out in May last year, when Sen. Ron Wyden revealed in a letter to the FCC that Securus, an indirect corporate customer of Verizon, had used Verizon customer location data to effectively allow officers to spy on millions of Americans. In reply, Verizon filed a letter saying it was ending its data-sharing agreements with LocationSmart and Zumigo. Following that, AT&T and Sprint also announced that they were cutting ties with location aggregators.

How the companies reacted

AT&T said in a statement, “In light of recent reports about the misuse of location services, we have decided to eliminate all location aggregation services — even those with clear consumer benefits.
We are immediately eliminating the remaining services and will be done in March.” John Legere, T-Mobile's chief executive, tweeted that his company will also completely end its location aggregator work in March. Sprint responded to the WSJ, “Protecting our customers’ privacy and security is a top priority, and we are transparent about that in our Privacy Policy. We do not knowingly share personally identifiable geolocation information except with customer consent or in response to a lawful request such as a validated court order from law enforcement.”

A victory for privacy advocates

Sen. Ron Wyden believes that telecom companies cannot simply put the blame on third-party companies, and that strong legislation is needed to ensure our data stays secure. He said, “Congress needs to pass strong legislation to protect Americans’ privacy and finally hold corporations accountable when they put your safety at risk by letting stalkers and criminals track your phone on the dark web.” Sen. Kamala D. Harris called for an immediate investigation of the case by the Federal Communications Commission (FCC). She also sees the practice as a major threat to user security: “I’m extraordinarily troubled by reports of this system of repackaging and reselling location data to unregulated third party services for potentially nefarious purposes. If true, this practice represents a legitimate threat to our personal and national security.” One user in a MetaFilter discussion said, “The greatest thing that U.S. law needs in the age of privacy concerns is a broadening of the category referred to in the law as "innkeepers and common carriers," which is basically an olde-tyme recognition that there are sorts of private businesses that are central enough to everyday life and serve such a purpose as to need to be held to a higher duty of care than others.
Cell phones, social media, and bank records all should fall under this duty of care in any sensible modern society.”

Related reads:
  • FCC grants Google the approval for deploying Soli, their radar-based hand motion sensor
  • Mozilla v. FCC: Mozilla challenges FCC’s elimination of net neutrality protection rules
  • Spammy bots most likely influenced FCC’s decision on net neutrality repeal, says a new Stanford study

Updates on scikit-learn future, scikit-learn 0.20 - talk by Andreas Mueller

Prasad Ramesh
11 Jan 2019
4 min read
Recently, Andreas Mueller gave a talk on the changes in scikit-learn 0.20 and future releases. He is an Associate Research Scientist at the Data Science Institute, Columbia University, New York. Dr. Mueller is also a core developer of the scikit-learn library. scikit-learn is a popular machine learning library for the Python programming language. In scikit-learn, data is represented as a 2D NumPy array where a row is a sample and a column is a feature of your dataset. scikit-learn 0.20 has already been released. Here are the highlights from the talk about the future of scikit-learn.

Preprocessing changes

scikit-learn 0.20 aims to simplify things for users, especially preprocessing.

OneHotEncoder rewritten to support strings

Previously, the OneHotEncoder in scikit-learn only supported integers, so categorical string variables had to be encoded as integers before use.

ColumnTransformer

Another feature to help with preprocessing is the ColumnTransformer. It is similar to something that previously existed called a feature union. The ColumnTransformer allows developers to apply different transformations or different preprocessing steps to different columns in a columnar dataset. The make_column_transformer helper builds one using the column names.

PowerTransformer

The basic idea of the PowerTransformer is to apply a power transformation to the data, with the goal of making the data more normally distributed.

Treatment of missing values

Scalers like StandardScaler, MinMaxScaler, and RobustScaler now allow missing values in the data, so you can apply the scikit-learn scalers before filling in or imputing missing values. During fitting, they all ignore the missing values. Imputer is now SimpleImputer, a simplified version to which some more complex model-based imputation strategies will be added later. A MissingIndicator transformer has been added, which records which values were imputed; the fact that a value was missing can itself tell you something about the data point.
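The rewritten OneHotEncoder and the new ColumnTransformer described above can be sketched together as follows. This is a toy example, not taken from the talk, and note that the make_column_transformer argument order shown is the current (transformer, columns) form, which was flipped after the 0.20 release:

```python
import pandas as pd
from sklearn.compose import make_column_transformer
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Toy frame with one numeric and one string-valued categorical column
df = pd.DataFrame({
    "city": ["London", "Paris", "London"],
    "age": [25.0, 32.0, 47.0],
})

# Scale the numeric column, one-hot encode the string column --
# no manual integer encoding needed anymore
ct = make_column_transformer(
    (StandardScaler(), ["age"]),
    (OneHotEncoder(), ["city"]),
)
X = ct.fit_transform(df)
print(X.shape)  # 1 scaled column + 2 one-hot columns -> (3, 3)
```

Each transformer only sees the columns it was given, and the results are stacked side by side.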
TransformedTargetRegressor

With this, you can transform the target before building the model and invert the transformation after prediction. In terms of absolute error, this can be much better than not using a target transformation, and the systematic skew in the predictions disappears or is smaller than before.

OpenML dataset loader

This replaces the mldata loader, which was no longer maintained. OpenML lets you create tasks on a dataset along with uploading data; you can also upload the results of a problem.

Loky - a robust and reusable executor

joblib has been upgraded and now includes a new tool called Loky. This is an alternative to multiprocessing.pool.Pool and concurrent.futures.ProcessPoolExecutor. The replacement was necessary as the old tools were not very robust. It has a deadlock-free implementation and consistent spawn behavior. It also fixes the random crashes that previously happened with BLAS/OpenMP libraries.

Global config for scikit-learn

A global configuration now exists for scikit-learn, which you can use with sklearn.config_context or sklearn.set_config, either to set a global state or to use a context manager. It supports two options: increasing speed or reducing memory consumption. Setting assume_finite to True skips the check that an input is valid, which saves time on large datasets. Setting working_memory limits RAM usage; this currently works in distance computation and nearest neighbor computation. The options can be used like this: set_config(assume_finite=None, working_memory=None)

Early stopping for gradient boosting

You can stop building the model based on the tolerance and number of iterations you set. For example, the model will stop if for the last five iterations there was no improvement beyond 0.01%. There is something similar for stochastic gradient descent too.

Other changes

A glossary has been added that explains all the terms used in scikit-learn, to make the library more welcoming for new users.
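The gradient boosting early stopping described above can be sketched like this, using synthetic data; the parameter values are illustrative, not from the talk:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=500, random_state=0)

# Stop adding trees once 5 consecutive iterations fail to improve the
# held-out validation score by more than tol (10% of the data is held out)
clf = GradientBoostingClassifier(
    n_estimators=1000,
    n_iter_no_change=5,
    tol=0.01,
    validation_fraction=0.1,
    random_state=0,
)
clf.fit(X, y)
print(clf.n_estimators_)  # far fewer than 1000 thanks to early stopping
```

The fitted attribute n_estimators_ reports how many trees were actually built before the stopping criterion fired.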
There are also better default parameters, since it was found that most people use algorithms with their default parameters. The following changes will be made in future scikit-learn releases; until then you will receive warnings:

For random forests, the number of estimators will change from 10 to 100 (in version 0.22)
Cross validation will be 5-fold instead of 3-fold (in version 0.22)
In grid search, iid will be set to False (in version 0.22) and then removed (in version 0.24)

For LogisticRegression, the following defaults will change in scikit-learn 0.22:

solver=’lbfgs’ from ‘liblinear’
multi_class=’auto’ from ‘ovr’

You can avoid the warnings in your code by setting the parameters yourself explicitly. Python 2.7 and 3.4 support will be dropped in scikit-learn 0.21. If you want to see examples of using the new features and some other useful tips by Dr. Mueller, watch the talk on YouTube.

Scikit Learn 0.20.0 is here!
Machine Learning in IPython with scikit-learn
Why you should learn Scikit-learn
Researchers build a deep neural network that can detect and classify arrhythmias with cardiologist-level accuracy

Bhagyashree R
11 Jan 2019
2 min read
A group of researchers from Stanford University and the University of California, together with iRhythm Technologies Inc. and the Veterans Affairs Palo Alto Health Care System, have built a model that can help in the diagnosis of irregular heart rhythms, also called arrhythmias. On Monday, the researchers shared their findings in a paper published on Springer Nature: Cardiologist-level arrhythmia detection and classification in ambulatory electrocardiograms using a deep neural network. Detecting arrhythmias is a fairly easy task for an expert technician or a cardiologist but is known to be quite challenging for computers. With the help of widely available ECG data and deep learning, this study aimed to improve the accuracy and scalability of automated ECG analysis. For this study, the researchers built a 34-layer deep neural network (DNN) and trained it to detect arrhythmias in arbitrary-length ECG time series. The model was trained on 91,232 single-lead ECGs from 53,549 patients who used a single-lead ambulatory ECG monitoring device. The network learned to classify noise and the sinus rhythm. Additionally, it also learned to classify and segment twelve arrhythmia types present in the time series. For testing the model, the researchers used an independent test dataset annotated by a consensus committee of board-certified practicing cardiologists. The test dataset used in this study is publicly available in iRhythm Technologies’ GitHub repository. The model did well, achieving an average area under the receiver operating characteristic curve (ROC) of 0.97. Another measure of accuracy was F1, the harmonic mean of the positive predictive value (precision) and sensitivity (recall). The F1 score of the DNN (0.837) exceeded that of average cardiologists (0.780).
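As a small illustration of the F1 metric mentioned above (the precision and recall values here are made up, not taken from the paper):

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of positive predictive value (precision)
    and sensitivity (recall)."""
    return 2 * precision * recall / (precision + recall)

# Illustrative values only, not from the study
print(round(f1_score(0.90, 0.78), 3))  # 0.836
```

Because it is a harmonic mean, F1 is dragged down by whichever of the two quantities is lower, which is why it is preferred over a simple average for imbalanced diagnostic tasks.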
Researchers introduce a CNN-based system for identifying radioresistant cancer-causing cells
Stanford researchers introduce DeepSolar, a deep learning framework that mapped every solar panel in the US
Our healthcare data is not private anymore: Study reveals that machine learning can be used to re-identify individuals from physical activity data
Shareholders sue Alphabet’s board members for protecting senior execs accused of sexual harassment

Natasha Mathur
11 Jan 2019
5 min read
Alphabet shareholder James Martin filed a lawsuit yesterday against Alphabet’s board of directors, Larry Page, Sergey Brin, and Eric Schmidt for covering up the sexual harassment allegations against former top executives at Google and for paying them large severance packages. As mentioned in the lawsuit, Martin sued the company for breaching its fiduciary duty to shareholders, unjust enrichment, abuse of power, and corporate waste. “The individual defendants breached their duty of loyalty and good faith by allowing the defendants to cause, or by themselves causing, the company to cover up Google’s executives’ sexual harassment, and caused Google to incur substantial damage”, reads the lawsuit. The lawsuit, filed at the San Mateo County court, San Francisco, seeks major changes to Google’s corporate governance. It calls for non-management shareholders to nominate three new candidates for election to the board, and for the elimination of the current dual-class stock structure, which in turn would take the majority of the voting share away from Page and Brin. It wants the former Google executives to repay the severance packages, benefits, and other compensation that they received from Google. Additionally, it asks the Alphabet directors to pay punitive damages for the harm caused to Alphabet by their engagement in corporate waste. Apart from the lawsuit filed by Martin, Alphabet’s board was hit with another lawsuit this week on behalf of two additional pension funds, the Northern California Pipe Trades Pension Plan and the Teamsters Local 272 Labor Management Pension Fund, which own Alphabet stock. That lawsuit makes allegations similar to Martin’s, accusing Alphabet’s board members of ‘breaching their fiduciary duties by rewarding male harassers’ and ‘hiding the Google+ breach from the public’.
The news of Google paying its top execs outsized exit packages first came to light back in October 2018, when the New York Times shared its investigation into sexual misconduct at Google. It alleged that Google had protected Andy Rubin, creator of Android, and Amit Singhal, ex-senior VP of Google search, among other senior execs, after they were accused of sexual misconduct. Google reportedly paid Rubin a $90 million exit package along with a well-respected farewell. Similarly, Singhal was asked to resign in 2016 after accusations surfaced that he had groped a female employee at an offsite event in 2005. As per the NY Times report, Singhal received an exit package that paid him millions. However, both Rubin and Singhal denied the accusations. In response to Google’s handling of sexual misconduct, over 20,000 Google employees, along with vendors and contractors, organized the Google “walkout for real change” and walked out of their offices back in November 2018 to protest the discrimination, racism, and sexual harassment encountered within Google. The employees laid out five demands as part of the Google walkout, including an end to forced arbitration for employees in cases of discrimination and sexual harassment. In response to the walkout, Google eliminated its forced arbitration policy in cases of sexual harassment, a step soon followed by Facebook, which also eliminated its forced arbitration policy. Sundar Pichai, CEO of Google, wrote a note in which he admitted that he’s ‘sincerely sorry’ and hopes to bring more transparency around sexual misconduct allegations. The ‘Google walkout for real change’ Medium page responded to the lawsuit today, stating that they agree with the shareholders and that “anyone who enables abuse, harassment and discrimination must be held accountable, and those with the most power have the most to account for”.
The response also states that currently a small group of “mostly white” male executives makes decisions at Google that significantly impact workers and the world with “little accountability”. “We have all the evidence we need that Google’s leadership does not have our best interests at heart. We need to change the way the system works, above and beyond addressing the wrongs of those who work within the system,” reads the post. The lawsuit filed by Martin partly relies on non-public evidence, i.e. records of Alphabet’s board meetings in 2014 (concerning Rubin) and 2016 (concerning Singhal) that show the board members discussing severance packages for Rubin and Singhal. However, this part was heavily redacted from the public version at Google’s request. Both meetings, the full board meeting along with the leadership development and compensation committee meeting, are covered in the evidence showing the payments approved for Rubin. The lawsuit states that Google directors agreed to pay Rubin because they wanted to ‘ensure his silence’: Google feared that if it fired him for cause, he would publicly reveal the details of sexual harassment and other wrongdoings within Google. Moreover, Google also asked the victims of sexual harassment to keep quiet once it found that the sexual assault allegations were credible. “When Google covers up harassment and passes the trash, it contributes to an environment where people don’t feel safe reporting misconduct. They suspect that nothing will happen or, worse, that the men will be paid and the woman will be pushed aside”, quotes the lawsuit. For more coverage, check out the full suit filed by Martin and the two pension funds.
Recode Decode #GoogleWalkout interview shows why data and evidence don’t always lead to right decisions in even the world’s most data-driven company
BuzzFeed Report: Google’s sexual misconduct policy “does not apply retroactively to claims already compelled to arbitration”
Richard DeVaul, Alphabet executive, resigns after being accused of sexual harassment
An AI startup now wants to monitor your kids’ activities to help them grow ‘securly’

Natasha Mathur
10 Jan 2019
3 min read
AI is everywhere, and now it’s helping monitor kids’ activities to maintain safety across different schools and prevent school shooting incidents. An AI startup called Securly, co-founded by Vinay Mahadik and Bharath Madhusudan, focuses on student safety with the help of features such as web filtering, cyberbullying monitoring, and self-harm alerts. Its cloud-based web filters maintain an age-appropriate internet, monitor bullying, and ensure that schools remain CIPA-compliant. Another Securly feature called ‘auditor’ uses Google’s Gmail service to send alerts when a risk of bullying or self-harm is detected. There’s also a tipline feature that lets students send anonymous tips over phone, text, or email. The machine learning algorithms used by Securly are trained by safety specialists on safe and unsafe content. Once the algorithms flag any content as disturbing, the 24×7 student safety experts evaluate the further context behind the activity and reach out to schools and authorities as needed. Securly raised $16m in a Series B round of funding led by Defy Partners last month, bringing its total funding to $24 million. The company now wants to use these funds to further its research and development in K-12 safety. Moreover, Mahadik is also focusing on technologies that can be used across schools without hampering kids’ privacy. He told Forbes, ”You could say show me something that happened on the playground where a bunch of kids punched or kicked a certain kid. If you can avoid personally identifying kids and handle the data responsibly, some tech like this could be beneficial”. Securly currently has over 2,000 paid school districts using its free Chromebook filtering and email auditing services. However, public reaction to the news isn’t entirely positive, as many people are criticizing the startup for shifting the focus from the real issue (i.e.
providing kids with much-needed counseling and psychological help, implementing family counseling programs, etc.) and instead promoting tracking of every kid’s move to make sure they never falter. Securly is not the only surveillance service that has received heavy criticism. Predictim, an online service that uses AI to analyze the risks associated with a babysitter, also came under the spotlight over concerns about its biased algorithms and for violating the privacy of babysitters.

https://twitter.com/ashedryden/status/1083084280736202752
https://twitter.com/ashedryden/status/1083087232897028096
https://twitter.com/jennifershehane/status/1083100079123124224
https://twitter.com/dmakogon/status/1083092624410660865

Babysitters now must pass Predictim’s AI assessment to be “perfect” to get the job
Center for the governance of AI releases report on American attitudes and opinions on AI
Troll Patrol Report: Amnesty International and Element AI use machine learning to understand online abuse against women
AWS introduces Amazon DocumentDB featuring compatibility with MongoDB, scalability and much more

Amrata Joshi
10 Jan 2019
4 min read
Today, Amazon Web Services (AWS) introduced Amazon DocumentDB, a MongoDB-compatible database designed to provide the performance, scalability, and availability needed to operate mission-critical MongoDB workloads. Customers use MongoDB as a document database to store, retrieve, and manage semi-structured data. But it is difficult to build performant, highly available applications that can quickly scale to multiple terabytes and thousands of reads and writes per second, because of the complexity that comes with setting up MongoDB clusters at scale.

https://twitter.com/nathankpeck/status/1083144657591255043

Amazon DocumentDB uses a fault-tolerant, distributed, self-healing storage system that auto-scales up to 64 TB per database cluster. With the AWS Database Migration Service (DMS), users can migrate their MongoDB databases, whether on-premises or on Amazon EC2, to Amazon DocumentDB for free (for six months) with no downtime.

Features of Amazon DocumentDB

Compatibility

Amazon DocumentDB is compatible with version 3.6 of MongoDB and implements the Apache 2.0 open source MongoDB 3.6 API. It does this by emulating the responses that a MongoDB client expects from a MongoDB server, allowing users to use their existing MongoDB drivers and tools with Amazon DocumentDB.

Scalability

Storage in Amazon DocumentDB can be scaled from 10 GB up to 64 TB in increments of 10 GB. With this document database service, users don’t have to preallocate storage or monitor free space. Users can choose between six instance sizes (15.25 GiB (Gibibyte) to 488 GiB of memory) and create up to 15 read replicas. Storage and compute are decoupled, so each can be scaled independently as needed.

Performance

Amazon DocumentDB stores database changes as a log stream, which allows users to process millions of reads per second with millisecond latency.
This storage model improves performance without compromising data durability and further enhances overall scalability.

Reliability

Amazon DocumentDB’s 6-way storage replication provides high availability. It can fail over from the primary to a replica within 30 seconds, and it supports MongoDB replica set emulation so that applications can quickly handle system failure.

Fully Managed

Amazon DocumentDB is fully managed, with fault detection, built-in monitoring, and failover. Users can set up daily snapshot backups, take manual snapshots, or use either one to create a fresh cluster if necessary. It integrates with Amazon CloudWatch, so users can monitor over 20 key operational metrics for their database instances via the AWS Management Console.

Secure

Users can encrypt their active data, snapshots, and replicas with a KMS (Key Management Service) key when creating Amazon DocumentDB clusters. Authentication is enabled by default, and the service uses network isolation via Amazon VPC for database security.

According to Infoworld, the news has given rise to some speculation, as AWS isn’t promising that its managed service will work with all applications that use MongoDB. The move has also set up a new rivalry. MongoDB CEO and president Dev Ittycheria told Techcrunch, “Imitation is the sincerest form of flattery, so it’s not surprising that Amazon would try to capitalize on the popularity and momentum of MongoDB’s document model. However, developers are technically savvy enough to distinguish between the real thing and poor imitation. MongoDB will continue to outperform any impersonations in the market.” As reported by Geekwire and Techcrunch, Amazon DocumentDB’s compatibility with MongoDB is unlikely to require commercial licensing from MongoDB.

https://twitter.com/tomkrazit/status/1083165858891915264

To know more about Amazon DocumentDB, check out Amazon DocumentDB.
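Because DocumentDB emulates the MongoDB 3.6 wire protocol, a standard MongoDB driver can connect to it. The following is a hedged sketch using pymongo; the cluster hostname, credentials, and database/collection names are placeholders, and the CA bundle file is the certificate bundle Amazon distributes for its clusters:

```python
from pymongo import MongoClient

# Placeholder endpoint and credentials -- substitute your own cluster's
client = MongoClient(
    "mongodb://user:password@mycluster.node.us-east-1.docdb.amazonaws.com:27017",
    tls=True,
    tlsCAFile="rds-combined-ca-bundle.pem",  # Amazon-provided CA bundle
    replicaSet="rs0",
    readPreference="secondaryPreferred",
)

db = client["appdb"]
db.items.insert_one({"sku": "abc-123", "qty": 5})  # ordinary MongoDB calls
```

The application code itself is unchanged MongoDB driver usage; only the connection settings differ from a self-hosted cluster.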
US government privately advised by top Amazon executive on web portal worth billions to the Amazon; The Guardian reports
Amazon Rekognition faces more scrutiny from Democrats and German antitrust probe
Amazon re:Invent Day 3: Lambda Layers, Lambda Runtime API and other exciting announcements!
Center for the governance of AI releases report on American attitudes and opinions on AI

Natasha Mathur
10 Jan 2019
7 min read
Center for the Governance of AI, housed within the University of Oxford, released a report yesterday titled “Artificial Intelligence: American Attitudes and Trends”, based on the findings of a nationally representative survey it conducted using the survey firm YouGov. The report discusses the survey results, which provide insight into the American public’s attitudes and opinions toward AI as well as AI governance. Let’s have a look at some of the major highlights from the report.

Key highlights

More Americans support than oppose AI development

The report states that Americans express mixed support for AI development, with more of them supporting development than opposing it. The survey results showed that a substantial 41% of the American respondents strongly support the development of AI, while a smaller minority (22%) strongly oppose it. About 28% of the respondents expressed a neutral attitude towards AI development, and 10% stated that they do not know.

Artificial Intelligence: American Attitudes and Trends

Support for AI development varies by gender, race, experience, and education

The report states that support for AI development is greater among Americans who are wealthy, educated, male, or experienced with technology. The Center for the Governance of AI performed a multiple linear regression to predict this support. As per the survey results, a majority of respondents in the following four subgroups expressed support for AI development: those with four-year college degrees (57%), those with an annual household income above $100,000 (59%), those who graduated with a computer science or engineering degree (56%), and those with experience in computer science or programming (58%).
On the other hand, women (35%), those with a high school degree or less (29%), and those with an annual household income below $30,000 (33%) showed less enthusiasm towards AI development.

A large majority of Americans want more careful management of AI and robots

The report states that a majority of the American public (more than eight in ten) want AI and robots to be carefully managed, while only 6% disagree. The Center for the Governance of AI replicated a question from the 2017 Special Eurobarometer to compare Americans’ attitudes with those of EU residents. They found that 82% of those in the U.S. want more careful management of robots and AI, not far from the EU average, where 88% of the public supports the same notion. Similarly, the 6% of Americans who do not support the notion is quite close to the EU average, where 7% disagreed. The report notes that a large percentage of respondents in the survey selected the “don’t know” option.

Americans consider many AI governance challenges to be important

The report states that a majority of Americans consider AI governance challenges, such as prioritizing data privacy and preventing AI-enhanced digital manipulation, to be of high importance. Respondents in the survey were asked to consider five AI governance challenges randomly drawn from a list of 13. As per the survey results, the AI governance challenges that Americans think are most impactful and most important for tech companies to tackle include data privacy, AI-enhanced cyber attacks, and surveillance.

Artificial Intelligence: American Attitudes and Trends

On the other hand, the challenges considered on average 7% less likely to be impactful by Americans include autonomous vehicles, value alignment, bias in using AI for hiring, the U.S.-China arms race, disease diagnosis, and technological unemployment.
Finally, the challenges perceived as even less likely to be impactful include criminal justice bias and critical AI systems failures.

Artificial Intelligence: American Attitudes and Trends

Americans see the potential for U.S.-China cooperation on certain AI governance challenges

As part of the survey, each American respondent was assigned three out of five AI governance challenges on which to rate the likelihood of U.S.-China cooperation. The five challenges were: AI cyber attacks against governments, individuals, and organizations; AI-assisted surveillance that violates privacy and civil liberties; AI systems that are safe, trustworthy, and aligned with human values; banning lethal autonomous weapons; and guaranteeing a good standard of living for people at risk of losing their jobs to automation. The survey results showed that U.S.-China cooperation on value alignment is perceived to be the most likely (48% mean likelihood) and cooperation to prevent AI-assisted surveillance the least likely (40% mean likelihood). “In the future, we plan to survey Chinese respondents to understand how they view U.S.-China cooperation on AI and what governance issues they think the two countries could collaborate on”, states the report.

Americans don’t think labor market disruptions will increase with time

As part of the survey, respondents were each assigned one of four conditions concerning the likelihood of AI and automation creating more jobs than they eliminate over the future time frames of 10 years, 20 years, and 50 years. The survey results showed that, on average, the American public disagrees with the statement “automation and AI will create more jobs than they will eliminate” more than they agree with it. About a quarter of respondents gave “don’t know” responses. However, respondents’ agreement with the statement increased slightly with the length of the future time frame.
Artificial Intelligence: American Attitudes and Trends

Americans trust the U.S. military, universities, tech firms, and non-governmental organizations the most to build AI

The report states that Americans put more trust in tech companies and non-governmental organizations than in governments for the development and use of AI. As part of the survey, respondents were randomly assigned five actors out of 15, including some not well known to the public such as NATO, CERN, and OpenAI, and were asked how much confidence they have in each of these actors to build AI; they were then randomly assigned another five out of 15 actors. As per the results of the survey, Americans consider university researchers and the U.S. military the most trusted groups to develop AI: half of Americans responded with a “great deal” or “fair amount” of confidence in these groups. Americans expressed slightly less confidence in tech companies, non-profit organizations, and American intelligence organizations. In general, the American public has more confidence in non-governmental organizations than in governmental ones.

Artificial Intelligence: American Attitudes and Trends

41% of the American population expressed a “great deal” or a “fair amount” of confidence in “tech companies,” compared with the 26% who feel that way about the U.S. federal government. The American public has more trust in intergovernmental research organizations (e.g., CERN), the Partnership on AI, and non-governmental scientific organizations (e.g., AAAI). Moreover, about one in five respondents selected a “don’t know” response. The surveys were conducted between June 6 and 14, 2018, and a total of 2,000 American adults (18+) completed them. The analysis of the survey was pre-registered on the Open Science Framework.
“Supported by a grant from the Ethics and Governance of AI Fund, we intend to conduct more extensive and intensive surveys in the coming years, including of residents in Europe, China, and other countries”, states the report.

AI Now Institute releases Current State of AI 2018 Report
IEEE Computer Society predicts top ten tech trends for 2019: assisted transportation, chatbots, and deep learning accelerators among others
Troll Patrol Report: Amnesty International and Element AI use machine learning to understand online abuse against women
Facebook app is undeletable on Samsung phones and can possibly track your movements, reports Bloomberg

Sugandha Lahoti
10 Jan 2019
2 min read
2018 was a rough year for Facebook in terms of the privacy lawsuits and data-stealing allegations surrounding the company. As #DeleteFacebook made the rounds on Twitter in the later months of last year, there is more news to add fuel to the fire. According to a report by Bloomberg, some Samsung phone users are unable to delete the Facebook app from their smartphones. Apparently, Nick Winke, a photographer in the Pacific Northwest, tried to delete the Facebook app from his Samsung Galaxy S8 and soon found out it was undeletable: he found only an option to "disable," and he wasn’t sure what that meant. This is alarming, because if an application is a permanent feature of a user’s device, can it track the user’s digital actions? It has also raised concerns about whether Samsung is monetizing its hardware beyond margins through data exploitation by partnering with Facebook. After the news broke, a lot of people expressed their concerns on social media platforms.

https://twitter.com/riptari/status/1082926077348069377
https://twitter.com/TomResau/status/1083067919746117638
https://twitter.com/PressXtoJason_/status/1082981989966401544

A Twitter user also expressed concerns over buying a Samsung smartphone.

https://twitter.com/APirateMonk/status/1083016272680386560

François Chollet, the author of Keras, has termed Facebook “Phillip Morris combined with Lockheed Martin, but bigger.”

https://twitter.com/fchollet/status/1083034900020658176

A Facebook spokesperson told Bloomberg that the disabled app doesn’t collect data or send information back to Facebook. They specified that whether an app is deletable depends on various pre-install deals Facebook has made with phone manufacturers, operating systems, and mobile operators. However, they declined to specify exactly how many such pre-install deals Facebook has globally.
Samsung also told Bloomberg that it has pre-installed the Facebook app on “selected models” with the option to disable it, specifying that a disabled app is no longer running.

ProPublica shares learnings of its Facebook Political Ad Collector project
NYT says Facebook has been disclosing personal data to Amazon, Microsoft, Apple and other tech giants; Facebook denies claims with obfuscating press release
British parliament publishes confidential Facebook documents that underscore the growth at any cost culture at Facebook