Tech News - Data

1208 Articles

French data regulator, CNIL imposes a fine of 50M euros against Google for failing to comply with GDPR

Sugandha Lahoti
22 Jan 2019
3 min read
The French data regulator, the National Data Protection Commission (CNIL), has imposed a financial penalty of 50M euros on Google for failing to comply with GDPR. After a thorough analysis, CNIL observed that the information Google provides is not easily accessible for users, nor is it always clear or comprehensive.

CNIL started this investigation after receiving complaints from None Of Your Business and La Quadrature du Net, who complained about Google "not having a valid legal basis to process the personal data of the users of its services, particularly for ads personalization purposes."

https://twitter.com/laquadrature/status/1087406112582914050
https://twitter.com/NOYBeu/status/1087458762359824385

Following its own investigation into the complaints, CNIL also found Google guilty of not validly obtaining user consent for ad-personalization purposes. Per the committee, Google makes it hard for people to understand how their data is being used by relying on broad and obscure wording. For example, CNIL says, "in the section 'Ads Personalization', it is not possible to be aware of the plurality of services, websites and applications involved in these processing operations (Google search, Youtube, Google home, Google maps, Play store, Google pictures…) and therefore of the amount of data processed and combined."

Google also violates GDPR rules when new Android users set up a phone and follow Android's onboarding process. The committee found that when an account is created, the user can modify some options associated with the account by clicking on 'More options'. However, the ads-personalization option is pre-ticked, which violates GDPR's requirement that consent be unambiguous. Furthermore, GDPR states that consent is "specific" only if it is given distinctly for each purpose.

Google violates this as well: before creating an account, the user is asked to tick the boxes "I agree to Google's Terms of Service" and "I agree to the processing of my information as described above and further explained in the Privacy Policy" in order to create the account. The user therefore gives his or her consent in full, for all the processing operations carried out by Google.

Netizens feel that 50M euros is far too little for a massive organization like Google to pay as a fine. However, a Hacker News user countered: "Google or any other company does not get to just continue their practices, as usual, the fine is pure 'punishment' for the bad behavior in the past. Google would gladly pay them if it meant they could continue their anti-competitive practices, it would just be a cost of doing business. But that's not the point of them. The real teeth are in the changes they will be forced to make." Twitteratis were also in support of CNIL.

https://twitter.com/AlexT_KN/status/1087466073161641984
https://twitter.com/mcfslaw/status/1087552151377797120
https://twitter.com/chesterj1/status/1087387249178750983
https://twitter.com/carlboutet/status/1087471877143085056

A Google spokesperson gave TechCrunch the following statement: "People expect high standards of transparency and control from us. We're deeply committed to meeting those expectations and the consent requirements of the GDPR. We're studying the decision to determine our next steps."

  • Googlers launch industry-wide awareness campaign to fight against forced arbitration
  • EU slaps Google with $5 billion fine for the Android antitrust case
  • Google+ affected by another bug, 52M users compromised, shut down within 90 days


IBM, Oracle under the scanner again for questionable hiring and firing policies

Melisha Dsouza
21 Jan 2019
5 min read
The Guardian has reported that Oracle is under the scanner for pay discrimination between male and female employees. On the very same day, The Register reported that an affidavit has been filed against IBM for hiding from the Department of Labor the ages of employees being laid off from the company.

Pay discrimination at Oracle

"Women are getting paid less across the board. These are some of the strongest statistics I've ever seen – amazingly powerful numbers." – Jim Finberg, attorney for the plaintiffs

On 18th January, a motion was filed against Oracle in California alleging that the company's female employees were paid, on average, $13,000 less per year than men doing similar work, The Guardian reports. More than 4,200 women will be represented in the motion after an analysis of payroll data found that women made 3.8% less in base salaries on average, 13.2% less in bonuses, and 33.1% less in stock value than male employees. The analysis also found that the disparities persist even for women and men with the same tenure and performance-review score in the same job categories.

The complaint outlines several instances in which Oracle's female plaintiffs noticed the discrepancies in payment, often by chance. One plaintiff saw a pay stub of a male employee that drew her attention to the wage gap between them, especially since she was that employee's trainer.

This is not the first time Oracle has been involved in a case like this. The Guardian reports that in 2017 the US Department of Labor (DoL) filed a suit against Oracle alleging that the firm had a "systemic practice" of paying white male workers more than their counterparts with the same job titles, resulting in pay discrimination against women and black and Asian employees.

Oracle dismissed these allegations as "without merit", stating that its pay decisions were "non-discriminatory and made based on legitimate business factors including experience and merit". Jim Finberg, the attorney for this suit, said that none of the named plaintiffs works at Oracle any more; some left due to their frustration over discriminatory pay. The suit also alleges that the disparities arose because Oracle used the prior salaries of new hires to determine their compensation at the company, and claims that Oracle was aware of its discriminatory pay and "had failed to close the gap even after the US government alleged specific problems."

The IBM layoffs

Along similar lines, a former senior executive at IBM alleges, in an affidavit filed on Thursday in the Southern District of New York, that her superiors directed her to hide from the US Department of Labor information about the older staff being laid off by the company. Catherine Rodgers, formerly IBM's vice president in its Global Engagement Office, was terminated after nearly four decades with IBM. The Register reports that Rodgers believes she was fired for raising concerns that IBM was engaged in systematic age discrimination against employees over the age of 40. IBM has previously been involved in controversies over laying off older workers, notably after the ProPublica report of March 2018 that highlighted this practice.

Rodgers, who served as VP in IBM's global engagement office and senior state executive for Nevada, had access to the list of people to be laid off in her group. She noticed several unsettling facts:

1. All of the employees to be laid off from her group were over the age of 50.
2. In April 2017, two employees over age 50 who had been included in the layoff filed a request for financial assistance from the Department of Labor under the Trade Assistance Act. The DoL sent over a form asking Rodgers to list all of the employees within her group who had been laid off in the last three years, along with their ages. The list was reviewed with IBM HR, and Rodgers alleges she was "directed to delete all but one name before I submitted the form to the Department of Labor."
3. IBM began insisting that older staff come into the office daily.
4. Older workers were more likely to face relocation to new locations across the US.

Rodgers says that after she began raising questions she received her first ever negative performance review, despite meeting all her targets for the year, and her workload increased without a pay rise.

The plaintiffs' memorandum accompanying the affidavit asks the court to authorize notifying former IBM employees around the US who are over 40 and lost their jobs since 2017 that they can join the legal proceedings against the company.

Should these allegations prove true, it is troubling to see such big names in the tech industry displaying poor leadership morals. The outcome of these lawsuits will significantly influence the decisions other companies take on employee welfare in the coming years.

  • IBM launches Industry's first 'Cybersecurity Operations Center on Wheels' for on-demand cybersecurity support
  • IBM acquired Red Hat for $34 billion making it the biggest open-source acquisition ever
  • Pwn2Own Vancouver 2019: Targets include Tesla Model 3, Oracle, Google, Apple, Microsoft, and more!


FTC officials plan to impose a fine of over $22.5 million on Facebook for privacy violations, Washington Post reports

Amrata Joshi
21 Jan 2019
2 min read
According to a report by the Washington Post last week, Federal Trade Commission (FTC) officials are planning to impose a fine of over $22.5 million on Facebook after a year of data breaches and revelations of illegal data sharing. Per the FTC, Facebook may have violated a legally binding agreement with the government to protect the privacy of users' personal data.

Lately, Facebook has been in the news for its data breaches and issues related to its users' data, as well as for the Trump campaign's manipulation of voter data during the U.S. elections. According to revelations made last year, the data of over 87 million users was given to Cambridge Analytica, a political consulting firm, without users' consent. Facebook was fined £500,000 over Cambridge Analytica last October. Last year, lawmakers in the U.S. Congress summoned Facebook CEO Mark Zuckerberg to testify for the first time, where he apologized for the privacy violations.

This time Facebook might have to pay more than $22.5 million, the fine imposed on Google in 2012 for tracking users of Apple's Safari web browser – until now the largest fine for violating an agreement with the FTC. This would be the first major fine against Facebook in the US. The FTC agreement on privacy requires Facebook to seek users' permission before sharing data with third parties, and to inform the FTC when others misuse that information.

Last week, privacy advocates urged the FTC to take action against Facebook. Marc Rotenberg, executive director of the Electronic Privacy Information Center, said, "The agency now has the legal authority, the evidence, and the public support to act. There can be no excuse for further delay."

Most Reddit users agree that Facebook should be strictly punished. One comment reads, "It'd be better if the people responsible of doing it were sent to prison, but a big fine to their company should stop them from doing something like this again."

  • Facebook open sources Spectrum 1.0.0, an image processing library for better mobile image production
  • 3 out of 4 users don't know Facebook categorizes them for ad targeting; with political and racial affinity being some labels: Pew Research
  • A new privacy bill was introduced for creating federal standards for privacy protection aimed at big tech firms like Facebook, Google and Amazon


Google Cloud and GO-JEK announce Feast, a new open source feature store for machine learning

Natasha Mathur
21 Jan 2019
3 min read
Google Cloud announced the release of Feast last week, a new open source feature store that helps organizations better manage, store, and discover features for their machine learning projects. Feast, a collaboration between Google Cloud and GO-JEK (an Indonesian tech startup), is an open, extensible, and unified platform for feature storage. "Feast is an essential component in building end-to-end machine learning systems at GO-JEK. We're very excited to release it to the open source community," says Peter Richens, Senior Data Scientist at GO-JEK.

It was developed to solve common challenges faced by machine learning development teams:

  • Features are not reused: features representing similar business concepts get redeveloped many times, when existing work from other teams could have been reused.
  • Feature definitions vary: teams define features differently, and there is often no easy access to the documentation of a feature.
  • It is hard to serve up-to-date features: teams are hesitant to use real-time data.
  • Training and serving are inconsistent: training requires historical data, whereas prediction models require the latest values. When data is spread across various independent systems, those systems require separate tooling, which leads to inconsistencies.

Feast addresses these challenges by giving teams a centralized platform on which features developed by one team can easily be reused across projects; as more features are added to the store, building models becomes cheaper. Feast also manages the ingestion of data by unifying both batch and streaming sources (using Apache Beam) into the feature warehouse and feature serving stores. Users can then query features in the warehouse using the same set of feature identifiers.
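The centralized-reuse idea behind a feature store can be sketched in a few lines of Python. This is a minimal, in-memory illustration of the concept only – the class, method names, and feature names below are invented for this sketch and are not Feast's actual API:

```python
# Illustrative in-memory feature store: one team registers a feature once,
# and any other team retrieves it by name for serving or training.
# This sketches the concept only; it is NOT Feast's actual API.

class FeatureStore:
    def __init__(self):
        self._features = {}  # feature name -> {"values": ..., "doc": ...}

    def register(self, name, values, description=""):
        """Register a feature once, with documentation, for reuse."""
        self._features[name] = {"values": dict(values), "doc": description}

    def get(self, name, entity_id):
        """Serve the current value of a feature for one entity."""
        return self._features[name]["values"][entity_id]

    def training_rows(self, names, entity_ids):
        """Assemble a consistent training dataset from shared features."""
        return [[self._features[n]["values"][e] for n in names]
                for e in entity_ids]

# Team A registers features; Team B reuses them without redeveloping them.
store = FeatureStore()
store.register("driver_trip_count", {"d1": 120, "d2": 45},
               description="Completed trips in the last 30 days")
store.register("driver_rating", {"d1": 4.8, "d2": 4.1})
print(store.get("driver_trip_count", "d1"))        # 120
print(store.training_rows(["driver_trip_count", "driver_rating"],
                          ["d1", "d2"]))           # [[120, 4.8], [45, 4.1]]
```

Because both serving (`get`) and training (`training_rows`) read from the same registry, the two code paths cannot drift apart – the training/serving consistency problem the announcement describes.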
Feast also allows easy access to historical feature data, which in turn can be used to produce datasets for training models. Moreover, Feast allows teams to capture documentation, metadata, and metrics about features, letting teams communicate clearly about them.

Feast aims to be deployable on Kubeflow in the future, integrating seamlessly with other Kubeflow components such as the Python SDK for Kubeflow's Jupyter notebooks and Kubeflow Pipelines, since Kubeflow focuses on improving the packaging, training, serving, orchestration, and evaluation of models. "We hope that Feast can act as a bridge between your data engineering and machine learning teams," says the Feast team.

For more information, check out the official Google Cloud announcement.

  • Watson-CoreML: IBM and Apple's new machine learning collaboration project
  • Google researchers introduce JAX: A TensorFlow-like framework for generating high-performance code from Python and NumPy machine learning programs
  • Dopamine: A Tensorflow-based framework for flexible and reproducible Reinforcement Learning research by Google


Facebook takes down Russian news agency, Sputnik’s pages for engaging in “coordinated inauthentic behavior”

Bhagyashree R
21 Jan 2019
3 min read
Last week, Facebook shared that it has removed hundreds of pages, groups, and accounts for engaging in "coordinated inauthentic behavior" on Facebook and Instagram. These accounts were found to be linked to employees of Sputnik, a news website and radio broadcast service established by Rossiya Segodnya, a government-owned and operated news agency based in Moscow.

https://twitter.com/alexstamos/status/1085914558319841280

Sputnik believes this step by Facebook is nothing but "practically censorship". In a statement, it said, "The decision is clearly political in its nature and is practically censorship — seven pages belonging to our news hubs in neighboring countries have been blocked." It added, "Sputnik editorial offices deal with news and they do it well. If this blocking is Facebook's only reaction to the quality of the media's work, then we have no questions, everything is clear here. There is still hope that common sense will prevail."

Research by the Digital Forensic Research Lab (DFRLab) revealed that the pages and accounts weren't limited to news content: some pages were devoted to travel in Latvia, while others were devoted to fans of the president of Tajikistan. In a blog post, DFRLab wrote, "Most posts were apolitical, but some, especially in the Baltic States, were sharply political, anti-Western, and anti-NATO."

According to DFRLab, the main aim of these pages and accounts was to promote Rossiya Segodnya: "the effect of these activities was promotion of Rossiya Segodnya (the state-run Russian news agency that launched Sputnik) output to a range of special-interest audiences, without stating their background or affiliation." The main concern was that most of these pages were covert and did not openly mention any connection to Rossiya Segodnya.

What Facebook's research revealed

After this investigation, Facebook took down 364 Facebook pages and accounts that were part of a network found to have originated in Russia and operated in the Baltics, Central Asia, the Caucasus, and Central and Eastern European countries. These accounts were primarily focused on news or general-interest topics like weather, travel, sports, economics, or politicians. Some of them frequently posted about topics like anti-NATO sentiment, protest movements, and anti-corruption.

This takedown is the latest in a series of actions taken by the social media platform against inauthentic pages, groups, and accounts. In November 2018, Facebook removed 107 Facebook pages, groups, and accounts, as well as 41 Instagram accounts.

Nina Jankowicz, a Global Fellow at the U.S. government-funded Kennan Institute, tweeted that these posts consisted of inflammatory news and targeted individual Ukrainian regions and cities:

https://twitter.com/wiczipedia/status/1085877046016860160

She believes that detecting accounts linked to a "state-run propaganda arm" should have been easier for Facebook, and that it should invest more in the early detection of such accounts:

https://twitter.com/wiczipedia/status/1085877051876429824

To read more, check out Facebook's original news post.

  • Facebook shares update on last week's takedowns of accounts involved in "inauthentic behavior"
  • Facebook open sources Spectrum 1.0.0, an image processing library for better mobile image production
  • 3 out of 4 users don't know Facebook categorizes them for ad targeting; with political and racial affinity being some labels: Pew Research


Thank Stanford researchers for Puffer, a free and open source live TV streaming service that uses AI to improve video-streaming algorithms

Natasha Mathur
18 Jan 2019
2 min read
A team of researchers from Stanford University has launched Puffer, a new, free, and open source TV streaming service, as part of a non-profit academic research study. The project is led by Francis Yan, a doctoral student in computer science at Stanford, together with Sadjad Fouladi, Hudson Ayers, and Chenzhi Zhu. Puffer uses machine learning to improve video-streaming algorithms. "We are trying to figure out how to teach a computer to design new algorithms that reduce glitches and stalls in streaming video (especially over wireless networks and those with limited capacities, such as in rural areas)," say the researchers.

Puffer focuses on three classes of algorithms: "congestion control" (deciding when to send each piece of data), "throughput forecasters" (predicting how long it will take to send a certain amount of data over an Internet connection), and "adaptive bitrate" (ABR) algorithms (deciding what quality of video to send for the best picture quality).

The project is limited to 500 participants at a time. Participants watch TV channels on Puffer, streaming them over their Internet connections a few hours each week. As participants stream channels on the Puffer website, the service automatically experiments with different algorithms to control the timing and quality of the video sent to them. The researchers then analyze how the resulting computer-designed algorithms perform.

Puffer is a free service and doesn't show ads. It only re-transmits free over-the-air broadcast TV signals and allows streaming of up to six TV stations: CBS (KPIX 5), NBC (KNTV 11), ABC (KGO 7), FOX (KTVU 2), PBS (KQED 9), and Univision (KDTV 14). The Puffer project has received funding in part from the NSF and DARPA, and support from Google, Huawei, VMware, Dropbox, Facebook, and the Stanford Platform Lab.
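The adaptive-bitrate (ABR) decision described above can be illustrated with a toy rule: given a throughput forecast, pick the highest encoding the connection can sustain with some safety margin. This is only an illustrative sketch – Puffer's learned algorithms are considerably more sophisticated, and the bitrate ladder and safety factor here are invented:

```python
# Toy adaptive-bitrate (ABR) rule: from a throughput forecast, choose the
# highest video bitrate that still downloads faster than real time.
# Illustrative only; Puffer's machine-learned algorithms are far richer.

BITRATES_KBPS = [300, 750, 1500, 3000, 6000]  # hypothetical encoding ladder

def choose_bitrate(forecast_kbps, safety=0.8):
    """Pick the best bitrate under a conservative throughput estimate."""
    budget = forecast_kbps * safety  # leave headroom to avoid stalls
    viable = [b for b in BITRATES_KBPS if b <= budget]
    return viable[-1] if viable else BITRATES_KBPS[0]

print(choose_bitrate(4000))   # 3000: budget is 4000 * 0.8 = 3200 kbps
print(choose_bitrate(500))    # 300
print(choose_bitrate(100))    # 300 (lowest quality, best effort)
```

The interesting research question Puffer studies is precisely the part this sketch hard-codes: how good the throughput forecast is and how the safety margin should adapt, since an over-optimistic choice causes the stalls the project is trying to eliminate.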
"Puffer is unique from previous academic studies... we hope that this approach will produce substantial benefits over prior work, but only time will tell," say the researchers. For more information on Puffer, check out its official website.

  • Researchers introduce a machine learning model where the learning cannot be proved
  • Researchers release unCaptcha2, a tool that uses Google's speech-to-text API to bypass the reCAPTCHA audio challenge
  • Researchers design 'AnonPrint' for safer QR-code mobile payment: ACSC 2018 Conference

Apple’s CEO, Tim Cook calls for new federal privacy law while attacking the ‘shadow economy’ in an interview with TIME

Amrata Joshi
18 Jan 2019
4 min read
Last year we saw some major data breaches and top companies compromising user data, so this year sentiment is strongly inclined towards protecting users' data privacy. Just two days ago, a U.S. Senator introduced a bill titled the 'American Data Dissemination (ADD) Act' to create federal standards of privacy protection for big companies including Google, Amazon, and Facebook; the U.S. Congress is yet to pass it. Yesterday, Apple CEO Tim Cook asked the U.S. Congress to introduce a national privacy law to secure users' personal data, while attacking the 'shadow economy' that trades users' data without their consent.

https://twitter.com/guardian/status/1085847219419267073

In a statement to TIME magazine, Mr. Cook said, "Last year, before a global body of privacy regulators, I laid out four principles that I believe should guide legislation." The first is the right to have personal data minimized: companies should challenge themselves to strip identifying information from customer data or avoid collecting it in the first place. The second is the right to knowledge: the right to know what data is being collected and why. The third is the right to access: companies should make it easy for users to access, correct, and delete their personal data. And the last is the right to data security, without which trust is not possible.

Under Cook's proposal, companies that sell data would have to register with the Federal Trade Commission. Users and lawmakers are largely unaware of the secondary markets in which personal information is traded – the shadow economy. Cook pointed out that some companies trade user data while most users are unaware of it: "One of the biggest challenges in protecting privacy is that many of the violations are invisible. For example, you might have bought a product from an online retailer – something most of us have done. But what the retailer doesn't tell you is that it then turned around and sold or transferred information about your purchase to a 'data broker' – a company that exists purely to collect your information, package it and sell it to yet another buyer."

In November, the campaign group Privacy International filed complaints asking regulators to investigate whether the basis of certain businesses violates GDPR, the European privacy regulation. Subsequently, top data brokers such as Experian, Acxiom, Oracle, and Criteo came under scrutiny in Europe. Ailidh Callander, Privacy International's legal officer, said in a press release, "The data broker and ad-tech industries are premised on exploiting people's data. Most people have likely never heard of these companies, and yet they are amassing as much data about us as they can and building intricate profiles about our lives. GDPR sets clear limits on the abuse of personal data."

Tim Cook called for comprehensive federal privacy legislation in the US establishing a registry of data brokers, which would let consumers check what data of theirs is being sold and give them the right to easily remove their data from that market. He writes in TIME, "I and others are calling on the US Congress to pass comprehensive federal privacy legislation - a landmark package of reforms that protect and empower the consumer." He added, "Let's be clear: you never signed up for that. We think every user should have the chance to say, Wait a minute. That's my information that you're selling, and I didn't consent." Cook said companies should minimize the amount of data they collect and make it easier for users to delete it.

Cook seems to have struck a chord with the public with this call.

https://twitter.com/antoniogm/status/1085968094730674180
https://twitter.com/SecurityBeat/status/1086022312015642625

One user commented on Twitter, "You have a first-party relationship with FB/TWTR/etc. They show you ads on their service, you manage your data on it (which can be deleted or de-activated). They have to face whatever user outrage they cause." Users won't stand for their data being compromised and are agitated by platforms like Facebook; some are even thinking of deactivating their accounts.

  • A new privacy bill was introduced for creating federal standards for privacy protection aimed at big tech firms like Facebook, Google and Amazon
  • Project Erasmus: Former Apple engineer builds a user interface that responds to environment light
  • Cyber security researcher withdraws public talk on hacking Apple's Face ID from Black Hat Conference 2019: Reuters report


TensorFlow team releases a developer preview of TensorFlow Lite with new mobile GPU backend support

Natasha Mathur
18 Jan 2019
2 min read
The TensorFlow team released a developer preview of the newly added GPU backend support for TensorFlow Lite earlier this week. A full open-source release is planned for later in 2019.

The team has been using TensorFlow Lite GPU inference at Google for several months in its own products. For instance, the new GPU backend accelerated the foreground-background segmentation model by over 4x and the new depth estimation model by over 10x. Similarly, using GPU backend support for YouTube Stories and Playground Stickers, the team saw speedups of up to 5-10x in its real-time video segmentation model across a variety of phones.

They found that the new GPU backend is 2-7x faster than the original floating-point CPU implementation for a range of deep neural network models. The team also notes that the GPU speedup is most significant for more complex neural network models involving dense prediction/segmentation or classification tasks. For small models the speedup can be smaller, and using the CPU may be more beneficial because it avoids the latency costs of memory transfers.

How does it work?

The GPU delegate is initialized once Interpreter::ModifyGraphWithDelegate() is called in Objective-C++, or by calling the Interpreter's constructor with Interpreter.Options in Java. During this process, a canonical representation of the input neural network is built, and a set of transformation rules is applied to it. After this, the compute shaders are generated and compiled; the GPU backend currently uses OpenGL ES 3.1 Compute Shaders on Android and Metal Compute Shaders on iOS. Various architecture-specific optimizations are employed while creating the compute shaders. Once optimization is complete, the shader programs are compiled and the new GPU inference engine is ready.

For each inference, inputs are moved to the GPU if required, the shader programs are executed, and outputs are moved back to the CPU if necessary. The team intends to expand the coverage of operations, finalize the APIs, and optimize the overall performance of the GPU backend in the future.

For more information, check out the official TensorFlow Lite GPU inference release notes.

  • Building your own Snapchat-like AR filter on Android using TensorFlow Lite [Tutorial]
  • TensorFlow 2.0 to be released soon with eager execution, removal of redundant APIs, tf function and more
  • Google AdaNet, a TensorFlow-based AutoML framework
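The small-model caveat the team describes – memory-transfer latency eating the GPU's compute advantage – can be modeled with a toy cost comparison. All throughput and latency numbers below are invented purely for illustration; they are not measurements of TensorFlow Lite:

```python
# Toy cost model for the CPU-vs-GPU trade-off: the GPU computes faster,
# but inputs/outputs must cross the CPU<->GPU boundary, which adds a
# fixed transfer latency. All numbers are illustrative, not measured.

def cpu_time_ms(model_flops, cpu_flops_per_ms=1e6):
    """Pure compute time on the CPU; no transfer cost."""
    return model_flops / cpu_flops_per_ms

def gpu_time_ms(model_flops, gpu_flops_per_ms=5e6, transfer_ms=4.0):
    """5x faster compute (hypothetically), plus a fixed transfer cost."""
    return transfer_ms + model_flops / gpu_flops_per_ms

def best_backend(model_flops):
    return "gpu" if gpu_time_ms(model_flops) < cpu_time_ms(model_flops) else "cpu"

print(best_backend(1e6))   # small model: transfer cost dominates -> cpu
print(best_backend(1e8))   # large model: compute savings dominate -> gpu
```

With these made-up constants, a 1M-FLOP model runs in 1 ms on the CPU but 4.2 ms via the GPU path, while a 100M-FLOP model runs in 100 ms on the CPU and 24 ms on the GPU, which mirrors the article's advice to keep small models on the CPU.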


A new privacy bill was introduced for creating federal standards for privacy protection aimed at big tech firms like Facebook, Google and Amazon

Amrata Joshi
17 Jan 2019
5 min read
Lately, big companies like Google, Amazon, Facebook and few others have come under the light for their data privacy issues. Users blindly accept the current structure of most of the privacy policies, which in turn affects users’ privacy and data. The U.S. lacks a national law that regulates the collection and use of personal data. Yesterday, Marco Rubio (R-Fla.), U.S. Senator introduced a bill titled ‘American Data Dissemination (ADD) Act’ for creating federal standards of privacy protection for big companies like Google, Amazon, and Facebook. ADD act uses Privacy Act of 1974 as its framework and thus provides transparency and accountability from the tech industry while protecting small businesses and start-ups. The Federal Trade Commission (FTC) is yet to make suggestions for regulations based on the Privacy Act of 1974. The FTC needs to give detailed privacy requirements within six months that would be similar to the requirements under the 1974 Privacy Act. This bill requires Congress to pass legislation within two years or the FTC will write the rules itself. In a statement to The Hill, Rubio said, “If Congress does not act on the FTC’s recommendations within two years, my bill gives the FTC authority to issue a final rulemaking based on the Privacy Act framework.” The American Data Dissemination (ADD) Act would let the FTC write recommendations to Congress regarding what privacy rules should look like for commercial services like Facebook, Amazon, Google based on a 1974 law which created rules for federal agencies. According to Rubio, the smaller companies should be exempted from new rules and they are finding a way out. One year after the date on which the Commission has submitted recommendations, the FTC will submit proposed regulations to the appropriate Congress Committees for imposing privacy requirements. 
If Congress fails to enact a law based on those recommendations within two years, the FTC will pass a final rule imposing privacy requirements within 27 months after the date of enactment. Rubio believes these recommendations will give Congress a direction for drafting legislation that protects both consumers and the capabilities of the internet economy. According to him, any national privacy law must provide clear and consistent protections that consumers and companies can understand, and that the FTC can enforce. He clearly states, “we also cannot tolerate inaction.”

According to a post by Axios, Congressional Democrats have indicated that they will only agree to preempt state laws, and new rules will come into effect in California in 2020.

In a press release, Rubio stated, “There has been a growing consensus that Congress must take action to address consumer data privacy. However, I believe that any efforts to address consumer privacy must also balance the need to protect the innovative capabilities of the digital economy that have enabled new entrants and small businesses to succeed in the marketplace. That is why I am introducing the American Data Dissemination Act, which will protect small businesses and startups while ensuring that consumers are provided with overdue rights and protections. It is critical that we do not create a regulatory environment that entrenches big tech corporations. Congress must act, but it is even more important that Congress act responsibly to create a transparent, digital environment that maximizes consumer welfare over corporate welfare.”

According to Rubio, big companies like Facebook, Apple, Amazon, Netflix, Google, and others would welcome regulations that prevent start-ups and smaller competitors from challenging their dominance.
Rubio said in a statement to The Hill, “While we may have disagreements on the best path forward, no one believes a privacy law that only bolsters the largest companies with the resources to comply and stifles our start-up marketplace is the right approach.”

The ADD Act aims to exempt smaller organizations from the new rules, which runs counter to the proposed Grand Bargain on Data Privacy Legislation for America, which states, “Do not exempt organizations based on size.” The Grand Bargain also states, “Include a limited right to rectification for sensitive data collected by critical services.” This, too, goes against the ADD Act, which stands for consumer data privacy and doesn’t leave any space for sensitive data being accessed. Finally, the Grand Bargain states, “Create data protection rules based on both the type of data and the type of entity collecting the data,” categorizing data as sensitive and non-sensitive. This again works against the ADD Act, as the bill doesn’t compromise on consumer data.

To know more about this news, check out the press release and the bill.

Congress passes ‘OPEN Government Data Act’ to make open data part of the US Code
ITIF along with Google, Amazon, and Facebook bargain on Data Privacy rules in the U.S.
Google Home and Amazon Alexa can no longer invade your privacy; thanks to Project Alias!


Red Hat drops MongoDB over concerns related to its Server Side Public License (SSPL)

Natasha Mathur
17 Jan 2019
3 min read
It was last October when MongoDB announced that it was switching to the Server Side Public License (SSPL). Now, the news of Red Hat removing MongoDB from Red Hat Enterprise Linux and Fedora over the SSPL has been gaining attention.

Tom Callaway, University Outreach Team lead at Red Hat, mentioned in a note earlier this week that Fedora does not consider MongoDB’s Server Side Public License v1 (SSPL) a Free Software License. He further explained that the SSPL is “intentionally crafted to be aggressively discriminatory towards a specific class of users. To consider the SSPL to be "Free" or "Open Source" causes that shadow to be cast across all other licenses in the FOSS ecosystem, even though none of them carry that risk”.

The first instance of Red Hat removing MongoDB happened back in November 2018, when its RHEL 8.0 beta was released. The RHEL 8.0 beta release notes explicitly mentioned that MongoDB was removed because of the SSPL.

Apart from Red Hat, Debian also dropped MongoDB from the Debian archive last month over similar concerns. “For clarity, we will not consider any other version of the SSPL beyond version one. The SSPL is clearly not in the spirit of the DFSG (Debian’s free software guidelines), let alone complimentary to Debian's goals of promoting software or user freedom”, mentioned Chris Lamb, Debian Project Leader.

Also, Debian developer Apollon Oikonomopoulos mentioned that MongoDB 3.6 and 4.0 will be supported longer, but that Debian will not be distributing any SSPL-licensed software. He also mentioned that keeping the last AGPL-licensed version (3.6.8 or 4.0.3) without the ability to “cherry-pick upstream fixes is not a viable option”. That being said, MongoDB 3.4 will remain a part of Debian as long as it is under the AGPL (MongoDB’s previous license).

MongoDB’s decision to move to the SSPL was prompted by cloud providers exploiting its open source code.
The SSPL specifies an explicit condition that companies wanting to use, review, modify, or redistribute MongoDB as a service would have to open source the software that they’re using. This, in turn, led to a debate in the industry and the open source community, as people started to question whether MongoDB is open source anymore.

https://twitter.com/mjasay/status/1082428001558482944

Also, MongoDB’s adoption of the SSPL forces companies to either go open source or choose MongoDB’s commercial products. “It seems clear that the intent of the license author is to cause Fear, Uncertainty, and Doubt towards commercial users of software under that license”, mentioned Callaway.

https://twitter.com/mjasay/status/1083853227286683649

MongoDB acquires mLab to transform the global cloud database market and scale MongoDB Atlas
MongoDB Sharding: Sharding clusters and choosing the right shard key [Tutorial]
MongoDB 4.0 now generally available with support for multi-platform, mobile, ACID transactions and more

YouTube bans dangerous pranks and challenges

Prasad Ramesh
17 Jan 2019
2 min read
YouTube has updated its policies to ban dangerous pranks and challenges that can harm the victim of a prank or encourage people to partake in dangerous behavior.

Pranks and challenges have been around on YouTube for a long time. Many pranks are entertaining and harmless, while some challenges are potentially unsafe, like an extreme food-eating challenge. Recently, the “Bird Box Challenge”, inspired by the Netflix movie Bird Box, has been popular. The challenge is to perform difficult tasks, like driving a car, blindfolded. It has received media coverage not for its entertainment value but for the dangers involved, having caused many accidents among people taking the challenge.

What is banned on YouTube?

In light of this challenge being harmful and dangerous to lives, YouTube has banned certain content by updating its policies page. Primarily, it has banned three kinds of pranks:

Challenges that can cause serious danger to life or cause death
Pranks that lead the victims to believe that they’re in serious physical danger
Any pranks that cause severe emotional distress in children

They state on their policies page: “YouTube is home to many beloved viral challenges and pranks, but we need to make sure what’s funny doesn’t cross the line into also being harmful or dangerous.”

What are the terms?

Other than the points listed above, there is no clear or exhaustive list of the kinds of activities that are banned; YouTube moderators may take a call to remove a video. Over the next two months, YouTube will be removing any existing content that falls under this policy; however, content creators will not receive a strike. Going forward, any new content with objectionable material per the policies will earn the channel a “strike”. Three strikes in the span of three months will lead to the channel’s termination. Questionable content includes custom thumbnails or external links that display pornographic, graphically violent, malware, or spam content.
So now you are less likely to see videos on driving blindfolded or eating Tide Pods.

Google Chrome announces an update on its Autoplay policy and its existing YouTube video annotations
Is the YouTube algorithm’s promoting of #AlternativeFacts like Flat Earth having a real-world impact?
Worldwide Outage: YouTube, Facebook, and Google Cloud goes down affecting thousands of users


It’s a win for Web accessibility as courts can now order companies to make their sites WCAG 2.0 compliant

Sugandha Lahoti
17 Jan 2019
3 min read
Yesterday, the Ninth Circuit Court of Appeals delivered a big win for web accessibility in a case against Domino’s Pizza. In 2016, a blind man filed a federal lawsuit against Domino’s stating that its website wasn’t compatible with standard screen-reading software, which is designed to vocalize text and visual information. This did not allow him to use the pizza-builder feature to personalize his order. Per his claim, Domino’s violated the Americans with Disabilities Act (ADA) and should make its online presence compatible with the Web Content Accessibility Guidelines.

A blog post published by the National Retail Federation highlights that such lawsuits are on the rise, with 1,053 filed in the first half of last year compared to 814 in all of 2017. All of them voiced how there is a lack of clarity in how the ADA applies to the modern internet.

The Web Content Accessibility Guidelines (WCAG) are developed through the W3C process with the goal of providing a single shared standard for web content accessibility. The WCAG documents explain how to make web content more accessible to people with disabilities.

Earlier, a lower court had ruled in favor of Domino’s and tossed the case out of court. However, the court of appeals reversed that ruling, saying that the ADA covers websites and mobile applications, so the case is relevant. Domino’s argued that there was an absence of regulations specifically requiring web accessibility or referencing the WCAG. The appellate judges, however, explained that the case was not about whether Domino’s complied with the WCAG: “While we understand why Domino’s wants DOJ to issue specific guidelines for website and app accessibility, the Constitution only requires that Domino’s receive fair notice of its legal duties, not a blueprint for compliance with its statutory obligations,” U.S. Circuit Judge John B. Owens wrote in a 25-page opinion.
The judges' panel said the case was relevant and sent it back to the district court, which will consider whether the Domino’s website and app comply with the ADA mandate to “provide the blind with effective communication and full and equal enjoyment of its products and services.”

A Twitter thread by Jared Spool applauded the court’s decision to bring web accessibility under ADA penalties and discussed the long- and short-term implications of this news. The first will likely come when insurance companies raise rates for any company that doesn’t meet WCAG compliance. This will create a bigger market for compliance-certification firms, as insurance companies will demand certification sign-off to give preferred premiums. That, in turn, will likely push companies to require WCAG understanding from the designers they hire.

In the short term, we’ll likely see a higher demand for UX professionals with specialized knowledge in accessibility. In the long term, this knowledge will be required of all UX professionals, and the demand for specialists will likely decrease as it becomes common practice. Also, toolkits, frameworks, and other standard platforms will build accessibility in. This will further reduce the demand for specialists, as it will become more difficult to build things that aren’t accessible. Good, accessible design will become the path of least resistance.

You may go through the full appeal from the United States District Court for the Central District of California.

EFF asks California Supreme Court to hear a case on government data accessibility and anonymization under CPRA
7 Web design trends and predictions for 2019
We discuss the key trends for web and app developers in 2019 [Podcast]


DuckDuckGo now uses Apple MapKit JS for its map and location based searches

Savia Lobo
16 Jan 2019
2 min read
DuckDuckGo, an Internet search engine, announced that DuckDuckGo for mobile and desktop now supports Apple's MapKit JS framework for its map and address-related searches. With MapKit JS, users get improved searches, additional visual features, enhanced satellite imagery, and the updated maps already in use on billions of Apple devices worldwide.

https://twitter.com/DuckDuckGo/status/1085220405462200320

Apple Maps is now embedded both in DuckDuckGo’s private search results for relevant queries and in the ‘Maps’ tab on any search results page. With the new MapKit JS framework, DuckDuckGo offers users a combination of mapping and privacy. According to a statement in their blog post, “At DuckDuckGo, we believe getting the privacy you deserve online should be as simple as closing the blinds. Naturally, our strict privacy policy of not collecting or sharing any personal information extends to this integration.”

The company promises not to share any personally identifiable information, such as IP addresses, with Apple or other third parties. For local searches, where the browser sends the user’s approximate location, the information is discarded by DuckDuckGo immediately after use.

According to ZDNet, “DuckDuckGo did not discuss how working with Apple, which the search engine said will result in ‘a new standard of trust online’, was better or worse from a privacy perspective than using data from the OpenStreetMap project as it did previously.”

To know more about this in detail, visit DuckDuckGo’s official blog post.

DuckDuckGo chooses to improve its products without sacrificing user privacy
MIT’s Duckietown Kickstarter project aims to make learning how to program self-driving cars affordable
Project Erasmus: Former Apple engineer builds a user interface that responds to environment light

Open Government Data Act makes non-sensitive public data publicly available in open and machine readable formats

Bhagyashree R
16 Jan 2019
2 min read
On Monday, U.S. President Donald Trump signed the Foundations for Evidence-Based Policymaking (FEBP) Act, which includes the Open, Public, Electronic and Necessary (OPEN) Government Data Act as Title II.

In 2017, Data Coalition, an open data trade association, together with eighty-five organizations including businesses, industry groups, and others, wrote a letter expressing their support for the OPEN Government Data Act. The bill was passed unanimously by the Senate in 2016, was included in the FEBP Act as Title II in 2017, and was passed by Congress in December 2018 before reaching the president’s desk.

What is the OPEN Government Data Act about?

The federal government has siloed a huge amount of public data that could instead be used to drive private-sector innovation and improve government services. The FEBP Act aims to change the way the government collects, publishes, and uses non-sensitive public information. According to the OPEN Government Data Act, government data should be made publicly available in open and machine-readable formats, and the federal government should use open data to improve decision making.

Explaining the OPEN Government Data Act, Sarah Joy Hays, Acting Executive Director of the Data Coalition, said, “Title II, the OPEN Government Data Act, which our organization has been working on for over three and a half years, sets a presumption that all government information should be open data by default: machine-readable and freely-reusable.”

Additionally, the Act requires federal agencies to designate a non-political employee as Chief Data Officer (CDO). The qualifications for a CDO include training and experience in data management, governance, collection, analysis, protection, use, and dissemination, including the ability to protect and de-identify confidential data.
A CDO Council is also established, responsible for promoting and encouraging data-sharing agreements between agencies, identifying ways in which agencies can improve the production of evidence for use in policymaking, and more.

ACLU files lawsuit against 11 federal criminal and immigration enforcement agencies for disclosure of information on government hacking
The US to invest over $1B in quantum computing, President Trump signs a law
The district of Columbia files a lawsuit against Facebook for the Cambridge Analytica scandal


Amazon Alexa AI researchers develop new method to compress Neural Networks and preserves accuracy of system

Melisha Dsouza
16 Jan 2019
4 min read
At the 33rd conference of the Association for the Advancement of Artificial Intelligence (AAAI), Amazon Alexa researchers, in collaboration with researchers from the University of Texas, will present a paper describing a new method for compressing neural networks, which will, in turn, increase the performance of the network.

Yesterday, on the Amazon blog, Anish Acharya and Rahul Goel, both applied scientists at Amazon Alexa AI, explained how huge neural networks tend to slow down the performance of a system. The paper, “Online Embedding Compression for Text Classification using Low Rank Matrix Factorization”, describes a method to compress the embedding tables that often dominate an NLU network’s memory footprint and slow down AI-based systems like Alexa. This will help Alexa perform more and more complex tasks in milliseconds.

The researchers cover the following contributions in the paper:

A compression method for deep NLP models that reduces the memory footprint using low-rank matrix factorization of the embedding layer, recovering accuracy through further fine-tuning.
A demonstration that their method outperforms baselines like fixed-point quantization and offline embedding compression for sentence classification.
An analysis of inference time for their method.
CALR, a novel learning-rate scheduling algorithm for gradient-descent-based optimization, which they show outperforms other popular adaptive learning-rate algorithms on sentence classification.

Steps taken to obtain optimal performance of the network

The blog outlines the steps the researchers took to compress the neural network:

A set of pre-trained word embeddings called GloVe was used for the experiment. GloVe takes into consideration a word’s co-occurrences in huge bodies of training data to assess words’ meanings.
The team started with a model initialized with a large embedding space, performed a low-rank projection of the embedding layer using singular value decomposition (SVD), and continued training to regain any lost accuracy. The aim was to integrate the embedding table into the neural network and use task-specific training data not only to fine-tune the embeddings but also to customize the compression scheme.

SVD was used to reduce the embeddings’ dimensionality, breaking the initial embedding matrix down into two smaller matrices and cutting the number of parameters by almost 90%. One of these matrices serves as one layer of the neural network, and the second matrix as the layer above it. Between the layers are connections with associated “weights”, which can be readjusted by the training process; these determine how much influence the outputs of the lower layer have on the computations performed by the higher one.

The paper also describes a new procedure for selecting the network’s “learning rate”. The researchers vary the cyclical-learning-rate procedure to escape local minima. This technique, called the cyclically annealed learning rate, gives better performance than either a cyclical learning rate or a fixed learning rate.

Results and conclusion

The system developed by the researchers could shrink a neural network by 90 percent for both LSTM and DAN models while reducing its accuracy by less than 1%. They compared their model to two alternatives: one in which the embedding table is compressed before network training begins, and simple quantization, in which all of the values in the embedding vector are rounded to a limited number of reference values. Testing their approach across a range of compression rates, on different types of neural networks, and using different data sets, they found that their system outperformed the other approaches in the experiment.
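The low-rank factorization step described above can be sketched in a few lines of NumPy. This is an illustrative toy example, not the authors' code: the vocabulary size, embedding dimension, and chosen rank are made-up numbers, and the embedding table here is random rather than pretrained GloVe vectors.

```python
import numpy as np

# Toy embedding table: 10,000-word vocabulary, 300-dimensional embeddings.
vocab_size, embed_dim, rank = 10_000, 300, 30

rng = np.random.default_rng(0)
E = rng.standard_normal((vocab_size, embed_dim))  # stand-in for pretrained embeddings

# Truncated SVD gives the best rank-30 approximation E ≈ A @ B.
U, s, Vt = np.linalg.svd(E, full_matrices=False)
A = U[:, :rank] * s[:rank]   # (vocab, rank): becomes the new, smaller embedding layer
B = Vt[:rank, :]             # (rank, dim): becomes a small dense layer above it

original_params = vocab_size * embed_dim          # 3,000,000
compressed_params = A.size + B.size               # 309,000
print(f"compression: {1 - compressed_params / original_params:.1%}")  # prints compression: 89.7%
```

In a real training pipeline, both factors would then be treated as ordinary network layers and fine-tuned on task-specific data to regain the accuracy lost in the projection, which is the "online" part of the paper's approach.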
You can read the research paper for more details on the experiments and their results.

Researchers build a deep neural network that can detect and classify arrhythmias with cardiologist-level accuracy
Researchers design ‘AnonPrint’ for safer QR-code mobile payment: ACSC 2018 Conference
Researchers introduce a machine learning model where the learning cannot be proved