
Tech News - Data


Facebook and Google pressured to work against ‘Anti-Vaccine’ trends after Pinterest blocks anti-vaccination content from its pinboards

Amrata Joshi
21 Feb 2019
5 min read
U.S. lawmakers are questioning health officials and tech giants on their efforts to combat harmful anti-vaccine misinformation, which is spreading online at a fast pace. This misinformation is also potentially contributing to the five ongoing measles outbreaks in the US.

Pinterest stands against misinformation

Pinterest has taken a strong stand against the spread of misinformation related to vaccines. It has blocked all “vaccination”-related searches, since most results showed the scientifically disproven claim that vaccines aren’t safe. On Wednesday, the company said it won't return any search results, including pins and boards, for terms related to vaccinations, whether in favor of or against them. It was noted that the majority of shared images on Pinterest cautioned people against vaccinations, despite medical guidelines demonstrating that most vaccines are safe for most people. The company has been making such efforts for quite some time now. In a statement to CNBC, Pinterest said, “It's been hard to remove this anti-vaccination content entirely, so it put the ban in place until it can figure out a more permanent strategy. It's working with health experts including doctors, as well as the social media analysis company called Storyful to come up with a better solution.”

Pinterest has previously taken steps to block content promoting false cancer cures, after realizing that a lot of such content redirected users to websites discouraging them from getting traditional medical treatment, such as an essential oil claimed to be a cure for cancer. A Pinterest spokesperson said, "We want Pinterest to be an inspiring place for people, and there's nothing inspiring about misinformation. That's why we continue to work on new ways of keeping misleading content off our platform and out of our recommendations engine." People have been appreciating this move by the company.
https://twitter.com/ekp/status/1098421637194559489

Facebook and Google working against the ‘Anti-Vaccine’ trends

Last week, the Senate health committee announced that it will hold a hearing on the anti-vaccine subject on March 5. Adam Schiff, a Democrat from California, also sent letters to Google CEO Sundar Pichai and Facebook CEO Mark Zuckerberg, expressing concerns over the outbreaks and over the role of tech companies in the spread of medically inaccurate information. Schiff wrote in the letters, “If concerned parents see phony vaccine information in their Facebook newsfeeds or YouTube recommendations, it could cause them to disregard the advice of their children’s physicians and public health experts and decline to follow the recommended vaccination schedule. Repetition of information, even if false, can often be mistaken for accuracy, and exposure to anti-vaccine content via social media may negatively shape user attitudes towards vaccination."

Schiff also referenced an article by The Guardian, which reported that searches on both Facebook and YouTube easily led users to anti-vaccine content, and he expressed concern over a report that Facebook is accepting payments for anti-vaccine ads. Last week, an article published by The Daily Beast noted that seven Facebook pages posted and promoted anti-vaccine content targeted at women over the age of 25.

In an emailed statement to Ars Technica, Facebook said, “We’ve taken steps to reduce the distribution of health-related misinformation on Facebook, but we know we have more to do. We’re currently working on additional changes that we’ll be announcing soon.” According to Facebook, simply deleting anti-vaccine perspectives won’t be an effective solution to the problem; the company is thinking about ways to boost the availability of factual information on vaccines and to further minimize the spread of misinformation. In a statement to Bloomberg, Facebook said that it was considering “reducing or removing this type of content from recommendations, including Groups You Should Join, and demoting it in search results, while also ensuring that higher quality and more authoritative information is available."

Schiff wrote, “The algorithms which power these services are not designed to distinguish quality information from misinformation or misleading information, and the consequences of that are particularly troubling for public health issues.” On YouTube, the first result for a search for "vaccines" is a video showing a “middle ground” debate between supporters and opponents of vaccines; the fourth result is an episode of a popular anti-vaccine documentary series called “The Truth About Vaccines,” which has around 1.2 million views. Though Google declined to comment on the letter from Schiff, the company noted that it has been working to improve its recommendation system and to make sure that relevant news sources and contextual information appear at the top of search results. Schiff also mentioned in his letter that he was pleased with the steps Google has already taken: “I was pleased to see YouTube’s recent announcement that it will no longer recommend videos that violate its community guidelines, such as conspiracy theories or medically inaccurate videos, and encourage further action to be taken related to vaccine misinformation.”

Last week, Lamar Alexander, chairman of the Senate health committee, along with ranking member Patty Murray (D-Wash.), wrote a letter to the Centers for Disease Control and Prevention and the Department of Health and Human Services, inquiring what health officials were doing to fight misinformation and to help states deal with outbreaks. The lawmakers wrote, “Many factors contribute to vaccine hesitancy, all of which demand attention from CDC and (HHS’ National Vaccine Program Office).”

WhatsApp limits users to five text forwards to fight against fake news and misinformation
Facebook plans to change its algorithm to demote “borderline content” that promotes misinformation and hate speech on the platform
Facebook’s AI algorithm finds 20 Myanmar Military Officials guilty of spreading hate and misinformation, leads to their ban


OpenAI team publishes a paper arguing that long-term AI safety research needs social scientists

Natasha Mathur
21 Feb 2019
3 min read
OpenAI, a non-profit artificial intelligence research firm, published a paper yesterday arguing that long-term AI safety research needs social scientists to make sure that AI alignment algorithms succeed when actual humans are involved. AI alignment (or value alignment) refers to the task of ensuring that AI systems reliably do what humans want them to do. “Since we are trying to behave in accord with people’s values, the most important data will be data from humans about their values”, states the OpenAI team.

However, to properly align advanced AI systems with human values, many uncertainties related to the psychology of human rationality, emotion, and biases would have to be resolved. The researchers believe these can be resolved via experimentation in which they train the AI to reliably do what humans want by studying humans. This would involve asking people what they want from AI, then training machine learning models on this data; once trained, the models can be optimized to perform well against them.

But it’s not that simple, because humans can’t be completely relied upon to answer questions about their values. “Humans have limited knowledge and reasoning ability, and exhibit a variety of cognitive biases and ethical beliefs that turn out to be inconsistent on reflection”, states the OpenAI team. The researchers believe that the different ways a question is presented can interact differently with human biases, which in turn can produce either low- or high-quality answers.

To address this, the researchers propose experimental debates comprising only humans, in place of the ML agents. Although these experiments are motivated by ML algorithms, they will not involve any ML systems or require any ML background. “Our goal is ML+ML+human debates, but ML is currently too primitive to do many interesting tasks. Therefore, we propose replacing ML debaters with human debaters, learning how to best conduct debates in this human-only setting, and eventually applying what we learn to the ML+ML+human case”, reads the paper. Since such a human-only debate doesn’t require any machine learning, it becomes a purely social science experiment that is motivated by ML considerations but does not need ML expertise to run. This, in turn, ensures that the core focus stays on the component of AI alignment uncertainty specific to humans.

The researchers state that a large proportion of AI safety researchers are focused on machine learning, even though it is not necessarily a sufficient background for conducting these experiments. This is why social scientists with experience in human cognition, behavior, and ethics are needed to carefully design and implement these rigorous experiments. “This paper is a call for social scientists in AI safety. We believe close collaborations between social scientists and ML researchers will be necessary to improve our understanding of the human side of AI alignment, and hope this paper sparks both conversation and collaboration”, state the researchers. For more information, check out the official research paper.

OpenAI’s new versatile AI model, GPT-2 can efficiently write convincing fake news from just a few words
OpenAI charter puts safety, standards, and transparency first
OpenAI launches Spinning Up, a learning resource for potential deep learning practitioners


Nestle, Disney, Fortnite pull out their YouTube ads from paedophilic videos as YouTube’s content regulation woes continue

Sugandha Lahoti
21 Feb 2019
3 min read
YouTube faced backlash over another content regulation problem when videos of young children with exposed private parts began surfacing. These videos also displayed advertising from major brands alongside the content, leading companies like Nestle, Disney, and Fortnite publisher Epic Games to pull their YouTube ads from the identified videos.

The issue was first discovered on Sunday, when Matt Watson, a video blogger, posted a 20-minute clip detailing how comments on YouTube were used to identify certain videos in which young girls were engaged in activities that could be construed as sexually suggestive, such as posing in front of a mirror and doing gymnastics. YouTube received major criticism from companies and individuals alike for recommending videos of minors and allowing pedophiles to comment on these posts with specific timestamps marking when a child's exposed private parts were visible. YouTube was also condemned for monetizing these videos, allowing advertisements for major brands like Alfa Romeo, Fiat, Fortnite, Grammarly, L’Oreal, Maybelline, Metro: Exodus, Peloton, and SingleMuslims.com to be displayed alongside them.

Companies pull their ads from YouTube

Following this news, a large number of companies pulled their advertising spending from YouTube. Grammarly told Wired, “We’re absolutely horrified and have reached out to YouTube to rectify this immediately; we have a strict policy against advertising alongside harmful or offensive content. We would never knowingly associate ourselves with channels like this.” A spokesperson for Fortnite publisher Epic Games told Wired that it had paused all pre-roll advertising on YouTube. “Through our advertising agency, we have reached out to YouTube to determine actions they’ll take to eliminate this type of content from their service,” Epic added. Disney and Nestle have also paused advertising on YouTube.

Replying to these accusations, a YouTube spokesperson said in an email, “Any content -- including comments -- that endangers minors is abhorrent and we have clear policies prohibiting this on YouTube. We took immediate action by deleting accounts and channels, reporting illegal activity to authorities and disabling violative comments.”

People on Twitter have strongly condemned YouTube’s actions.
https://twitter.com/gossip_garden/status/1097396580691234816
https://twitter.com/tsosnierz/status/1097412787603759104
https://twitter.com/justin_ksu/status/1098419470253596679
https://twitter.com/rep_turd/status/1097984363948457984

YouTube also recently introduced a new strikes system to make its community guidelines more transparent and consistent, with more opportunities for everyone to understand its policies, a consistent penalty for each strike, and better notifications. Last month, YouTube announced an update to its recommendations aiming to reduce the recommendation of videos that promote misinformation and conspiracy theories.

YouTube bans dangerous pranks and challenges
YouTube promises to reduce recommendations of ‘conspiracy theory’ videos. Ex-googler explains why this is a ‘historic victory’.
Is YouTube’s AI Algorithm evil?


Facebook open sources the ELF OpenGo project and retrains the model using reinforcement learning

Sugandha Lahoti
20 Feb 2019
3 min read
Facebook has open sourced its ELF OpenGo project and added new features to it. ELF OpenGo is a reimplementation of AlphaGoZero/AlphaZero. It was released last May to allow AI researchers to better understand how AI systems learn. The open-source bot had a 20-0 record against top professional Go players and has been widely adopted by the AI research community for running Go experiments.

Now, the Facebook AI Research team has announced new features and research results related to ELF OpenGo. They have retrained the ELF OpenGo model using reinforcement learning and have also released a Windows executable version of the bot, which can be used as a training aid for Go players. A unique archive showing ELF OpenGo's analysis of 87,000 professional Go games has also been released, which will help Go players assess their performance in detail. They are also releasing their dataset of 20 million self-play games and the 1,500 intermediate models. Facebook researchers have shared their experiments and learnings from retraining the ELF OpenGo model in a new research paper, which details the results of extensive experiments, modifying individual features during evaluation to better understand the properties of these kinds of algorithms.

Training ELF OpenGo

ELF OpenGo was trained on 2,000 GPUs for 9 days, after which the 20-block model was comparable to the 20-block models described in AlphaGo Zero and AlphaZero. The release also provides pretrained superhuman models, the code used to train them, a comprehensive training trajectory dataset featuring 20 million self-play games, over 1.5 million training mini-batches, and auxiliary data.

Model behavior during training

There is high variance in the model’s strength when compared to other models; this property holds even when the learning rates are reduced. Moves that require significant lookahead to determine whether they should be played, such as “ladder” moves, are learned slowly by the model and are never fully mastered. The model quickly learns high-quality moves at different stages of the game, and in contrast to the typical behavior of tabular RL, the rate of progression for learning both mid-game and end-game moves is nearly identical.

In a Facebook blog post, the team behind the model wrote, “We're excited that our development of this versatile platform is helping researchers better understand AI, and we're gratified to see players in the Go community use it to hone their skills and study the game. We're also excited to expand last year's release into a broader suite of open source resources.” The research paper, titled ELF OpenGo: An Analysis and Open Reimplementation of AlphaZero, is available on arXiv.
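Purely as an illustration of what one game in such a self-play dataset consists of (per-move state/action records plus the game's final outcome), here is a self-contained toy using tic-tac-toe and a uniform-random policy. This is only the shape of the data generation, not ELF OpenGo code: the real system plays Go with neural-network policies and Monte Carlo tree search.

```python
import random

WINS = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    """Return "X"/"O" for a win, "draw" for a full board, else None."""
    for a, b, c in WINS:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return "draw" if "." not in board else None

def self_play_game():
    """Play one game against itself with a uniform-random policy,
    recording a (state, player, move) tuple for every position."""
    board, player, records = ["."] * 9, "X", []
    while winner(board) is None:
        move = random.choice([i for i, s in enumerate(board) if s == "."])
        records.append(("".join(board), player, move))
        board[move] = player
        player = "O" if player == "X" else "X"
    return records, winner(board)

# Each position is labeled with the game's final result -- in an
# AlphaZero-style pipeline, this is what the value network trains on.
dataset = []
for _ in range(1000):
    records, result = self_play_game()
    dataset.extend((state, player, move, result) for state, player, move in records)
print(len(dataset), "positions; sample record:", dataset[0])
```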
Google DeepMind’s AI AlphaStar beats StarCraft II pros TLO and MaNa; wins 10-1 against the gamers
Deepmind’s AlphaZero shows unprecedented growth in AI, masters 3 different games
FAIR releases a new ELF OpenGo bot with a unique archive that can analyze 87k professional Go games


Redis Labs raises $60 Million in Series E Funding led by Francisco Partners

Natasha Mathur
20 Feb 2019
3 min read
Redis Labs, a California-based computer software startup, announced yesterday that it has raised $60 million in a Series E financing round led by a new investor, the private equity firm Francisco Partners. Redis Labs has now raised $146 million in funding overall, including a $44 million funding round back in August 2017. “This financing enables us to accelerate our strategy to deliver the fastest and most efficient database to the world and enable instant experiences for any modern application”, said Ofer Bengal, co-founder and CEO at Redis Labs.

Other existing investors of Redis Labs, such as Goldman Sachs Private Capital Investing, Bain Capital Ventures, Viola Ventures, and Dell Technologies Capital, also participated in the round. With the new funds, the company aims to accelerate its global go-to-market execution, invest further in the Redis community, and continue to deliver the highest-performing and most efficient database platform for modern applications. As part of the new investment, Francisco Partners’ CIO, David Golob, will join the Redis Labs board of directors, and operating partner Eran Gorev will join as a board observer.

Redis Labs was founded in 2011 by Ofer Bengal (CEO) and Yiftach Shoolman (CTO) and is the home of Redis, the world’s most popular in-memory database. Bengal and Shoolman built a NoSQL database platform using native data structures that can serve app requests directly from memory. Later, in 2015, Salvatore Sanfilippo, the original developer of Redis, joined Redis Labs to lead its open source development.

Redis Labs is also the commercial provider of Redis Enterprise, billed as the world’s fastest database, which makes use of modern in-memory technologies like NVMe (non-volatile memory express) and persistent memory, enabling cost-effective deployment over multiple public clouds and on-premise data centers. Redis Enterprise also comes with a variety of data modeling techniques, including Streams, Graph, Document, and Machine Learning, along with a real-time search engine. Redis Labs has been ranked as a leader in top analyst reports on NoSQL, in-memory databases, operational databases, and database-as-a-service, and is trusted by seven Fortune 10 companies. It has also been voted the most loved database and rated the most popular database container.

“We are thrilled to be partnering with Redis Labs’ team as the company scales up globally to meet the needs of the Internet economy”, said Matt Spetzler, partner and co-head of Europe at Francisco Partners.
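The “native data structures” mentioned above are the heart of Redis’s programming model. As a rough illustration (assuming a Redis server on localhost and the redis-py client; the key names here are invented for the example), commands operate directly on in-memory lists, hashes, and sorted sets:

```python
import redis

r = redis.Redis(host="localhost", port=6379)
r.lpush("recent:logins", "alice", "bob")                    # list
r.hset("user:42", mapping={"name": "Ada", "plan": "pro"})   # hash
r.zadd("leaderboard", {"alice": 1500, "bob": 1320})         # sorted set

# Reads are served straight from these in-memory structures,
# with no rows or serialized documents in between.
print(r.lrange("recent:logins", 0, -1))
print(r.hgetall("user:42"))
print(r.zrange("leaderboard", 0, -1, withscores=True))
```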
Redis 5 is now out
RedisGraph v1.0 released, benchmarking proves it’s 6-600 times faster than existing graph databases
Redis Cluster Features Overview


Facebook’s AI Chief at ISSCC talks about the future of deep learning hardware

Bhagyashree R
19 Feb 2019
4 min read
Yesterday, at the ongoing IEEE International Solid-State Circuits Conference (ISSCC), Yann LeCun, Facebook AI Research director, presented a paper that touched upon the latest trends and the future of deep learning hardware. ISSCC is a five-day event in San Francisco where researchers present the current advances in solid-state circuits and systems-on-a-chip. In his presentation, LeCun highlighted several AI trends companies should consider in the coming years. Here are some of the highlights:

Machines should be given some “common sense”

With the advancements in deep learning, computer understanding of images, audio, and text has improved, allowing developers to build new applications such as information search and filtering, autonomous driving, real-time language translation, and virtual assistants. These advancements, however, are heavily dependent on supervised learning, which requires human-annotated data, or on reinforcement learning. LeCun believes that in the coming decades, researchers should put their efforts into making machines learn the way humans do: by mere observation and occasional actions, or in short, in a self-supervised manner. To do that, researchers need to find a way to put some level of “common sense” into machines, which will require deep learning architectures much larger than the ones we have currently. LeCun, in his paper Deep Learning Hardware: Past, Present, and Future, wrote, “If self-supervised learning eventually allows machines to learn vast amounts of background knowledge about how the world works through observation, one may hypothesize that some form of machine common sense could emerge.”

Empowering machines with human-like capabilities would allow them to make complex decisions. Such machines could help with critical problems like detecting hate speech and inappropriate content on Facebook, and could enable virtual assistants to infer context like humans do. Ahead of the presentation, LeCun said in an interview with Business Insider, "There are cases that are very obvious, and AI can be used to filter those out or at least flag for moderators to decide. But there are a large number of cases where something is hate speech but there's no easy way to detect this unless you have a broader context ... For that, the current AI tech is just not there yet."

Machine learning chips that can fit in everyday devices

LeCun is hopeful that in the future we will see computer chips that can fit in everyday devices such as vacuum cleaners and lawnmowers. With a machine learning chip incorporated, any device will be able to make smart decisions; for instance, a lawnmower will be able to recognize the difference between weeds and garden roses. Currently, we do have mobile devices with AI built in to do things like recognizing a user’s face to unlock the device, and in the coming years more work will be put into making mobile computing chips more sophisticated.

LeCun also spoke about the need for hardware specifically designed for deep learning. Current hardware pushes developers to use batches of data in the learning and optimization phases of machine learning models, and this will change in the coming years. “If you run a single image, you’re not going to be able to exploit all the computation that’s available to you in a GPU. You’re going to waste resources, basically, so batching forces you to think about certain ways of training neural nets,” he said.
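To make the batching point concrete, here is a small timing sketch (an arbitrary toy network, not from LeCun's paper): one forward pass over a batch of 256 images typically keeps the hardware far busier than 256 single-image passes.

```python
import time
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"
model = nn.Sequential(  # an arbitrary small convolutional net
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
    nn.Flatten(), nn.Linear(64 * 32 * 32, 10),
).to(device).eval()

images = torch.rand(256, 3, 32, 32, device=device)

def timed(fn):
    # Synchronize around the call so GPU work is fully counted.
    if device == "cuda":
        torch.cuda.synchronize()
    start = time.perf_counter()
    fn()
    if device == "cuda":
        torch.cuda.synchronize()
    return time.perf_counter() - start

with torch.no_grad():
    one_by_one = timed(lambda: [model(img.unsqueeze(0)) for img in images])
    batched = timed(lambda: model(images))
print(f"256 single-image passes: {one_by_one:.3f}s; one batch of 256: {batched:.3f}s")
```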
A new programming language for deep learning, more efficient than Python

LeCun believes that deep learning now needs a new programming language that is much more efficient than Python. In an interview with VentureBeat, he said, “There are several projects at Google, Facebook, and other places to kind of design such a compiled language that can be efficient for deep learning, but it’s not clear at all that the community will follow, because people just want to use Python.”

He believes that the imaginations of AI researchers and computer scientists tend to be tied to the hardware and software tools available: “The kind of hardware that’s available has a big influence on the kind of research that people do, and so the direction of AI in the next decade or so is going to be greatly influenced by what hardware becomes available. It’s very humbling for computer scientists because we like to think in the abstract that we’re not bound by the limitation of our hardware, but in fact, we are.”

To learn about the other trends LeCun shared, check out the Facebook AI blog.

Using deep learning methods to detect malware in Android Applications
Researchers introduce a deep learning method that converts mono audio recordings into 3D sounds using video scenes
Stanford researchers introduce DeepSolar, a deep learning framework that mapped every solar panel in the US

Splunk will no longer be available for Russian companies

Prasad Ramesh
19 Feb 2019
2 min read
Splunk announced that it will no longer sell its services to Russian companies, either directly or via partners. This also includes companies whose headquarters are situated in Russia. Splunk will continue to provide support for existing accounts and services, but any renewals or expansions of accounts will not be entertained. Security researcher @SwiftOnSecurity spotted this first.
https://twitter.com/SwiftOnSecurity/status/1097694742706556928

Some users are encouraging the move and also suggesting the same be done for China:
https://twitter.com/Primed_Mover/status/1097717830580166657

Splunk plans to continue services globally with this one exception. The announcement on the Splunk website explains that this decision is effective for: “...opportunities with technical partners, resellers, distributors and vendors. It also applies to business with subsidiaries based in countries outside of Russia whose parent company is in Russia, or who would use the software or services within the territory.”

Splunk is a business intelligence tool with a web UI; the company also provides security information and event management (SIEM). It’s not very clear why Splunk decided to drop support for Russia. Malicious cyber crimes like disinformation propaganda and hacks have been traced back to Russia in the past, and perhaps Splunk does not want to be linked to such activities or actors.

Splunk leverages AI in its monitoring tools
Splunk introduces machine learning capabilities in Splunk Enterprise and Splunk Cloud
Why should enterprises use Splunk?


Apple acquires PullString to possibly help improve Siri and other IoT-enabled gadgets

Amrata Joshi
19 Feb 2019
2 min read
Apple is picking up speed in the AI race to catch up with Google and Amazon, as is clear from its latest move of acquiring PullString, a San Francisco-based AI startup, Axios reports. Founded in 2011 by a group of ex-Pixar executives, PullString specializes in helping companies build conversational voice apps for assistants such as Alexa and Google Assistant. PullString was originally used to power voice apps for toys, including Hello Barbie in 2015. According to the report by Axios, PullString might help Apple’s Siri better compete with Alexa; the startup had further broadened its services to IoT-enabled gadgets such as Amazon Echo and Google Assistant devices.

The startup has raised around $44 million in venture capital from firms like Greylock, CRV, True Ventures, Khosla Ventures, and First Round Capital. According to PitchBook, its most recent post-money valuation was just north of $160 million. The deal between Apple and PullString is said to be around $30-$40 million, though it is still not clear how Apple will benefit from the deal monetarily.

As Apple is still behind both Google and Amazon in terms of adoption of voice apps and openness to developers, this latest move might help the company close the gap. Apple users are hoping this news turns out to be good for them, but only time will tell.
https://twitter.com/mordacai/status/1096580336434200576

Apple announces the iOS 12.1.4 with a fix for its Group FaceTime video bug
Apple and Google slammed by Human Rights groups for hosting Absher, a Saudi app that tracks women
Apple reinstates Facebook and Google Developer Certificates, restores the ability to run internal iOS apps


China’s Huawei Technologies accused of stealing Apple’s trade secrets, reports The Information

Amrata Joshi
19 Feb 2019
4 min read
China’s Huawei Technologies, which was recently accused by the U.S. government of stealing trade secrets, has again come into the spotlight for using tactics to steal Apple’s trade secrets, The Information reports. The tactics include Huawei engineers approaching Apple's third-party manufacturers and suppliers with promises of big orders. However, instead of using the opportunity to inquire about processes related to Apple’s component production, Huawei used these suspicious meetings to try to reverse engineer technology from Apple and other competitors in the electronics market. Huawei has been trying to obtain technology from rivals, especially from Apple’s suppliers in China.

Huawei has also previously copied a popular feature of Apple’s smartwatch. Last November, a Huawei engineer got in touch with a supplier that helps make Apple’s heart rate sensor. The engineer arranged a meeting on the pretext of offering the supplier a manufacturing contract, and even emailed the executive a photo of material Huawei was considering for a heart rate sensor, saying, “Feel free to suggest a design you already have experience with.” But the supplier didn’t leak any details. In a statement to The Information, an Apple executive said, "They were trying their luck, but we wouldn't tell them anything." The Apple Watch has been approved by the U.S. Food and Drug Administration, while Huawei’s smartwatch didn’t receive good feedback, with users complaining about the performance of its heart rate monitor.

According to a Huawei spokesperson interviewed by The Information, “In conducting research and development, Huawei employees must search and use publicly available information and respect third-party intellectual property per our business-conduct guidelines.”

Reportedly, the Huawei case is adding to the fight between the U.S. and China. U.S. companies such as Motorola and Cisco Systems have made similar claims against Huawei in civil lawsuits. According to the report by The Information, Akhan Semiconductor, a Chicago-based company that helps make durable smartphone glass, said it cooperated with a federal investigation into a theft of its intellectual property by Huawei. Huawei has been accused of using the prospect of a business relationship with Akhan to acquire samples of its glass, which Huawei took and studied.

According to The Information, Huawei encouraged its employees to steal information and post it on an internal company website, and gave employees an email address where they could send such information. Huawei had a formal program for rewarding employees who stole information, with bonuses that increased based on the confidential value of the information, and the company assured employees they wouldn’t be punished for taking such actions.

Huawei was also suspected of copying an Apple connector developed in 2016 that made the MacBook Pro hinge thinner; last year, Huawei’s MateBook Pro showed up with a similar component made of 13 similar parts assembled in the same manner. A former Apple employee who interviewed at Huawei was constantly asked about Apple’s upcoming products and technological features. The former employee didn’t give any details and stopped interviewing at Huawei, saying, “It was clear they were more interested in trying to learn about Apple than they were in hiring me.”

People are shocked by the tactics used these days. A comment on HackerNews reads, “the bar for trade secrets theft is pretty low these days.” A few others think that China is being targeted by the media. Another comment reads, “Is it just me or does there seem to be a mainstream media narrative trying to stoke the fires of nationalism against China with Huawei being the current lightning rod?”

Apple announces the iOS 12.1.4 with a fix for its Group FaceTime video bug
Apple and Google slammed by Human Rights groups for hosting Absher, a Saudi app that tracks women
Apple reinstates Facebook and Google Developer Certificates, restores the ability to run internal iOS apps


GAO recommends a US version of the GDPR privacy laws

Savia Lobo
18 Feb 2019
2 min read
Last week, the US Government Accountability Office (GAO) released a report recommending the development of internet data privacy legislation to enhance consumer protections, similar to the EU's General Data Protection Regulation (GDPR). The GAO report was requested by the House Energy and Commerce Committee two years ago, and the committee has scheduled a hearing for February 26, during which it will discuss GAO’s findings and the possibility of drafting the US' first federal-level internet privacy law.

GAO officials said, “Recent developments regarding Internet privacy suggest that this is an appropriate time for Congress to consider comprehensive Internet privacy legislation.” The GAO recommended that the Federal Trade Commission (FTC) be put in charge of overseeing internet privacy enforcement, and its investigators cited the Facebook Cambridge Analytica scandal as an example of why a federal-level internet privacy law is important. According to ZDNet, some of the examples cited include:

The dangers to user privacy due to the lack of regulation and oversight in the ever-growing Internet of Things (IoT) sector, where devices collect massive amounts of information without users' knowledge.
Automakers collecting data from smart car owners.
The lack of federal oversight over companies that collect and resell user information.
The lack of protections for mobile users against secret data collection practices.

House Energy and Commerce Chairman Frank Pallone, Jr. (D-NJ), the official who requested the report in 2017, said, “This detailed GAO report makes clear that now is the time for comprehensive congressional action on privacy that should include ensuring any agency that oversees consumer privacy has the tools to protect consumers. These recommendations and findings will be helpful as we look to develop privacy legislation in the coming months.”

For its report, the GAO also analyzed the FTC's previous 101 internet privacy investigations and took into consideration feedback from the private sector, academia, advocacy groups, other government agencies, and nine former FTC and FCC top-ranking officials, including seven former commissioners. To know more about this news in detail, read the complete GAO report.

U.S Government Accountability Office (GAO) reports U.S weapons can be easily hacked
GDPR complaint claims Google and IAB leaked ‘highly intimate data’ of web users for behavioral advertising
French data regulator, CNIL imposes a fine of 50M euros against Google for failing to comply with GDPR

Google AI researchers introduce PlaNet, an AI agent that can learn about the world using only images

Natasha Mathur
18 Feb 2019
2 min read
The Google AI team, in collaboration with DeepMind, announced a new open source “Deep Planning Network,” called PlaNet, last week. PlaNet is an AI agent that learns a world model using only image inputs and then plans with this model to gain experience. PlaNet can solve a variety of image-based control tasks while competing with advanced model-free agents. The Google AI team is also releasing the source code for the research community to further explore and build upon PlaNet.

How does PlaNet work?

PlaNet depends on a compact sequence of hidden, or latent, states. This is called a latent dynamics model: instead of predicting directly from one image to the next, the latent state is predicted forward. “By compressing the images in this way, the agent can automatically learn more abstract representations, such as positions and velocities of objects, making it easier to predict forward without having to generate images along the way”, states the Google AI team. In a latent dynamics model, the information from the input images is integrated into the hidden state with the help of an encoder network. The hidden state is then projected forward to predict future images and rewards. For planning, past images are encoded into the current hidden state, and the future rewards for multiple candidate action sequences are then predicted.

PlaNet agents trained on different image-based control tasks

PlaNet agents are trained across a variety of image-based control tasks, which pose different challenges such as partial observability and sparse rewards for catching a ball. Moreover, a single PlaNet agent was trained to solve all six tasks; without any changes to the hyperparameters, this multi-task agent achieves the same mean performance as individual agents.

“We advocate for further research that focuses on learning accurate dynamics models on tasks of even higher difficulty, such as 3D environments and real-world robotics tasks. We are excited about the possibilities that model-based reinforcement learning opens up”, states the Google AI team. For more information, check out the official Google AI PlaNet announcement.
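To make the "planning in latent space" idea above concrete, here is a minimal PyTorch sketch, with illustrative network sizes and a simple random-shooting planner. None of the names come from the actual PlaNet code, and the real agent uses a recurrent state-space model and cross-entropy-method planning, so treat this only as the shape of the approach:

```python
import torch
import torch.nn as nn

class LatentDynamicsModel(nn.Module):
    def __init__(self, latent_dim=32, action_dim=4):
        super().__init__()
        # Encoder network: integrates an observed image into the latent state.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 4, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 4, stride=2), nn.ReLU(),
            nn.Flatten(), nn.LazyLinear(latent_dim),
        )
        # Transition model: projects the latent state forward given an action.
        self.transition = nn.GRUCell(action_dim, latent_dim)
        # Reward head: predicts reward from the latent state alone, so
        # planning never has to generate images along the way.
        self.reward_head = nn.Linear(latent_dim, 1)

    def imagine(self, state, actions):
        """Roll latent states forward through action sequences,
        accumulating predicted rewards."""
        total_reward = torch.zeros(state.shape[0], 1)
        for action in actions:  # actions: (horizon, batch, action_dim)
            state = self.transition(action, state)
            total_reward = total_reward + self.reward_head(state)
        return total_reward

def plan(model, state, action_dim=4, candidates=100, horizon=12):
    """Random-shooting planner: score many candidate action sequences
    in latent space and return the best first action."""
    seqs = torch.randn(horizon, candidates, action_dim)
    returns = model.imagine(state.repeat(candidates, 1), seqs)
    return seqs[0, returns.squeeze(-1).argmax()]

model = LatentDynamicsModel()
obs = torch.rand(1, 3, 64, 64)   # one 64x64 RGB observation
state = model.encoder(obs)       # encode the image into the current latent state
action = plan(model, state)      # choose an action by planning in latent space
```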
Google News Initiative partners with Google AI to help ‘deep fake’ audio detection research
Google AI releases Cirq and Open Fermion-Cirq to boost Quantum computation
Google announces the general availability of a new API for Google Docs


Alphabet’s Sidewalk Labs proposes to get a share of Toronto taxes to build a smart city there

Amrata Joshi
18 Feb 2019
4 min read
Alphabet's Sidewalk Labs is proposing that it receive a cut of property taxes, development fees, and increased land value for its work in Toronto, the Toronto Star reports. Sidewalk Labs is planning to turn Toronto's eastern waterfront into a 4.9-acre smart city that would be affordable and sustainable. In a statement to The Star, Daniel Doctoroff, CEO at Sidewalk Labs, said, “Infrastructure Sidewalk is considering funding would otherwise be unfinanceable. If we are prepared to do that when no one else is, we need to get paid back.”

The proposal has the potential to generate around C$6 billion ($4.5 billion) to pay for the infrastructure over 30 years, but it has yet to be approved by the city of Toronto and the public. The project plan drawn up by Sidewalk Labs calls for light rail transit and 2,500 homes, 40 percent of which would be below market price. According to the company, the tall-timber factory it is currently planning will create 4,000 jobs. These would initially be financed by Sidewalk, but the company plans to recoup the investment through the various taxes and fees.

In an email to The Canadian Press, Sidewalk spokeswoman Keerthana Rang said the company does not intend to develop the entire eastern waterfront; instead, Sidewalk will develop about 15 percent, leaving the rest to be developed “just like any other neighborhood in the city.”

Last week, Amazon scrapped its plans to set up its second headquarters in New York after facing a backlash. Just like Amazon's HQ2 project, Sidewalk Labs has received backlash from locals who are worried about the company's lack of transparency and about data privacy. When asked about the public backlash, Micah Lasher, head of policy and communications at Sidewalk Labs, said he expected people would not pre-judge what the company is proposing, and that public discussion would be an important part of the process.

Toronto Mayor John Tory said, “The City and Waterfront Toronto have not received any formal proposal at this time and no permissions or dispensations have been granted. Any final proposal ... will be given full public scrutiny ... and, ultimately, consideration by Waterfront Toronto and City Council.” But Paula Fletcher, a Toronto city councillor, was not happy with the idea: “I was terribly shocked because this was not within the scope of the project. I think it’s a big credibility problem for everybody.”

It seems tech companies are increasingly opting for the PPP (public-private partnership) model, which looks proven in that it saves governments from spending large amounts of money up front. According to many, however, it is not a good arrangement, as the money ultimately comes from people’s pockets. One user commented on HackerNews, “The vast majority of public-private partnerships really end up being 'public eats the costs, but private entities win all the benefits.' It's gotten to the point that the mere existence of the phrase 'public-private partnership' is a red flag that something corrupt is happening. (If it was a good faith effort and all above board, then it would all be done publicly in the first place).” Another user commented, “The fundamental problem of any PPP is that a lot can go wrong or change in 20, 50 years. It's impossible to predict and anticipate future economic climate so both parts try to set up adversarial contracts where they are protected as much as possible.”

To know more about this news, check out the post by The Star.

Shareholders sue Alphabet’s board members for protecting senior execs accused of sexual harassment
Alphabet’s Waymo to launch the world’s first commercial self driving cars next month
Richard DeVaul, Alphabet executive, resigns after being accused of sexual harassment


UK lawmakers publish a report after 18-month-long investigation condemning Facebook’s disinformation and fake news practices

Sugandha Lahoti
18 Feb 2019
4 min read
It seems that the bad days for Facebook are never-ending. Today, the Digital, Culture, Media and Sport (DCMS) Committee published its final report on disinformation and ‘fake news’, touting Facebook’s handling of personal data, and its use for political campaigns, as prime areas for inspection by regulators. The report comes after the UK Parliament committee spent more than 18 months investigating Facebook and its privacy practices. An interim report, published in July 2018, offered the UK government a number of recommendations; the final report offers new recommendations as well as repeating earlier ones.

The interim report proposed a code of ethics that all tech companies should agree to uphold. In the final report, the MPs recommend that platforms be subject to a Compulsory Code of Ethics overseen by an independent regulator, with companies that fail to meet rules on harmful or illegal content facing hefty fines.

The committee was severely critical of Facebook, condemning Mark Zuckerberg for failing to answer the members’ questions: “By choosing not to appear before the Committee and by choosing not to respond personally to any of our invitations, Mark Zuckerberg has shown contempt towards both the UK Parliament and the ‘International Grand Committee’, involving members from nine legislatures from around the world,” the report reads. Damian Collins MP, Chair of the DCMS Committee, said, “Even if Mark Zuckerberg doesn’t believe he is accountable to the UK Parliament, he is to the billions of Facebook users across the world. Evidence uncovered by my Committee shows he still has questions to answer yet he’s continued to duck them, refusing to respond to our invitations directly or sending representatives who don’t have the right information.”

In December 2018, the committee published a cache of Facebook internal documents, including emails sent between CEO Mark Zuckerberg and other senior executives, relating to a company called Six4Three. The documents revealed that Facebook monetized its valuable user data, allowing apps to use Facebook to grow their networks as long as doing so increased usage of Facebook, imposing strict limits on possible competitor access, and much more. For the final report, the committee has published more evidence from the Six4Three documents, which it says demonstrates “Facebook's aggressive action against certain apps and highlights the link between Friends' data and the financial value of the developers' relationship with Facebook.”

Facebook was also condemned over Russian meddling in elections. The committee has urged the Government to make a statement about the number of investigations being carried out into Russian interference in UK politics, and says Facebook and other social media platforms should be clear that they have a responsibility to comply with the law and not facilitate illegal activity such as foreign influence, disinformation, funding, voter manipulation, and the sharing of data.

To summarize, the DCMS committee calls for:

A Compulsory Code of Ethics for tech companies, overseen by an independent regulator
The regulator to be given powers to launch legal action against companies breaching the code
The Government to reform current electoral communications laws and rules on overseas involvement in UK elections
Social media companies to be obliged to take down known sources of harmful content, including proven sources of disinformation

“We consider that data transfer for value is Facebook’s business model and that Mark Zuckerberg’s statement that ‘we’ve never sold anyone’s data’ is simply untrue,” the committee concludes. You can go through the full report here.

Facebook and the U.S. government are negotiating over Facebook’s privacy issues
Regulate Google, Facebook, and other online platforms to protect journalism, says a UK report
German regulators put a halt to Facebook’s data gathering activities and ad business

GNU Health Federation message and authentication server drops MongoDB and adopts PostgreSQL

Melisha Dsouza
15 Feb 2019
2 min read
Just after Red Hat announced its plans to drop MongoDB from its Satellite system management solution because of its SSPL license, GNU has followed suit. Earlier this week, GNU announced plans to move Thalamus, the GNU Health Federation message and authentication server, from MongoDB to PostgreSQL.

As listed in the post, the main reason for the switch is MongoDB's decision to change the license of its server to the Server Side Public License (SSPL). Because of this decision, many GNU/Linux distributions are no longer including the MongoDB server. In addition, organizations like the OSI and the Free Software Foundation have shown reluctance to accept the new license, a large part of the libre software community has rejected it, and GPL versions of MongoDB have seen an immediate end of support; together, these factors led to the adoption of PostgreSQL for Thalamus.

Dr. Luis Falcon, President of GNU Solidario, says that one of the many reasons for choosing PostgreSQL was its JSON(B) support, which provides the flexibility and scalability found in document-oriented engines. The upcoming Thalamus server will be designed to support PostgreSQL. To stay updated on further progress, head over to the GNU blog.
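To make the JSON(B) point concrete, here is a small sketch of storing and querying schemaless documents in PostgreSQL, assuming the psycopg2 driver and a local database. The table and field names are invented for illustration and are not Thalamus's actual schema:

```python
import psycopg2
from psycopg2.extras import Json

conn = psycopg2.connect("dbname=health_demo")  # assumed local database
with conn, conn.cursor() as cur:
    cur.execute("""
        CREATE TABLE IF NOT EXISTS messages (
            id   serial PRIMARY KEY,
            body jsonb NOT NULL
        )
    """)
    # Insert an arbitrary document, much as one would in a
    # document-oriented store.
    cur.execute(
        "INSERT INTO messages (body) VALUES (%s)",
        [Json({"sender": "node-1", "type": "auth", "ok": True})],
    )
    # The @> containment operator queries inside documents; a GIN
    # index on `body` would keep such lookups fast at scale.
    cur.execute("SELECT body FROM messages WHERE body @> %s",
                [Json({"type": "auth"})])
    print(cur.fetchall())
```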
GNU Bison 3.3 released with major bug fixes, yyrhs and yyphrs tables, token constructors and more
GNU ed 1.15 released!
GitHub now supports the GNU General Public License (GPL) Cooperation Commitment as a way of promoting effective software regulation


Facebook and the U.S. government are negotiating over Facebook’s privacy issues

Amrata Joshi
15 Feb 2019
2 min read
Facebook has been in the news for its data breaches and its data sharing practices for quite some time now. Last month, advocacy groups such as the Open Markets Institute, Color of Change, and the Electronic Privacy Information Center, among others, wrote to the Federal Trade Commission, requesting that the government intervene in how Facebook operates. The letter included a list of actions the FTC could take, including a multibillion-dollar fine and changes to the company’s hiring practices. The advocacy groups wrote to the FTC, “The record of repeated violations of the consent order can no longer be ignored. The company’s (Facebook’s) business practices have imposed enormous costs on the privacy and security of Americans, children, and communities of color, and the health of democratic institutions in the United States and around the world.”

According to today’s report by the Washington Post, the U.S. government and Facebook are negotiating a settlement over Facebook’s privacy issues that could require the company to pay a multibillion-dollar fine. The FTC has been investigating the revelations around Facebook’s Cambridge Analytica scandal; the investigation centers on whether the sharing of data with Cambridge Analytica and other privacy disputes violated a 2011 agreement with the FTC. According to the Washington Post, the FTC and Facebook haven’t yet agreed on an amount. Facebook reported $16.9 billion in fourth-quarter revenue and a profit of $6.9 billion. An eventual settlement might also require changes in how Facebook does business. Facebook has so far declined to comment on the Washington Post report, with a spokeswoman saying, “We have been working with the FTC and will continue to work with the FTC.” To know more about this news, check out the official report by the Washington Post.

Advocacy groups push FTC to fine Facebook and break it up for repeatedly violating the consent order and unfair business practices
Facebook hires top EFF lawyer and Facebook critic as WhatsApp privacy policy manager
Facebook pays users $20/month to install a ‘Facebook Research’ VPN that spies on their phone and web activities, TechCrunch reports