
Tech News - Data

Microsoft acquires Citus Data with plans to create a ‘Best Postgres Experience’

Melisha Dsouza
25 Jan 2019
3 min read
Yesterday, Microsoft announced that it has acquired Citus Data, a startup that specializes in big data and analytics. Citus is an extension to the open source database management system PostgreSQL that transforms it into a distributed database. The startup was founded in 2011 and, apart from the Citus extension itself, its 'Citus Cloud' database-as-a-service powers billions of transactions every day, making Citus the world’s first horizontally scalable relational database that can run on premises or as a fully managed service in the cloud.

Citus has varied applications, ranging from SaaS companies who run their core applications on Citus Cloud and scale their business on demand, to businesses using Citus to power their real-time analytics dashboards. Citus also states that it has helped many Fortune 100 companies migrate to an open, horizontally scalable Postgres ecosystem, improving developer productivity while providing the scalability to power their workloads without re-architecting their applications.

In a blog post, Umur Cubukcu, Ozgun Erdogan, and Sumedh Pathak, co-founders of Citus Data, said that as part of Microsoft, “we will stay focused on building an amazing database on top of PostgreSQL that gives our users the game-changing scale, performance, and resilience they need.” Adding to this point, Microsoft said, “Both Citus and Microsoft share a mission of openness, empowering developers, and choice. And we both love PostgreSQL. We are excited about joining forces, and the value that doing so will create: Delivering to our community and our customers the world’s best PostgreSQL experience.”

Acquiring Citus is a step towards Microsoft’s commitment to open source technologies as well as to enhancing Azure PostgreSQL performance and scalability as customer workloads keep expanding. Earlier this month, DB-Engines conferred the title of DBMS of the Year on PostgreSQL.
Microsoft and Citus Data have committed themselves to enabling customers to scale complex multi-tenant SaaS applications and to accelerating time to insight with real-time analytics over huge amounts of data, all with the familiar PostgreSQL tools.

Developers have received this news well. Twitter saw many users commenting that the decision was a smart one, since PostgreSQL is widely used among developers.

https://twitter.com/izotov/status/1088563182006923264

https://twitter.com/satyanadella/status/1088578781663571975

The price of the acquisition was not disclosed. To know more about this announcement, head over to Microsoft's official blog.

NYT says Facebook has been disclosing personal data to Amazon, Microsoft, Apple and other tech giants; Facebook denies claims with obfuscating press release

Microsoft urgently releases Out-of-Band patch for an active Internet Explorer remote code execution zero-day vulnerability

Citus Data to donate 1% of its equity to non-profit PostgreSQL organizations
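Citus's horizontal scaling works by hashing a distribution column and routing each row of a distributed table to a shard on one of the worker nodes (in Citus SQL this is set up with `SELECT create_distributed_table('events', 'tenant_id')`). The routing idea can be illustrated with a minimal Python sketch — a toy model of hash sharding, not Citus's actual implementation:

```python
import zlib

class ShardRouter:
    """Toy hash-based shard router, loosely modeled on how a
    distributed table maps rows to shards by a distribution column.
    This is an illustrative sketch, not the Citus planner."""

    def __init__(self, workers, shard_count=32):
        self.workers = workers
        self.shard_count = shard_count

    def shard_for(self, distribution_value):
        # Hash the distribution column value into a shard id.
        h = zlib.crc32(str(distribution_value).encode("utf-8"))
        return h % self.shard_count

    def worker_for(self, distribution_value):
        # Shards are spread round-robin across worker nodes.
        shard = self.shard_for(distribution_value)
        return self.workers[shard % len(self.workers)]

router = ShardRouter(["worker-1", "worker-2", "worker-3"])
# Rows for the same tenant always land on the same shard/worker,
# which is what lets a multi-tenant SaaS query stay on one node.
assert router.worker_for("tenant-42") == router.worker_for("tenant-42")
```

Because placement is a pure function of the distribution column, queries filtered on that column can be routed to a single worker instead of fanning out to the whole cluster.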

Hadoop 3.2.0 released with support for node attributes in YARN, Hadoop submarine and more

Amrata Joshi
24 Jan 2019
3 min read
The team at Apache Hadoop released Apache Hadoop 3.2.0, an open source software platform for distributed storage and processing of large data sets. This version is the first in the 3.2 release line and is not yet generally available or production ready.

What’s new in Hadoop 3.2.0?

Node attributes support in YARN
This release features node attributes, which help in tagging nodes with multiple labels based on their attributes, and in placing containers based on expressions over these labels. Node attributes are not associated with any queue, so there is no need for queue resource planning and authorization for attributes.

Hadoop Submarine on YARN
This release comes with Hadoop Submarine, which enables data engineers to develop, train and deploy deep learning models in TensorFlow on the same Hadoop YARN cluster where the data resides. It also allows jobs to access data and models in HDFS (Hadoop Distributed File System) and other storage systems. It supports user-specified Docker images and customized DNS names for roles, such as tensorboard.$user.$domain:6006.

Storage policy satisfier
The storage policy satisfier lets HDFS applications move blocks between storage types as they set storage policies on files and directories. It is also a solution for decoupling storage capacity from compute capacity.

Enhanced S3A connector
This release comes with an enhanced S3A connector, including better resilience to throttled AWS S3 and DynamoDB IO.

ABFS filesystem connector
It supports the latest Azure Data Lake Gen2 storage.

Major improvements
The jdk1.7 profile has been removed from the hadoop-annotations module. Redundant logging related to tags has been removed from configuration. The ADLS connector has been updated to use the current SDK version (2.2.7). This release includes LocalizedResource size information in the NM download log for localization. This version of Apache Hadoop adds the ability to configure auxiliary services from HDFS-based JAR files.
This release also adds the ability to specify user environment variables individually. The debug messages in MetricsConfig.java have been improved, and capacity scheduler performance metrics have been added. Support for node labels in opportunistic scheduling has been added as well.

Major bug fixes
The issue with logging for split-dns multihome has been resolved. The snapshotted encryption zone information in this release is immutable. A shutdown routine has been added in HadoopExecutor to ensure a clean shutdown. Registry entries are now deleted from ZK on ServiceClient. The javadoc of package-info.java has been improved. An NPE in AbstractSchedulerPlanFollower has been fixed.

To know more about this release, check out the release notes on Hadoop’s official website.

Why did Uber create Hudi, an open source incremental processing framework on Apache Hadoop?

Uber’s Marmaray, an Open Source Data Ingestion and Dispersal Framework for Apache Hadoop

Setting up Apache Druid in Hadoop for Data visualizations [Tutorial]
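The node attributes feature described above boils down to matching a container's placement requirements against key/value labels on each node. A minimal, hypothetical Python sketch of that matching idea (node names and attribute keys invented for illustration — this is not the YARN scheduler API):

```python
# Toy model of node-attribute matching: each node carries a set of
# attribute key/value labels, and a placement request is a set of
# required pairs. This mirrors the idea behind YARN node attributes,
# not the actual YARN implementation.
nodes = {
    "node-1": {"os": "centos7", "gpu": "true", "zone": "eu-west"},
    "node-2": {"os": "ubuntu18", "gpu": "false", "zone": "us-east"},
    "node-3": {"os": "centos7", "gpu": "true", "zone": "us-east"},
}

def eligible_nodes(required):
    """Return the nodes whose attributes include every required pair."""
    return sorted(
        name for name, attrs in nodes.items()
        if all(attrs.get(k) == v for k, v in required.items())
    )

# Place a GPU training container only on CentOS nodes that have a GPU.
assert eligible_nodes({"os": "centos7", "gpu": "true"}) == ["node-1", "node-3"]
```

Unlike node labels, attributes carry no capacity semantics, which is why no queue resource planning is needed: the scheduler only filters candidate nodes by the predicate.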

TensorFlow 1.13.0-rc0 releases!

Natasha Mathur
24 Jan 2019
3 min read
The TensorFlow team released the first release candidate of TensorFlow 1.13.0, TensorFlow 1.13.0-rc0, yesterday. It brings major bug fixes, improvements and other changes. Let’s have a look at the major highlights.

Major improvements
In TensorFlow 1.13.0-rc0, TensorFlow Lite has been moved from contrib to core. This means that the Python modules are now under tf.lite and the source code is now under tensorflow/lite instead of tensorflow/contrib/lite. TensorFlow GPU binaries are now built against CUDA 10, and NCCL has been moved to core.

Behavioural and other changes
Conversion of Python floating types to uint32/64 (i.e. matching the behaviour of other integer types) in tf.constant has been disallowed. The documentation describing the rounding mode used in quantize_and_dequantize_v2 has been updated. The performance of GPU cumsum/cumprod has been increased by up to 300x. Support has been added for weight decay in most TPU embedding optimizers, such as AdamW and MomentumW. An experimental Java API has been added for injecting TensorFlow Lite delegates, and support for strings has been added to the TensorFlow Lite Java API. tf.spectral has been merged into tf.signal for TensorFlow 2.0.

Bug fixes
tensorflow::port::InitMain() must now be called before using the TensorFlow library; programs that fail to do this are not portable to all platforms. saved_model.loader.load has been deprecated and is replaced by saved_model.load, and saved_model.main_op is replaced by tf.compat.v1.saved_model.main_op in V2. tf.QUANTIZED_DTYPES has been deprecated and is changed to tf.dtypes.QUANTIZED_DTYPES. sklearn imports have been updated for deprecated packages. The confusion_matrix op is now exported as tf.math.confusion_matrix instead of tf.train.confusion_matrix. An ignore_unknown argument has been added to parse_values that suppresses ValueError for unknown hyperparameter types.
A tf.linalg.matvec convenience function has been added. tf.data.Dataset.make_one_shot_iterator() has been deprecated in V1, and tf.compat.v1.data.make_one_shot_iterator() has been added. tf.data.Dataset.make_initializable_iterator() has been deprecated in V1 and removed from V2, and tf.compat.v1.data.make_initializable_iterator() has been added. The XRTCompile op can now return the ProgramShape resulting from the XLA compilation as a second return argument. XLA HLO graphs can now be rendered as SVG/HTML.

For more information, check out the complete TensorFlow 1.13.0-rc0 release notes.

TensorFlow 2.0 to be released soon with eager execution, removal of redundant APIs, tf function and more

Building your own Snapchat-like AR filter on Android using TensorFlow Lite [Tutorial]

TensorFlow 1.11.0 releases
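The quantize_and_dequantize family of ops mentioned above maps floats onto a small integer grid and back, which is the core trick behind TensorFlow Lite's 8-bit models. A simplified pure-Python sketch of uniform 8-bit quantization (an illustration of the round-trip only — the real TensorFlow kernel also handles signed ranges, per-axis scales, and configurable rounding modes):

```python
def quantize_dequantize(values, num_bits=8):
    """Uniformly quantize floats to num_bits levels and map them back.

    Simplified sketch of the round-trip performed by ops like
    quantize_and_dequantize_v2; not the TensorFlow implementation.
    """
    lo, hi = min(values), max(values)
    levels = (1 << num_bits) - 1          # e.g. 255 levels for 8 bits
    scale = (hi - lo) / levels or 1.0     # guard against constant input
    quantized = [round((v - lo) / scale) for v in values]   # ints 0..255
    return [q * scale + lo for q in quantized]              # back to floats

out = quantize_dequantize([0.0, 0.1, 0.5, 1.0])
# Each reconstructed value is within half a quantization step of its input.
assert all(abs(a - b) <= 0.5 / 255 + 1e-9
           for a, b in zip(out, [0.0, 0.1, 0.5, 1.0]))
```

The rounding mode matters precisely at the half-step boundaries (e.g. 25.5), which is why the release documents it explicitly.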

A brief list of draft bills in US legislation for protecting consumer data privacy

Savia Lobo
24 Jan 2019
3 min read
US lawmakers have begun drafting privacy regulations and are also encouraging enforcement agencies to build a privacy framework that companies can easily follow.

Last week, U.S. Senator Marco Rubio introduced a bill titled the ‘American Data Dissemination (ADD) Act’ to create federal standards of privacy protection for large companies like Google, Amazon, and Facebook. However, this bill largely focuses on data collection and disclosure; experts are therefore concerned that it would ignore the way companies use customers’ data.

Also last week, U.S. Senators John Kennedy and Amy Klobuchar introduced the ‘Social Media Privacy and Consumer Rights Act’, which gives consumers more control over their personal data. This legislation aims to improve transparency, strengthen consumers’ recourse options during a data breach, and ensure companies are compliant with privacy policies that protect consumers.

Another bill, sponsored by Reps. Dutch Ruppersberger, Jim Himes, Will Hurd, and Mike Conaway, was introduced last week to combat theft of U.S. technologies by state actors, including China, and to reduce risks to “critical supply chains.” Ruppersberger said they had long suspected Beijing is using its telecom companies to spy on Americans, and that China is responsible for up to $600 billion in theft of U.S. trade secrets.

Some reintroduced bills

Securing Energy Infrastructure Act
A bill titled the ‘Securing Energy Infrastructure Act’ was proposed by Sens. Jim Risch and Angus King. The bill, reintroduced last Thursday, would push the government to explore new ways to secure the electric grid against cyber attacks. It unanimously passed the Senate in December but was never put to a vote in the House.

Telephone Robocall Abuse Criminal Enforcement and Deterrence Act
On 17th January, Sens. John Thune, R-S.D., and Ed Markey, D-Mass., renewed their call to increase punishments for people running robocall scams.
The Telephone Robocall Abuse Criminal Enforcement and Deterrence, or TRACED, Act would give the Federal Communications Commission more legal leeway to pursue and prosecute robocallers. Under the bill, telecom companies would also need to adopt tools to filter out robocalls. Thune said, “The TRACED Act holds those people who participate in robocall scams and intentionally violate telemarketing laws accountable and does more to proactively protect consumers who are potential victims of these bad actors.”

Federal CIO Authorization Act
The Federal CIO Authorization Act, which Reps. Will Hurd and Robin Kelly reintroduced on Jan. 4, passed the House unanimously on Tuesday. This bill would elevate the federal chief information officer within the White House chain of command and designate both the federal CIO and the federal chief information security officer as presidentially appointed positions. The measure still lacks a Senate counterpart.

Lawmakers have also sent letters to companies including Verizon, T-Mobile, Sprint, and AT&T, asking for information on the companies’ data sharing partnerships with third-party aggregators. These companies have until Jan 30 to respond. Reps. Greg Walden, Cathy McMorris Rodgers, Robert Latta, and Brett Guthrie wrote, “We are deeply troubled because it is not the first time we have received reports and information about the sharing of mobile users’ location information involving a number of parties who may have misused personally identifiable information.”

To know more about these bills in detail, visit the Nextgov website.

Russia opens civil cases against Facebook and Twitter over local data laws

Harvard Law School launches its Caselaw Access Project API and bulk data service making almost 6.5 million cases available

Senator Ron Wyden’s data privacy law draft can punish tech companies that misuse user data

Facebook AI research introduces enhanced LASER library that allows zero-shot transfer across 93 languages

Amrata Joshi
23 Jan 2019
4 min read
Yesterday, the team at Facebook AI Research announced that they have expanded and enhanced their LASER (Language-Agnostic SEntence Representations) toolkit to work with more than 90 languages, written in 28 different alphabets. This accelerates the transfer of natural language processing (NLP) applications to many more languages. The team is now open-sourcing LASER, making it the first such exploration of massively multilingual sentence representations. Currently, 93 languages have been incorporated into LASER. LASER achieves its results by embedding all languages together in a single shared space. The team is also making the multilingual encoder and PyTorch code freely available, and providing a multilingual test set for more than 100 languages.

The Facebook post reads, “The 93 languages incorporated into LASER include languages with subject-verb-object (SVO) order (e.g., English), SOV order (e.g., Bengali and Turkic), VSO order (e.g., Tagalog and Berber), and even VOS order (e.g., Malagasy).”

Features of LASER
It enables zero-shot transfer of NLP models from one language, such as English, to scores of others, including languages where training data is limited.
It handles low-resource languages and dialects.
It provides accuracy for 13 out of the 14 languages in the XNLI corpus.
It delivers results in cross-lingual document classification (the MLDoc corpus).
LASER’s sentence embeddings are strong at parallel corpus mining, establishing a new state of the art in the BUCC 2018 shared task on building and using comparable corpora for three of its four language pairs.
It provides fast performance, processing up to 2,000 sentences per second on GPU.
The sentence encoder is implemented in PyTorch with minimal external dependencies.
LASER supports the use of multiple languages in one sentence.
LASER’s performance improves as new languages are added, and the system keeps learning to recognize the characteristics of language families.
Sentence embeddings
LASER maps a sentence in any language to a point in a high-dimensional space, such that the same sentence in any language ends up in the same neighborhood. This representation can be seen as a universal language in a semantic vector space. The Facebook post reads, “We have observed that the distance in that space correlates very well to the semantic closeness of the sentences.” The sentence embeddings are used to initialize the decoder LSTM through a linear transformation and are also concatenated to its input embeddings at every time step.

The encoder/decoder approach
The approach behind this project is based on neural machine translation: an encoder/decoder architecture, also known as sequence-to-sequence processing. LASER uses one shared encoder for all input languages and a shared decoder for generating the output language. LASER uses a 1,024-dimension fixed-size vector to represent the input sentence. The decoder is told which language to generate; because the encoder has no explicit signal indicating the input language, this encourages it to learn language-independent representations. The team at Facebook AI Research has trained their systems on 223 million sentences of public parallel data, aligned with either English or Spanish. By using a shared BPE vocabulary trained on the concatenation of all languages, low-resource languages can benefit from high-resource languages of the same family.

Zero-shot, cross-lingual natural language inference
LASER achieves excellent results in cross-lingual natural language inference (NLI). The Facebook AI Research team considers the zero-shot setting: they train the NLI classifier on English and then apply it to all target languages with no fine-tuning or target-language resources. For parallel corpus mining, the distances between all sentence pairs are calculated and the closest ones are selected.
For more precision, the margin between the closest sentence and the other nearest neighbors is considered. This search is performed using Facebook’s FAISS library. The team outperformed the state of the art on the BUCC shared task by a large margin: they improved the F1 score from 85.5 to 96.2 for German/English, from 81.5 to 93.9 for French/English, from 81.3 to 93.3 for Russian/English, and from 77.5 to 92.3 for Chinese/English.

To know more about LASER, check out the official post by Facebook.

Trick or Treat - New Facebook Community Actions for users to create petitions and connect with public officials

Russia opens civil cases against Facebook and Twitter over local data laws

FTC officials plan to impose a fine of over $22.5 billion on Facebook for privacy violations, Washington Post reports
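The margin criterion above scores a candidate pair by comparing its similarity to the average similarity of each sentence's own nearest neighbors, which filters out "hub" sentences that are close to everything. A small pure-Python sketch of the idea (toy 3-dimensional vectors; real LASER embeddings are 1,024-dimensional and the search runs in FAISS):

```python
import math

def cos(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def margin_score(x, y, x_neighbors, y_neighbors):
    """Ratio-margin score: cos(x, y) divided by the mean similarity of
    x and y to their own nearest neighbors. Scores well above 1.0 mark
    a pair that is closer than its ambient neighborhood, i.e. a likely
    translation pair. A sketch of the idea, not the LASER codebase."""
    avg = (sum(cos(x, n) for n in x_neighbors) / len(x_neighbors)
           + sum(cos(y, n) for n in y_neighbors) / len(y_neighbors)) / 2
    return cos(x, y) / avg

# Toy embeddings: x and y are near-duplicates; the neighbors are not.
x, y = [1.0, 0.1, 0.0], [0.9, 0.2, 0.1]
neighbors = [[0.1, 1.0, 0.0], [0.0, 0.2, 1.0]]
assert margin_score(x, y, neighbors, neighbors) > 1.0
```

Normalizing by the neighborhood rather than using a raw similarity threshold is what lets one cutoff work across language pairs with different embedding densities.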

Blizzard set to demo Google's DeepMind AI in StarCraft 2

Natasha Mathur
23 Jan 2019
3 min read
Blizzard, an American video game development company, is all set to demonstrate the progress made by Google’s DeepMind AI at StarCraft II, a real-time strategy video game, tomorrow. “The StarCraft games have emerged as a "grand challenge" for the AI community as they're the perfect environment for benchmarking progress against problems such as planning, dealing with uncertainty and spatial reasoning”, says the Blizzard team.

Blizzard partnered with DeepMind during the 2016 BlizzCon, where they announced that they were opening up the research platform for StarCraft II so that everyone in the StarCraft II community could contribute towards advancement in AI research. Ever since then, much progress has been made on the AI research front when it comes to StarCraft II. Only two months back, Oriol Vinyals, Research Scientist at Google DeepMind, shared details of the progress the AI had made in StarCraft II, states the Blizzard team. Vinyals described how the AI, or agent, had learned to perform basic macro-focused strategies, along with defensive moves against cheesy and aggressive tactics such as “cannon rushes”.

Blizzard also posted an update during BlizzCon 2018, stating that DeepMind had been working hard at training its agent to better understand and learn StarCraft II. “Once it started to grasp the basic rules of the game, it started exhibiting amusing behaviour such as immediately worker rushing its opponent, which actually had a success rate of 50% against the 'Insane' difficulty standard StarCraft II AI”, mentioned the Blizzard team.

It has almost become a trend for DeepMind to measure the capabilities of its advanced AI against human opponents in video games. For instance, it made headlines in 2016 when its AlphaGo program managed to defeat world champion Lee Sedol in a five-game match.
AlphaGo had also previously defeated the professional Go player Fan Hui in 2015, who was a three-time European champion of the game at the time. More recently, in December 2018, DeepMind researchers published a full evaluation of AlphaZero in the journal Science, confirming that it is capable of mastering Chess, Shogi, and Go from scratch. Other examples of AI making its way into advanced game learning include OpenAI Five, a team of AI algorithms that beat a team of amateur human video game players in Dota 2, the popular battle arena game, back in June 2018. Later, in August, it managed to beat semi-professional players at Dota 2.

The demonstration of the DeepMind AI in StarCraft II is set for tomorrow at 10 AM Pacific Time. Check out StarCraft’s Twitch channel or DeepMind’s YouTube channel to learn about other recent developments.

Deepmind’s AlphaFold is successful in predicting the 3D structure of a protein making major inroads of AI use in healthcare

Graph Nets - DeepMind’s library for graph networks in Tensorflow and Sonnet

DeepMind open sources TRFL, a new library of reinforcement learning building blocks
Google may pull Google News out of Europe: Bloomberg Report

Melisha Dsouza
23 Jan 2019
3 min read
According to a report published by Bloomberg yesterday, Google may pull its Google News service from Europe. The decision depends on a controversial copyright law that is in the process of being finalized. The law being worked on by European regulators would give publishers the right to demand money from Google and other web platforms when fragments of their articles show up in news search results or are shared by users. Moreover, the rules would also require Google and Facebook to actively prevent music, videos, and other copyrighted content from appearing on their platforms unless the rights holders grant them a license.

On the basis of “a close reading of the rules”, Jennifer Bernal, Google’s public policy manager for Europe, the Middle East, and Africa, says that Google News might quit the continent if regulators are successful in implementing this law.

What does this move mean for Google and other publishers?
Google states that its news service does not earn the company any direct revenue, so pulling Google News out of Europe wouldn’t mean much to the tech giant. However, news publishers would be affected to a certain extent, because publishers earn money through advertisements in search results. Passing the law would mean that Google would have to choose which publishers to license. Bloomberg points out that since bigger publishers offer a broader range of popular content, smaller competitors are likely to lose out on the license and, eventually, on the revenue.

This is not the first time Google has found itself at a crossroads. Bloomberg details a similar incident in 2014, when Google shut its news service after a law was passed requiring Spanish publications to charge aggregators for displaying excerpts of stories.
While Google remained financially unaffected by that move, small publishers lost about 13 percent of their web traffic, according to a 2017 study released by the Spanish Association of Publishers of Periodical Publications.

While the proposal was scheduled to be finalized on Monday, lawmakers failed to come to an agreement and the legislation has stalled for now. You can head over to Bloomberg for more insights on this news.

Google faces pressure from Chinese, Tibetan, and human rights groups to cancel its censored search engine, Project DragonFly

A new privacy bill was introduced for creating federal standards for privacy protection aimed at big tech firms like Facebook, Google and Amazon

Google Home and Amazon Alexa can no longer invade your privacy; thanks to Project Alias!

Trick or Treat - New Facebook Community Actions for users to create petitions and connect with public officials

Amrata Joshi
22 Jan 2019
5 min read
Yesterday, Facebook launched a petition feature called Community Actions, which enables community users to request changes from their local and national elected officials and government agencies, as reported by TechCrunch. The feature has been rolled out to users across the United States.

Users can create a petition, tag public officials or organizations, and get their friends to support their cause. Supporters can discuss the topic of a specific petition with fellow supporters on its page, and can also create events and fundraisers. Facebook will display the number of supporters behind a Community Action, but users will only be able to see the names of supporters they are friends with, or of Pages and public figures. The feature comes with a one-click Support option, which is quite visible in the news feed and reduces the time required for signing up. This, in turn, helps organizations and individuals maximize the size of their community.

In a statement to TechCrunch, a Facebook spokesperson said, “Building informed and civically engaged communities is at the core of Facebook’s mission. Every day, people come together on Facebook to advocate for causes they care about, including by contacting their elected officials, launching a fundraiser, or starting a group. Through these and other tools, we have seen people marshal support for and get results on issues that matter to them. Community Action is another way for people to advocate for changes in their communities and partner with elected officials and government agencies on solutions.”

Lately, Facebook has been working on a number of features designed to get people more involved in their communities. Features such as Town Hall, which gives access to local officials, and the Candidate feature, which allows politicians to pitch on camera, are a few of the steps in this direction. According to TechCrunch, there are some limits: users can’t tag President Donald Trump or Vice President Mike Pence.
This might prevent users from expressing themselves and putting up petitions for or against them. Though Facebook will use a combination of user flagging, proactive algorithmic detection, and human enforcers, the new feature might still be misused in some way or the other. For instance, it could be used to pressure or bully politicians and bureaucrats.

A major issue with this feature is that users can’t take a stand against a Community Action. The discussion feed might not include opposing views, as only supporters can discuss on the thread. This might also lead trolls to falsely back a petition and disrupt the entire discussion thread. In a statement to TechCrunch, Facebook said, “Users will have to share a Community Action to their own feed with a message of disapproval, or launch their own in protest.”

Community Actions might also be used to spread false awareness and bring petitions which are not for the well-being of users. If the support count gets manipulated, a misleading petition could gain support. For example, a few communities could falsely manipulate users through Facebook groups or message threads, so that it would look like there is much more support for a misleading cause than there really is. Another example is a politician creating a Community Action and then manipulating votes based on false posts and comment threads. With Facebook’s WhatsApp now working towards preventing the spread of fake news by restricting forwards to 5 individuals or groups, Facebook’s Community Actions feature might work against that effort.

Users are giving mixed reactions to this news. Some users seem excited about the new feature. One of the comments on HackerNews reads, “That's interesting. I see that Facebook develops more feature to support different initiatives (FB groups, charity pages) and even petition pages.” Other users think that Facebook will gather users’ data based on their political views.
This would help the company in organizing various ad campaigns and generating revenue. One user commented on HackerNews, “Facebook really doesn't care too much what the petitions are about, but is mostly interested in gathering more data on its users' political beliefs so they can allow domestic and foreign campaign spending groups to better target advertisements meant to change or reinforce those beliefs (or suppress civic participation of those with such beliefs) and increase FB's total share of campaign-related ad spend.”

Some users don’t trust Facebook anymore and think that the new feature won’t be secure. Another comment on HackerNews reads, “I'm going to have a really hard time taking any new development coming out of Facebook as genuine, honest or non-privacy invasive. I simply do not foresee my opinion of Facebook, Zuckerberg or anyone still working there changing radically in the near future.” Others are not interested in any sort of political engagement on social media, as they think views get manipulated there. Users are requesting a better sign-up or verification process, which would help keep fake accounts away.

FTC officials plan to impose a fine of over $22.5 billion on Facebook for privacy violations, Washington Post reports

Facebook takes down Russian news agency, Sputnik’s pages for engaging in “coordinated inauthentic behavior”

Facebook open sources Spectrum 1.0.0, an image processing library for better mobile image production

Russia opens civil cases against Facebook and Twitter over local data laws

Savia Lobo
22 Jan 2019
2 min read
On Monday, Russia’s communications watchdog, Roskomnadzor, said that it had opened civil cases against Twitter and Facebook for failing to explain how they plan to comply with local data laws, the Interfax news agency reported. According to Interfax, Facebook and Twitter “have not submitted specific plans and deadlines for the localization of databases of Russian users in the Russian Federation.”

Alexander Zharov, the head of Roskomnadzor, said the “companies have a month, after which the regulator will proceed to concrete actions in their attitude.” Roskomnadzor reported that it had received responses from Facebook and Twitter to a request for information on the localization of Russian users’ data in the territory of the Russian Federation and is analyzing them.

“Russia has introduced tougher internet laws in the last five years, requiring search engines to delete some search results, messaging services to share encryption keys with security services and social networks to store Russian users’ personal data on servers within the country”, Reuters reported. On December 17 last year, the ministry sent letters to Twitter and Facebook about the need to comply with legislation on the localization of data storage for Russian users in the Russian Federation. If the companies refuse the demand or ignore it, they will be fined 5,000 rubles each, and then they will again be given a period of six months to a year to localize the data, said Zharov.

To know more about this news in detail, visit Reuters’ website.

Facebook takes down Russian news agency, Sputnik’s pages for engaging in “coordinated inauthentic behavior”

Monday’s Google outage was a BGP route leak: traffic redirected through Nigeria, China, and Russia

FTC officials plan to impose a fine of over $22.5 billion on Facebook for privacy violations, Washington Post reports


Announcing W3C Publishing Working Group’s updated scope and goals

Melisha Dsouza
22 Jan 2019
2 min read
On 18th January, the W3C Publishing Working Group published its updated scope and goals. The group will be focusing on two things: how to define an ordered sequence of web resources, and how to express metadata about that collection of resources. The W3C defines a web publication as “a single logical entity that may be built from numerous web resources, with a defined order to the content -- chapter two always comes after chapter one.” The official documentation states that the team will now work in a very modular fashion “to meet the needs of a particular segment of the industry.” Affordances such as a user agent remembering where a user stopped reading, or letting users customize the display of a publication, will be moved, along with user agent behaviors and use cases, to the group’s Use Cases and Requirements document. Lastly, the group will update the timeline of its milestones and deliverables to reflect this change of focus. Users can check the main WP spec and the WP Explainer in the next few days for more information, and can look forward to an upcoming specification that will define an audiobook format usable both on the web and in packaged contexts. Other goals: The HTML hr element now has a semantic meaning: it represents a paragraph-level thematic break, such as a transition to another topic within a section of a reference book. Accessibility information is provided for each of the 26 components in The Australian Government Design System. A second, inner border color can be obtained for an element with background-clip. You can head over to W3C’s official documentation for more insights on this news.
CNCF releases 9 security best practices for Kubernetes, to protect a customer’s infrastructure
TensorFlow team releases a developer preview of TensorFlow Lite with new mobile GPU backend support
Elixir 1.8 released with new features and infrastructure improvements

WhatsApp limits users to five text forwards to fight against fake news and misinformation

Amrata Joshi
22 Jan 2019
4 min read
Yesterday, Facebook Inc’s WhatsApp decided to put a global limit on the number of times a user can forward a message, as per Reuters’ report. Under the new rules, which apply worldwide, users are blocked from forwarding a message to more than five individuals or groups, a measure intended to fight the spread of fake news and misinformation. Victoria Grand, Facebook’s global head for policy programs, announced the policy at an event in Jakarta yesterday. With 1.5 billion users on the platform, the concern is that WhatsApp forwards could be used to spread fake news via manipulated texts, photos, videos, and audio hoaxes. Previously, users could forward a message to up to 20 individuals or groups on WhatsApp. The new policy was put in place after rumors spread on social media in July led to killings and lynching attempts in India. Facebook has also previously been used by bad foreign actors to manipulate U.S. elections. Last October, WhatsApp caused trouble in Brazil’s presidential election, when Jair Bolsonaro, the far-right candidate, faced claims of using WhatsApp to spread falsehoods about his opponent. How such platforms influence politics is a real concern. In a statement to the Guardian, Carl Woog, the head of communications at WhatsApp, said, “We settled on five because we believe this is a reasonable number to reach close friends while helping prevent abuse.” A forwarded text is marked in a light grey color but otherwise looks much like any other message, so the labeling done by the WhatsApp team doesn’t really solve the problem. According to some critics, “The design strips away the identity of the sender and allows messages to spread virally with little accountability.” WhatsApp has already taken a few steps to address these challenges.
Last year, the company introduced a feature to label forwarded messages and removed the quick-forward button next to images, video, and audio clips. According to The Guardian, these measures reduced forwarding by 25% globally, and by more than that in India, which has one of the highest forwarding rates in the world. Users have raised questions about the new policy, the biggest being: if fake news can be shared by simply copy-pasting the text, how will those cases be monitored? A few users think the limit of 5 is still too high and recommend 2 instead. The policy may also have limited effect because a group can have up to 256 members: forwarding a message to 5 groups is equivalent to sending it to at most 1,280 users, which hardly slows the spread of fake news. One user commented, “You can still fwd to 5*256 people.” Others suggest having a blacklist; a comment on Hacker News reads, “You can have a blacklist of message texts that are sent to the apps as hashes.” According to some users, though, this is a good step by the team at Facebook. One of the comments read, “It seems like a valid and useful way to slow the rate of propagation of fake news. Much of the current problem is that fake news spreads faster than moderators can make a decision on it, or journalists can fact-check it. If you can keep it in a ‘slow burn’ phase longer, where it's being forwarded along to a handful of people at a time, it's easier to combat.”
Fake news is a danger to democracy. These researchers are using deep learning to model fake news to understand its impact on elections.
Facebook COO, Sandberg’s Senate testimony: On combating foreign influence, fake news, and upholding election integrity
Is Anti-trust regulation coming to Facebook following fake news inquiry made by a global panel in the House of Commons, UK?
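As a rough back-of-the-envelope check of the numbers above (assuming the 5-chat forwarding cap and the 256-member group limit mentioned in the article), the worst-case audience per forwarding hop can be sketched as:

```python
# Worst-case audience one user can reach in a single forwarding "hop",
# given the limits cited in the article. Values are illustrative.
FORWARD_LIMIT = 5      # max chats a message can be forwarded to
MAX_GROUP_SIZE = 256   # max members in a WhatsApp group

def max_reach_per_hop(forward_limit=FORWARD_LIMIT, group_size=MAX_GROUP_SIZE):
    """Upper bound on recipients if every forward goes to a full group."""
    return forward_limit * group_size

print(max_reach_per_hop())  # 1280 recipients from a single user's forwards
```

So even with the cap, one user forwarding to five full groups still reaches up to 1,280 people, which is why critics argue the limit only slows, rather than stops, viral spread.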


TiDB open sources its MySQL/MariaDB compatible data migration (DM) tool

Natasha Mathur
22 Jan 2019
2 min read
TiDB, an open source cloud-native distributed database, made its data migration platform (DM) available as open source today. Data Migration (DM) is an integrated data synchronization task management platform that supports both full data migration and incremental data migration from MySQL/MariaDB into TiDB. It helps reduce operations costs and also makes troubleshooting easier. The tool consists of three major components: DM-master, DM-worker, and dmctl. DM-master handles and schedules the operation of all data synchronization tasks; it stores the topology information of the DM cluster and keeps track of the running state of DM-worker processes and data synchronization tasks. DM-worker, on the other hand, executes specific data synchronization tasks; it manages the storage of configuration information for the data synchronization subtasks and monitors their running state. The third component, dmctl, is a command-line tool for controlling the DM cluster: it creates/updates/drops data synchronization tasks, checks their running state, handles any errors that occur while they run, and verifies the correctness of their configuration. DM is licensed under the Apache License, Version 2.0, allowing users to freely use and modify the platform. This also allows users to contribute new features or bug fixes to make the platform better for everyone. For more information, check out the official DM tool documentation.
FoundationDB open-sources FoundationDB Record Layer with schema management, indexing facilities and more
Red Hat drops MongoDB over concerns related to its Server Side Public License (SSPL)
Facebook open sources Spectrum 1.0.0, an image processing library for better mobile image production


NumPy 1.16 is here and it’s the last release to support Python 2.7

Natasha Mathur
22 Jan 2019
3 min read
The NumPy team released version 1.16 last week. The release brings new features, deprecations, and other improvements. NumPy 1.16 is the last release to support Python 2.7, and it will be maintained as a long-term release with bug fixes until 2020. Let’s have a look at some of the major highlights of this release.

New features

An integrated squared error (ISE) estimator has been added to histogram in NumPy 1.16; ISE is a non-parametric method based on cross-validation. A max_rows keyword has been added to np.loadtxt, which sets the maximum number of rows to read after skiprows, as in numpy.genfromtxt. Modulus operator support has been added for np.timedelta64 operands; the operands may have different units, and the return value always matches the type of the operands. NumPy 1.16 offers improved support for ARM CPUs, accommodating 32- and 64-bit targets as well as big- and little-endian byte ordering. The matmul function is now a ufunc, meaning that both the function and the __matmul__ operator can be overridden by __array_ufunc__. The implementation of matmul has also been changed to use the same BLAS routines as numpy.dot.

New deprecations

In NumPy 1.16, the type dictionaries numpy.core.typeNA and numpy.core.sctypeNA have been deprecated. They were buggy and will be removed in the 1.18 release; users can use numpy.sctypeDict instead. The numpy.asscalar function has been deprecated, as have the numpy.set_array_ops and numpy.get_array_ops functions. The numpy.unravel_index keyword argument dims is deprecated; users can use shape instead.

Other improvements and changes

NumPy builds no longer interact with the host machine shell directly; exec_command has been replaced with subprocess.check_output.
Earlier, a LinAlgError was raised when an empty matrix (with zero rows and/or columns) was passed in; now linalg.lstsq, linalg.qr, and linalg.svd can work with empty arrays. numpy.angle and numpy.expand_dims now work on ndarray subclasses in NumPy 1.16. +array now raises a deprecation warning for non-numerical arrays; earlier, +array unconditionally returned a copy. NDArrayOperatorsMixin can now implement matrix multiplication. For more information, check out the official release notes.
NumPy 1.15.0 release is out!
NumPy drops Python 2 support. Now you need Python 3.5 or later.
Introducing numpywren, a system for linear algebra built on a serverless architecture
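Two of the 1.16 additions mentioned above — the max_rows keyword for np.loadtxt and the modulus operator for np.timedelta64 — can be sketched in a few lines (requires NumPy 1.16 or later):

```python
import io
import numpy as np

# max_rows (new in 1.16): stop reading after a fixed number of rows.
text = io.StringIO("1 2\n3 4\n5 6\n")
data = np.loadtxt(text, max_rows=2)
print(data.shape)  # (2, 2) -- the third row is never read

# Modulus for timedelta64 (new in 1.16): the result keeps the operand type.
remainder = np.timedelta64(17, 'h') % np.timedelta64(5, 'h')
print(remainder)  # 2 hours
```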

Google faces pressure from Chinese, Tibetan, and human rights groups to cancel its censored search engine, Project DragonFly

Natasha Mathur
22 Jan 2019
4 min read
A group of Chinese, Tibetan, and Uighur Muslim communities and human rights activists organized demonstrations outside Google’s offices and headquarters in ten different countries around the world last Friday. The campaign urged Google to drop its censored search engine for China, codenamed “Project DragonFly”. “The app would restrict searches for forbidden or sensitive topics, including ‘human rights’, ‘democracy’, ‘Tiananmen’ and ‘Tibet’..would also facilitate Chinese state surveillance by linking users’ search history with their telephone numbers”, state the protestors. The campaign consists of a coalition of communities that have suffered persecution by the Chinese government. As part of the campaign, leaflets were handed out to Google employees and the public outside the Google offices, making them aware of the dangers related to Project DragonFly. Moreover, the organizers have stated that this is the first of a series of protests that will continue until Google executives confirm that Project DragonFly has been cancelled. This is not the first time Project DragonFly has come under the spotlight. It has faced constant criticism from the public, human rights groups, and the company’s own employees. In November 2018, around 300 Google employees signed a petition protesting Project DragonFly. “We are Google employees and we join Amnesty International in calling on Google to cancel project DragonFly”, they wrote on Medium. Earlier in November, a report from the Intercept revealed how internal conversations at Google shut its legal, privacy, and security teams out of Project DragonFly. The whole project was kept secret from much of the company during the 18 months of its development.
Then in December 2018, the Intercept revealed that an “internal dispute” led to Google shutting down the data analysis system used for the search engine. “This had effectively ended the project, sources said, because the company’s engineers no longer had the tools they needed to build it”, states the Intercept. Also, 170 Tibet coalition groups sent a letter to Google CEO Sundar Pichai in August 2018, informing him of the serious human rights risks posed by DragonFly, but they never received a response. “By choosing to develop Dragonfly, Google is sending a clear message that censorship is okay and is endorsing the government of China’s crackdown against freedom of speech, online freedom and other human rights”, states the letter. Moreover, Google hasn’t spoken out directly about Project DragonFly. When Sundar Pichai was asked about the project during the Congress hearing in December, he said that “us (Google) reaching out and giving users more information has a very positive impact and we feel that calling but right now there are no plans to launch in China”, which many considered quite evasive. Google has faced a barrage of criticism and controversies lately. For instance, last week a group of Googlers launched a public awareness social media campaign to fight against the forced arbitration policy within Google. Similarly, a group of over 85 coalition groups sent letters to Google, Amazon, and Microsoft last week, asking them not to sell their facial surveillance technology to the government. And earlier this month, two shareholders sued Alphabet’s board members for protecting senior execs accused of sexual harassment. “It is utterly shameful that Google’s directors are doing China’s dirty work. Google’s directors must urgently take heed of calls from employees and tens of thousands of global citizens demanding that they immediately halt project dragonfly. If they don’t, Google risks irreversible damage to its reputation,” said Gloria Montgomery, Director at Tibet Society.
OK Google, why are you ok with mut(at)ing your ethos for Project DragonFly?
As Pichai defends Google’s “integrity” ahead of today’s Congress hearing, over 60 NGOs ask him to defend human rights by dropping DragonFly
1k+ Google employees frustrated with continued betrayal, protest against Censored Search engine project DragonFly


Torrent-Paradise uses IPFS for decentralization, possible alternative to Pirate Bay

Melisha Dsouza
22 Jan 2019
2 min read
A developer known by the handle ‘Urban Guacamole’ has launched a new version of Torrent-Paradise, powered by IPFS (the InterPlanetary File System), that provides decentralized torrent search. This stands in contrast to The Pirate Bay, whose centralized nature has left it suffering regular downtimes. The system works much like BitTorrent, making it possible to download files without a central host. Even though the BitTorrent protocol itself is decentralized, TorrentFreak (TF) notes that the ecosystem surrounding it has some weak spots: torrent sites that use centralized search engines face outages and takedowns, disrupting service to users. In a statement to TF, Urban says: “I feel like decentralizing search is the natural next step in the evolution of the torrent ecosystem. File sharing keeps moving in the direction of more and more decentralization, eliminating one single point of failure after another”. Urban further explains that each update of Torrent-Paradise is an IPFS hash, so the site is always available as long as someone is seeding it, even if the servers are down. Decentralization allows search results to be shared between large numbers of systems, which helps the performance of the site as well as its stability and privacy. According to BetaNews, by using IPFS, Torrent-Paradise frees itself from the risk of servers going down and also becomes resistant to blocking and censorship. TF highlights a few issues with using IPFS: it needs to be installed and configured for a server to become a node. Also, IPFS gateways like Cloudflare’s allow anyone to access sites such as Torrent-Paradise through a custom URL; however, this doesn’t help with sharing the site. Another issue is that the site relies on a static index that is only updated once a day rather than in near real-time. The regular Torrent-Paradise website is still accessible to all, alongside the new ad-free IPFS version.
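The content-addressing idea behind IPFS — each version of the site is identified by a hash of its content, so any node holding that content can serve it — can be illustrated with a minimal sketch. This is a toy model using plain SHA-256; real IPFS chunks data into a Merkle DAG and uses multihash-encoded content IDs, and all names below are illustrative:

```python
import hashlib

def content_id(data: bytes) -> str:
    """Toy content address: identify data by the hash of its bytes."""
    return hashlib.sha256(data).hexdigest()

# A toy "node store" keyed by content, not by location or URL.
store = {}

def publish(data: bytes) -> str:
    cid = content_id(data)
    store[cid] = data       # any node holding this entry can serve it
    return cid

def fetch(cid: str) -> bytes:
    data = store[cid]
    assert content_id(data) == cid  # content self-verifies against its ID
    return data

index_v1 = publish(b"torrent index, 2019-01-22")
# Updating the index yields a *new* hash; old versions stay addressable,
# which is why the site remains available as long as someone seeds it.
index_v2 = publish(b"torrent index, 2019-01-23")
```

This also shows the static-index limitation TF mentions: each daily update produces a fresh hash that has to be re-published and re-seeded, so the index cannot change in near real-time.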
Torrent-Paradise can possibly be an alternative to Pirate Bay? We will leave that open for discussion! Head over to Torrentfreak for more insights on this news. BitTorrent’s traffic surges as the number of streaming services explode MIDI 2.0 prototyping in the works, 35 years after launch of the first version Hyatt Hotels launches public bug bounty program with HackerOne