
Tech News - Data

1208 Articles

Facebook researchers show random methods without any training can outperform modern sentence embeddings models for sentence classification

Natasha Mathur
31 Jan 2019
4 min read
A pair of researchers, John Wieting of Carnegie Mellon University and Douwe Kiela of Facebook AI Research, published a paper titled "No Training Required: Exploring Random Encoders for Sentence Classification" earlier this week. A sentence embedding is a vector representation of the meaning of a sentence. It is most often created by transforming word embeddings with a composition function, which is often nonlinear and recurrent in nature. Most of these word embeddings are initialized from pre-trained embeddings, and the resulting sentence embeddings are then used as features for a collection of downstream tasks.

The paper explores three different approaches for computing sentence representations from pre-trained word embeddings that use nothing but random parameterizations. It compares them against two well-known sentence embedding models: SkipThought (introduced by Ryan Kiros et al. in Advances in Neural Information Processing Systems) and InferSent (introduced by Alexis Conneau et al. in "Supervised Learning of Universal Sentence Representations from Natural Language Inference Data"). As mentioned in the paper, SkipThought took around one month to train, while InferSent requires large amounts of annotated data. "We examine to what extent we can match the performance of these systems by exploring different ways for combining nothing but the pre-trained word embeddings. Our goal is not to obtain a new state of the art but to put the current state of the art methods on more solid footing", state the researchers.

Approaches used

The paper describes three approaches for computing sentence representations from pre-trained word embeddings:

Bag of random embedding projections (BOREP): A single random projection is applied in a standard bag-of-words (or bag-of-embeddings) model. A matrix is randomly initialized with dimensions given by the projection dimension and the input word embedding dimension, and its values are sampled uniformly (see the sketch after these descriptions).

Random LSTMs: This approach uses bidirectional LSTMs without any training. The LSTM weight matrices and their corresponding biases are initialized uniformly at random.

Echo state networks (ESNs): Echo state networks were primarily designed for sequence prediction problems where, given a sequence X, a label y is predicted for each step in the sequence. The main goal of an echo state network is to minimize the error between the predicted ŷ and the target y at each timestep. In the paper, the researchers diverge from the typical per-timestep ESN setting and instead use the ESN to produce a random representation of a sentence. A bidirectional ESN is used, with the reservoir states concatenated for both directions and then pooled over to generate a sentence representation.
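To make the simplest of these approaches concrete, here is a minimal sketch of the BOREP idea. It is illustrative only, not the authors' implementation: the 300-dimensional embeddings and 4096-dimensional projection are assumptions, and the uniform sampling range is a common heuristic rather than necessarily the paper's exact choice.

```python
# A minimal sketch of BOREP: project pre-trained word embeddings through a
# single fixed random matrix (never trained) and pool into a sentence vector.
import numpy as np

rng = np.random.default_rng(0)

embed_dim, proj_dim = 300, 4096          # assumed embedding and projection dims
scale = 1.0 / np.sqrt(embed_dim)         # sampling range: a common heuristic

# The projection matrix is initialized once, uniformly at random, and never trained.
W = rng.uniform(-scale, scale, size=(proj_dim, embed_dim))

def borep_embedding(word_vectors, pooling="max"):
    """word_vectors: (num_words, embed_dim) array of pre-trained embeddings."""
    projected = word_vectors @ W.T       # (num_words, proj_dim)
    if pooling == "max":
        return projected.max(axis=0)     # element-wise max over the words
    return projected.mean(axis=0)        # mean pooling as an alternative

# Dummy "pre-trained" embeddings for a five-word sentence.
sentence = rng.normal(size=(5, embed_dim))
print(borep_embedding(sentence).shape)   # -> (4096,)
```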
Results

For evaluation, the following downstream tasks are used: sentiment analysis (MR, SST), question type (TREC), product reviews (CR), subjectivity (SUBJ), opinion polarity (MPQA), paraphrasing (MRPC), entailment (SICK-E, SNLI), and semantic relatedness (SICK-R, STSB). The three random sentence encoders are evaluated against the InferSent and SkipThought models. Comparing the random sentence encoders with each other, ESNs outperformed BOREP and RandLSTM on all downstream tasks.

When compared to InferSent, the performance gains over the random methods are not dramatic, despite the fact that InferSent requires annotated data and takes far longer to train, whereas the random sentence encoders can be applied immediately. For SkipThought, the gain over the random methods (which do have better word embeddings) is even smaller. SkipThought took a very long time to train, and in the case of SICK-E it is better to use BOREP. ESN also outperforms SkipThought on most of the downstream tasks.

"The point of these results is not that random methods are better than these other encoders, but rather that we can get very close and sometimes even outperform those methods without any training at all, from just using the pre-trained word embeddings," state the researchers.

For more information, check out the official research paper.

Amazon Alexa AI researchers develop new method to compress Neural Networks and preserves accuracy of system
Researchers introduce a machine learning model where the learning cannot be proved
Researchers introduce a deep learning method that converts mono audio recordings into 3D sounds using video scenes


Stanford experiment results on how deactivating Facebook affects social welfare measures

Sugandha Lahoti
31 Jan 2019
3 min read
Stanford researchers have recently published a research paper, "The Welfare Effects of Social Media", in which they conducted an experiment to understand how Facebook affects a range of individual outcomes, focusing on US users in the run-up to the 2018 midterm election. Reducing screen time has been an important topic of debate in recent times. Excessive use of social media platforms hampers face-to-face social interaction, and at a broader level social media platforms may increase political polarization and are also a primary channel for spreading fake news and misinformation online.

Per the research paper, "Stanford researchers evaluated the extent to which time on Facebook substitutes for alternative online and offline activities. They studied Facebook's broader political externalities via measures of news knowledge, awareness of misinformation, political engagement, and political polarization. They also analyze the extent to which behavioral forces like addiction and misprediction may cause sub-optimal consumption choices, by looking at how usage and valuation of Facebook change after the experiment."

What was the experiment?

In their experiment, the Stanford researchers recruited a sample of 2,844 users using Facebook ads. The ad said, "Participate in an online research study about internet browsing and earn an easy $30 in electronic gift cards." Next, they determined participants' willingness-to-accept (WTA) to deactivate their Facebook accounts for a period of four weeks ending just after the election. The 58 percent of subjects with a WTA below $102 were randomly assigned either to a Treatment group that was paid to deactivate or to a Control group that was not.

Results of the experiment

The results were grouped into four areas.

Substitution patterns: Deactivating Facebook freed up 60 minutes per day for the average person in the Treatment group for other offline activities, such as watching television alone and spending time with friends and family. The Treatment group also reported spending 15 percent less time consuming news.

Political externalities: Facebook deactivation significantly reduced news knowledge and attention to politics. The Treatment group was less likely to say they follow news about politics or the President, and less able to correctly answer factual questions about recent news events. The overall index of news knowledge fell by 0.19 standard deviations, and the overall index of political polarization fell by 0.16 standard deviations.

Subjective well-being: Deactivation caused small but significant improvements in subjective well-being, as measured by self-reported happiness, life satisfaction, depression, and anxiety. The overall index of subjective well-being improved by 0.09 standard deviations.

Facebook's role in society: The Treatment group's reported usage of the Facebook mobile app was about 12 minutes (23 percent) lower than in Control, and post-experiment Facebook use was 0.61 standard deviations lower in Treatment than in Control. The researchers concluded that deactivation caused people to appreciate both Facebook's positive and negative impacts on their lives. A majority of the Treatment group agreed deactivation was good for them, but they were also more likely to think that people would miss Facebook if they used it less. In conclusion, the opposing effects on these specific metrics cancel out, so the overall index of opinions about Facebook is unaffected, mentions the research paper.

Facebook hires top EFF lawyer and Facebook critic as Whatsapp privacy policy manager
Facebook has blocked 3rd party ad monitoring plugin tools from the likes of ProPublica and Mozilla that let users see how they're being targeted by advertisers…
Facebook pays users $20/month to install a 'Facebook Research' VPN that spies on their phone and web activities.


AlgorithmWatch report: 7 key recommendations for improving Automated Decision-Making in the EU

Natasha Mathur
30 Jan 2019
5 min read
AlgorithmWatch, a non-profit research and advocacy organization, released its report 'Automating Society: Taking Stock of Automated Decision-Making in the EU' yesterday, in cooperation with Bertelsmann Stiftung (a private operating foundation) and with support from the Open Society Foundations. The report compiles findings from 12 EU member states, as well as at the EU level, on the development and application of automated decision-making (ADM) systems in those countries. Based on these findings, it makes a set of recommendations for policymakers in the EU and Member State parliaments, the EU Commission, national governments, researchers, civil society organizations, and the private sector (companies and business associations). Let's have a look at some of the key recommendations in the report.

Focus on the applications of ADM that impact society

The report states that, given the popularity of 'Artificial Intelligence' right now, it is important to understand the real current challenges and the impact of this tech on our societies. It gives an example: predictive analytics used to determine maintenance issues on yogurt production lines should not be the real concern; predictive analytics used to track human behavior is where the focus should be. The report states that these systems need to be democratically controlled in our society using a combination of regulatory tools, oversight mechanisms, and technology.

Automated decision-making systems aren't just a technology

The report argues that treating automated decision-making systems as just a technology, rather than considering the system as a whole, narrows the debate to questions of accuracy and data quality. All parts of the framework should be considered when discussing the pros and cons of a specific ADM application. This means more questions should be asked: What data does the system use? Is the use of that data legal? What decision-making model is applied to the data? Is there an issue of bias? Why do governments even use specific ADM systems? Is automation being used as an option to save money?

Empower citizens to adapt to new challenges

As per the report, more focus should be put on enhancing citizens' expertise to help them better judge the consequences of automated decision-making. An example presented in the report is an English-language online course called "Elements of Artificial Intelligence", created to help Finnish people understand the challenges of ADM. The course was developed as a private-public partnership but has now become an integral part of the Finnish AI programme. This free course teaches citizens the basic concepts and applications of AI and machine learning, with about 100,000 Finns enrolled.

Empower public administration to adapt to new challenges

Just as empowering citizens is important, there is also a need to empower public administration to ensure a high level of expertise inside its own institutions. This can help it either develop new systems or oversee outsourced development. The report recommends creating public research institutions, in cooperation with universities or public research centers, to teach, train, and advise civil servants. Such institutions should also be created at the EU level to help the member states.

Strengthen civil society's involvement in ADM

The report states that there is a lack of civil society engagement and expertise in the field of automated decision-making, even in some large Member States. As per the report, civil society organizations should treat ADM as a specific and relevant policy field in their countries and develop strategies to address its consequences. Grant-making organizations should develop funding calls and facilitate networking opportunities, and governments should make public funds available for civil society interventions.

Don't look only at data protection for regulatory ideas

The report notes that Article 22 of the General Data Protection Regulation (GDPR), on automated individual decision-making including profiling, has been the subject of much controversy. According to Article 22, "the data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her". A consensus has developed that the provision is limited and that the use cases of ADM systems cannot be regulated by data protection alone. The report stresses the importance of developing governance tools and states that stakeholders should look at creative applications of existing regulatory frameworks, such as equal-pay regulation. This would help them address new challenges, such as algorithmically controlled platform work (the gig economy), and explore new avenues for regulating the effects of ADM.

Need for a wide range of stakeholders (including civil liberty organizations) to develop criteria for good design processes and audits

The report mentions that, on surveying some of the countries, the authors found that governments claim their strategies involve civil society stakeholders just to bring "diverse voices" to the table. But the term civil society is not well defined and includes academia, groups of computer scientists or lawyers, think tanks, and so on. This leads to important viewpoints being missed, since governments use this broad definition to show that 'civil society' is included even when rights-focused groups are not part of the conversation. This is why it is critical that organizations focused on rights be included in the debate.

For more coverage, check out the official AlgorithmWatch report.

2019 Deloitte tech trends predictions: AI-fueled firms, NoOps, DevSecOps, intelligent interfaces, and more
IEEE Computer Society predicts top ten tech trends for 2019: assisted transportation, chatbots, and deep learning accelerators among others
Unity introduces guiding Principles for ethical AI to promote responsible use of AI


Uber releases AresDB, a new GPU-powered real-time Analytics Engine

Natasha Mathur
30 Jan 2019
5 min read
Uber announced the details of its new open source real-time analytics engine, AresDB, yesterday. AresDB, released in November 2018, is Uber's new solution for real-time analytics, built to unify, simplify, and improve Uber's real-time analytics database stack. It makes use of graphics processing units (GPUs), an unconventional power source for analytics, to help analytics scale. AresDB's architecture offers features such as column-based storage with compression (for storage efficiency), real-time ingestion with upsert support (for high data accuracy), and GPU-powered query processing (for highly parallelized data processing). Let's have a look at these key features of AresDB.

Column-based storage

AresDB stores data in a columnar format. The values of each column are stored as a columnar value vector, and the nullness/validity of the values in a column is stored in a separate null vector, where the validity of each value is represented by one bit (a small illustrative sketch follows at the end of this section). Column-based storage takes place in two types of stores: a live store and an archive store.

Live store: This is where all the uncompressed and unsorted columnar data (live vectors) is stored. Data records in the live store are further partitioned into (live) batches of configured capacity. Within a batch, the values of each column are stored as a columnar value vector, with a separate null vector tracking the validity of each value.

Archive store: AresDB stores mature, sorted, and compressed columnar data (archive vectors) in an archive store via fact tables (which store an infinite stream of time series events). Records in the archive store are also partitioned into batches, similar to the live store. However, unlike live batches, each archive batch contains the records of a particular Universal Time Coordinated (UTC) day, and records are sorted according to a user-configured column sort order.

Real-time ingestion with upsert support

Clients ingest data through the ingestion HTTP API by posting an upsert batch, a custom serialized binary format that minimizes space overhead while keeping the data randomly accessible. After AresDB receives an upsert batch for ingestion, the batch is first written to redo logs for recovery. Once the upsert batch has been appended to the end of the redo log, AresDB identifies and skips "late records" (records whose event time is older than the archived cut-off time) for ingestion into the live store. For records that are not "late," AresDB uses the primary key index to locate the batch within the live store. During ingestion, "late" records are appended to a backfill queue, while the other records are applied to the live store.
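The columnar layout with a separate validity bit per value, as described above, can be illustrated with a short sketch. The Python below is purely illustrative and is not taken from AresDB's own code; the table and column names are hypothetical.

```python
# Illustrative sketch of a columnar value vector paired with a null vector
# in which one bit marks each value's validity (1 = valid, 0 = null).
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ColumnVector:
    values: List[float] = field(default_factory=list)   # columnar value vector
    null_bits: List[int] = field(default_factory=list)  # validity bit per value

    def append(self, value: Optional[float]) -> None:
        # Nulls still occupy a slot in the value vector; only the bit differs.
        self.values.append(value if value is not None else 0.0)
        self.null_bits.append(1 if value is not None else 0)

    def get(self, i: int) -> Optional[float]:
        return self.values[i] if self.null_bits[i] else None

# A tiny "live batch" of uncompressed, unsorted rows for a two-column table.
trip_distance = ColumnVector()
trip_fare = ColumnVector()
for distance, fare in [(1.2, 6.5), (3.4, None), (0.8, 4.0)]:
    trip_distance.append(distance)
    trip_fare.append(fare)

print([trip_fare.get(i) for i in range(3)])   # [6.5, None, 4.0]
```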
GPU-powered query processing

Users query AresDB with the Ares Query Language (AQL), created by Uber. AQL is a time series analytical query language expressed in JSON, YAML, and Go objects, rather than the standard SELECT ... FROM ... WHERE ... GROUP BY syntax of SQL-like languages. For dashboard and decision-system developers, AQL's JSON format offers a better programmatic query experience than SQL, because JSON lets developers easily manipulate queries in code without worrying about issues such as SQL injection.

AresDB manages multiple GPU devices with the help of a device manager that models GPU device resources in two dimensions: GPU threads and device memory. This helps it track GPU memory consumption as it processes queries. After query compilation, AresDB estimates the resources required to execute a query, and the device memory requirement must be satisfied before the query is allowed to start. AresDB can currently run one or several queries on the same GPU device, as long as the device can satisfy all of the resource requirements.

Future work

AresDB is open sourced under the Apache License and is widely used at Uber to power its real-time data analytics dashboards, helping it make data-driven decisions at scale. In the future, the Uber team wants to improve the project by adding new features, including a distributed design for AresDB to improve its scalability and reduce overall operational costs. Uber also wants to add developer support and tooling to help developers quickly integrate AresDB into their analytics stacks. Other plans include expanding the feature set and optimizing the query engine.

For more information, check out the official Uber announcement.

Uber AI Labs introduce POET (Paired Open-Ended Trailblazer) to generate complex and diverse learning environments and their solutions
Uber to restart its autonomous vehicle testing, nine months after the fatal Arizona accident
Uber manager warned the leadership team of the inadequacy of safety procedures in their prototype robo-taxis early March, reports The Information


Diversity in Faces: IBM Research’s new dataset to help build facial recognition systems that are fair

Sugandha Lahoti
30 Jan 2019
2 min read
IBM Research has released the 'Diversity in Faces' (DiF) dataset, which aims to help build better, fairer, and more diverse facial recognition systems. DiF provides annotations for 1 million human facial images, built using publicly available images from the YFCC-100M Creative Commons data set.

Building facial recognition systems that meet fairness expectations has been a long-standing goal for AI researchers. Most AI systems learn from datasets; if they are not trained with robust and diverse data, accuracy and fairness are at risk. For that reason, AI developers and the research community need to be thoughtful about the data they use for training. With the new DiF dataset, IBM researchers aim to provide a strong, fair, and diverse dataset.

The DiF data set does not just measure faces by age, gender, and skin tone. It also looks at other intrinsic facial features, including craniofacial distances, areas and ratios, facial symmetry and contrast, subjective annotations, and pose and resolution. IBM annotated the faces using 10 well-established and independent coding schemes from the scientific literature, selected for their strong scientific basis, computational feasibility, numerical representation, and interpretability.

Through thorough statistical analysis, IBM researchers found that the DiF dataset provides a more balanced distribution and broader coverage of facial images compared to previous datasets. Their analysis of the 10 initial coding schemes also gave them an understanding of what is important for characterizing human faces. In the future, they plan to use Generative Adversarial Networks (GANs) to generate faces of any variety and synthesize training data as needed. They will also look for ways (and encourage others) to improve on the initial ten coding schemes and add new ones.

You can request access to the DiF dataset on the IBM website. You can also read the research paper for more information.

Admiring the many faces of Facial Recognition with Deep Learning
Facebook introduces a fully convolutional speech recognition approach and open sources wav2letter++ and flashlight
AWS updates the face detection, analysis and recognition capabilities in Amazon Rekognition


Facebook has blocked 3rd party ad monitoring plugin tools from the likes of ProPublica and Mozilla that let users see how they're being targeted by advertisers

Sugandha Lahoti
30 Jan 2019
3 min read
Facebook has blocked plugin tools from third-party organizations such as ProPublica, Mozilla, and Who Targets Me. These plugins let Facebook users see how they are being targeted by advertisers. This month Facebook inserted code that prevents these plugins from automatically pulling ad-targeting information. The ad monitoring tools collect data on the adverts a user sees and tell them why they were targeted, helping users understand the advertising tactics politicians use to their advantage. Facebook's move was heavily criticized by these organizations.

"Ten days ago, our software stopped working, and efforts to fix it have proved much harder than before," said Who Targets Me co-founder Sam Jeffers. "Facebook is deliberately obfuscating their code. When we have made small changes, they've responded with further updates within hours."

"This is very concerning. Investigative groups like ProPublica need access to this information in order to track and report on the opaque and frequently deceptive world of online advertising," said Sen. Mark Warner, D-Va., who has co-sponsored the Honest Ads Act, which would require transparency on Facebook ads. The Honest Ads Act is expected to be re-introduced in Congress this year; it would require Facebook to publish "a description of the audience targeted by the advertisement."

ProPublica writes that Facebook has also developed another tool that it says will allow researchers to analyze political ads more easily. That tool, an API, is in "beta" and restricted to a few participants, including ProPublica, who had to sign a nondisclosure agreement about the data provided.

"We regularly improve the ways we prevent unauthorized access by third parties like web browser plugins to keep people's information safe," Facebook spokesperson Beth Gautier told ProPublica. "This was a routine update and applied to ad blocking and ad scraping plugins, which can expose people's information to bad actors in ways they did not expect." Facebook made it clear to ProPublica in a statement that the change was meant simply to enforce its terms of service.

Twitterati have also condemned Facebook's move, calling it 'hostile for journalists'.

https://twitter.com/AlexanderAbdo/status/1090297962146729985
https://twitter.com/mcwm/status/1090001769265016832
https://twitter.com/AASchapiro/status/1090281229096833024
https://twitter.com/alfonslopeztena/status/1090318416563580928

Facebook releases a draft charter introducing a new content review board that would filter what goes online
Facebook plans to integrate Instagram, Whatsapp, and Messenger amidst public pressure to regulation or break up Facebook
Advocacy groups push FTC to fine Facebook and break it up for repeatedly violating the consent order and unfair business practices

Facebook releases a draft charter introducing a new content review board that would filter what goes online

Savia Lobo
30 Jan 2019
3 min read
Yesterday, Facebook released a draft charter describing a newly formed external board: a body of independent experts who will review Facebook's most challenging content decisions, focusing on important and disputed cases, and share its decisions transparently along with the reasons for them. The charter comes after Mark Zuckerberg wrote about his vision for how content should be governed and enforced on Facebook.

The board will be able to decide what content goes online and what posts should be removed, and Facebook would then accept and implement the board's decisions. The group will comprise experts with experience in content, privacy, free expression, human rights, journalism, civil rights, safety, and other relevant disciplines, and the list of members will always be public.

The draft charter raises certain questions and considerations and also puts forward possible approaches, giving a base model for the board's structure, scope, and authority. "It is a starting point for discussion on how the board should be designed and formed. What the draft does not do is answer every proposed question completely or finally", the draft states. Nick Clegg, VP of Global Affairs and Communications, mentions in his post, "We've also identified key decisions that still need to be made, like the number of members, length of terms and how cases are selected."

Facebook hopes to answer these questions over the next six months through workshops around the world, including in Singapore, Delhi, Nairobi, Berlin, New York, Mexico City, and many more cities. The workshops will bring together experts and organizations working on different issues such as technology, democracy, and human rights. Nick writes, "we don't want to limit input or feedback to a hand-picked group of experts that we work with frequently. We're interested in hearing from a wide range of organizations, think tanks, researchers and groups who might have proposals for how we should answer these critical questions."

Kate Klonick, Assistant Professor at St. John's University School of Law, said, "This would be a rare and fascinating, if narrow, devolution of authority."

Visit the draft charter to know more about this news.

Facebook plans to integrate Instagram, Whatsapp, and Messenger amidst public pressure to regulation or break up Facebook
Biometric Information Privacy Act: It is now illegal for Amazon, Facebook or Apple to collect your biometric data without consent in Illinois
Advocacy groups push FTC to fine Facebook and break it up for repeatedly violating the consent order and unfair business practices


Mapzen, an open-source mapping platform, joins the Linux Foundation project

Amrata Joshi
29 Jan 2019
3 min read
Yesterday, the Linux Foundation announced that Mapzen, an open-source mapping platform, is now a Linux Foundation project. Mapzen focuses on the core components of map display, such as search and navigation, and provides developers with open software and data sets that are easy to access. It was launched in 2013 by mapping industry veterans working with urban planners, architects, movie makers, and video game developers.

Randy Meech, former CEO of Mapzen and current CEO of StreetCred Labs, said, "Mapzen is excited to join the Linux Foundation and continue our open, collaborative approach to mapping software and data. Shared technology can be amazingly powerful, but also complex and challenging. The Linux Foundation knows how to enable collaboration across organizations and has a great reputation for hosting active, thriving software and data projects."

Mapzen's open resources and projects are used to create applications or to integrate mapping into other products and platforms. Because Mapzen's resources are all open source, developers can build platforms without the data-set restrictions imposed by other commercial providers. Mapzen is used by organizations such as Snapchat, Foursquare, Mapbox, Eventbrite, The World Bank, HERE Technologies, and Mapillary. With Mapzen, developers can take open data and build maps with search and routing services, upgrade their own libraries, and process data in real time, which is not possible with conventional mapping or geotracking services.

Simon Willison, Engineering Director at Eventbrite, said, "We've been using Who's On First to help power Eventbrite's new event discovery features since 2017. The gazetteer offers a unique geographical approach which allows us to innovate extensively with how our platform thinks about events and their locations. Mapzen is an amazing project and we're delighted to see it joining The Linux Foundation."

Mapzen's projects, which include Tangram (map rendering), Valhalla (routing), and Pelias (geocoding and search), are operated in the cloud and on premises by a wide range of organizations.

Earlier this month, Hyundai joined Automotive Grade Linux (AGL) and the Linux Foundation to drive innovation through open source, a cross-industry effort bringing automakers, suppliers, and technology companies together to accelerate the development and adoption of an open software stack. Last year, Uber announced that it was joining the Linux Foundation as a Gold Member with an aim to support the open source community.

Jim Zemlin, executive director at the Linux Foundation, said, "Mapzen's open approach to software and data has allowed developers and businesses to create innovative location-based applications that have changed our lives. We look forward to extending Mapzen's impact even further around the globe in areas like transportation and traffic management, entertainment, photography and more to create new value for companies and consumers."

According to the official press release, the Linux Foundation will align resources to advance Mapzen's mission and further grow its ecosystem of users and developers.

Newer Apple maps is greener and has more details
Launching soon: My Business App on Google Maps that will let you chat with businesses, on the go
Atlassian overhauls its Jira software with customizable workflows, new tech stack, and roadmaps tool
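For a sense of how one of the Mapzen components mentioned above is used in practice, here is a hedged sketch of querying the Pelias geocoding service. The host and port assume a hypothetical self-hosted instance; the /v1/search endpoint and the "text" parameter follow Pelias's documented API, but treat the details as an assumption to verify against your deployment.

```python
# Hedged example: forward geocoding against a self-hosted Pelias instance.
import requests

PELIAS_HOST = "http://localhost:4000"   # assumption: a locally running Pelias server

def geocode(query: str):
    resp = requests.get(f"{PELIAS_HOST}/v1/search", params={"text": query, "size": 3})
    resp.raise_for_status()
    # Pelias returns GeoJSON: each feature carries a human-readable label
    # and [longitude, latitude] coordinates.
    return [
        (f["properties"].get("label"), f["geometry"]["coordinates"])
        for f in resp.json().get("features", [])
    ]

if __name__ == "__main__":
    for label, (lon, lat) in geocode("Ferry Building, San Francisco"):
        print(f"{label}: ({lat:.5f}, {lon:.5f})")
```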


2019 Deloitte tech trends predictions: AI-fueled firms, NoOps, DevSecOps, intelligent interfaces, and more

Natasha Mathur
29 Jan 2019
6 min read
Deloitte launched its tenth annual "Tech Trends 2019: Beyond the digital frontier" report earlier this month. The report covers predictions related to artificial intelligence, the digital future, cloud, networking, and cybersecurity. Let's have a look at the key predictions made in the report.

More companies to transform into AI-fueled organizations

The Deloitte 2019 report states that the number of companies transforming into fully autonomous, AI-fueled firms will increase over the next 18 to 24 months, making AI a major part of their corporate strategy. In an AI-fueled firm, AI, machine learning, and other cognitive technologies run at the center of business and IT operations to harness data-driven insights. As per two consecutive Deloitte global surveys (2016-17 and 2018), cognitive technologies/AI topped the list of emerging technologies in which CIOs plan to invest. The AI ambitions of these CIOs are mostly about using AI to increase productivity and to strengthen regulatory compliance through automation.

Companies to make the transition from traditional to serverless environments (NoOps)

The report states that many CIOs will look at creating a NoOps IT environment that is automated and abstracted from the underlying infrastructure, requiring only small teams to manage it and thereby allowing CIOs to invest more human capacity in developing new capabilities that improve overall operational efficiency. In NoOps environments, traditional operations tasks such as code deployment and patching schedules are largely automated. The shift from traditional to serverless computing lets cloud vendors dynamically and automatically allocate compute, storage, and memory based on requests for a higher-order service, whereas traditional cloud service models required organizations to manually design and provision such allocations. Serverless computing offers limitless scalability, high availability, and NoOps, along with zero idle-time costs.

More companies are expected to take advantage of advanced connectivity to configure and operate enterprise networks

As per the Deloitte report, many companies will opt for advanced networking to drive the development of new products and to transform inefficient operating models. CIOs will virtualize parts of the connectivity stack using network management techniques like software-defined networking (SDN) and network function virtualization (NFV). SDN is primarily used in data centers, but its use is now being extended to wide area networking to connect data centers. NFV replaces dedicated appliances for network functions such as routing, switching, encryption, firewalling, and WAN acceleration with software that can scale horizontally or vertically on demand. The report states that enterprises will be able to better optimize or "spin up" network capabilities on demand to fulfill the needs of a specific application or meet end-user requirements.

Growth in interfaces like computer vision and gesture control devices will transform how humans, machines, and data interact with each other

The report states that although conversational technologies currently dominate the intelligent-interfaces arena, newer interfaces such as computer vision, gesture control devices, embedded eye-tracking platforms, bioacoustic sensing, and emotion detection/recognition technology are gaining ground. Intelligent interfaces help track customers' offline habits, similar to how search engines and social media companies track their customers' digital habits. These interfaces also help companies understand customers at a personal, more detailed level, making it possible to "micro-personalize" products and services. We will see more of these new interfaces combined with leading-edge technologies (such as machine learning, robotics, IoT, contextual awareness, advanced augmented reality, and virtual reality) to transform the way we engage with machines, data, and each other.

CMOs and CIOs will partner up to elevate the human experience by moving beyond marketing

The report states that channel-focused services such as websites, social and mobile platforms, content management tools, and search engine optimization are slowly becoming a thing of the past. Many organizations will move beyond marketing by adopting a new generation of martech systems and a new approach to data gathering, decision-making (determining how and when to provide an experience), and delivery (consistent delivery of dynamic content across channels). This, in turn, helps companies create personalized, dynamic end-to-end experiences for users and build deep emotional connections between users and products and brands. CMOs are increasingly required to own the delivery of the entire customer experience, and they often find themselves almost playing the CIO's traditional role. At the same time, CIOs are required to transform legacy systems and build new infrastructure to support next-generation data management and customer engagement systems. This is why CIOs and CMOs will collaborate more closely to deliver on their company's new marketing strategies as well as on established digital agendas.

Organizations to embed DevSecOps to improve cybersecurity

As per the report, many organizations have started to use an approach called DevSecOps, which embeds security culture, practices, and tools into the different phases of their DevOps pipelines. DevSecOps helps improve the security and maturity levels of a company's DevOps pipeline. It is not just a security trend; it is a new approach that offers companies a different way of thinking about security. DevSecOps has multiple benefits: it helps security architects, developers, and operators share security-aligned metrics and focus on business priorities, and organizations embedding it into their development pipelines can use operational insights and threat intelligence. It also enables proactive monitoring, with automated and continuous testing to identify problems. The report recommends that DevSecOps tie into your broader IT strategy, which should in turn be driven by your business strategy.

"If you can be deliberate about sensing and evaluating emerging technologies... you can make the unknown knowable... creating the confidence and construct to embrace digital while setting the stage to move beyond the digital frontier", reads the report.

For more information, check out the official Deloitte 2019 tech trends report.

IEEE Computer Society predicts top ten tech trends for 2019: assisted transportation, chatbots, and deep learning accelerators among others
We discuss the key trends for web and app developers in 2019 [Podcast]


Facebook plans to integrate Instagram, Whatsapp, and Messenger amidst public pressure to regulation or break up Facebook

Savia Lobo
29 Jan 2019
4 min read
Last week, the German Minister of Justice and Consumer Protection, Katarina Barley, stated that when users begin receiving hostile messages or threats through Facebook, a platform meant to simplify contact with friends, things take a problematic turn. She said, "It may be that it isn't in Facebook's interest to report such content, but when the company merely blames hostility on human error or on an algorithm that hasn't yet been fully developed, it isn't particularly convincing, nor does it measure up to the company's responsibility."

Barley also highlighted the handling of users' personal data. On the risk of users' data being leaked in the course of sharing it with ad companies, she says, "Facebook doesn't just bear a responsibility to refrain from intentionally sharing data. It must also actively protect that data from third-party access."

Integrating the Big 3: Instagram, Whatsapp, and Messenger

Last week, Mark Zuckerberg announced his plans to integrate the three popular social messaging platforms: Whatsapp, Facebook Messenger, and Instagram. These services would continue to operate independently; however, their underlying technical infrastructure would be unified. This is a crucial development at a point where Facebook's business has been subject to a number of scandals, including the misuse of user data, fake news, and hate-filled posts. The three messaging platforms have more than 2.6 billion users, who would be able to communicate across the apps easily once the platform is unified.

According to the New York Times, "The move has the potential to redefine how billions of people use the apps to connect with one another while strengthening Facebook's grip on users, raising antitrust, privacy and security questions. It also underscores how Mr. Zuckerberg is imposing his authority over units he once vowed to leave alone." By integrating the three apps, "Zuckerberg hopes to increase Facebook's utility and keep users highly engaged inside the company's ecosystem. That could reduce people's appetite for rival messaging services, like those offered by Apple and Google. If users can interact more frequently with Facebook's apps, the company might also be able to increase its advertising business or add new revenue-generating services, the people said", the NY Times reports.

Alex Stamos, Facebook's former Chief Security Officer, said he hoped "Facebook would get public input from terrorism experts, child safety officers, privacy advocates and others and be transparent in its reasoning when it makes decisions on the details." "It should be an open process because you can't have it all", he added. To know more about this news, head over to the New York Times post.

"This isn't accessible privacy, it's inaccessible surveillance", says Sarah Jamie Lewis

Sarah Jamie Lewis, from Open Privacy, says Facebook "will build the largest surveillance system ever conceived and will sell it under the banner of consumer encryption. They will say that this delivers on the dream of secure usable communication that balances privacy, security and law enforcement." With this in mind, she says that Open Privacy is building Cwtch, a metadata-resistant communication tool that can be used to build metadata-resistant applications.

https://twitter.com/SarahJamieLewis/status/1088914192847917056

She says, "Facebook isn't a public utility, they are a corporation that needs to make money, and the way they make money is through surveillance." To know more about this, read Sarah's blog post.

Biometric Information Privacy Act: It is now illegal for Amazon, Facebook or Apple to collect your biometric data without consent in Illinois
Advocacy groups push FTC to fine Facebook and break it up for repeatedly violating the consent order and unfair business practices
Facebook AI research introduces enhanced LASER library that allows zero-shot transfer across 93 languages

Kite, the AI-powered code completion tool secures $17M in funding and goes cloudless for better latency, security and privacy

Sugandha Lahoti
29 Jan 2019
4 min read
Kite is a Python-based, AI-powered code completion tool which helps developers avoid repetitive work like copying and pasting or fixing simple errors. Recently, the company secured a new $10 million round of funding led by Dan Scholnick from Trinity Ventures, taking Kite to $17 million in total funding. The CEOs of GitHub, Dropbox, and Twitch also joined Trinity Ventures in this round.

Kite now runs only locally

Kite is going completely cloudless. It will now perform all processing locally on users' computers instead of in the cloud, and users don't even have to sign up for a Kite account. The decision to go cloudless was based on two major reasons. First, latency: it was impossible to provide low-latency completions when the intelligence lived in the cloud, whereas running locally, Kite works extremely fast regardless of the internet connection. Second, security and privacy, which are major concerns when working in the cloud: users are often uncomfortable with code leaving their computers and worried about data breaches, and going local lets them keep their codebases on their own machines. Kite has also optimized its Python analysis engine and AI models to work within the resource constraints of users' computers. For existing users, Kite has been auto-updated to work locally, and anyone who previously uploaded code to Kite's servers can remove that data via Kite's web portal.

Line-of-Code completions

Apart from the new funding and local computation, Kite also adds Line-of-Code completions, which let Kite predict several "words" of code at a time. The AI software generally predicts the next several code elements a user is likely to type and can at times predict full lines of code. Some developers also wanted Kite to support Linux and other programming languages; however, the team at Kite has clarified that this is not yet available. "It's taken us longer to work on these than we originally hoped. Because we're a small team of about 15 people, we decided (with real regret) that it didn't make sense to expand our addressable market when our core product still needed work," reads their blog post.

Kite is also updating its privacy policy. Here's a snapshot from Hacker News:

"We collect a variety of usage analytics that helps us understand how you use Kite and how we can improve the product. These include:
- Which editors you are using Kite with
- Number of requests that the Kite Engine has handled for you
- How often you use specific Kite features, e.g. how many completions from Kite did you use
- Size (in number of files) of codebases that you work with
- Names of 3rd party Python packages that you use
- Kite application resource usage (CPU and RAM)

You can opt out of sending this analytics by changing a Kite setting (https://help.kite.com/article/79-opting-out-of-usage-metrics). Additionally, we collect anonymized "heartbeats" that are used to make sure the Kite app is functioning properly and not crashing unexpectedly. These analytics are just simple pings with no metadata, and as mentioned, they're anonymized so that there's no way for us to trace which users they came from. We also use third party libraries (Rollbar and Crashlytics) to report errors or bugs that occur during the usage of the product.

What we don't collect:
- Contents (partial or full) of any source code file that resides on your hard drive
- Information (i.e. file paths) about your file system hierarchy
- Any indices of your code produced by the Kite Engine to power our features - these all stay local to your hard drive"

Read the full information on Kite's blog.

Best practices for C# code optimization [Tutorial]
A new Model optimization Toolkit for TensorFlow can make models 3x faster
6 JavaScript micro optimizations you need to know


GitHub Octoverse: top machine learning packages, languages, and projects of 2018

Prasad Ramesh
28 Jan 2019
2 min read
The top tools and languages used in machine learning during 2018 were revealed in GitHub's The State of the Octoverse: Machine Learning report. The general observation was that TensorFlow was among the projects with the most contributions, which is not surprising considering its age and popularity. Python was among the most popular languages on GitHub overall, behind JavaScript and Java.

The contribution data for the whole of 2018 led to some insights. Contributions include pushing code, opening pull requests, opening issues, commenting, and other related activities. The data covers all public repositories and any private repositories that have opted in to the dependency graph.

Top languages used for machine learning on GitHub

Languages are ranked by the primary language of repositories tagged with machine-learning. Python is at the top, followed by C++. Java makes it to the top five along with JavaScript. What's interesting is the growth of Julia, which has bagged the sixth spot despite being a relatively new language. R, popular for data analytics tasks, also shows up thanks to its wide range of libraries.

1. Python
2. C++
3. JavaScript
4. Java
5. C#
6. Julia
7. Shell
8. R
9. TypeScript
10. Scala

Top machine learning and data science packages

Projects tagged with data science or machine learning that import Python packages were considered. NumPy, used for mathematical operations, appears in 74% of the projects, which is not surprising as it is a supporting package for scikit-learn among others. SciPy, pandas, and matplotlib are used in over 40% of the projects. scikit-learn, a collection of many algorithms, is used in 38% of the projects. TensorFlow is used in 24% of the projects; even though it is popular, its use cases are narrower.

1. numpy (74%)
2. scipy (47%)
3. pandas (41%)
4. matplotlib (40%)
5. scikit-learn (38%)
6. six (31%)
7. tensorflow (24%)
8. requests (23%)
9. python-dateutil (22%)
10. pytz (21%)

Machine learning projects with most contributions

TensorFlow had the most contributions, followed by scikit-learn. Julia again seems to be garnering interest, ranking fourth in this list.

1. tensorflow/tensorflow
2. scikit-learn/scikit-learn
3. explosion/spaCy
4. JuliaLang/julia
5. CMU-Perceptual-Computing-Lab/openpose
6. tensorflow/serving
7. thtrieu/darkflow
8. ageitgey/face-recognition
9. RasaHQ/rasa_nlu
10. tesseract-ocr/tesseract

GitHub Octoverse: The top programming languages of 2018
What we learnt from the GitHub Octoverse 2018 Report
Julia for machine learning. Will the new language pick up pace?
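As a quick illustration of why these packages show up together so often, here is a short, self-contained sketch of a typical workflow combining NumPy, pandas, and scikit-learn. The data is synthetic and the feature names are hypothetical; nothing here is drawn from the Octoverse report itself.

```python
# Illustrative mini-workflow using the most-imported packages from the list above.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
df = pd.DataFrame({
    "feature_a": rng.normal(size=200),   # NumPy generates the synthetic features
    "feature_b": rng.normal(size=200),
})
df["label"] = (df["feature_a"] + df["feature_b"] > 0).astype(int)

# pandas holds the tabular data; scikit-learn handles splitting and modeling.
X_train, X_test, y_train, y_test = train_test_split(
    df[["feature_a", "feature_b"]], df["label"], test_size=0.25, random_state=0
)
model = LogisticRegression().fit(X_train, y_train)
print(f"test accuracy: {model.score(X_test, y_test):.2f}")
```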


Google and Ellen MacArthur Foundation with support from McKinsey & Company talk about the impact of Artificial Intelligence on circular economy

Sugandha Lahoti
28 Jan 2019
4 min read
The Ellen MacArthur Foundation and Google, with research and analytical support from McKinsey & Company, have released an interesting paper on the intersection of two emerging megatrends: artificial intelligence and the circular economy. The paper draws on insights from over 40 interviews with experts and takes a closer look at how artificial intelligence can accelerate the transition to a circular economy. It also highlights how artificial intelligence is being used in the food and consumer electronics industries.

What is the circular economy?

A circular economy is based on decoupling value creation from the consumption of finite resources. It is built around three principles: design out waste and pollution; keep products and materials at their highest value; and regenerate natural systems. A circular economy approach encourages manufacturers to extend the usability of products by designing them for durability, repair, or refurbishment.

Figure: Circular economy diagram

Why AI for the circular economy?

The paper highlights three circular economy opportunities where AI can potentially help:

"Design circular products, components, and materials. AI can enhance and accelerate the development of new products, components, and materials fit for a circular economy through iterative machine-learning-assisted design processes that allow for rapid prototyping and testing.

Operate circular business models. AI can magnify the competitive strength of circular economy business models, such as product-as-a-service and leasing. By combining real-time and historical data from products and users, AI can help increase product circulation and asset utilization through pricing and demand prediction, predictive maintenance, and smart inventory management.

Optimize circular infrastructure. AI can help build and improve the reverse logistics infrastructure required to "close the loop" on products and materials, by improving the processes to sort and disassemble products, re-manufacture components, and recycle materials."

For each of the three use cases, the paper also presents case studies where artificial intelligence was used to create circular value within current business models. The first, project 'Accelerated Metallurgy', funded by the European Space Agency, used AI algorithms to analyse vast amounts of data on existing materials and their properties in order to design and test new alloy formulations. The second case study covers ZenRobotics, the first company to use AI software (ZenBrain) to recover recyclables from waste. The paper also describes two other case studies in which AI was used to grow food regeneratively and to make better use of its by-products, noting that "the potential value unlocked by AI in helping design out waste in a circular economy for food is up to $127 billion a year in 2030." In another case study, AI helped keep consumer electronics products, components, and materials in circulation; "the equivalent AI opportunity in accelerating the transition towards a circular economy for consumer electronics is up to $90 billion a year in 2030."

The paper urges stakeholders and industrialists to take inspiration from the use cases and case studies it explores to create and define new opportunities for circular economy applications of AI. It suggests three ways:

"Creating greater awareness and understanding of how AI can support a circular economy is essential to encourage applications in design, business models, and infrastructure

Exploring new ways to increase data accessibility and sharing will require new approaches and active collaboration between stakeholders

As with all AI development efforts, those that accelerate the transition to a circular economy should be fair and inclusive, and safeguard individuals' privacy and data security"

The circular economy coupled with AI is still in its early stages, and the true impact of AI in creating a sustainable economy can only be realized with proper funding, investment, and awareness. Reports like these do help create awareness among VCs, stakeholders, software engineers, and tech companies, but it is up to them how they translate it into implementation. You can view the full report here.

Do you need artificial intelligence and machine learning expertise in house?
How is Artificial Intelligence changing the mobile developer role?
The ethical dilemmas developers working on Artificial Intelligence products must consider
article-image-amazon-open-sources-sagemaker-neo-to-help-developers-optimize-the-training-and-deployment-of-machine-learning-models
Natasha Mathur
28 Jan 2019
3 min read
Save for later

Amazon open-sources SageMaker Neo to help developers optimize the training and deployment of machine learning models

Amazon announced last week that it is making Amazon SageMaker Neo, a new machine learning feature in Amazon SageMaker, available as open source. Amazon has released the code as the Neo-AI project under the Apache software license. Neo-AI’s open source release will help processor vendors, device makers, and AI developers bring the latest machine learning innovations to a wide variety of hardware platforms. “With the Neo-AI project, processor vendors can quickly integrate their custom code into the compiler..and.. enables device makers to customize the Neo-AI runtime for the particular software and hardware configuration of their devices”, states the AWS team.

Amazon SageMaker Neo was announced at AWS re:Invent 2018 as a newly added capability of Amazon SageMaker, Amazon’s popular machine learning platform-as-a-service. Neo-AI gives developers the ability to train their machine learning models once and run them anywhere, in the cloud and at the edge. It can deploy machine learning models on multiple platforms by automatically optimizing TensorFlow, MXNet, PyTorch, ONNX, and XGBoost models. It can also convert models into a common format, eliminating software compatibility problems. Neo-AI currently supports platforms from Intel, NVIDIA, and ARM, with support for Xilinx, Cadence, and Qualcomm planned for the near future.

Amazon states that Neo-AI is, at its core, a machine learning compiler and runtime built on traditional compiler technologies such as LLVM and Halide. It also uses TVM (to compile deep learning models) and Treelite (to compile decision tree models), both of which started out as open source research projects at the University of Washington. In addition, it performs platform-specific optimizations contributed by different partners. The Neo-AI project will receive contributions from several organizations, including AWS, ARM, Intel, Qualcomm, Xilinx, and Cadence. The Neo-AI runtime is currently deployed on devices from ADLINK, Lenovo, Leopard Imaging, Panasonic, and others.

“Xilinx provides the FPGA hardware and software capabilities that accelerate machine learning inference applications in the cloud..we are pleased to support developers using Neo to optimize models for deployment on Xilinx FPGAs”, said Sudip Nag, Corporate Vice President at Xilinx.

For more information, check out the official Neo-AI GitHub repository. For a rough idea of what kicking off a Neo compilation job looks like in code, see the sketch after the related links below.

Amazon unveils Sagemaker: An end-to-end machine learning service
AWS introduces Amazon DocumentDB featuring compatibility with MongoDB, scalability and much more
AWS re:Invent 2018: Amazon announces a variety of AWS IoT releases
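As referenced above, here is a minimal, hypothetical sketch of submitting a SageMaker Neo compilation job through boto3’s create_compilation_job call. The job name, IAM role ARN, S3 paths, input shape, and target device are placeholder values, not details from the announcement, so check them against the current AWS documentation before use.

```python
import boto3

sagemaker = boto3.client("sagemaker", region_name="us-east-1")

# All names, ARNs, and S3 paths below are placeholders used for illustration only.
sagemaker.create_compilation_job(
    CompilationJobName="neo-demo-compilation-job",
    RoleArn="arn:aws:iam::123456789012:role/SageMakerExecutionRole",
    InputConfig={
        # Location of the trained model artifact and the framework it was built with.
        "S3Uri": "s3://example-bucket/models/model.tar.gz",
        # Input tensor name and shape expected by the model (MXNet convention shown).
        "DataInputConfig": '{"data": [1, 3, 224, 224]}',
        "Framework": "MXNET",
    },
    OutputConfig={
        # Where Neo writes the compiled, hardware-optimized artifact.
        "S3OutputLocation": "s3://example-bucket/compiled/",
        # Target hardware family, e.g. a cloud instance type or an edge device.
        "TargetDevice": "ml_c5",
    },
    StoppingCondition={"MaxRuntimeInSeconds": 900},
)

# The job can then be polled with describe_compilation_job until it completes.
```

Once the job finishes, the compiled artifact in S3 can be deployed with the Neo runtime on the supported Intel, NVIDIA, or ARM targets the article mentions.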

article-image-advocacy-groups-push-ftc-to-fine-facebook-and-break-it-up-for-repeatedly-violating-the-consent-order-and-unfair-business-practices
Amrata Joshi
25 Jan 2019
4 min read
Save for later

Advocacy groups push FTC to fine Facebook and break it up for repeatedly violating the consent order and unfair business practices

Lately, Facebook has been in the news for its data breaches and issues related to its illegal data sharing. To add to its troubles, yesterday, advocacy groups such as the Open Markets Institute, Color of Change, and the Electronic Privacy Information Center, among others, wrote to the Federal Trade Commission, asking the government to intervene in how Facebook operates. The letter lists actions the FTC could take, including a multibillion-dollar fine, changes to the company’s hiring practices, and breaking up Facebook for abusing its market position.

Last week, Federal Trade Commission (FTC) officials were reportedly planning to impose a record fine on Facebook, according to The Washington Post. As revealed last year, the data of over 87 million users was given to Cambridge Analytica, a political consulting firm, without users’ consent, and Facebook was fined £500,000 for it last October. This time, Facebook might have to pay more than $22.5 million, the fine imposed on Google in 2012 for tracking users of Apple’s Safari web browser. According to the FTC, Facebook may have violated a legally binding agreement with the government to protect the privacy of users’ personal data.

In the wake of the Cambridge Analytica scandal and the subsequent data and privacy issues, advocacy groups are now calling for Facebook to be broken up over its privacy violations and repeated consumer data breaches. The letter written to the FTC by the advocacy groups reads, “The record of repeated violations of the consent order can no longer be ignored. The company’s business practices have imposed enormous costs on the privacy and security of Americans, children and communities of color, and the health of democratic institutions in the United States and around the world.”

According to the groups, it has been almost ten years since many organizations first brought Facebook’s unfair business practices, which threatened consumers’ privacy, to the commission’s attention. The letter reads, “Facebook has violated the consent order on numerous occasions, involving the personal data of millions, possibly billions, of users of its services. Based on the duration of the violations, the scope of the violations, and the number of users impacted by the violations, we would expect that the fine in this case would be at least two orders of magnitude greater than any previous fine.”

According to organizations like the Open Markets Institute and Color of Change, Facebook should be required to pay a $2 billion fine and divest ownership of Instagram and WhatsApp for failing to protect user data on those platforms as well. The groups have also urged the FTC to require Facebook to comply with Fair Information Practices for all future uses of personal data across all of its services and companies. The letter reads, “Given that Facebook’s violations are so numerous in scale, severe in nature, impactful for such a large portion of the American public and central to the company’s business model, and given the company’s massive size and influence over American consumers, penalties and remedies that go far beyond the Commission’s recent actions are called for.”

According to the letter, Facebook also breached its commitments to the Commission regarding the protection of WhatsApp user data. The letter further reads, “Facebook has operated for too long with too little democratic accountability. That should now end. At issue are not only the rights of consumers but also those of citizens.
It should be for users of the services and for democratic institutions to determine the future of Facebook.”

According to The Verge, lawmakers have been quiet on the idea of breaking up Facebook. In an interview with The Verge, Sen. Mark Warner (D-VA), one of the senators at the forefront of the issue, said that breaking up the company was more of a “last resort.”

According to filings reviewed by The Hill, five major tech companies, Amazon, Apple, Google, Facebook, and Twitter, lobbied on a variety of issues, including trade, data privacy, immigration, and copyright. Facebook alone spent $12.6 million on lobbying, and Mark Zuckerberg, Facebook’s chief executive, testified before Congress last year. It appears that data privacy issues have been a significant driver behind Facebook’s lobbying efforts.

Facebook AI research introduces enhanced LASER library that allows zero-shot transfer across 93 languages
Russia opens civil cases against Facebook and Twitter over local data laws
Trick or Treat – New Facebook Community Actions for users to create petitions and connect with public officials