
How-To Tutorials - Data

1204 Articles

Mark Zuckerberg is a liar, fraudster, unfit to be the C.E.O. of Facebook, alleges Aaron Greenspan to the UK Parliamentary Committee

Vincy Davis
14 Jun 2019
10 min read
Last week, the Digital, Culture, Media and Sport Sub-Committee held its hearing on disinformation with Aaron Greenspan as the witness. Aaron is the founder, president, and CEO of Think Computer Corporation, an IT consulting service, and the author of the book ‘Authoritas: One Student's Harvard Admissions and the Founding of the Facebook Era’. Aaron, who claims to have had the original idea for Facebook, has been a long-standing critic of the social network. In January this year, he published a 75-page report in which he states that fake accounts made up more than half of Facebook's 2.2 billion users. He testified to the same in front of the UK Parliamentary Committee, mincing no words when he said Mark Zuckerberg is a liar, a fraudster, and unfit to be the CEO of Facebook.

Fake accounts

Facebook differentiates between duplicate accounts and fake accounts, but Aaron considers “any account which is not in your name and you have a second account for whatever purpose, that account is ultimately a fake account”. He noticed the issue of fake accounts on Facebook last year, after which he did his own enquiry and found some “alarming” numbers. At the end of 2017, Facebook claimed that “only 1% of their accounts are fake”. By their own definition, two weeks ago they said that the number of “fake accounts have increased to 5%”. This means that in the span of two years, their estimate has increased fivefold. The second issue Aaron highlights is that it is “extremely unclear” how they arrived at that estimate. Two years ago, under public pressure, Facebook launched a ‘Transparency portal’.
Facebook's transparency portal aims at publishing “regular reports to give our community visibility into how we enforce policies, respond to data requests and protect intellectual property, while monitoring dynamics that limit access to Facebook products.” Aaron states that he found it “very difficult to reconcile the SEC filings of Facebook with the numbers from the transparency portal”: the number of fake accounts published on the portal does not match the SEC filings. This is why he decided to do his own report, to find out whether the numbers from the “two sources of Facebook aligned”, and he has come to the conclusion that they “don’t”. Instead, Aaron has arrived at a conditional conclusion that “around 30% of accounts on Facebook are fake”. He claims that Facebook always minimizes its numbers to make the problem look smaller than it is. He adds, “Based on my total use of the platform, the historical trend starting in 2006, when it was made public, up until the present, it seemed like it was safe that 50% are fake and I think this could actually be higher.”

Some weeks ago, Facebook “finally updated their transparency portal and announced that in the fourth quarter of 2018, that they have disabled 1.2 billion fake accounts. And in the first quarter of 2019, the numbers counted to 2.2 billion fake accounts, which is an exponential growth curve in fake accounts, according to their own numbers, which has not been audited by any respective body”. If all the numbers are added, “by a conservative guess, it can be said that there are 10 billion fake accounts, for a platform which has 2.2 billion active users”. Meanwhile, “Facebook says that fake accounts don’t matter very much and within their undisclosed methodology, they are doing a good job.” Aaron believes that transparency around fake accounts is the number one problem for Facebook to address.
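The quarter-over-quarter jump in disabled accounts quoted above can be checked with a line of arithmetic (figures taken from the article; this is only an illustration of the growth claim, not an audit of Facebook's numbers):

```python
# Disabled-account figures quoted in the article, in billions.
disabled = {"Q4 2018": 1.2, "Q1 2019": 2.2}

# Quarter-over-quarter growth factor: nearly doubling in a single quarter,
# which is the basis for the "exponential growth curve" characterization.
growth = disabled["Q1 2019"] / disabled["Q4 2018"]
print(round(growth, 2))
```

Sustained over several quarters, a factor like this compounds quickly, which is why the cumulative "10 billion fake accounts" guess dwarfs the 2.2 billion active users.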
Facebook is more ‘unwilling’ than ‘unable’ to tackle fake accounts

When asked if he thinks that “Facebook wants to tackle these problems of fake accounts and others”, Aaron answered “No”. He believes very strongly that Mark has no clear intention of complying with the law, as he is not appearing in front of parliament in the UK, in Canada, or anywhere serious questions will be aimed at him. This is because “he has no genuine answers in many cases. So I have no faith in Facebook, I don't think it can be trusted and don't think it should be trusted and I would suggest independent analysis.”

The illusion of behavioural advertising effectiveness

Aaron claims that “behavioural advertising produces 4% benefit of revenue to advertisers”. So while ad exchanges like “Google or Facebook might charge 59% more on average for targeted ads, or up to 4900% more” in some cases, they will keep getting customers as long as advertisers believe they benefit. He also described Facebook as a “black box”, and claimed advertisers are “in the dark” about how effective their campaigns actually are on Facebook and whether they actually reach real users. Aaron adds, “I have come to the conclusion that Facebook is not in anybody’s control. The company has lost its capability to control its own platform. And I don’t think they can truly regain that ability.” He thus likened the social network to the Chernobyl disaster, the catastrophic nuclear accident of 1986. In that situation, too, there was a technology which was hyped and “expected to be transformative and make some other problems go away”. Aaron says that in 2004, Mark had described the Facebook system as “something that would involve the problem of reaching critical mass, a nuclear power reference”.
Aaron believes that by designing Facebook the way it is now, “Mark has effectively removed the control from the reactor core”, and the result is an “enormous uninhabitable zone in the internet, which is polluted with disinformation and falsity, much like radiation”, and nearly impossible to reverse.

History of Facebook

Aaron, who has always claimed that he is the original creator of Facebook, says that “My need to create ‘Universal Facebook’ was based on Harvard’s structure as an organization, while Mark wanted to build something cool.” He also states that “certainly neither of us had thought of this to be a global encompassing system”. Aaron says that if he had known Mark was planning “something like a huge global system”, he would have made it clear to Mark that “this could end up being a privacy nightmare”. He adds that “in the early days, Facebook lost control of the platform and it will never get it back”, and that “unfortunately, the media has been a significant member in propelling Facebook since the last 10-12 years.”

Facebook is not growing

Aaron believes that Mark has been lying to investors and the global community about Facebook’s growth. He says that, as an outside observer, every indicator suggests that usage of Facebook is falling drastically as users grow concerned about their privacy, and yet it is being publicised that Facebook is growing. In reality, Facebook is growing in countries like India, the Philippines, Vietnam, and Indonesia, the same countries for which Facebook's own disclaimers say “we have more fake accounts coming from these countries than anyone else.” In other words, Facebook is growing precisely where it has a known fake-account problem, which is more problematic than growth elsewhere. Aaron alleges that this misrepresentation of facts to shareholders amounts to “fraud”.
On respecting privacy and personal data

Aaron states that Facebook's original lie was that “openness is a universal good”, while its new lie is that “encryption is the same as privacy,” which is not true, “and that privacy is the universal good.” Aaron believes Mark is going completely against his earlier beliefs: the “previous model was not working for him anymore so he made a new model, and this is going to increase” further with every next phase of Facebook. Aaron also adds that “encryption comes with a lot of pitfalls.”

On the question of whether Zuckerberg respects personal data, Aaron claimed that Mark does not believe in the concept of personal data, and that he has committed securities fraud on a number of occasions in an incredibly blatant manner. He states that “the SEC has done nothing about it because they are afraid of targeting a billionaire”. He also pointed out that Mark is not the only executive who lies to stockholders, claiming that other tech leaders get away with it too; for example, “Elon Musk does it.”

When asked if there were any warning signs of the Cambridge Analytica scandal prior to 2016, Aaron said, “This wasn’t so much a breach as it was a designed behaviour, and that design was made so on Mark’s orders”. Aaron also recalled that in 2007, when he was working with Ed Baker, now Mark Zuckerberg’s colleague, Baker was planning to break the law with a technique allegedly designed to steal customer data. He added, “When I worked with Ed, he suggested for the software that we were building that we should ask users for access to their address book. And regardless of whether the answer was yes or no, we should take that data anyway, and use that to send emails to other potential users”. Aaron claims that “at that point, I quit”. Eventually, Ed joined Facebook and is now part of its growth team.
The antitrust law debate

If antitrust action were taken against Facebook, it would result in WhatsApp and Instagram being separated from the mother platform. However, Mark would still be responsible for the data of some 12 billion Facebook accounts, which, according to Aaron, is a huge problem. This is why he believes that “antitrust action against Facebook is not going to be effective in the long run”, and that other major measures need to be undertaken to make Facebook behave responsibly.

How do we solve a problem like Facebook? Aaron has some ideas

Aaron believes that regulating an entity as large and complex as Facebook requires technical knowledge, and that “by and large, US regulators lack the technical knowledge to effectively enforce the laws that are already on the book.” Aaron proposes a number of solutions for regulating Facebook, of which the most important, he believes, is “to remove Mark from the C.E.O. position.” He considers Mark incapable of being a responsible CEO for Facebook. Aaron’s recommendation comes in the same week that around 68% of independent investors wanted the company to have an independent chairman. Despite the revolt, the proposal was not passed: Mark owns up to 75% of Class B stock, giving him almost 60% of the voting power at Facebook, so he and his colleagues voted down the independent chairman proposal very smoothly.

Next, Aaron suggested regulating Facebook along the lines of a government-regulated bank, proposing KYC requirements for all social media at this point. He adds, “I think anonymous speech should not be banned, as it plays an important role, but if there’s a problem in any case, then the anonymous person should be held to account.” Since “transparency around fake accounts is the number one problem faced by Facebook”, he suggests a ‘social media tax’ levied on all its users.
He believes it would provide some revenue to fund investigative journalism and, because payment is involved, would require some kind of authentication, which could play a major role in identifying fake accounts. He says this would make “the entire process manageable for governments.” Check out the full hearing on the Parliament TV website.

US regulators plan to probe Google on anti-trust issues; Facebook, Amazon & Apple also under legal scrutiny

Facebook argues it didn’t violate users’ privacy rights and thinks there’s no expectation of privacy because there is no privacy on social media

Experts present most pressing issues facing global lawmakers on citizens’ privacy, democracy and rights to freedom of speech


Austrian Supreme Court rejects Facebook’s bid to stop a GDPR-violation lawsuit against it by privacy activist, Max Schrems

Bhagyashree R
13 Jun 2019
5 min read
On Tuesday, the Austrian Supreme Court overturned Facebook’s appeal to block a lawsuit accusing it of not conforming to Europe’s General Data Protection Regulation (GDPR). The decision will also have an effect on other EU member states that give “special status to industry sectors.”

https://twitter.com/maxschrems/status/1138703007594496000?s=19

The lawsuit was filed by Austrian lawyer and data privacy activist Max Schrems. In it, he accuses Facebook of using illegal privacy policies, as it forces users to give their consent to the processing of their data in return for using the service; GDPR does not allow forced consent as a valid legal basis for processing user data. Schrems said in a statement, “Facebook has even blocked accounts of users who have not given consent. In the end users only had the choice to delete the account or hit the ‘agree’ button–that’s not a free choice; it more reminds of a North Korean election process. Many users do not know yet that this annoying way of pushing people to consent is actually forbidden under GDPR in most cases.”

Facebook has been trying to block the lawsuit by questioning whether GDPR-based cases fall under the jurisdiction of courts at all. According to Facebook’s appeal, such lawsuits should be handled by data protection authorities, in this case the Irish Data Protection Commissioner (DPC). Dismissing Facebook’s argument, this landmark decision says that complaints made under Article 79 of GDPR can be reviewed both by judges and by data protection authorities. The verdict comes as a sigh of relief for Schrems, who had to wait almost five years just to get the lawsuit to trial because of Facebook’s continuous attempts to block it. “I am very pleased that we were able to clarify this fundamental issue. We are hoping for a speedy procedure now that the case has been pending for a good 5 years,” Schrems said in a press release.
He further added, “If we win even part of the case, Facebook would have to adapt its business model considerably. We are very confident that we will succeed on the substance too now. Of course, they wanted to prevent such a case by all means and blocked it for five years.“

Previously, the Vienna Regional Court had ruled in Facebook’s favor, declaring that it did not have jurisdiction and that Facebook could only be sued in Ireland, where its European headquarters are. Schrems believes that verdict was given because there is “a tendency that civil judges are not keen to have (complex) GDPR cases on their table.” Now, both the Appellate Court and the Austrian Supreme Court have agreed that anyone can file a lawsuit for GDPR violations. Schrems’ original idea was to mount a “class action”-style suit against Facebook by allowing any Facebook user to join the case, but the court did not allow that, and Schrems was limited to bringing only a model case.

This is Schrems’ second victory this year in his fight against Facebook. Last month, the Irish Supreme Court rejected Facebook’s attempt to stop the referral of a privacy case regarding the transfer of EU citizens’ data to the United States; the hearing of that case is now scheduled at the European Court of Justice (ECJ) in July.

Schrems’ eight-year-long battle against Facebook

Schrems’ fight against Facebook started well before most of us realized the severity of tech companies harvesting our personal data. Back in 2011, Schrems’ professor at Santa Clara University invited Facebook’s privacy lawyer Ed Palmieri to speak to his class. Schrems was surprised by the lawyer’s lack of awareness of data protection laws in Europe, and decided to write his thesis about Facebook’s misunderstanding of EU privacy laws. As part of the research, he requested his personal data from Facebook and found it had his entire user history.
He went on to make 22 complaints to the Irish Data Protection Commission, accusing Facebook of breaking European data protection laws. His efforts finally showed results when, in 2015, the European Court of Justice struck down the EU–US Safe Harbor Principles. As part of his fight for global privacy rights, Schrems also co-founded the European non-profit noyb (None of Your Business), which aims to “make privacy real”. The organization works to make privacy enforcement more effective, holds accountable companies that fail to follow Europe’s privacy laws, and takes media initiatives to support GDPR.

Things haven’t been going well for Facebook. Along with losing these cases in the EU, a revelation yesterday by the WSJ surfaced several emails that indicate Mark Zuckerberg’s knowledge of potentially problematic privacy practices at the company. You can read the entire press release on noyb’s official website.

Facebook releases Pythia, a deep learning framework for vision and language multimodal research

Zuckerberg just became the target of the world’s first high profile white hat deepfake op. Can Facebook come out unscathed?

US regulators plan to probe Google on anti-trust issues; Facebook, Amazon & Apple also under legal scrutiny


Highlights from Mary Meeker’s 2019 Internet trends report

Sugandha Lahoti
12 Jun 2019
8 min read
At Recode by Vox’s 2019 Code Conference on Tuesday, Bond partner Mary Meeker gave her onstage presentation covering the internet’s latest trends. Meeker, popularly known as the Queen of the Internet, first started presenting these reports in 1995, underlining the most important statistics and technology trends on the internet. Last September, she quit Kleiner Perkins to start her own firm, Bond.

Mary Meeker’s 2019 Internet Trends report highlighted that the internet is continuing to grow, slowly, as more users come online, especially with mobile devices. She also talked about increased internet ad spending, data growth, the rise of freemium subscription business models, interactive gaming, the on-demand economy, and more.

https://youtu.be/G_dwZB5h56E

The internet trends highlighted by Meeker include:
  • Internet users
  • E-commerce and advertising
  • Internet usage
  • Freemium business models
  • Data growth
  • Jobs and work
  • Online education
  • Immigration and healthcare

Internet users

More than 50% of the world’s population now has access to the internet. There are 3.8 billion internet users in the world, with Asia-Pacific leading in both users and potential. China is the largest market, with 21% of total internet users, and India is at 12%. However, growth slowed to 6% in 2018 versus 7% in 2017, because so many people have already come online that new users are harder to come by. New smartphone unit shipments actually declined in 2018. Among the global internet market cap leaders, the U.S. is stable at 18 of the top 30 and China at 7 of the top 30; these are the two countries where internet innovation is at an especially high level. Revenue growth for the internet market cap leaders continues to slow: 11 percent year-on-year in Q1 versus 13 percent in Q4.

Internet usage

Internet usage saw solid growth, driven by investment in innovation. Digital media usage in the U.S. is accelerating, up 7% versus 5% growth in 2017. The average US adult spends 6.3 hours each day with digital media, over half of which is spent on mobile. Wearables had 52 million users, doubling in four years. Roughly 70 million people in the US listen to podcasts, a figure that has doubled in about four years. Outside the US, there is especially high innovation in data-driven and direct fulfillment, which is growing very rapidly in China, and in financial services. Images are also becoming an increasingly relevant way to communicate: more than 50% of tweet impressions today involve images, video, or other forms of media. Interactive gaming innovation is rising across platforms as games like Fortnite become the new social media for certain people; gaming reached 2.4 billion users, up 6 percent year-on-year in 2018.

On the flip side

Almost 26% of adults are constantly online, versus 21% three years ago; that number jumps to 39% for 18- to 29-year-olds surveyed. However, digital media users are taking action to reduce their usage, and businesses are taking action to help users monitor theirs. Social media usage growth has decelerated, up 1% in 2018 versus 6% in 2017. Privacy concerns are high but moderating, and regulators and businesses are improving consumer privacy controls. Encrypted messaging and traffic are rising rapidly: in Q1, 87 percent of global web traffic was encrypted, up from 53 percent three years ago.

Another usage concern is problematic content, which on the internet can be less filtered and more amplified. Images and streaming can be more powerful than text. Algorithms can amplify content based on user patterns, and social media can amplify trending topics. Bad actors can amplify ideologies, unintended bad actors can amplify misinformation, and extreme views can amplify polarization.
However, internet platforms are driving efforts to reduce problematic content, as are consumers and businesses. 88 percent of people in the U.S. believe the internet has been mostly good for them, and 70% believe it has been mostly good for society. Cyber attacks continue to rise, including state-sponsored attacks, large-scale data provider attacks, and monetary extortion attacks.

E-commerce and online advertising

E-commerce is now 15 percent of retail sales. Its growth has slowed, up 12.4 percent in Q1 compared with a year earlier, but still towers over growth in regular retail, which was just 2 percent in Q1. In online advertising, comparing media time spent against advertising dollars spent, mobile hit equilibrium in 2018, while desktop hit that point in 2015. Annual internet ad spending accelerated a little in 2018, up 22 percent. Most of the spending still goes to Google and Facebook, but companies like Amazon and Twitter are getting a growing share. Some 62 percent of all digital display ad buying is for programmatic ads, which will continue to grow. Among the leading tech companies, internet advertising revenue growth has been decelerating, at 20 percent in Q1. Google and Facebook still account for the majority of online ad revenue, but the growth of US advertising platforms like Amazon, Twitter, Snapchat, and Pinterest is outstripping the big players: Google’s ad revenue grew 1.4 times over the past nine quarters and Facebook’s grew 1.9 times, while the combined group of new players grew 2.6 times. Customer acquisition costs, the marketing spending necessary to attract each new customer, are going up. That is unsustainable, because in some cases they surpass the long-term revenue those customers will bring. Meeker suggests cheaper ways to acquire customers, like free trials and unpaid tiers.
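The unsustainability argument above reduces to a simple comparison: a customer is only worth acquiring if projected lifetime revenue exceeds the acquisition cost. A toy sketch (all numbers invented for illustration, not from the report):

```python
def acquisition_is_sustainable(cac, monthly_revenue, expected_months):
    """True if a customer's projected lifetime revenue covers the
    customer acquisition cost (CAC)."""
    lifetime_value = monthly_revenue * expected_months
    return lifetime_value > cac

# Same customer, same revenue: rising CAC alone flips the economics.
print(acquisition_is_sustainable(cac=120, monthly_revenue=10, expected_months=18))
print(acquisition_is_sustainable(cac=220, monthly_revenue=10, expected_months=18))
```

Free trials and unpaid tiers attack the left-hand side of this comparison, lowering the cost of each acquired customer rather than trying to squeeze more revenue out of them.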
Freemium business models

Freemium business models are growing and scaling. Freemium means a free user experience, which enables more usage, engagement, social sharing, and network effects, plus a premium user experience, which drives monetization and product innovation. The freemium model started in gaming and is now evolving and emerging in consumer and enterprise products. One important factor in this growth is cloud deployment revenue, which grew about 58% year-over-year. Another enabler of freemium subscription models is efficient digital payments, which now account for more than 50% of day-to-day transactions around the world.

Data growth

A number of data plumbers are helping companies collect data, manage connections, and optimize data. In a survey of retail customers, 91% preferred brands that provided personalized offers and recommendations, 83% were willing to passively share data in exchange for personalized services, and 74% were willing to actively share data in exchange for personalized experiences. Data volume and utilization are also evolving rapidly: enterprise data surpassed consumer data in 2018, and cloud is overtaking both. More data is now stored in the cloud than on private enterprise servers or consumer devices.

Jobs and work

Strong economic indicators and internet-enabled services are helping work. Looking at global GDP, China, the US, and India are rising, while Europe is falling. Cross-border trade is at 29% of global GDP and has been growing for many years. Relative unemployment concerns are very high outside the US and low within it. The consumer confidence index is high and rising. Unemployment is at a 19-year low, job openings are at an all-time high, and wages are rising. On-demand work is creating internet-enabled opportunities and efficiencies: there are 7 million on-demand workers, up 22 percent year-on-year. Remote work is doing the same: the share of Americans working remotely has risen to 5 percent, from 3 percent in 2000.

Online education

Education costs and student debt are rising in the US, while post-secondary enrollment is slowing. Online education enrollment is high across a diverse base of universities: public, private for-profit, and private not-for-profit. Top offline institutions are ramping up their online offerings at a rapid rate, most recently the University of Pennsylvania, University of London, University of Michigan, and CU Boulder. Google's program of certificates for in-demand jobs, run in collaboration with Coursera, is also growing rapidly.

Immigration and healthcare

In the U.S., 60% of the most highly valued tech companies were founded by first- or second-generation Americans; they employed 1.9 million people last year. US entitlements account for 61% of government spending, versus 42% thirty years ago, and show no signs of stopping. Healthcare is steadily digitizing, driven by consumers, and the trends are very powerful: expect more telemedicine and on-demand consultations.

For details and infographics, we recommend going through the slide deck of the Internet Trends report.

What Elon Musk and South African conservation can teach us about technology forecasting

Jim Balsillie on Data Governance Challenges and 6 Recommendations to tackle them

Experts present the most pressing issues facing global lawmakers on citizens’ privacy, democracy and the rights to freedom of speech


Deep learning models have massive carbon footprints, can photonic chips help reduce power consumption?

Sugandha Lahoti
11 Jun 2019
10 min read
Most recent breakthroughs in artificial intelligence are driven by data and computation, and what is essentially missing from the conversation is the energy cost. Most large AI networks require huge amounts of training data to ensure accuracy, and those accuracy improvements depend on the availability of exceptionally large computational resources. The larger the computational resource, the more energy it consumes. This is not only costly financially (due to the cost of hardware, cloud compute, and electricity) but also straining for the environment, due to the carbon footprint required to fuel modern tensor processing hardware. Considering the climate change repercussions we face daily, consensus is building that AI research ethics should include a focus on minimizing and offsetting the carbon footprint of research, and that researchers should report the energy cost of their work alongside time, accuracy, and other results.

Deep learning's outsized environmental impact was highlighted in a recent research paper from the University of Massachusetts Amherst, “Energy and Policy Considerations for Deep Learning in NLP”. The researchers performed a life cycle assessment for training several common large AI models: they quantified the approximate financial and environmental costs of training a variety of recently successful neural network models for NLP, and provided recommendations to reduce costs and improve equity in NLP research and practice. Per the paper, training an AI model can emit more than 626,000 pounds of carbon dioxide equivalent, nearly five times the lifetime emissions of the average American car (including the manufacture of the car itself). It is estimated that we must cut carbon emissions by half over the next decade to deter escalating rates of natural disaster.
This brings the conversation to the returns on the heavy carbon investment of deep learning, and whether it is really worth the marginal improvement in predictive accuracy over cheaper, alternative methods. The findings alarmed many people:

https://twitter.com/sakthigeek/status/1137555650718908416
https://twitter.com/vinodkpg/status/1129605865760149504
https://twitter.com/Kobotic/status/1137681505541484545

Even if some of this energy comes from renewable or carbon-offset resources, the high energy demands of these models are still a concern, because in many locations the available energy is not derived from carbon-neutral sources, and even where renewable energy is available, it is limited by the equipment produced to store it.

The carbon footprint of NLP models

The researchers in this paper looked specifically at NLP models. They examined four models, the Transformer, ELMo, BERT, and GPT-2, and trained each on a single GPU for up to a day to measure its power draw. Next, they used the number of training hours listed in each model's original paper to calculate the total energy consumed over the complete training process. This number was then converted into pounds of carbon dioxide equivalent based on the average energy mix in the US, which closely matches the energy mix used by Amazon’s AWS, the largest cloud services provider.

The researchers found that the environmental cost of training grew proportionally to model size, and increased far more when additional tuning steps were used to raise the model’s final accuracy. In particular, neural architecture search, a tuning process which tries to optimize a model by incrementally tweaking a neural network’s design through exhaustive trial and error, had high associated costs for little performance benefit. The researchers also noted that these figures should be considered baselines.
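The estimation pipeline described above (measured power draw × published training hours → energy → CO2 equivalent) can be sketched in a few lines. The data-center overhead multiplier (PUE) and US-grid emissions factor below are assumed illustrative constants, not figures quoted in this article:

```python
def training_co2e_lbs(avg_gpu_power_kw, num_gpus, train_hours,
                      pue=1.58, lbs_co2e_per_kwh=0.954):
    """Rough CO2-equivalent estimate for a training run.

    pue: power usage effectiveness, a multiplier for data-center
         cooling and other overhead (assumed value).
    lbs_co2e_per_kwh: assumed US-average grid emissions factor.
    """
    energy_kwh = avg_gpu_power_kw * num_gpus * train_hours * pue
    return energy_kwh * lbs_co2e_per_kwh

# Hypothetical example: 8 GPUs drawing ~0.25 kW each for two weeks.
print(round(training_co2e_lbs(0.25, 8, 14 * 24), 1))
```

The structure makes the paper's scaling observation concrete: emissions grow linearly in GPU count and training hours, so tuning procedures that multiply the number of full training runs (such as architecture search) multiply the footprint by the same factor.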
In practice, AI researchers mostly develop a new model from scratch or adapt an existing model to a new dataset; both require many more rounds of training and tuning. Based on their findings, the authors make several proposals to raise awareness of this issue in the NLP community and promote mindful practice and policy: Researchers should report training time and sensitivity to hyperparameters. There should be a standard, hardware-independent measurement of training time, such as gigaflops required to reach convergence, and a standard measurement of model sensitivity to data and hyperparameters, such as variance with respect to the hyperparameters searched. Academic researchers need equitable access to computational resources. The trend toward training huge models on vast amounts of data is not feasible for academics, because they lack the computational resources. It would be more cost-effective for academic researchers to pool resources to build shared compute centers at the level of funding agencies, such as the U.S. National Science Foundation. Researchers should prioritize computationally efficient hardware and algorithms. For instance, developers could reduce the energy associated with model tuning by providing easy-to-use APIs implementing more efficient alternatives to brute-force search. The next step is to make energy cost a standard metric that researchers are expected to report alongside their findings. Researchers should also try to minimize the carbon footprint by developing compute-efficient training methods, such as new machine learning algorithms or new engineering tools that make existing ones more compute-efficient. Above all, we need strict public policies that steer digital technologies toward speeding a clean energy transition while mitigating the risks. The hardware used to run most deep learning tasks is another major contributor to high energy consumption.
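The first recommendation, reporting sensitivity to hyperparameters alongside the headline accuracy, can be illustrated with a minimal sketch. The accuracy figures and GPU-hour counts below are invented for the example; the point is the shape of the report, not the numbers.

```python
# Sketch of the suggested reporting practice: alongside the best
# accuracy, report how sensitive results were to the hyperparameters
# searched, and the total compute cost of the search.
from statistics import mean, stdev

search_results = [
    # (hyperparameter setting, validation accuracy, gpu-hours spent)
    ({"lr": 1e-3}, 0.912, 6.0),
    ({"lr": 3e-4}, 0.918, 6.0),
    ({"lr": 1e-4}, 0.905, 6.0),
    ({"lr": 3e-5}, 0.881, 6.0),
]

accs = [acc for _, acc, _ in search_results]
report = {
    "best_accuracy": max(accs),
    "accuracy_mean": round(mean(accs), 3),
    "accuracy_stdev": round(stdev(accs), 3),  # sensitivity to the search
    "total_gpu_hours": sum(h for _, _, h in search_results),
}
print(report)
```

A reader of such a report can judge whether the last fraction of a point of accuracy justified the 24 GPU-hours spent finding it.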
To tackle that issue, researchers and major tech companies, including Google, IBM, and Tesla, have developed “AI accelerators,” specialized chips that improve the speed and efficiency of training and testing neural networks. However, these AI accelerators use electricity and have a theoretical minimum limit for energy consumption. Also, most present-day ASICs are based on CMOS technology and suffer from the interconnect problem: even in highly optimized architectures where data are stored in register files close to the logic units, a majority of the energy consumption comes from data movement, not logic. Analog crossbar arrays based on CMOS gates or memristors promise better performance, but as analog electronic devices, they suffer from calibration issues and limited accuracy.
Implementing chips that use light instead of electricity
Another group of MIT researchers has developed a “photonic” chip that uses light instead of electricity, and consumes relatively little power in the process. The photonic accelerator uses more compact optical components and optical signal-processing techniques to drastically reduce both power consumption and chip area. Practical applications for such chips include reducing energy consumption in data centers. “In response to vast increases in data storage and computational capacity in the last decade, the amount of energy used by data centers has doubled every four years, and is expected to triple in the next 10 years.”
https://twitter.com/profwernimont/status/1137402420823306240
The chip could be used to process massive neural networks millions of times more efficiently than today’s classical computers.
How does the photonic chip work?
The researchers give a detailed explanation of the chip’s workings in their research paper, “Large-Scale Optical Neural Networks Based on Photoelectric Multiplication”.
The chip relies on a compact, energy-efficient “optoelectronic” scheme that encodes data with optical signals but uses “balanced homodyne detection” for matrix multiplication, a technique that produces a measurable electrical signal after calculating the product of the amplitudes (wave heights) of two optical signals. Pulses of light encoded with information about the input and output neurons for each neural network layer, which are needed to train the network, flow through a single channel. Optical signals carrying the neuron and weight data fan out to a grid of homodyne photodetectors. The photodetectors use the amplitude of the signals to compute an output value for each neuron. Each detector feeds an electrical output signal for each neuron into a modulator, which converts the signal back into a light pulse. That optical signal becomes the input for the next layer, and so on.
Limitation of photonic accelerators
Photonic accelerators generally have unavoidable noise in the signal: the more light that is fed into the chip, the less noise and the greater the accuracy, while less input light increases efficiency but negatively impacts the neural network’s performance. The efficiency of an AI accelerator is measured in how many joules it takes to perform a single operation of multiplying two numbers. Traditional accelerators are measured in picojoules, or one-trillionth of a joule. Photonic accelerators measure in attojoules, making them a million times more efficient. In their simulations, the researchers found their photonic accelerator could operate with sub-attojoule efficiency.
Tech companies are the largest contributors of carbon footprint
The realization that training an AI model can produce emissions equivalent to nearly five cars should make the carbon footprint of artificial intelligence an important consideration for researchers and companies going forward.
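The joules-per-operation comparison above is easy to make concrete. In the sketch below, the one-picojoule figure for a conventional digital accelerator is an illustrative assumption, used only to show the scale of the gap between the two unit prefixes.

```python
# Energy per multiply operation: a conventional digital accelerator
# (on the order of picojoules) versus a photonic design (on the order
# of attojoules). The specific per-op values are illustrative.
PICO = 1e-12  # joules in one picojoule
ATTO = 1e-18  # joules in one attojoule

conventional_j_per_op = 1 * PICO  # assumed ~1 pJ per multiply
photonic_j_per_op = 1 * ATTO      # sub-attojoule claimed in simulation

ratio = conventional_j_per_op / photonic_j_per_op  # ~a million-fold gap
print(f"photonic advantage: {ratio:.0e}x")
```

The million-fold factor is simply the ratio between the pico- and atto- prefixes, which is why the article describes attojoule operation as "a million times more efficient".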
UMass Amherst’s Emma Strubell, one of the researchers and a co-author of the paper, said, “I’m not against energy use in the name of advancing science, obviously, but I think we could do better in terms of considering the trade off between required energy and resulting model improvement.” “I think large tech companies that use AI throughout their products are likely the largest contributors to this type of energy use,” Strubell said. “I do think that they are increasingly aware of these issues, and there are also financial incentives for them to curb energy use.” In 2016, Google’s DeepMind was able to reduce the energy required to cool Google data centers by 30%. This full-fledged AI system has features including continuous monitoring and human override. Recently, Microsoft doubled its internal carbon fee to $15 per metric ton on all carbon emissions. The funds from this higher fee will maintain Microsoft’s carbon neutrality and help meet its sustainability goals. On the other hand, Microsoft is also two years into a seven-year deal, rumored to be worth over a billion dollars, to help Chevron, one of the world’s largest oil companies, better extract and distribute oil.
https://twitter.com/AkwyZ/status/1137020554567987200
Amazon had announced that it would power its data centers with 100 percent renewable energy, but without a dedicated timeline. Since 2018, Amazon has reportedly slowed its renewable energy efforts, reaching only 50 percent. It has also not announced any new deals to supply clean energy to its data centers since 2016, according to a report by Greenpeace, and it quietly abandoned plans for one of its last scheduled wind farms last year. In April, over 4,520 Amazon employees organized against Amazon’s continued profiting from climate devastation. However, Amazon rejected all 11 shareholder proposals, including the employee-led climate resolution, at its annual shareholder meeting.
Both studies illustrate the dire need to change our outlook toward building Artificial Intelligence models and chips that have an impact on the carbon footprint. This does not mean halting AI research altogether; rather, there should be an awareness of the environmental impact that training AI models can have, which in turn can inspire researchers to develop more efficient hardware and algorithms for the future.
Responsible tech leadership or climate washing? Microsoft hikes its carbon tax and announces new initiatives to tackle climate change.
Microsoft researchers introduce a new climate forecasting model and a public dataset to train these models.
Now there’s a CycleGAN to visualize the effects of climate change. But is this enough to mobilize action?
Salesforce is buying Tableau in a $15.7 billion all-stock deal

Richard Gall
10 Jun 2019
4 min read
Salesforce, one of the world's leading CRM platforms, is buying data visualization software Tableau in an all-stock deal worth $15.7 billion. The news comes just days after it emerged that Google is buying one of Tableau's competitors in the data visualization market, Looker. Taken together, the stories highlight the importance of analytics to some of the planet's biggest companies. They suggest that despite years of the big data revolution, it's only now that market-leading platforms are starting to realise that their customers want the level of capabilities offered by the best in the data visualization space. Salesforce will use its own stock to purchase Tableau. As the press release published on the Salesforce site explains, "each share of Tableau Class A and Class B common stock will be exchanged for 1.103 shares of Salesforce common stock, representing an enterprise value of $15.7 billion (net of cash), based on the trailing 3-day volume weighted average price of Salesforce's shares as of June 7, 2019." The acquisition is expected to be completed by the end of October 2019.
https://twitter.com/tableau/status/1138040596604575750
Why is Salesforce buying Tableau?
The deal is an incredible result for Tableau shareholders; at the end of last week, its market cap was $10.7 billion. This has led to some scepticism about just how good a deal it is for Salesforce. One commenter on Hacker News said "this seems really high for a company without earnings and a weird growth curve. Their ticker is cool and maybe sales force [sic] wants to be DATA on nasdaq. Otherwise, it will be hard to justify this high markup for a tool company." With Salesforce shares dropping 4.5% as markets opened this week, it seems investors are inclined to agree: Salesforce is certainly paying a premium for Tableau.
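Because this is an all-stock deal, the exchange ratio quoted in the press release determines what each Tableau share is worth in Salesforce stock. The snippet below shows the arithmetic; the Salesforce share price used is a hypothetical input for illustration, not the actual trailing 3-day VWAP from June 2019.

```python
# Implied per-share value of Tableau under the all-stock terms:
# each Tableau share converts into 1.103 Salesforce shares.
EXCHANGE_RATIO = 1.103  # from the Salesforce press release

def implied_tableau_value(salesforce_share_price):
    """Dollar value of one Tableau share at a given Salesforce price."""
    return EXCHANGE_RATIO * salesforce_share_price

# Hypothetical Salesforce price of $152.00 per share:
print(round(implied_tableau_value(152.00), 2))
```

Multiplying the implied per-share value by Tableau's diluted share count is how the headline enterprise value is reached, which is why the deal's final worth floats with Salesforce's share price.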
However, whatever the long-term impact of the acquisition, the price paid underlines the fact that Salesforce views Tableau as exceptionally important to its long-term strategy. It opens up an opportunity for Salesforce to reposition and redefine itself as much more than just a CRM platform, meaning it can start to compete with the likes of Microsoft, which has a full suite of professional and business intelligence tools. Moreover, it also provides the platform with another way of onboarding customers: given that Tableau is well known as a powerful yet accessible data visualization tool, it creates an avenue through which new users can find their way to the Salesforce product. Marc Benioff, Chair and co-CEO of Salesforce, said "we are bringing together the world’s #1 CRM with the #1 analytics platform. Tableau helps people see and understand data, and Salesforce helps people engage and understand customers. It’s truly the best of both worlds for our customers--bringing together two critical platforms that every customer needs to understand their world.” Tableau has been a target for Salesforce for some time. Leaked documents from 2016 revealed that the data visualization company was one of 14 companies that Salesforce had an interest in (another was LinkedIn, which would eventually be purchased by Microsoft).
Read next: Alteryx vs. Tableau: Choosing the right data analytics tool for your business
What's in it for Tableau (aside from the money...)?
For Tableau, there are many benefits to being purchased by Salesforce besides the money. Primarily this is about expanding the platform's reach: Salesforce users are people who are interested in data, with a huge range of use cases. By joining up with Salesforce, Tableau will become their go-to data visualization tool. "As our two companies began joint discussions," Tableau CEO Adam Selipsky said, "the possibilities of what we might do together became more and more intriguing.
They have leading capabilities across many CRM areas including sales, marketing, service, application integration, AI for analytics and more. They have a vast number of field personnel selling to and servicing customers. They have incredible reach into the fabric of so many customers, all of whom need rich analytics capabilities and visual interfaces... On behalf of our customers, we began to dream about what we might accomplish if we could combine our ability to help people see and understand data with their ability to help people engage and understand customers."
What will happen to Tableau?
Tableau won't be going anywhere. It will continue to exist under its own brand, with the current leadership, including Selipsky, all remaining.
What does this all mean for the technology market?
At the moment, it's too early to say, but the last year or so has seen some major high-profile acquisitions by tech companies. Perhaps we're seeing the emergence of a tooling arms race as the biggest organizations attempt to arm themselves with ecosystems of established market-leading tools. Whether this is good or bad for users remains to be seen.
Did unfettered growth kill Maker Media? Financial crisis leads company to shutdown Maker Faire and lay off all staff

Savia Lobo
10 Jun 2019
5 min read
Updated: On July 10, 2019, Dougherty announced the relaunch of Maker Faire and Maker Media under the new name “Make Community“. Maker Media Inc., the company behind Maker Faire, the popular event that hosts arts, science, and engineering DIY projects for children and their parents, has laid off all 22 of its employees and decided to shut down due to financial troubles. In January 2005, the company started off with MAKE, an American bimonthly magazine focused on do-it-yourself and DIWO projects involving computers, electronics, robotics, metalworking, woodworking, and more, for both adults and children. In 2006, the company held its first Maker Faire event, which lets attendees wander amidst giant, inspiring art and engineering installations. Maker Faire now includes 200 owned and licensed events per year in over 40 countries. The Maker movement gained momentum and popularity when MAKE magazine first started publishing 15 years ago. The movement emerged as a dominant source of livelihood as individuals found ways to build small businesses around their creative activity. In 2014, the White House blog posted an article stating, “Maker Faires and similar events can inspire more people to become entrepreneurs and to pursue careers in design, advanced manufacturing, and the related fields of science, technology, engineering and mathematics (STEM).” With funding from the Department of Labor, “the AFL-CIO and Carnegie Mellon University are partnering with TechShop Pittsburgh to create an apprenticeship program for 21st-century manufacturing and encourage startups to manufacture domestically.” Recently, researchers from Baylor University and the University of North Carolina, in their research paper, highlighted opportunities for studying the conditions under which the Maker movement might foster entrepreneurship outcomes.
Dale Dougherty, Maker Media Inc.’s founder and CEO, told TechCrunch, “I started this 15 years ago and it’s always been a struggle as a business to make this work. Print publishing is not a great business for anybody, but it works…barely. Events are hard . . . there was a drop off in corporate sponsorship”. “Microsoft and Autodesk failed to sponsor this year’s flagship Bay Area Maker Faire”, TechCrunch reports. Dougherty added that the company is trying to keep the servers running. “I hope to be able to get control of the assets of the company and restart it. We’re not necessarily going to do everything we did in the past but I’m committed to keeping the print magazine going and the Maker Faire licensing program”, he said. In 2016, the company laid off 17 of its employees, followed by 8 more this March. “They’ve been paid their owed wages and PTO, but did not receive any severance or two-week notice”, TechCrunch reports. These earlier layoffs may have hinted to the staff at the financial crisis affecting the company. Maker Media Inc. had raised $10 million from Obvious Ventures, Raine Ventures, and Floodgate. Dougherty says, “It started as a venture-backed company but we realized it wasn’t a venture-backed opportunity. The company wasn’t that interesting to its investors anymore. It was failing as a business but not as a mission. Should it be a non-profit or something like that? Some of our best successes, for instance, are in education.” The company has a huge public following for its products. Dougherty told TechCrunch that despite the rain, Maker Faire’s big Bay Area event last week met its ticket sales target, and about 1.45 million people attended its events in 2016. “MAKE: magazine had 125,000 paid subscribers and the company had racked up over one million YouTube subscribers. But high production costs in expensive cities and a proliferation of free DIY project content online had strained Maker Media”, writes TechCrunch.
Dougherty told TechCrunch he has been overwhelmed by the support shown by the Maker community. As of now, licensed Maker Faire events around the world will proceed as planned. “Dougherty also says he’s aware of Oculus co-founder Palmer Luckey’s interest in funding the company, and a GoFundMe page started for it”, TechCrunch reports. Mike Senese, Executive Editor of MAKE magazine, tweeted, “Nothing but love and admiration for the team that I got to spend the last six years with, and the incredible community that made this amazing part of my life a reality.”
https://twitter.com/donttrythis/status/1137374732733493248
https://twitter.com/xeni/status/1137395288262373376
https://twitter.com/chr1sa/status/1137518221232238592
Former MythBusters co-host Adam Savage, who was a regular presence at Maker Faire, told The Verge, “Maker Media has created so many important new connections between people across the world. It showed the power from the act of creation. We are the better for its existence and I am sad. I also believe that something new will grow from what they built. The ground they laid is too fertile to lie fallow for long.” On July 10, 2019, Dougherty announced he’ll relaunch Maker Faire and Maker Media under the new name “Make Community“. The official launch of Make Community is expected next week. The company is also working on a new issue of Make Magazine, planned to be published quarterly, and the online archives of its do-it-yourself project guides will remain available. Dougherty told TechCrunch this is “with the goal that we can get back up to speed as a business, and start generating revenue and a magazine again.
This is where the community support needs to come in because I can’t fund it for very long.” GitHub introduces ‘Template repository’ for easy boilerplate code management and distribution 12 Visual Studio Code extensions that Node.js developers will love [Sponsored by Microsoft] Shoshana Zuboff on 21st century solutions for tackling the unique complexities of surveillance capitalism

Worried about Deepfakes? Check out the new algorithm that manipulates talking-head videos by altering their transcripts

Vincy Davis
07 Jun 2019
6 min read
Last week, a team of researchers from Stanford University, the Max Planck Institute for Informatics, Princeton University, and Adobe Research published a paper titled “Text-based Editing of Talking-head Video”. The paper proposes a method to edit a talking-head video based on its transcript to produce a realistic output video in which the dialogue of the speaker has been modified. Essentially, the editor modifies a video using a text transcript to add new words, delete unwanted ones, or completely rearrange the pieces by dragging and dropping. The edited video maintains a seamless audio-visual flow, without any jump cuts, and will look almost flawless to the untrained eye. The researchers intend this kind of text-based editing approach to lay the foundation for better editing tools in the post-production of movies and television. Actors often botch small bits of a performance or leave out a critical word; this algorithm can help video editors fix that, which until now has involved expensive reshoots. It can also help in easily adapting audio-visual content to specific target audiences. The tool supports three types of edit operations: adding new words, rearranging existing words, and deleting existing words. Ohad Fried, a researcher on the paper, says that “This technology is really about better storytelling. Instructional videos might be fine-tuned to different languages or cultural backgrounds, for instance, or children’s stories could be adapted to different ages.”
https://youtu.be/0ybLCfVeFL4
How does the application work?
The method uses an input talking-head video and a transcript to perform text-based editing. The first step is to align phonemes to the input audio and track each input frame to construct a parametric head model. Next, a 3D parametric face model is registered with each frame of the input talking-head video. This helps in selectively blending different aspects of the face. Then, a background sequence is selected and used for pose data and background pixels.
The background sequence allows editors to edit challenging videos with hair movement and slight camera motion. As facial expressions are an important parameter, the researchers have tried to preserve the retrieved expression parameters as much as possible by smoothing out the transitions between them. The output is an edited parameter sequence that describes the new desired facial motion, along with a corresponding retimed background video clip. This is forwarded to a “neural face rendering” approach, which changes the facial motion of the retimed background video to match the parameter sequence. The rendering procedure thus produces photo-realistic video frames of the subject, appearing to speak the new phrase. These localized edits seamlessly blend into the original video, producing an edited result. Lastly, to add the audio, the resulting video is retimed to match the recording at the level of phones. The researchers used the performer's own voice in all their synthesis results.
Image Source: Text-based Editing of Talking-head Video
The researchers tested the system with a series of complex edits, including adding, removing, and changing words, as well as translations to different languages. When the application was tried in a crowd-sourced study with 138 participants, the edits were rated as “real” almost 60% of the time. Fried said, “The visual quality is such that it is very close to the original, but there’s plenty of room for improvement.”
Ethical considerations: erosion of truth, confusion, and defamation
Even though the application is quite useful for video editors and producers, it raises important and valid concerns about its potential for misuse. The researchers agree that such a technology might be used for illicit purposes: “We acknowledge that bad actors might use such technologies to falsify personal statements and slander prominent individuals.
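The pipeline described above can be summarized as a runnable skeleton. Every stage function below is a placeholder stub standing in for one of the paper's components (phoneme alignment, 3D face registration, parameter blending, neural rendering); none of these names come from a released implementation, and the stubs only pass data through to show the flow.

```python
# Skeleton of the text-based editing pipeline; stage functions are
# simplified stubs, not the paper's actual algorithms.

def align_phonemes(audio, transcript):
    # Stub: one placeholder phone entry per transcript word.
    return [("phone", i) for i, _ in enumerate(transcript.split())]

def register_face_model(frames):
    # Stub: one 3D parametric face-model entry per input frame.
    return [{"expression": 0.0, "pose": 0.0} for _ in frames]

def select_background_sequence(frames, edited_transcript):
    # Stub: reuse the input frames as pose/background source.
    return frames

def blend_parameters(face_params, phonemes, edited_transcript):
    # Stub: would smooth transitions between retrieved expressions.
    return face_params

def neural_face_render(params, background):
    # Stub: stands in for the neural rendering network.
    return [f"rendered_frame_{i}" for i in range(len(params))]

def retime_to_audio(frames, audio):
    # Stub: would retime video to the recording at the phone level.
    return frames

def edit_talking_head(frames, audio, transcript, edited_transcript):
    phonemes = align_phonemes(audio, transcript)          # step 1
    face_params = register_face_model(frames)             # step 1
    background = select_background_sequence(frames, edited_transcript)  # step 2
    params = blend_parameters(face_params, phonemes, edited_transcript)  # step 3
    rendered = neural_face_render(params, background)     # step 4
    return retime_to_audio(rendered, audio)               # step 5

out = edit_talking_head(["f0", "f1"], b"audio", "hello there", "there hello")
print(out)  # one rendered frame per input frame
```

The real system replaces each stub with a substantial component, but the data flow, from transcript and tracked face parameters through rendering and retiming, follows this order.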
We are concerned about such deception and misuse.” They recommend certain precautions to avoid deception and misuse, such as watermarking: “The fact that the video is synthesized may be obvious by context, directly stated in the video or signaled via watermarking. We also believe that it is essential to obtain permission from the performers for any alteration before sharing a resulting video with a broad audience.” They urge the community to continue to develop forensics, fingerprinting, and verification techniques to identify manipulated video, and they support the creation of appropriate regulations and laws that would balance the risks of misuse of these tools against the importance of creative, consensual use cases. The public, however, remains dubious, pointing out valid arguments on why the ‘Ethical Concerns’ discussed in the paper fall short. A user on Hacker News commented, “The "Ethical concerns" section in the article feels like a punt. The author quoting "this technology is really about better storytelling" is aspirational -- the technology's story will be written by those who use it, and you can bet people will use this maliciously.”
https://twitter.com/glenngabe/status/1136667296980701185
Another user feels that this kind of technology will only result in a “slow erosion of video evidence being trustworthy”. Others have pointed out how the kind of transformation described in the paper does not fall under the broad category of ‘video editing’: ‘We need more words to describe this new landscape’
https://twitter.com/BrianRoemmele/status/1136710962348617728
Another common argument is that the algorithm can be used to generate terrifyingly real deepfake videos. A recent example of a shallow fake was Nancy Pelosi’s altered video, which circulated widely and made it appear she was slurring her words by slowing down the footage. Facebook was criticized for not acting faster to slow the video’s spread.
Beyond altering the speeches of politicians, altered videos like these can also be used, for instance, to create fake emergency alerts, or to disrupt elections by dropping a fake video of one of the candidates before voting starts. There is also the issue of defaming someone in a personal capacity. Sam Gregory, Program Director at Witness, tweets that one of the main steps in ensuring effective use of such tools would be to “ensure that any commercialization of synthetic media tools has equal $ invested in detection/safeguards as in detection.; and to have a grounded conversation on trade-offs in mitigation”. He has also listed more interesting recommendations.
https://twitter.com/SamGregory/status/1136964998864015361
For more details, we recommend you read the research paper.
OpenAI researchers have developed Sparse Transformers, a neural network which can predict what comes next in a sequence
‘Facial Recognition technology is faulty, racist, biased, abusive to civil rights; act now to restrict misuse’ say experts to House Oversight and Reform Committee
Now there’s a CycleGAN to visualize the effects of climate change. But is this enough to mobilize action?

Amazon re:MARS Day 1 kicks off showcasing Amazon’s next-gen AI robots; Spot, the robo-dog and a guest appearance from ‘Iron Man’

Savia Lobo
06 Jun 2019
11 min read
Amazon’s inaugural re:MARS event kicked off on Tuesday, June 4 at the Aria in Las Vegas. This four-day event is inspired by MARS, a yearly invite-only event hosted by Jeff Bezos that brings together innovative minds in Machine learning, Automation, Robotics, and Space to share new ideas across these rapidly advancing domains. re:MARS featured a lot of announcements revealing a range of robots, each engineered for a different purpose. Some of them include helicopter drones for delivery, two robot dogs by Boston Dynamics, autonomous human-like acrobats by Walt Disney Imagineering, and much more. Amazon also revealed Alexa’s new dialog modeling for natural, cross-skill conversations. Let us have a brief look at each of the announcements.
Robert Downey Jr. announces ‘The Footprint Coalition’ project to clean up the environment using robotics
Popularly known as “Iron Man”, Robert Downey Jr. made one of the most exciting appearances of the event, announcing a new project called The Footprint Coalition to clean up the planet using advanced technologies. “Between robotics and nanotechnology we could probably clean up the planet significantly, if not entirely, within a decade,” he said. According to Forbes, “Amazon did not immediately respond to questions about whether it was investing financially or technologically in Downey Jr.’s project.” “At this point, the effort is severely light on details, with only a bare-bones website to accompany Downey’s public statement, but the actor said he plans to officially launch the project by April 2020,” Forbes reports. A recent United Nations report found that humans are having an unprecedented and devastating effect on global biodiversity, and researchers have found microplastics polluting the air, ocean, and soil. The project's announcement also comes at a time when Amazon itself is under fire for its policies around the environment and climate change.
Additionally, Morgan Pope and Tony Dohi of Walt Disney Imagineering demonstrated their work to create autonomous acrobats.
https://twitter.com/jillianiles/status/1136082571081555968
https://twitter.com/thesullivan/status/1136080570549563393
Amazon will soon deliver orders using drones
On Wednesday, Amazon unveiled a revolutionary new drone that will test-deliver toothpaste and other household goods starting within months. This drone is “part helicopter and part science-fiction aircraft”, with built-in AI features and sensors that will help it fly robotically without threatening traditional aircraft or people on the ground. Gur Kimchi, vice president of Amazon Prime Air, said in an interview with Bloomberg, “We have a design that is amazing. It has performance that we think is just incredible. We think the autonomy system makes the aircraft independently safe.” However, he refused to provide details on where the delivery tests will be conducted. The drones have received a year’s approval from the FAA to test the devices in limited ways that still won’t allow deliveries. According to a Bloomberg report, “It can take years for traditional aircraft manufacturers to get U.S. Federal Aviation Administration approval for new designs and the agency is still developing regulations to allow drone flights over populated areas and to address national security concerns. The new drone presents even more challenges for regulators because there aren’t standards yet for its robotic features”. Competitors to Amazon’s unnamed drone include Alphabet Inc.’s Wing, which in April became the first drone operation to win FAA approval to operate as a small airline. Also, United Parcel Service Inc. and drone startup Matternet Inc. began using drones to move medical samples between hospitals in Raleigh, North Carolina, in March. Amazon’s drone is about six feet across, with six propellers that lift it vertically off the ground.
It is surrounded by a six-sided shroud that protects people from the propellers and also serves as a high-efficiency wing, allowing it to fly more horizontally like a plane. Once it gets off the ground, the craft tilts and flies sideways, the helicopter blades becoming more like airplane propellers. Kimchi said Amazon’s business model for the device is to make deliveries within 7.5 miles (12 kilometers) of a company warehouse and to reach customers within 30 minutes. It can carry packages weighing as much as five pounds, and more than 80% of packages sold by the retail behemoth are within that weight limit. According to the company, one of the things the drone has mastered is detecting utility wires and clotheslines, which have been notoriously difficult to identify reliably and pose a hazard for a device attempting to make deliveries in urban and suburban areas. To know more about these high-tech drones, head over to Amazon’s official blog post.
Boston Dynamics’ first commercial robot, Spot
Boston Dynamics revealed its first commercial product, a quadrupedal robot named Spot. Boston Dynamics’ CEO Marc Raibert told The Verge that Spot is currently being tested in a number of “proof-of-concept” environments, including package delivery and surveying work. He also said that although there’s no firm launch date for the commercial version of Spot, it should be available within months, certainly before the end of the year. “We’re just doing some final tweaks to the design. We’ve been testing them relentlessly”, Raibert said. These Spot robots are capable of navigating environments autonomously, but only when their surroundings have been mapped in advance. They can withstand kicks and shoves and keep their balance on tricky terrain, but they don’t decide for themselves where to walk. The robots are simple to control: using a D-pad, users can steer them just like an RC car or mechanical toy.
A quick tap on the video feed streamed live from the robot's front-facing camera lets the user select a destination for it to walk to, and another tap lets the user assume control of a robot arm mounted on top of the chassis. With 3D cameras mounted atop, a Spot robot can map environments like construction sites, identifying hazards and tracking work progress. It also has a robot arm, which gives it greater flexibility and helps it open doors and manipulate objects. https://twitter.com/jjvincent/status/1136096290016595968 The commercial version will be "much less expensive than prototypes [and] we think they'll be less expensive than other peoples' quadrupeds", Raibert said. Here's a demo video of the Spot robot at the re:MARS event. https://youtu.be/xy_XrAxS3ro
Alexa gets new dialog modeling for improved natural, cross-skill conversations
Amazon unveiled new features in Alexa that will help the conversational agent answer more complex questions and carry out more complex tasks. Rohit Prasad, Alexa vice president and head scientist, said, "We envision a world where customers will converse more naturally with Alexa: seamlessly transitioning between skills, asking questions, making choices, and speaking the same way they would with a friend, family member, or co-worker. Our objective is to shift the cognitive burden from the customer to Alexa." This update to Alexa is a set of AI modules that work together to generate responses to customers' questions and requests. With every round of dialog, the system produces a vector (a fixed-length string of numbers) that represents the context and the semantic content of the conversation. "With this new approach, Alexa will predict a customer's latent goal from the direction of the dialog and proactively enable the conversation flow across topics and skills," Prasad says.
"This is a big leap for conversational AI." At re:MARS, Prasad also announced the developer preview of Alexa Conversations, a new deep-learning-based approach that lets skill developers create more natural voice experiences with less effort, fewer lines of code, and less training data than before. The preview allows skill developers to create natural, flexible dialogs within a single skill; upcoming releases will allow developers to incorporate multiple skills into a single conversation. With Alexa Conversations, developers provide:
1. Application programming interfaces, or APIs, that provide access to their skills' functionality.
2. A list of entities that the APIs can take as inputs, such as restaurant names or movie times.
3. A handful of sample dialogs annotated to identify entities and actions and mapped to API calls.
Alexa Conversations' AI technology handles the rest. "It's way easier to build a complex voice experience with Alexa Conversations due to its underlying deep-learning-based dialog modeling," Prasad said. To know more about this announcement in detail, head over to Alexa's official blogpost.
Amazon Robotics unveiled two new robots at its fulfillment centers
Brad Porter, vice president of robotics at Amazon, announced two new robots: one code-named Pegasus, the other Xanthus. Pegasus, which is built to sort packages, is a 3-foot-wide robot equipped with a conveyor belt on top to drop the right box in the right location. "We sort billions of packages a year. The challenge in package sortation is, how do you do it quickly and accurately? In a world of Prime one-day [delivery], accuracy is super-important. If you drop a package off a conveyor, lose track of it for a few hours — or worse, you mis-sort it to the wrong destination, or even worse, if you drop it and damage the package and the inventory inside — we can't make that customer promise anymore", Porter said.
Porter said Pegasus robots have already driven a total of 2 million miles and have reduced the number of wrongly sorted packages by 50 percent. Porter said Xanthus represents the latest incarnation of Amazon's drive robot. Amazon uses tens of thousands of the current-generation robot, known as Hercules, in its fulfillment centers. Amazon unveiled the Xanthus Sort Bot and the Xanthus Tote Mover. "The Xanthus family of drives brings innovative design, enabling engineers to develop a portfolio of operational solutions, all of the same hardware base through the addition of new functional attachments. We believe that adding robotics and new technologies to our operations network will continue to improve the associate and customer experience," Porter says. To know more about these new robots watch the video below: https://youtu.be/4MH7LSLK8Dk
StyleSnap: AI-powered shopping
Amazon announced StyleSnap, its latest move to promote AI-powered shopping. StyleSnap helps users pick out clothes and accessories: all they need to do is upload a photo or screenshot of what they are looking for when they are unable to describe it in words. https://twitter.com/amazonnews/status/1136340356964999168 Amazon said, "You are not a poet. You struggle to find the right words to explain the shape of a neckline, or the spacing of a polka dot pattern, and when you attempt your text-based search, the results are far from the trend you were after." To use StyleSnap, open the Amazon app, tap the camera icon in the upper right-hand corner, select the StyleSnap option, and then upload an image of the outfit. StyleSnap then provides recommendations of similar outfits on Amazon to purchase, with users able to filter across brand, pricing, and reviews. Amazon's AI system can identify colors and edges, and then patterns like floral and denim. Using this information, its algorithm can then accurately pick a matching style.
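As a rough mental model, this kind of visual search can be framed as embedding images into vectors and retrieving the nearest catalog item. The sketch below is purely illustrative and not Amazon's implementation: the `embed` function stands in for a trained feature extractor, and the product names in the tiny catalog are made up.

```python
import numpy as np

def embed(image_pixels):
    # Stand-in for a trained CNN feature extractor. A real system would
    # produce a vector capturing colors, edges, and patterns; here we just
    # flatten the pixels and normalize to unit length.
    v = np.asarray(image_pixels, dtype=float).ravel()
    norm = np.linalg.norm(v)
    return v / norm if norm > 0 else v

def most_similar(query_embedding, catalog):
    # catalog maps product name -> unit-length embedding vector.
    # On unit vectors, cosine similarity reduces to a dot product.
    best_name, best_score = None, -1.0
    for name, emb in catalog.items():
        score = float(np.dot(query_embedding, emb))
        if score > best_score:
            best_name, best_score = name, score
    return best_name, best_score

# Hypothetical two-item catalog; a query close to the "floral" pattern
# should match the floral item.
catalog = {
    "floral dress": embed([1.0, 0.0, 1.0, 0.0]),
    "denim jacket": embed([0.0, 1.0, 0.0, 1.0]),
}
name, score = most_similar(embed([1.0, 0.0, 0.9, 0.1]), catalog)
```

At Amazon's scale, the linear dot-product scan would of course be replaced by an approximate nearest-neighbor index over millions of items, but the matching principle is the same.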
To know more about StyleSnap in detail, head over to Amazon’s official blog post. Amazon Go trains cashierless store algorithms using synthetic data Amazon at the re:MARS shared more details about Amazon Go, the company’s brand for its cashierless stores. They said Amazon Go uses synthetic data to intentionally introduce errors to its computer vision system. Challenges that had to be addressed before opening stores to avoid queues include the need to make vision systems that account for sunlight streaming into a store, little time for latency delays, and small amounts of data for certain tasks. Synthetic data is being used in a number of ways to power few-shot learning, improve AI systems that control robots, train AI agents to walk, or beat humans in games of Quake III. Dilip Kumar, VP of Amazon Go, said, “As our application improved in accuracy — and we have a very highly accurate application today — we had this interesting problem that there were very few negative examples, or errors, which we could use to train our machine learning models.” He further added, “So we created synthetic datasets for one of our challenging conditions, which allowed us to be able to boost the diversity of the data that we needed. But at the same time, we have to be careful that we weren’t introducing artifacts that were only visible in the synthetic data sets, [and] that the data translates well to real-world situations — a tricky balance.” To know more about this news in detail, check out this video: https://youtu.be/jthXoS51hHA The Amazon re:MARS event is still ongoing and will have many more updates. To catch live updates from Vegas visit Amazon’s blog. World’s first touch-transmitting telerobotic hand debuts at Amazon re:MARS tech showcase Amazon introduces S3 batch operations to process millions of S3 objects Amazon Managed Streaming for Apache Kafka (Amazon MSK) is now generally available
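The idea Dilip Kumar describes above, manufacturing rare failure cases synthetically to rebalance training data, can be sketched in a few lines. This is an illustrative toy rather than Amazon's pipeline: the brightness shift (standing in for sunlight streaming into a store) and the zeroed-out value (standing in for occlusion) are assumed perturbations.

```python
import random

def synthesize_negatives(real_examples, n, seed=0):
    # Generate n synthetic "error" cases by perturbing real examples,
    # simulating conditions that are rare in the collected data.
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n):
        base = list(rng.choice(real_examples))
        # Brightness shift: mimic harsh sunlight, clamped to the [0, 1] range.
        shift = rng.uniform(0.5, 1.5)
        perturbed = [min(1.0, x * shift) for x in base]
        # Occlusion: zero out one random position in the signal.
        perturbed[rng.randrange(len(perturbed))] = 0.0
        synthetic.append((perturbed, "negative"))
    return synthetic

# One real example expands into five labeled synthetic negatives.
negatives = synthesize_negatives([[0.2, 0.4, 0.6]], 5)
```

The balance Kumar mentions is the hard part: perturbations must be strong enough to cover real-world failure modes without introducing artifacts that only ever appear in synthetic data.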
Savia Lobo
05 Jun 2019
5 min read

Roger McNamee on Silicon Valley’s obsession for building “data voodoo dolls”

The Canadian Parliament's Standing Committee on Access to Information, Privacy and Ethics hosted the hearing of the International Grand Committee on Big Data, Privacy and Democracy from Monday May 27 to Wednesday May 29. Witnesses from at least 11 countries appeared before representatives to testify on how governments can protect democracy and citizen rights in the age of big data. This section of the hearing, which took place on May 28, includes Roger McNamee's take on why Silicon Valley wants to build data voodoo dolls for users. Roger McNamee is the author of Zucked: Waking up to the Facebook Catastrophe. His remarks in this section of the hearing build on earlier presentations by Professor Zuboff and Professor Ben Scott, and on the previous talk by Jim Balsillie. He started off by saying, "Beginning in 2004, I noticed a transformation in the culture of Silicon Valley and over the course of a decade customer focused models were replaced by the relentless pursuit of global scale, monopoly, and massive wealth." McNamee says that Google wants to make the world more efficient and to eliminate the user stress that results from too many choices. Google knew that society would not permit a business model based on denying consumer choice and free will, so it covered its tracks. Beginning around 2012, Facebook adopted a similar strategy, later followed by Amazon, Microsoft, and others. For Google and Facebook, the business is behavioral prediction: they build a high-resolution data avatar of every consumer, a voodoo doll if you will.
They gather a tiny amount of data from user posts and queries, but the vast majority of their data comes from surveillance: web tracking, scanning of emails and documents, data from apps and third parties, and ambient surveillance from products like Alexa, Google Assistant, Sidewalk Labs, and Pokemon Go. Google and Facebook use these data voodoo dolls to provide their customers, who are marketers, with perfect information about every consumer. They use the same data to manipulate consumer choices; just as in China, behavioral manipulation is the goal. The algorithms of Google and Facebook are tuned to keep users on site and active, preferably by pressing emotional buttons that reveal each user's true self. For most users, this means content that provokes fear or outrage. Hate speech, disinformation, and conspiracy theories are catnip for these algorithms. The design of these platforms treats all content precisely the same, whether it be hard news from a reliable site, a warning about an emergency, or a conspiracy theory. The platforms make no judgments; users choose, aided by algorithms that reinforce past behavior. The result is 2.5 billion Truman Shows on Facebook, each a unique world with its own facts. In the U.S., nearly 40% of the population identifies with at least one thing that is demonstrably false; this undermines democracy. "The people at Google and Facebook are not evil; they are the products of an American business culture with few rules, where misbehavior seldom results in punishment", he says. Unlike industrial businesses, internet platforms are highly adaptable, and this is the challenge. If you take away one opportunity, they will move on to the next one, and they are moving upmarket, getting rid of the middlemen. Today they apply behavioral prediction to advertising, but they have already set their sights on transportation and financial services.
This is not an argument against undermining their advertising business, but rather a warning that it may be a Pyrrhic victory. If their goals are to protect democracy and personal liberty, McNamee tells lawmakers, they have to be bold. They have to force a radical transformation of the business model of internet platforms. That would mean, at a minimum, banning web tracking, scanning of email and documents, third-party commerce and data, and ambient surveillance. A second option would be to tax micro-targeted advertising to make it economically unattractive. But you also need to create space for alternative business models built on lasting trust. Startups can happen anywhere; they can come from each of your countries. At the end of the day, though, the most effective path to reform would be to shut down the platforms, at least temporarily, as Sri Lanka did. Any country can go first. The platforms have left you no choice; the time has come to call their bluff. Companies with responsible business models will emerge overnight to fill the void. McNamee explains, "when they (organizations) gather all of this data, the purpose of it is to create a high resolution avatar of each and every human being. It doesn't matter whether they use their systems or not; they collect it on absolutely everybody. In the Caribbean, Voodoo was essentially this notion that you create a doll, an avatar, such that you can poke it with a pin and the person would experience that pain, right, and so it becomes literally a representation of the human being." To know more you can listen to the full hearing video titled, "Meeting No. 152 ETHI - Standing Committee on Access to Information, Privacy and Ethics" on ParlVU.
Experts present most pressing issues facing global lawmakers on citizens’ privacy, democracy and rights to freedom of speech Time for data privacy: DuckDuckGo CEO Gabe Weinberg in an interview with Kara Swisher Over 19 years of ANU(Australian National University) students’ and staff data breached
Savia Lobo
05 Jun 2019
5 min read

Jim Balsillie on Data Governance Challenges and 6 Recommendations to tackle them

The Canadian Parliament's Standing Committee on Access to Information, Privacy and Ethics hosted the hearing of the International Grand Committee on Big Data, Privacy and Democracy from Monday, May 27 to Wednesday, May 29. Witnesses from at least 11 countries appeared before representatives to testify on how governments can protect democracy and citizen rights in the age of big data. This section of the hearing, which took place on May 28, includes Jim Balsillie's take on data governance. Jim Balsillie, Chair of the Centre for International Governance Innovation and retired Chairman and co-CEO of BlackBerry, starts off by saying that data governance is the most important public policy issue of our time. It is cross-cutting, with economic, social, and security dimensions. It requires both national policy frameworks and international coordination. He applauded the seriousness and integrity of Mr. Zimmer, Mr. Angus, and Mr. Erskine-Smith, who have spearheaded a Canadian bipartisan effort to deal with data governance over the past three years. "My perspective is that of a capitalist and global tech entrepreneur for 30 years and counting. I'm the retired Chairman and co-CEO of Research in Motion, a Canadian technology company [that] we scaled from an idea to 20 billion in sales. While most are familiar with the iconic BlackBerry smartphones, ours was actually a platform business that connected tens of millions of users to thousands of consumer and enterprise applications via some 600 cellular carriers in over 150 countries. We understood how to leverage Metcalfe's law of network effects to create a category-defining company, so I'm deeply familiar with multi-sided platform business model strategies as well as navigating the interface between business and public policy.", he adds.
He further shares his observations about the nature, scale, and breadth of the collective challenges put forward for the committee's consideration:
Disinformation and fake news are just two of the negative outcomes of unregulated, attention-based business models. They cannot be addressed in isolation; they have to be tackled horizontally, as part of an integrated whole. To agonize over social media's role in the proliferation of online hate, conspiracy theories, politically motivated misinformation, and harassment is to miss the root and scale of the problem.
Social media's toxicity is not a bug, it's a feature. Technology works exactly as designed. Technology products, services, and networks are not built in a vacuum; usage patterns drive product development decisions. Behavioral scientists involved with today's platforms helped design user experiences that capitalize on negative reactions, because they produce far more engagement than positive reactions. Among the many valuable insights provided by whistleblowers inside the tech industry is this quote: "the dynamics of the attention economy are structurally set up to undermine the human will." Democracy and markets work when people can make choices aligned with their interests. The online-advertising-driven business model subverts choice and represents a fundamental threat to markets, election integrity, and democracy itself.
Technology gets its power through the control of data. Data at the micro-personal level gives technology unprecedented power to influence. "Data is not the new oil, it's the new plutonium: amazingly powerful, dangerous when it spreads, difficult to clean up, and with serious consequences when improperly used." Data deployed through next-generation 5G networks is transforming passive infrastructure into veritable digital nervous systems.
Our current domestic and global institutions, rules, and regulatory frameworks are not designed to deal with any of these emerging challenges.
Because cyberspace knows no natural borders, the effects of digital transformation cannot be hermetically sealed within national boundaries; international coordination is critical. With these observations, Balsillie put forward six recommendations:
1. Eliminate tax deductibility of specific categories of online ads.
2. Ban personalized online advertising for elections.
3. Implement strict data governance regulations for political parties.
4. Provide effective whistleblower protections.
5. Add explicit personal liability alongside corporate responsibility to affect the decision-making of CEOs and boards of directors.
6. Create a new institution for like-minded nations to address digital cooperation and stability.
Technology is becoming the new Fourth Estate
Technology is disrupting governance and, if left unchecked, could render liberal democracy obsolete. By displacing the print and broadcast media and influencing public opinion, technology is becoming the new Fourth Estate. In our system of checks and balances, this makes technology co-equal with the executive, the legislative, and the judiciary. When this new Fourth Estate declines to appear before this committee, as Silicon Valley executives are currently doing, it is symbolically asserting this aspirational co-equal status. But it is asserting the status and claiming its privileges without the traditions, disciplines, legitimacy, or transparency that checked the power of the traditional Fourth Estate. The work of this international grand committee is a vital first step towards redress of this untenable situation. Referring to what Professor Zuboff said the previous night, Balsillie noted that Canadians are currently in a historic battle for the future of their democracy with a charade called Sidewalk Toronto. He concludes by saying, "I'm here to tell you that we will win that battle." To know more you can listen to the full hearing video titled, "Meeting No. 152 ETHI - Standing Committee on Access to Information, Privacy, and Ethics" on ParlVU.
Speech2Face: A neural network that “imagines” faces from hearing voices. Is it too soon to worry about ethnic profiling? UK lawmakers to social media: “You’re accessories to radicalization, accessories to crimes”, hearing on spread of extremist content Key Takeaways from Sundar Pichai’s Congress hearing over user data, political bias, and Project Dragonfly
Sugandha Lahoti
31 May 2019
17 min read

Experts present most pressing issues facing global lawmakers on citizens’ privacy, democracy and rights to freedom of speech

The Canadian Parliament's Standing Committee on Access to Information, Privacy, and Ethics hosted a hearing on Big Data, Privacy and Democracy from Monday, May 27 to Wednesday, May 29, as a series of discussions with experts and tech execs over the three days. The committee invited expert witnesses to testify before representatives from 12 countries (Canada, United Kingdom, Singapore, Ireland, Germany, Chile, Estonia, Mexico, Morocco, Ecuador, St. Lucia, and Costa Rica) on how governments can protect democracy and citizen rights in the age of big data. The committee opened with a round table discussion where expert witnesses spoke about what they believe to be the most pressing issues facing lawmakers when it comes to protecting the rights of citizens in the digital age. The expert witnesses who took part were:
Professor Heidi Tworek, University of British Columbia
Jason Kint, CEO of Digital Content Next
Taylor Owen, McGill University
Ben Scott, The Center for Internet and Society, Stanford Law School
Roger McNamee, Author of Zucked: Waking up to the Facebook Catastrophe
Shoshana Zuboff, Author of The Age of Surveillance Capitalism
Maria Ressa, Chief Executive Officer and Executive Editor of Rappler Inc.
Jim Balsillie, Chair, Centre for International Governance Innovation
The session was led by Bob Zimmer, M.P. and Chair of the Standing Committee on Access to Information, Privacy and Ethics. Other members included Nathaniel Erskine-Smith and Charlie Angus, M.P. and Vice-Chair of the Standing Committee on Access to Information, Privacy and Ethics. Also present was Damian Collins, M.P. and Chair of the UK Digital, Culture, Media and Sport Committee.
Testimonies from the witnesses
"Personal data matters more than context", Jason Kint, CEO of Digital Content Next
The presentation started with Mr.
Jason Kint, CEO of Digital Content Next, a US-based trade association, who thanked the committee and appreciated the opportunity to speak on behalf of 80 high-quality digital publishers globally. He begins by saying that DCN has prioritized shining a light on issues that erode trust in the digital marketplace, including a troubling data ecosystem that has developed with very few legitimate constraints on the collection and use of data about consumers. As a result, personal data is now valued more highly than context, consumer expectations, copyright, and even facts themselves. He believes it is vital that policymakers begin to connect the dots between the three topics of the committee's inquiry: data privacy, platform dominance, and societal impact. He says that today personal data is frequently collected by unknown third parties without consumer knowledge or control. This data is then used to target consumers across the web as cheaply as possible. This dynamic creates incentives for bad actors, particularly on unmanaged platforms like social media, which rely on user-generated content, mostly with no liability. Here the site owners are paid on the click, whether it comes from an actual person or a bot, on trusted information or on disinformation. He says that he is optimistic about regulations like the GDPR in the EU, which contain narrow purpose limitations to ensure companies do not use data for secondary uses. He recommends exploring whether large tech platforms that are able to collect data across millions of devices, websites, and apps should even be allowed to use this data for secondary purposes. He also applauds the decision of the German cartel office to limit Facebook's ability to collect and use data across its apps and the web. He further says that issues such as bot fraud, malware, ad blockers, clickbait, privacy violations, and now disinformation are just symptoms. The root cause is unbridled data collection at the most personal level.
Four years ago, DCN did the original financial analysis labeling Google and Facebook the duopoly of digital advertising. In a 150-plus-billion-dollar digital ad market across North America and the EU, 85 to 90 percent of the incremental growth is going to just these two companies. DCN dug deeper and connected the revenue concentration to the ability of these two companies to collect data in a way that no one else can. This means both companies know much of your browsing history and your location history. The emergence of this duopoly has created a misalignment between those who create the content and those who profit from it. The scandal involving Facebook and Cambridge Analytica underscores the current dysfunctional dynamic. With the power Facebook has over our information ecosystem, our lives, and our democratic systems, it is vital to know whether we can trust the company. He also points out that although there has been a well-documented and exhausting trail of apologies, there has been little or no change in the leadership or governance of Facebook. In fact, the company has repeatedly refused to have its CEO offer evidence to pressing international governments. He believes there should be a deeper probe, as there is still much to learn about what happened and how much Facebook knew about the Cambridge Analytica scandal before it became public. Facebook should be required to have an independent audit of its user account practices and its decisions to preserve or purge real and fake accounts over the past decade. He ends his testimony by saying that it is critical to shed light on these issues to understand what steps must be taken to improve data protection. This includes providing consumers with greater transparency and choice over their personal data when using practices that go outside the normal expectations of consumers.
Policymakers globally must hold digital platforms accountable for helping to build a healthy marketplace, for restoring consumer trust, and for restoring competition.
"We need a World Trade Organization 2.0", Jim Balsillie, Chair, Centre for International Governance Innovation; Retired Chairman and co-CEO of BlackBerry
Jim begins by saying that data governance is the most important public policy issue of our time. It is cross-cutting, with economic, social, and security dimensions. It requires both national policy frameworks and international coordination. A specific recommendation he brought forward in this hearing was to create a new institution for like-minded nations to address digital cooperation and stability. "The data-driven economy's effects cannot be contained within national borders," he said. "We need new or reformed rules of the road for digitally mediated global commerce, a World Trade Organization 2.0." He gives the example of the Financial Stability Board, which was created in the aftermath of the 2008 financial crisis to foster global financial cooperation and stability. He recommends forming a similar global institution, for example a digital stability board, to deal with the challenges posed by digital transformation. The nine countries on this committee plus the five other countries attending, totaling 14, could constitute founding members of this board, which would undoubtedly grow over time.
"Check business models of Silicon Valley giants", Roger McNamee, Author of Zucked: Waking up to the Facebook Catastrophe
Roger begins by saying that it is imperative that this committee, and nations around the world, engage in a new thought process relative to the ways of controlling companies in Silicon Valley, especially by looking at their business models. By nature, these companies invade privacy and undermine democracy. He asserts that there is no way to stop that without ending the business practices as they exist.
He then commends Sri Lanka, which chose to shut down the platforms in response to a terrorist act. He believes that is the only way governments are going to gain enough leverage to have reasonable conversations. He explains more on this in his formal presentation, which took place yesterday.
"Stop outsourcing policies to the private sector", Taylor Owen, McGill University
He begins by making five observations about the policy space we are in right now. First, self-regulation, and even many of the forms of co-regulation being discussed, have proved and will continue to prove insufficient for this problem. The financial incentives are simply powerfully aligned against meaningful reform. These are publicly traded, largely unregulated companies whose shareholders and directors expect growth by maximizing a revenue model that is itself part of the problem. This growth may or may not be aligned with the public interest. Second, disinformation, hate speech, election interference, privacy breaches, mental health issues, and anti-competitive behavior must be treated as symptoms of the problem, not its cause. Public policy should therefore focus on the design, and the incentives embedded in the design, of the platforms themselves. If democratic governments determine that structure and design are leading to negative social and economic outcomes, then it is their responsibility to govern. Third, governments that are taking this problem seriously are converging on a markedly similar platform governance agenda. This agenda recognizes that there are no silver bullets for this broad set of problems and that, instead, policies must be domestically implemented and internationally coordinated across three categories: Content policies, which seek to address a wide range of both supply and demand issues about the nature, amplification, and legality of content in our digital public sphere.
Data policies, which ensure that public data is used for the public good and that citizens have far greater rights over the use, mobility, and monetization of their data. Competition policies, which promote free and competitive markets in the digital economy. Fourth, the propensity, when discussing this agenda, to overcomplicate solutions serves the interests of the status quo. He then recommends sensible policies that could and should be implemented immediately:
The online ad micro-targeting market could be made radically more transparent and in many cases suspended entirely.
Data privacy regimes could be updated to provide far greater rights to individuals and greater oversight and regulatory power to punish abuses.
Tax policy could be modernized to better reflect the consumption of digital goods and to crack down on tax base erosion and profit shifting.
Modernized competition policy could be used to restrict and roll back acquisitions and to separate platform ownership from application and product development.
Civic media could be supported as a public good.
Large-scale and long-term civic literacy and critical thinking efforts could be funded at scale by national governments, not by private organizations.
He then asks difficult policy questions for which there are neither easy solutions, meaningful consensus, nor appropriate existing international institutions. How do we regulate harmful speech in the digital public sphere? He says that, at the moment, we have largely outsourced the application of national laws, as well as the interpretation of difficult trade-offs between free speech and personal and public harms, to the platforms themselves: companies that seek solutions which, rightly from their perspective, can be implemented at scale globally. In this case, he argues that what is possible technically and financially for the companies might be insufficient for the public policy goals of the public good. Who is liable for content online?
He says that we have clearly moved beyond the notion of platform neutrality and absolute safe harbor, but asks what legal mechanisms are best suited to holding platforms, their design, and those who run them accountable. He also asks how we are going to bring opaque artificial intelligence systems into our laws, norms, and regulations. He concludes by saying that these difficult conversations should not be outsourced to the private sector. They need to be led by democratically accountable governments and their citizens.
"Make commitments to public service journalism", Ben Scott, The Center for Internet and Society, Stanford Law School
Ben states that technology doesn't cause the problems of data misuse and misinformation; it in fact accelerates them. This calls for policies that limit the exploitation of these technology tools by malignant actors and by companies that place profits over the public interest. He says, "we have to view our technology problem through the lens of the social problems that we're experiencing." This is why the problem of political fragmentation, hate speech, tribalism, and digital media looks different in each country. It looks different because it feeds on the social unrest, the cultural conflict, and the illiberalism that is native to each society. He says we need to look at problems holistically and understand that social media companies are part of a system; they don't stand alone as the supervillains. The entire media market has bent itself to the performance metrics of Google and Facebook. Television, radio, and print have tortured their content production and distribution strategies to get likes and shares and to appear higher in Google News search results. And so, he says, we need a comprehensive public policy agenda that puts red lines around illegal content. To limit data collection and exploitation, we need to modernize competition policy to reduce the power of monopolies.
He also says that we need to publicly educate people on how to help themselves and how to stop being exploited. We need to make commitments to public service journalism to provide alternatives for people, alternatives to the mindless stream of clickbait to which we have become accustomed. “Pay attention to the physical infrastructure”, Professor Heidi Tworek, University of British Columbia Taking inspiration from Germany's vibrant interwar media democracy as it descended into an authoritarian Nazi regime, Heidi lists five brief lessons that she thinks can guide policy discussions in the future. These can enable governments to build robust solutions that make democracies stronger. Disinformation is also an international relations problem Information warfare has been a feature, not a bug, of the international system for at least a century. So the question is not if information warfare exists but why and when states engage in it. This often happens when a state feels encircled or weak, or aspires to become a greater power than it already is. So if many of the causes of disinformation are geopolitical, we need to remember that many of the solutions will be geopolitical and diplomatic as well, she adds. Pay attention to the physical infrastructure Information warfare and disinformation are also enabled by physical infrastructure, whether the submarine cables of a century ago or the fiber-optic cables of today. 95 to 99 percent of international data flows through undersea fiber-optic cables, and content providers now own physical infrastructure too: Google partly owns 8.5 percent of those submarine cables. She says Russia and China, for example, are surveying European and North American cables. China, we know, is investing in 5G while combining that with investments in international news networks. Business models matter more than individual pieces of content Individual harmful content pieces go viral because of the few companies that control the bottleneck of information.
Only 29% of Americans and Brits understand that their Facebook newsfeed is algorithmically organized; the most aware are the Finns, and even among them only 39% understand it. That invisibility gives social media platforms an enormous amount of power, and that power is not neutral. At a very minimum, she says, we need far more transparency about how algorithms work and whether they are discriminatory. Carefully design robust regulatory institutions She urges governments and the committee to democracy-proof whatever solutions they come up with. She says, “we need to make sure that we embed civil society in whatever institutions we create.” She suggests the idea of forming social media councils that could meet regularly to deal with many such problems. The exact format and the geographical scope are still up for debate, but it's an idea supported by many, including the UN Special Rapporteur on freedom of expression and opinion, she adds. Address the societal divisions exploited by social media Heidi says that the seeds of authoritarianism need fertile soil to grow, and if we do not attend to the underlying economic and social discontents, better communications cannot obscure those problems forever. “Misinformation is an effect of one shared cause, Surveillance Capitalism”, Shoshana Zuboff, Author of The Age of Surveillance Capitalism Shoshana also agrees with the committee that the themes of platform accountability, data security and privacy, fake news, and misinformation are all effects of one shared cause. She identifies this underlying cause as surveillance capitalism, which she defines as a comprehensive, systematic economic logic that is unprecedented. She clarifies that surveillance capitalism is not technology. It is also not a corporation or a group of corporations.
This is in fact a virus that has infected every economic sector, from insurance, retail, publishing, and finance all the way through to product and service manufacturing and administration. According to her, surveillance capitalism also cannot be reduced to a person or a group of persons. In fact, surveillance capitalism follows the history of market capitalism in the following way: it takes something that exists outside the marketplace and brings it into the market dynamic for production and sale. It claims private human experience for the market dynamic. Private human experience is repurposed as free raw material, rendered as behavioral data. Some of these behavioral data are certainly fed back into product and service improvement, but the rest are declared behavioral surplus, identified for its rich predictive value. These behavioral surplus flows are then channeled into the new means of production, what we call machine intelligence or artificial intelligence. From these come prediction products. Surveillance capitalists own and control not one text but two. The first is the public-facing text, derived from the data that we have provided to these entities. What comes out of these, the prediction products, is the proprietary text, a shadow text from which these companies have amassed high market capitalization and revenue in a very short period of time. These prediction products are then sold into a new kind of marketplace that trades exclusively in human futures. The first name of this marketplace was online targeted advertising, and the human predictions sold in those markets were called click-through rates. By now, these markets are no longer confined to that kind of marketplace. This new logic of surveillance capitalism is being applied to anything and everything. She promises to discuss more of this in further sessions. “If you have no facts then you have no truth.
If you have no truth you have no trust”, Maria Ressa, Chief Executive Officer and Executive Editor of Rappler Inc. Maria believes that in the end it comes down to the battle for truth, and journalists are on the front line of it along with activists. Information is power, and if you can make people believe lies, then you can control them. Information can be used for commercial benefit as well as a means to gain geopolitical power. She says, “If you have no facts then you have no truth. If you have no truth you have no trust.” She then previews her formal presentation tomorrow, saying that she will show exactly how quickly a nation, a democracy, can crumble because of information operations. She says she will provide data that shows it is systematic and that it is an erosion of truth and trust. She thanks the committee, noting that what is so interesting about these types of discussions is that the countries that are most affected are the democracies that are most vulnerable. Bob Zimmer concluded the meeting saying that the agenda today was to get the conversation going, and more on how to make our data world a better place will be continued in further sessions. He said, “as we prepare for the next two days of testimony, it was important for us to have this discussion with those who have been studying these issues for years and have seen firsthand the effect digital platforms can have on our everyday lives. The knowledge we have gained tonight will no doubt help guide our committee as we seek solutions and answers to the questions we have on behalf of those we represent. My biggest concerns are for our citizens’ privacy, our democracy and that our rights to freedom of speech are maintained according to our Constitution.” Although we have covered most of the important conversations, you can watch the full hearing here.
Privacy Experts discuss GDPR, its impact, and its future on Beth Kindig’s Tech Lightning Rounds Podcast

Savia Lobo
28 May 2019
9 min read
Users’ data was being compromised even before the huge Cambridge Analytica scandal was brought to light. When the GDPR came into force in the European Union on May 25th, 2018, it gave individuals far greater power over their personal data and simplified the regulatory environment for international businesses. The GDPR recently completed one year, and since its inception it has driven better data privacy regulation. The regulation divides companies into data processors and data controllers, and any company that has customers in the EU must comply regardless of where the company is located. In episode 6 of Tech Lightning Rounds, Beth Kindig speaks to experts from three companies who have implemented GDPR: Robin Andruss, the Director of Privacy at Twilio, a leader in global communications that is uniquely positioned to handle data from text messaging sent inside its applications; Tomas Sander of Intertrust, the company that invented digital rights management and has been advocating for privacy for nearly 30 years; and Katryna Dow, CEO of Meeco, a startup that introduces the concept of data control for digital life. Robin Andruss on Twilio’s stance on privacy Twilio provides messaging, voice, and video inside mobile and web applications for nearly 40,000 companies including Uber, Lyft, Yelp, Airbnb, Salesforce and many more. “Twilio is one of the leaders in the communications platform as a service space, where we power APIs to help telecommunication services like SMS and texting, for example. A good example is when you order a Lyft or an Uber and you’ll text with a Uber driver and you’ll notice that’s not really their phone number. So that’s an example of one of our services”, Andruss explains. Twilio has adopted “binding corporate rules”, the global framework around privacy.
He says, for anyone who’s been in the privacy space for a long time, they know that it’s actually very challenging to reach this standard. Organizations need to work with a law firm or consultancy to make sure they are meeting a bar of privacy and actually have their privacy regulations and obligations agreed to and approved by their lead DPA, Data Protection Authority, in the EU, which in Twilio’s case is the Irish DPC. “We treat everyone who uses Twilio services across the board the same, our corporate rules. One rule, we don’t have a different one for the US or the EU. So I’d say that they are getting GDPR level of privacy standards when you use Twilio”, Andruss said. Talking about the California Consumer Privacy Act (CCPA), Andruss said that it’s mostly targeted towards advertising companies and companies that might sell data about individuals and make money off of it, like Intelius or Spokeo or similar services. Beth asked Andruss how concerned the rest of us should be about data and what companies can do internally to improve privacy measures, to which he said, “just think about, really, what you’re putting out there, and why, and this third party you’re giving your information to when you are giving it away”. Twilio’s “no-shenanigans” and “wear your customers’ shoes” approaches to privacy Twilio’s “no-shenanigans” approach encourages employees to do the right thing for their end-users and customers. Andruss explained this with an example, “You might be in a meeting, and you can say, “Is that the right thing? Do we really wanna do that? Is that the right thing to do for our customers, or is that shenanigany, does it not feel right?”” With the “wear your customers’ shoes” approach, when Twilio builds a product or thinks about something, it thinks about how to do the right thing for its customers.
This builds customers’ trust that the organization really cares about privacy and wants to do the right thing while customers use Twilio’s tools and services. Tomas Sander on privacy pre-GDPR and post-GDPR Tomas Sander started off by explaining the basics of the GDPR, what it does, how it can help users, and so on. He also cleared up a common doubt that most people have about the reach of the EU’s GDPR. He said, “One of the main things that the GDPR has done is that it has an extraterritorial reach. So GDPR not only applies to European companies, but to companies worldwide if they provide goods and services to European citizens”. The GDPR has “made privacy a much more important issue for many organizations”; its huge fines for non-compliance have contributed to it being taken seriously by companies globally. Because of data breaches, “security has become a boardroom issue for many companies. Now, privacy has also become a boardroom issue”, Sander adds. He said that the GDPR has been extremely effective in setting the privacy debate worldwide. Although it’s a regulation in Europe, it’s been extremely effective through its global impact on organizations and on the thinking of policymakers about what they want to do about privacy in their countries. However, talking about its positive impact, Sander said that data behemoths such as Google and Facebook are still collecting data from many, many different sources, aggregating it about users, and creating detailed profiles, usually for the purpose of selling advertising, so for profit. This is why the jury is still out. “And this practice of taking all this different data, from location data to smart home data, to their social media data and so on and using them for sophisticated user profiling, that practice hasn’t recognizably changed yet”, he added.
Sander said he “recently heard data protection commissioners speak at a privacy conference in Washington, and they believe that we’re going to see some of these investigations conclude this summer. And hopefully then there’ll be some enforcement, and some of the commissioners certainly believe that there will be fines”. Sander’s suggestion for users who are not much into tech: “I think people should be deeply concerned about privacy.” He said companies can access your web browsing activities, your searches, location data, the data shared on social media, facial recognition from images, and, these days, IoT and smart home data that give people intimate insights into what’s happening in your home. With this data, a company can keep tabs on what you do and perhaps create a user profile. “A next step they could take is that they don’t only observe what you do and predict what the next step is you’re going to do, but they may also try to manipulate and influence what you do. And they would usually do that for profit motives, and that is certainly a major concern. So people may not even know, may not even realize, that they’re being influenced”. This is a major concern because it really questions “our individual freedom about… It really becomes about democracy”. Sander also talked about an incident in Germany where the far-right party “Alternative for Germany” (“Alternative für Deutschland”) was able to use a Facebook feature created for advertisers to achieve the best result in a federal election of any far-right party in Germany since World War 2. The feature used here was “look-alike” audiences: Facebook helped the party analyze the characteristics of the 300,000 users who had liked it.
Further, from these users, it created a “look-alike” audience of another 300,000 users that were similar in characteristics to those who had already liked the party, and ads were then specifically targeted to this group. Katryna Dow on getting people digitally aware Dow thinks “the biggest challenge right now is that people just don’t understand what goes on under the surface”. She explains how something as simple as sharing a picture of a child playing in a park can impact the child’s credit rating in the future. She says, “People don’t understand the consequences of something that I do right now, that’s digital, and what it might impact some time in the future”. She also goes on to explain how to help people make a more informed choice around the services they want to use, or argue for better rights in terms of those services, so those consequences don’t happen. Dow also discusses one of the principles of the GDPR: designing privacy into applications or websites as the foundation of the design, rather than adding privacy as an afterthought. Beth asked if the GDPR, which introduces some level of control, is effective, to which Dow replied, “It’s early days. It’s not working as intended right now.” Dow further explained, “the biggest problem right now is the UX level is just not working. And organizations that have been smart in terms of creating enormous amounts of friction are using that to their advantage.” “They’re legally compliant, but they have created that compliance burden to be so overwhelming, that I agree or just anything to get this screen out of the way is driving the behavior”, Dow added. She says that a part of the GDPR is privacy by design, but we haven’t seen it surface at the UX level. “And I think right now, it’s just so overwhelming for people to even work out, “What’s the choice?” What are they saying yes to? What are they saying no to? So I think, the underlying components are there and from a legal framework.
Now, how do we move that to what we know is the everyday use case, which is how you interact with those frameworks”, Dow further added. To listen to this podcast and know more about this in detail, visit Beth Kindig’s official website.
Speech2Face: A neural network that “imagines” faces from hearing voices. Is it too soon to worry about ethnic profiling?

Savia Lobo
28 May 2019
8 min read
Last week, a few researchers from MIT CSAIL and Google AI published their study of reconstructing a facial image of a person from a short audio recording of that person speaking, in a paper titled “Speech2Face: Learning the Face Behind a Voice”. The researchers designed and trained a neural network using millions of natural Internet/YouTube videos of people speaking. They demonstrated that during training the model learns voice-face correlations that allow it to produce images capturing various physical attributes of the speakers, such as age, gender, and ethnicity. The entire training was done in a self-supervised manner, by utilizing the natural co-occurrence of faces and speech in Internet videos, without the need to model attributes explicitly. They further evaluated and numerically quantified how well Speech2Face reconstructions, obtained directly from audio, resemble the true face images of the speakers. For this, they tested the model both qualitatively and quantitatively on the AVSpeech dataset and the VoxCeleb dataset. The Speech2Face model The researchers utilized the VGG-Face model, a face recognition model pre-trained on a large-scale face dataset called DeepFace, and extracted a 4096-D face feature from the penultimate layer (fc7) of the network. These face features were shown to contain enough information to reconstruct the corresponding face images while being robust to many variations in the input images. The Speech2Face pipeline consists of two main components: 1) a voice encoder, which takes a complex spectrogram of speech as input and predicts a low-dimensional face feature that would correspond to the associated face; and 2) a face decoder, which takes the face feature as input and produces an image of the face in a canonical form (frontal-facing and with neutral expression).
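The two-component pipeline can be sketched with toy stand-ins. Only the 4096-D feature size below comes from the paper; the other dimensions are illustrative, and the random linear maps are mere placeholders for the trained convolutional voice encoder and the pre-trained face decoder:

```python
import numpy as np

rng = np.random.default_rng(0)

SPEC_DIM = 512         # flattened speech spectrogram (toy size)
FEATURE_DIM = 4096     # VGG-Face fc7 feature dimension (from the paper)
IMG_DIM = 32 * 32 * 3  # canonical face image (toy size)

# Random linear maps standing in for the two networks.
W_enc = rng.standard_normal((FEATURE_DIM, SPEC_DIM)) * 0.01
W_dec = rng.standard_normal((IMG_DIM, FEATURE_DIM)) * 0.01

def voice_encoder(spectrogram):
    """Trained component: speech spectrogram -> low-dimensional face feature."""
    return W_enc @ spectrogram

def face_decoder(face_feature):
    """Fixed during training: face feature -> canonical frontal face image."""
    return W_dec @ face_feature

speech = rng.standard_normal(SPEC_DIM)  # stand-in for a short audio clip
feature = voice_encoder(speech)
image = face_decoder(feature)
print(feature.shape, image.shape)  # (4096,) (3072,)
```

The key design point the sketch captures is that only the encoder is learned; the decoder stays frozen, so the encoder is pushed to land in the same 4096-D feature space that the face recognition network already understands.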
During training, the face decoder is fixed, and only the voice encoder, which predicts the face feature, is trained. How were the facial features evaluated? To quantify how well different facial attributes are being captured in Speech2Face reconstructions, the researchers tested different aspects of the model. Demographic attributes The researchers used Face++, a leading commercial service for computing facial attributes. They evaluated and compared age, gender, and ethnicity by running the Face++ classifiers on the original images and the Speech2Face reconstructions. The Face++ classifiers return either “male” or “female” for gender, a continuous number for age, and one of four values, “Asian”, “black”, “India”, or “white”, for ethnicity. Craniofacial attributes The researchers evaluated craniofacial measurements commonly used in the literature for capturing ratios and distances in the face. They computed the correlation between the F2F and the corresponding S2F reconstructions. Face landmarks were computed using the DEST library. There is a statistically significant (i.e., p < 0.001) positive correlation for several measurements. In particular, the highest correlation is measured for the nasal index (0.38) and nose width (0.35), features indicative of nose structures that may affect a speaker’s voice. Feature similarity The researchers further tested how well a person can be recognized from the face features predicted from speech. They first directly measured the cosine distance between the predicted features and the true ones obtained from the original face image of the speaker. Averaged over 5,000 test images, predictions from 6s audio segments show consistent improvement in all error metrics over those from 3s segments, further evidencing the qualitative improvement the researchers observed.
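The feature-similarity measurement (cosine distance between predicted and true 4096-D features) and the retrieval experiment described next reduce to a few lines of NumPy. The random vectors here are toy stand-ins for real VGG-Face features, not data from the paper:

```python
import numpy as np

def cosine_distance(a, b):
    """1 - cosine similarity; smaller means more similar features."""
    return 1.0 - float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(1)

# Toy "database" of true face features: one 4096-D row per identity.
database = rng.standard_normal((1000, 4096))

# Simulated Speech2Face prediction: the true speaker's feature plus noise.
true_id = 42
predicted = database[true_id] + 0.5 * rng.standard_normal(4096)

# Feature similarity: distance between the prediction and the true feature.
err = cosine_distance(predicted, database[true_id])

# Retrieval: rank every identity by distance to the predicted feature.
ranking = np.argsort([cosine_distance(predicted, f) for f in database])
print(err < 0.2, ranking[0] == true_id)  # True True
```

With moderate noise the true speaker sits far closer to the prediction than any of the 999 random identities, so it ranks first; the paper's retrieval metric counts how often the true face appears in the top-K of exactly this kind of ranking.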
They further evaluated how accurately they could retrieve the true speaker from a database of face images. To do so, they took a person’s speech, predicted the feature using the Speech2Face model, and queried the database by computing the distances between the predicted feature and the face features of all face images in the database. Ethical considerations with the Speech2Face model The researchers said that the training data used is a collection of educational videos from YouTube and does not equally represent the entire world population; hence, the model may be affected by the uneven distribution of data. They have also highlighted that “if a certain language does not appear in the training data, our reconstructions will not capture well the facial attributes that may be correlated with that language”. “In our experimental section, we mention inferred demographic categories such as “White” and “Asian”. These are categories defined and used by a commercial face attribute classifier and were only used for evaluation in this paper. Our model is not supplied with and does not make use of this information at any stage”, the paper mentions. They also warn that any further investigation or practical use of this technology should be carefully tested to ensure that the training data is representative of the intended user population. “If that is not the case, more representative data should be broadly collected”, the researchers state. Limitations of the Speech2Face model In order to test the stability of the Speech2Face reconstruction, the researchers used faces reconstructed from different speech segments of the same person, taken from different parts within the same video and from a different video. The reconstructed face images were consistent within and between the videos. They further probed the model with an example of an Asian male speaking the same sentence in English and Chinese to qualitatively test the effect of language and accent.
While having the same reconstructed face in both cases would be ideal, the model inferred different faces based on the spoken language. In other examples, the model was able to successfully factor out the language, reconstructing a face with Asian features even though the girl was speaking in English with no apparent accent. “In general, we observed mixed behaviors and a more thorough examination is needed to determine to which extent the model relies on language. More generally, the ability to capture the latent attributes from speech, such as age, gender, and ethnicity, depends on several factors such as accent, spoken language, or voice pitch. Clearly, in some cases, these vocal attributes would not match the person’s appearance”, the researchers state in the paper. Speech2Cartoon: converting generated images into cartoon faces The face images reconstructed from speech may also be used for generating personalized cartoons of speakers from their voices. The researchers used Gboard, the keyboard app available on Android phones, which is also capable of analyzing a selfie image to produce a cartoon-like version of the face. Such cartoon re-rendering of the face may be useful as a visual representation of a person during a phone or video conferencing call when the person’s identity is unknown or the person prefers not to share his/her picture. The reconstructed faces may also be used directly, to assign faces to machine-generated voices used in home devices and virtual assistants. https://twitter.com/NirantK/status/1132880233017761792 A user on HackerNews commented, “This paper is a neat idea, and the results are interesting, but not in the way I'd expected. I had hoped it would [be in] the domain of how much person-specific information this can deduce from a voice, e.g. lip aperture, overbite, size of the vocal tract, openness of the nares. This is interesting from a speech perception standpoint.
Instead, it's interesting more in the domain of how much social information it can deduce from a voice. This appears to be a relatively efficient classifier for gender, race, and age, taking voice as input.” “I'm sure this isn't the first time it's been done, but it's pretty neat to see it in action, and it's a worthwhile reminder: If a neural net is this good at inferring social, racial, and gender information from audio, humans are even better. And the idea of speech as a social construct becomes even more relevant”, he further added. This recent study is interesting considering that it takes AI to another level, where we are able to predict a face just from audio recordings, without the need for DNA. However, there can be repercussions, especially when it comes to security: one could misuse such technology to impersonate someone else and cause trouble. It will be interesting to see how this research develops in the near future. To learn more about the Speech2Face model in detail, head over to the research paper.
‘Facial Recognition technology is faulty, racist, biased, abusive to civil rights; act now to restrict misuse’ say experts to House Oversight and Reform Committee

Vincy Davis
27 May 2019
6 min read
Last week, the US House Oversight and Reform Committee held its first hearing on examining the use of facial recognition technology. The hearing was an incisive discussion of the use of facial recognition by government and commercial entities, flaws in the technology, the lack of regulation, and its impact on citizens’ civil rights and liberties. The chairman of the committee, U.S. Representative of Maryland Elijah Cummings, said that one of the goals of this hearing is to “protect against its abuse”. At the hearing, Joy Buolamwini, founder of the Algorithmic Justice League, highlighted misidentification as one of the major failure modes of this technology, one that can lead to false arrests and accusations, a risk especially for marginalized communities. In one of her studies at MIT on facial recognition systems, she found that for the task of guessing the gender of a face, IBM, Microsoft, and Amazon had error rates that rose to over 30% for darker-skinned women. On evaluating benchmark datasets from organizations like NIST (the National Institute of Standards and Technology), a striking imbalance was found: the dataset contained 75 percent male and 80 percent lighter-skinned faces, which she described as “pale male datasets”. She added that our faces may well be the final frontier of privacy and that Congress must act now to uphold American freedoms and rights at minimum. Andrew G. Ferguson, Professor of Law at the University of the District of Columbia, agreed with Buolamwini, stating that “Congress must act now” to prohibit facial recognition until Congress establishes clear rules. “The fourth amendment won’t save us. The Supreme Court is trying to make amendments but it’s not fast enough. Only legislation can react in real time to real time threats.” Another strong concern raised at the hearing was the use of facial recognition technology in law enforcement.
Neema Singh Guliani, Senior Legislative Counsel of the American Civil Liberties Union, said law enforcement across the country, including the FBI, is using face recognition in an unrestrained manner. This growing use of facial recognition is happening “without legislative approval, absent safeguards, and in most cases, in secret.” She also added that the U.S. reportedly has over 50 million surveillance cameras, which, combined with face recognition, threatens to create a near-constant surveillance state. An important addition to regulating facial recognition technology was to include all kinds of biometric surveillance under the ambit of surveillance technology. This includes voice recognition and gait recognition, which is also being used actively by private companies like Tesla. This regulation should include not only legislation but also real enforcement, so that “when your data is misused you have actually an opportunity, to go to court and get some accountability”, Guliani added. She also urged the committee to investigate how the FBI and other federal agencies are using this technology, whose accuracy has not been tested, and how the agency is complying with the Constitution while “reportedly piloting Amazon's face recognition product”. Like the FBI and other government agencies, companies like Amazon and Facebook were heavily criticized by members of the committee for misusing the technology; it was noted that these companies look for ways to develop and market facial recognition. On the same day as this hearing came the news that Amazon shareholders rejected a proposal to ban the sale of its facial recognition tech to governments. In January this year, activist shareholders had proposed a resolution to limit the sale of Amazon’s facial recognition tech, called Rekognition, to law enforcement and government agencies. The technology is regarded as an enabler of racial discrimination against minorities, as it was found to be biased and inaccurate.
Jim Jordan, U.S. Representative of Ohio, raised the concern that "some unelected person at the FBI talks to some unelected person at the state level, and they say go ahead," without giving "any notification to individuals or elected representatives that their images will be used by the FBI."

Using face recognition in such a casual manner poses "a unique threat to our civil rights and liberties", noted Clare Garvie, Senior Associate at the Georgetown University Law Center's Center on Privacy & Technology. Studies continue to show that the accuracy of face recognition varies with the race of the person being searched; the technology "makes mistakes and risks making more mistakes and more misidentifications of African Americans". She asserted that "face recognition is too powerful, too pervasive, too susceptible to abuse, to continue being unchecked."

There was general agreement among the members that federal legislation is necessary to prevent a confusing, and potentially contradictory, patchwork of regulation of government use of facial recognition technology. Another point of discussion was how well facial recognition could work if implemented properly in the real world: it could help the surveillance and healthcare sectors in a significant way if its challenges are addressed correctly. Dr. Cedric Alexander, former President of the National Organization of Black Law Enforcement Executives, was more cautious about banning the technology; he was of the opinion that it can be used by police in an effective way if they are trained properly.

Last week, San Francisco became the first U.S. city to pass an ordinance barring police and other government agencies from using facial recognition technology. The decision has attracted attention across the country and could be followed by other local governments.
Committee member Gomez made the committee's stand clear: they "are not anti-technology or anti-innovation, but we have to be very aware that we're not stumbling into the future blind." Cummings concluded the hearing by thanking the witnesses, stating, "I've been here for now 23 years, it's one of the best hearings I've seen really. You all were very thorough and very very detailed, without objection." The second hearing, scheduled for June 4th, will have law enforcement witnesses.

For more details, head over to the full Hearing on Facial Recognition Technology by the House Oversight and Reform Committee.

Read More

Over 30 AI experts join shareholders in calling on Amazon to stop selling Rekognition, its facial recognition tech, for government surveillance

Oakland Privacy Advisory Commission lay out privacy principles for Oaklanders and propose ban on facial recognition

Is China's facial recognition powered airport kiosks an attempt to invade privacy via an easy flight experience?
PostgreSQL 12 Beta 1 released

Fatema Patrawala
24 May 2019
6 min read
The PostgreSQL Global Development Group announced yesterday the first beta release of PostgreSQL 12, which is now available for download. This release contains previews of all features that will be available in the final release of PostgreSQL 12, though some details of the release could still change.

PostgreSQL 12 feature highlights

Indexing Performance, Functionality, and Management

PostgreSQL 12 will improve the overall performance of the standard B-tree indexes, with improvements to the space management of these indexes as well. These improvements provide a reduction in index size for B-tree indexes that are frequently modified, in addition to a performance gain.

Additionally, PostgreSQL 12 adds the ability to rebuild indexes concurrently, which lets you perform a REINDEX operation without blocking any writes to the index. This feature should help with lengthy index rebuilds that could otherwise cause downtime when managing a PostgreSQL database in a production environment.

PostgreSQL 12 also extends the abilities of several of the specialized indexing mechanisms. The ability to create covering indexes, i.e. the INCLUDE clause that was introduced in PostgreSQL 11, has now been added to GiST indexes. SP-GiST indexes now support K-nearest-neighbor (K-NN) queries for data types that support the distance (<->) operation. The amount of write-ahead log (WAL) overhead generated when creating a GiST, GIN, or SP-GiST index is also significantly reduced in PostgreSQL 12, which benefits the disk utilization of a PostgreSQL cluster and features such as continuous archiving and streaming replication.
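As a rough sketch of the new indexing capabilities (the table and index names here are hypothetical, not from the release notes):

```sql
-- Hypothetical table of delivery points with a geometric location.
CREATE TABLE deliveries (
    id      bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    point   point NOT NULL,
    address text
);

-- A covering GiST index: the INCLUDE clause, previously B-tree only,
-- is supported for GiST indexes in PostgreSQL 12.
CREATE INDEX deliveries_point_idx ON deliveries
    USING gist (point) INCLUDE (address);

-- Rebuild the index without blocking writes (new in PostgreSQL 12).
REINDEX INDEX CONCURRENTLY deliveries_point_idx;
```

Running these statements requires a PostgreSQL 12 server; on earlier versions both the INCLUDE clause on GiST and REINDEX ... CONCURRENTLY will fail with a syntax or feature error.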
Inlined WITH queries (Common table expressions)

Common table expressions (or WITH queries) can now be automatically inlined in a query if they: a) are not recursive, b) do not have any side effects, and c) are only referenced once in a later part of the query. This removes an "optimization fence" that has existed since the introduction of the WITH clause in PostgreSQL 8.4.

Partitioning

PostgreSQL 12 improves the performance of operations on tables with thousands of partitions when only a small number of those partitions actually need to be touched. This release also improves the performance of both INSERT and COPY into a partitioned table, and ATTACH PARTITION can now be performed without blocking concurrent queries on the partitioned table. Additionally, foreign keys may now reference partitioned tables.

JSON path queries per SQL/JSON specification

PostgreSQL 12 now allows execution of JSON path queries per the SQL/JSON specification in the SQL:2016 standard. Similar to XPath expressions for XML, JSON path expressions let you evaluate a variety of arithmetic expressions and functions, in addition to comparing values within JSON documents. A subset of these expressions can be accelerated with GIN indexes, allowing the execution of highly performant lookups across sets of JSON data.

Collations

PostgreSQL 12 now supports case-insensitive and accent-insensitive comparisons for ICU-provided collations, also known as "nondeterministic collations". When used, these collations can provide convenience for comparisons and sorts, but can also incur a performance penalty, as a collation may need to make additional checks on a string.

Most-common Value Extended Statistics

CREATE STATISTICS, introduced in PostgreSQL 10 to help collect more complex statistics over multiple columns and improve query planning, now supports most-common-value statistics. This leads to improved query plans for distributions that are non-uniform.
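The JSON path and WITH-inlining features above can be sketched as follows; the table, column, and document shapes are hypothetical examples, not from the announcement:

```sql
-- Hypothetical table holding one jsonb document per order.
CREATE TABLE orders (id bigint PRIMARY KEY, doc jsonb);

INSERT INTO orders VALUES
    (1, '{"customer": "a", "items": [{"price": 10}, {"price": 25}]}');

-- SQL/JSON path: match orders containing any item priced above 20.
SELECT id
FROM orders
WHERE doc @? '$.items[*] ? (@.price > 20)';

-- Extract the matching prices themselves.
SELECT jsonb_path_query(doc, '$.items[*].price ? (@ > 20)') FROM orders;

-- WITH queries are now inlined by default; the MATERIALIZED keyword
-- restores the pre-12 "optimization fence" behavior when you want it.
WITH expensive AS MATERIALIZED (
    SELECT id FROM orders WHERE doc @? '$.items[*] ? (@.price > 20)'
)
SELECT * FROM expensive;
```

Both `@?` and `jsonb_path_query` are new in PostgreSQL 12, as is the `AS MATERIALIZED` / `AS NOT MATERIALIZED` syntax.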
Generated Columns

PostgreSQL 12 allows the creation of generated columns, which compute their values with an expression using the contents of other columns. This feature provides stored generated columns, which are computed on inserts and updates and saved on disk. Virtual generated columns, which would be computed only when a column is read as part of a query, are not implemented yet.

Pluggable Table Storage Interface

PostgreSQL 12 introduces a pluggable table storage interface that allows for the creation and use of different methods of table storage. New access methods can be added to a PostgreSQL cluster using the CREATE ACCESS METHOD command and subsequently applied to tables with the new USING clause on CREATE TABLE. A table storage interface can be defined by creating a new table access method. In PostgreSQL 12, the default storage interface is the heap access method, which is currently the only built-in method.

Page Checksums

The pg_verify_checksums command has been renamed to pg_checksums, and it now supports enabling and disabling page checksums across a PostgreSQL cluster that is offline. Previously, page checksums could only be enabled during the initialization of a cluster with initdb.

Authentication & Connection Security

GSSAPI now supports client-side and server-side encryption and can be specified in the pg_hba.conf file using the hostgssenc and hostnogssenc record types. PostgreSQL 12 also allows discovery of LDAP servers based on DNS SRV records if PostgreSQL was compiled with OpenLDAP.

Few noted behavior changes in PostgreSQL 12

There are several changes introduced in PostgreSQL 12 that can affect the behavior as well as the management of your ongoing operations. A few of these are noted below; for other changes, visit the "Migrating to Version 12" section of the release notes.

The recovery.conf configuration file is now merged into the main postgresql.conf file.
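A minimal sketch of the new recovery-configuration layout, using a scratch directory rather than a live cluster; the host and user names are hypothetical:

```shell
# Demonstration only: write into a throwaway data directory, not $PGDATA
# of a running server.
PGDATA=/tmp/pg12-recovery-demo
mkdir -p "$PGDATA"

# Settings that previously lived in recovery.conf now go directly into
# postgresql.conf (or postgresql.auto.conf).
cat >> "$PGDATA/postgresql.conf" <<'EOF'
primary_conninfo = 'host=primary.example.com user=replicator'
EOF

# An empty standby.signal file puts the server into standby (non-primary)
# mode; an empty recovery.signal file requests targeted archive recovery.
touch "$PGDATA/standby.signal"
```

With this layout in place, PostgreSQL 12 refuses to start if it finds a leftover recovery.conf in the data directory.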
PostgreSQL will not start if it detects that recovery.conf is present. To put PostgreSQL into a non-primary mode, you can use the recovery.signal and standby.signal files. You can read more about archive recovery here: https://www.postgresql.org/docs/devel/runtime-config-wal.html#RUNTIME-CONFIG-WAL-ARCHIVE-RECOVERY

Just-in-Time (JIT) compilation is now enabled by default.

OIDs can no longer be added to user-created tables using the WITH OIDS clause. Operations on tables that have columns created using WITH OIDS (i.e. columns named "oid") will need to be adjusted. Running a SELECT * command on a system table will now also output the OID for the rows in the system table, instead of the old behavior, which required the OID column to be specified explicitly.

Testing for Bugs & Compatibility

The stability of each PostgreSQL release greatly depends on the community testing the upcoming version with their workloads and testing tools in order to find bugs and regressions before the general availability of PostgreSQL 12. As this is a beta, minor changes to database behaviors, feature details, and APIs are still possible. The PostgreSQL team encourages the community to test the new features of PostgreSQL 12 in their database systems to help eliminate any bugs or other issues that may exist. A list of open issues is publicly available in the PostgreSQL wiki, and you can report bugs using the form on the PostgreSQL website.

Beta Schedule

This is the first beta release of version 12. The PostgreSQL Project will release additional betas as required for testing, followed by one or more release candidates, until the final release in late 2019. For further information please see the Beta Testing page.

Many other new features and improvements have been added to PostgreSQL 12. Please see the Release Notes for a complete list of new and changed features.
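To round out the feature tour, the generated columns feature highlighted earlier can be sketched as follows; the table and column names are hypothetical:

```sql
-- A stored generated column: computed on insert and update, saved on disk.
CREATE TABLE line_items (
    quantity   int     NOT NULL,
    unit_price numeric NOT NULL,
    total      numeric GENERATED ALWAYS AS (quantity * unit_price) STORED
);

-- The generated column cannot be written directly; it is derived.
INSERT INTO line_items (quantity, unit_price) VALUES (3, 9.99);

SELECT total FROM line_items;  -- 29.97
```

The STORED keyword is mandatory in PostgreSQL 12, since virtual (computed-on-read) generated columns are not yet implemented.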
PostgreSQL 12 progress update

Building a scalable PostgreSQL solution

PostgreSQL security: a quick look at authentication best practices [Tutorial]