
Tech News - Data


Is Google trying to ethics-wash its decisions with its new Advanced Tech External Advisory Council?

Fatema Patrawala
27 Mar 2019
6 min read
Google yesterday announced a new external advisory board to help monitor the company's use of artificial intelligence for ways in which it may violate the ethical principles it laid out last summer. The group was announced by Kent Walker, Google's senior vice president of global affairs, and it includes experts on a wide-ranging set of subjects, including mathematics, computer science, philosophy, psychology, and even foreign policy.

Following is the complete list of the advisory council appointed by Google:

Alessandro Acquisti, a leading behavioral economist and privacy researcher
Bubacarr Bah, an expert in applied and computational mathematics
De Kai, a leading researcher in natural language processing, music technology and machine learning
Dyan Gibbens, an expert in industrial engineering and CEO of Trumbull
Joanna Bryson, an expert in psychology and AI, and a longtime leader in AI ethics
Kay Coles James, a public policy expert with extensive experience working at the local, state and federal levels of government
Luciano Floridi, a leading philosopher and expert in digital ethics
William Joseph Burns, a foreign policy expert and diplomat

The group will be called the Advanced Technology External Advisory Council, and it appears Google wants it to be seen as an independent watchdog keeping an eye on how it deploys AI in the real world. It wants to focus on facial recognition technology and the mitigation of built-in bias in machine learning training methods. "This group will consider some of Google's most complex challenges that arise under our AI Principles ... providing diverse perspectives to inform our work," Walker writes.

Behind the selection of the council

As for the members, the names may not be easily recognizable to those outside academia. However, the credentials of the board appear to be of the highest caliber, with resumes that include multiple presidential administration positions and stations at top-notch universities spanning the University of Oxford, Hong Kong University of Science and Technology, and UC Berkeley.

Having said that, the selection of Heritage Foundation president Kay Coles James and Trumbull CEO Dyan Gibbens drew harsh criticism on Twitter. It has been noted that James, through her involvement with the conservative think tank, has espoused anti-LGBTQ rhetoric on her public Twitter profile:

https://twitter.com/farbandish/status/1110624709308121088
https://twitter.com/EerkeBoiten/status/1110675556713091072

One of the members, Joanna Bryson, also expressed astonishment on Twitter at being selected as part of the council. Bryson says she has no idea what she is getting into, but she will certainly do her best.

https://twitter.com/luke_stark/status/1110630992979652608

Google's history of controversies

Last year, Google found itself embroiled in controversy over its participation in a US Department of Defense drone program called Project Maven. Following immense internal backlash and external criticism for putting employees to work on AI projects that may involve the taking of human life, Google decided to end its involvement in Maven following the expiration of its contract.

It also put together a new set of guidelines, which CEO Sundar Pichai dubbed Google's AI Principles, that would prohibit the company from working on any product or technology that might violate "internationally accepted norms" or "widely accepted principles of international law and human rights." "We recognize that such powerful technology raises equally powerful questions about its use," Pichai wrote at the time. "How AI is developed and used will have a significant impact on society for many years to come. As a leader in AI, we feel a deep responsibility to get this right."

Google effectively wants its AI research to be "socially beneficial," and that often means not taking government contracts or working in territories or markets with notable human rights violations. Regardless, Google found itself in yet another similar controversy related to its plans to launch a search product in China, one that may involve deploying some form of artificial intelligence in a country currently trying to use that very same technology to surveil and track its citizens.

Google's pledge differs from the stances of Amazon and Microsoft, both of which have said they will continue to work with the US government. Microsoft has secured a $480 million contract to provide HoloLens headsets to the Pentagon, while Amazon continues to sell its Rekognition facial recognition software to law enforcement agencies.

Google also formed a "responsible innovation team" internally that Walker says has reviewed hundreds of different launches to date, some of which have aligned with its principles while others haven't. For example, that team helped Google make the decision not to sell facial recognition technology until there has been more ethical and policy debate on the issue.

Why critics are skeptical of this move

Rashida Richardson, director of policy research at the AI Now Institute, expressed skepticism about the ambiguity of Google's and other companies' AI principles at the MIT Technology Review conference held in San Francisco on Tuesday. For example, Google's document leans heavily on the word "appropriate." "Who is defining what appropriate means?" she asked.

Walker said that Google's new council is meant to foster more defined discussion. He added that the company had over 300 people looking at machine learning fairness issues. "We're doing our best to put our money where our mouth is," Walker said. Google has previously had embarrassing technology screw-ups driven by bias in its machine learning systems, like when its Photos algorithm labeled black people as gorillas.

It would not be wrong to say that today's announcement appears to be an attempt by Google to fend off broader, continued criticism of private sector AI pursuits. Perhaps not coincidentally, it comes a day after Amazon said it would earmark $10 million with the National Science Foundation for AI fairness research, and after Microsoft executive Harry Shum said the company would add an ethics review focusing on AI issues to its standard product audit checklist.

https://twitter.com/smunson/status/1110657292549029888

"Thoughtful decisions require careful and nuanced consideration of how the AI principles … should apply, how to make tradeoffs when principles come into conflict, and how to mitigate risks for a given circumstance," says Walker in an earlier blog post.

Google and Facebook working hard to clean image after the media backlash from the Christchurch terrorist attack
Google announces Stadia, a cloud-based game streaming service, at GDC 2019
Google to be the founding member of CDF (Continuous Delivery Foundation)


Elastic Stack 6.7 releases with Elastic Maps, Elastic Uptime and much more!

Amrata Joshi
27 Mar 2019
3 min read
Yesterday, the team at Elastic released Elastic Stack 6.7, a group of open source products from Elastic designed to help users take data from any type of source and visualize that data in real time.

What's new in Elastic Stack 6.7?

Elastic Maps

Elastic Maps is a new dedicated solution for mapping, querying, and visualizing geospatial data in Kibana. It expands on the existing geospatial visualization options in Kibana with features such as visualization of multiple layers and data sources in the same map. It also includes features like dynamic, data-driven styling on vector layers, mapping of both aggregate and document-level data, and much more. Elastic Maps also embeds the query bar with autocomplete for real-time ad-hoc search.

Elastic Uptime

This release comes with Elastic Uptime, which makes it easy to detect when application services are down or responding slowly. It notifies users about problems well before those services are called by the application.

Cross Cluster Replication (CCR)

Cross Cluster Replication (CCR), which covers a variety of use cases including cross-datacenter and cross-region replication, is now generally available.

Index Lifecycle Management (ILM)

With this release, index lifecycle management (ILM) is now generally available and ready for production use. ILM helps Elasticsearch admins define and automate lifecycle management policies, such as how data is managed and moved between hot, warm, cold, and delete phases as it ages.

Elasticsearch SQL

Elasticsearch SQL helps users interact with and query their Elasticsearch data using SQL. Elasticsearch SQL functionality includes the JDBC and ODBC clients, which allow third-party tools to connect to Elasticsearch as a backend datastore. With this release, Elasticsearch SQL becomes generally available. (A minimal usage sketch appears after the related links at the end of this article.)

Canvas

Canvas, which helps users showcase and present live data from Elasticsearch with pixel-perfect precision, also becomes generally available with this release.

Kibana localization

This release brings Kibana's first localization, now available in simplified Chinese. Kibana also introduces a new localization framework that provides support for additional languages.

Functionbeat

Functionbeat is a Beat that deploys as a function in serverless computing frameworks and streams cloud infrastructure logs and metrics into Elasticsearch. Functionbeat is now generally available; it supports the AWS Lambda framework and can stream data from CloudWatch Logs, SQS, and Kinesis.

Upgrade Assistant

The Upgrade Assistant in this release helps users prepare their existing Elastic Stack environment for the upgrade to 7.0. The Upgrade Assistant includes both APIs and UIs and works as an important cluster checkup tool to help plan the upgrade. It also helps identify things like deprecation warnings to enable a smoother upgrade experience.

To know more about this release, check out Elastic's blog post.

Microsoft brings PostgreSQL extension and SQL Notebooks functionality to Azure Data Studio
Core CPython developer unveils a new project that can analyze his phone's 'silent connections'
How to handle backup and recovery with PostgreSQL 11 [Tutorial]
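As an illustration of the kind of query the now generally available SQL interface supports, here is a minimal sketch that posts a SQL statement to an Elasticsearch node over its REST API. The host, the web-logs index, and the _xpack/sql endpoint path (used on the 6.x line) are assumptions for illustration, not details from Elastic's announcement; check the Elasticsearch SQL documentation for your version.

```python
# Minimal sketch: run a SQL query against Elasticsearch via its REST API.
# Assumes a local cluster at localhost:9200 and an index named "web-logs";
# adjust the host, index name, and endpoint path for your own setup.
import requests

ES_SQL_URL = "http://localhost:9200/_xpack/sql"  # SQL endpoint on the 6.x line

query = {
    "query": 'SELECT status, COUNT(*) AS hits '
             'FROM "web-logs" '
             'GROUP BY status '
             'ORDER BY hits DESC'
}

# format=txt returns a human-readable table; use format=json for structured output.
response = requests.post(ES_SQL_URL, params={"format": "txt"}, json=query)
response.raise_for_status()
print(response.text)
```

The same statement could equally be issued from third-party tools through the JDBC or ODBC clients mentioned above.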


IEEE Standards Association releases ethics guidelines for automation and intelligent systems

Natasha Mathur
27 Mar 2019
4 min read
IEEE Standards Association (IEEE-SA) released the first version of its ethics guidelines for automation and intelligent systems, titled "Ethically Aligned Design (EAD): A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems", earlier this week. The EAD guidelines feature scientific analysis and resources, high-level principles, and actionable recommendations for the ethical implementation of autonomous and intelligent systems (A/IS). "We offer high-level General Principles in Ethically Aligned Design that we consider to be imperatives for creating and operating A/IS that further human values and ensure trustworthiness", reads EAD.

The EAD guideline explains eight high-level ethical principles that can be applied to all types of autonomous and intelligent systems (A/IS), irrespective of whether they are physical robots, software systems, or algorithmic chatbots.

Eight General Principles in EAD

Human Rights

As mentioned in EAD, A/IS shall be created and operated in a way that respects, promotes, and protects internationally recognized human rights. These rights should be fully taken into consideration by individuals, companies, research institutions, and governments to reflect the principle that A/IS respect and fulfill human rights, freedoms, human dignity, and cultural diversity.

Well-being

EAD states that A/IS creators should adopt improved human well-being as a primary success criterion for development. EAD recommends that A/IS prioritize human well-being as an outcome in all system designs, using the best available and widely accepted well-being metrics as their reference point.

Data Agency

A/IS creators should put more emphasis on empowering individuals with the ability to access and securely share their data, and should focus on maintaining people's capacity to have control over their identity. Organizations and governments should test and implement technologies that allow individuals to specify their online agent for case-by-case authorization decisions. For minors, current guardianship approaches should be evaluated to determine their suitability in this context.

Effectiveness

Creators should provide evidence of the effectiveness and fitness for purpose of A/IS. EAD recommends that creators engaged in the development of A/IS focus on defining metrics that serve as valid and meaningful gauges of the effectiveness of the system. Creators of A/IS should design systems so that metrics on specific deployments can be aggregated to deliver information on the effectiveness of the system across different deployments. Also, industry associations and other organizations (such as IEEE and ISO) should collaborate to develop standards for reporting on the effectiveness of A/IS.

Transparency

EAD states that the basis of a particular A/IS decision should always be discoverable. It recommends that new standards be developed to describe measurable and testable levels of transparency. These standards would offer designers a guide for self-assessing transparency during development and suggest mechanisms for improving it.

Accountability

As per EAD, A/IS should be created and operated so that they offer an "unambiguous rationale" for decisions made. EAD states that, in order to address the issues of responsibility and accountability, courts should clarify the "responsibility, culpability, liability, and accountability" for A/IS prior to development and deployment. It also states that designers and developers of A/IS should be made aware of the diversity in existing cultural norms among the groups of users of these A/IS.

Awareness of Misuse

EAD states that creators should guard against all potential misuses and risks of A/IS in operation. EAD recommends that creators be made aware of methods of misuse, and that A/IS be designed in ways that minimize the opportunity for misuse of these systems. Public awareness should also be improved surrounding the issues of potential A/IS technology misuse.

Competence

EAD states that creators should specify, and operators should adhere to, the knowledge and skill required for safe operation. It also mentions that the creators of A/IS should clearly specify the types and levels of knowledge required to understand and operate any given application of A/IS. Also, creators of A/IS should provide the affected parties with information on the role of the operator and the implications of operator error. Rich and detailed documentation should be made accessible to experts and the general public.

For more information, check out the official Ethically Aligned Design guidelines.

IEEE Computer Society predicts top ten tech trends for 2019: assisted transportation, chatbots, and deep learning accelerators among others
What the IEEE 2018 programming languages survey reveals to us
2019 Deloitte tech trends predictions: AI-fueled firms, NoOps, DevSecOps, intelligent interfaces, and more


Mozilla launches Firefox Lockbox, a password manager for Android

Amrata Joshi
27 Mar 2019
3 min read
Yesterday, Mozilla announced the availability of Firefox Lockbox on Android, a password manager for Firefox web browser users. Firefox Lockbox helps users access the logins stored in their Firefox browser from their mobile device. Lockbox is one of the projects by Mozilla developed through a Test Flight program. It is free, and users don't have to do any extra setup for it: their passwords securely sync from the Firefox browser to the Firefox Lockbox app.

Mozilla has also taken care of security, as the Lockbox app can be locked with facial recognition or a fingerprint (depending on device support). User passwords are also encrypted in a way that prevents Mozilla from reading user data.

Users simply sign in to their Firefox account and Firefox Lockbox manages all the passwords they have saved in Firefox. The passwords are available both online and offline once they are synced. With Firefox Lockbox, it's easy to keep track of passwords across devices, as it automatically fills in the passwords saved on the desktop into apps like Facebook or Yelp on a user's mobile device.

Some users think this is a great project; however, Chrome users are hoping for a Chrome plugin. Others are worried about the longevity of Firefox Lockbox. A user commented on Hacker News, "I think this is wonderful, but I have two concerns. First, if there isn't a Chrome plugin, it's not going to be of much use to me. I still use Chrome on my laptop (for a multitude of reasons) and if Lockbox doesn't interoperate with it, it's not a useful tool. Second, I worry about the longevity of the project. Other than Firefox, Mozilla is not known for their long-term support of consumer products. Persona? Firefox OS? Thunderbird? I don't want to switch to a product that's only going to be retired in a year."

A few others find Lockbox's UI overly minimal. Another comment reads, "Am I missing something, or is this landing page really nothing more than a screenshot and an app button? I know minimal pages are trendy, but that seems like taking it a bit too far."

Read more about this news on Mozilla's blog post.

Mozilla's Firefox Send is now publicly available as an encrypted file sharing service
Mozilla Firefox will soon support 'letterboxing', an anti-fingerprinting technique of the Tor Browser
Mozilla engineer shares the implications of rewriting browser internals in Rust


Desmond U. Patton, Director of SAFElab shares why AI systems should be a product of interdisciplinary research and diverse teams

Bhagyashree R
27 Mar 2019
4 min read
The debate about AI systems being non-inclusive, sexist, and racist has been going on for a very long time. Though most of the time the blame is put on the training data, one of the key reasons behind this behavior is not having a diverse team.

Last week, Stanford University launched a new institute named the Stanford Institute for Human-Centered Artificial Intelligence (HAI). The institute is aimed at researching and developing human-centered applications and technologies through multidisciplinary collaboration to get "true diversity of thought". While the institute talked about diversity, its list of faculty members failed to reflect that: of the 121 members initially announced as part of the institute, more than 100 were white and the majority of them were male, even though women and people of color had pioneered AI ethics and safety.

https://twitter.com/chadloder/status/1108588849503109120

Emphasizing the importance of interdisciplinary research in AI, Desmond U. Patton, the director of SAFElab, shared his experience of working on AI systems as a non-tech person. Through this blog post, he also encouraged his fellow social workers and other non-tech people to contribute to AI research to make AI more inclusive.

For the past six years, Patton and his colleagues from the computer science and data science fields have worked on co-designing AI systems aimed at understanding the underlying causes of community-based violence. He believes that social media posts can prove very helpful in identifying people who are at risk of getting involved in gun violence. So he created an interdisciplinary group of researchers who, with the help of AI techniques, study the language and images in social media posts to identify patterns of grieving and anger.

Patton believes that having a domain expert on the team is important. All the crucial decisions related to the AI system, such as which concepts should be analyzed, how those concepts are framed, and the error analysis of outputs, should be taken jointly. His team also worked with community groups and people who previously worked for gangs involved in gun violence to co-design the AI systems. They hired community members and advisory teams and valued their suggestions, critiques, and ideas in shaping the AI systems.

Algorithmic and workforce bias has led to a lot of controversies in recent years, including facial recognition systems misidentifying black women. Looking at these cases, Joy Buolamwini founded the Algorithmic Justice League (AJL), a collective that focuses on creating more inclusive and ethical AI systems. AJL researches algorithmic bias, provides a platform for people to raise their concerns and experiences with coded discrimination, and runs algorithmic audits to hold companies accountable.

Though it has not yet become the norm, the concept of interdisciplinary research is surely gaining the attention of several researchers and technologists. At EmTech Digital, Rediet Abebe, a computer science researcher at Cornell University, said, "We need adequate representation of communities that are being affected. We need them to be present and tell us the issues they're facing." She further added, "We also need insights from experts from areas including social sciences and the humanities … they've been thinking about this and working on this for longer than I've been alive. These missed opportunities to use AI for social good—these happen when we're missing one or more of these perspectives."

Abebe has cofounded Mechanism Design for Social Good, a multi-institution, interdisciplinary research group that aims to improve access to opportunities for those who have been historically underserved and marginalized. The organization has worked on several areas including global inequality, algorithmic bias and discrimination, and the impact of algorithmic decision-making on specific policy areas including online labor markets, health care, and housing.

AI researchers and developers need to collaborate with the social sciences and with underserved communities. The people affected by these systems need to have a say in building them. The more different people we have, the more perspectives we get, and this is the type of team that brings more innovation to technology.

Read Patton's full article on Medium.

Professors from MIT and Boston University discuss why you need to worry about the 'wrong kind of AI'
Is China's facial recognition powered airport kiosks an attempt to invade privacy via an easy flight experience?
How AI is transforming the Smart Cities IoT? [Tutorial]


The FTC issues orders to 7 broadband companies to analyze ISP privacy practices given they are also ad-supported content platforms

Savia Lobo
27 Mar 2019
3 min read
The Federal Trade Commission announced yesterday that it has issued orders to seven U.S. internet broadband providers to analyze how these companies carry out data collection and distribution. The seven broadband companies, AT&T Inc., AT&T Mobility LLC, Comcast Cable Communications doing business as Xfinity, Google Fiber Inc., T-Mobile US Inc., Verizon Communications Inc., and Cellco Partnership doing business as Verizon Wireless, received orders from the FTC to report on their privacy policies, procedures, and practices.

According to the FTC press release, "This study is to better understand Internet service providers' privacy practices in light of the evolution of telecommunications companies into vertically integrated platforms that also provide advertising-supported content. Under current law, the FTC has the ability to enforce against unfair and deceptive practices involving Internet service providers."

What information does the FTC plan to retrieve?

The FTC is authorized to issue the orders to file a Special Report by Section 6(b) of the FTC Act, the press release reads. The Commission seeks to obtain information on the categories of personal information collected about consumers or their devices, including the purpose for which the information is collected or used, and the techniques for collecting such information; whether the information collected is shared with third parties; internal policies for access to such data; and how long the information is retained. The Commission will also analyze whether the information is aggregated, anonymized, or de-identified.

The other factors it will analyze include: copies of the companies' notices and disclosures to consumers about their data collection practices; whether the companies offer consumers choices about the collection, retention, use, and disclosure of personal information, and whether the companies have denied or degraded service to consumers who decline to opt in to data collection; and procedures and processes for allowing consumers to access, correct, or delete their personal information.

"The FTC has given the companies up to 45 days to hand over the requested information", The Verge reports.

A user wrote on Hacker News, "It's good to check on this of course but...as far as ISPs go, this is actually about #3 on the list of problems I want the FTC or someone to fix." "How about the fact that there's usually only one choice. Or that Internet that everyone wants can be force-bundled with ridiculous things no one wants (like a home phone line and minimum TV bundle), that we tolerate because there is no option. Or prices that go up forever with no improvements, except when they all magically found a way the day after Google Fiber was announced. These companies abuse their positions and need to be checked for that in addition to privacy", the user added.

To know more about this news in detail, visit the official press release.

Facebook under criminal investigations for data sharing deals: NYT report
Advocacy groups push FTC to fine Facebook and break it up for repeatedly violating the consent order and unfair business practices
US Senators introduce a bill to avoid misuse of facial recognition technology

Uber and Lyft drivers strike in Los Angeles

Richard Gall
26 Mar 2019
3 min read
Uber and Lyft drivers yesterday went on strike across Los Angeles in opposition to Uber's decision to cut rates by 25% in the Los Angeles area. Organized by Rideshare Drivers United, the strike comes as a further labor-led fightback against ride-hailing platforms, following news that U.K.-based Uber drivers are suing the company over access to personal data. If anyone thought 2018's techlash was over, they need to think again: it appears worker solidarity is only getting stronger in the tech industry.

What was the purpose of the March 25 Uber strike?

Uber and Lyft drivers have experienced declining wages for a number of years as the respective platforms compete to grow their customer bases. This made the news earlier in March that Uber would be reducing the per-mile rate for drivers from 80¢ to 60¢ particularly tough to take. It underlined to many drivers that things are probably only going to get worse while power rests solely on the side of the platforms.

https://twitter.com/_drivers_united/status/1107745851890253824?s=20

But there was more at stake than just an increase in wages. In many ways, opposition to Uber's pay cut is simply a first step on a longer road towards improved working conditions and greater power. Rideshare Drivers United actually has an extensive list of aims and demands: a 10% cap on commission; the right to organize and negotiate for improved working conditions; and ensuring Uber and Lyft work in accordance with authorities on green initiatives.

With New York City authorities taking steps to implement minimum pay requirements in December 2018, the action on the west coast could certainly be seen as an attempt to push for consistency across the country. However, it doesn't appear that Los Angeles authorities are interested in taking similar steps at the moment.

Uber and Lyft's response to the Los Angeles strike

In a statement given to The Huffington Post, an Uber spokesperson said that the changes "will make rates comparable to where they were in September, while giving drivers more control over how they earn by allowing them to build a model that fits their schedule best." In the same piece, HuffPost quotes a Lyft spokesperson who points out that the company hasn't changed its rates for 12 months.

Support for striking Uber and Lyft drivers

Support for the strikers came from many quarters, including the National Union of Healthcare Workers and Senator Bernie Sanders. "One job should be enough to make a decent living in America," the NUHW said.

https://twitter.com/NUHW/status/1110270149309849600?s=20

Time for Silicon Valley to rethink

There's clearly a long way to go if Rideshare Drivers United are going to achieve their aims. But the conversation is shifting, and many Silicon Valley executives will need to look up and take notice. Perhaps it's time to rethink things.


Is China’s facial recognition powered airport kiosks an attempt to invade privacy via an easy flight experience?

Fatema Patrawala
26 Mar 2019
7 min read
We've all heard stories of China's technological advancements and how face recognition is really gaining traction there. If you needed any proof of China's technological might over most countries of the world, it doesn't get any better (or scarier) than this. On Sunday, Matthew Brennan, a tech analyst covering Tencent and WeChat, tweeted a video of a facial recognition kiosk at Chengdu Shuangliu International Airport in the People's Republic of China. The kiosk seemed to give Brennan minutely personalized flight information as he walked by, after automatically scanning his face in just seconds.

A simple 22-second video crossed 1.2 million views in just over a day. The tweet went viral, with many commenters writing how dystopian or terrifying they found this technology, and suggesting how wary we should be of the proliferation of biometric systems like those used in China.

https://twitter.com/mbrennanchina/status/1109741811310837760

"There's one guarantee that I'll never get to go to China now," one Twitter user wrote in response. "That's called fascism and it's not moral or ok," another comment read.

https://twitter.com/delruky/status/1109812054012002304

Surveillance tech isn't a new idea

The airport facial recognition technology isn't new in China, and similar systems are already being implemented at airports in the United States. In October, Shanghai's Hongqiao airport reportedly debuted China's first system allowing facial recognition for automated check-in, security clearance, and boarding. And since 2016, the Department of Homeland Security has been testing facial recognition at U.S. airports. This biometric exit program uses photos taken at TSA checkpoints to perform facial recognition tests to verify international travelers' identities. Documents recently obtained by BuzzFeed show that Homeland Security is now racing to implement this system at the top 20 airports in the U.S. by 2021.

And it isn't just the federal government that has been rolling out facial recognition at American airports. In May of 2017, Delta announced it was testing a face-scanning system at Minneapolis-Saint Paul International Airport that allowed customers to check in their bags, or, as the company called it in a press release, "biometric-based bag drop." The airline followed up those tests with what it celebrated as "the first biometric terminal" in the U.S. at Atlanta's Maynard H. Jackson International Airport at the end of last year. Calling it an "end-to-end Delta Biometrics experience," Delta's system uses facial recognition kiosks for check-in, baggage check, TSA identification, and boarding. The facial recognition option is saving an average of two seconds per customer at boarding, or nine minutes when boarding a wide-body aircraft.

"Delta's successful launch of the first biometric terminal in the U.S. at the world's busiest airport means we are designing the airport biometric experience blueprint for the industry," said Gil West, Delta's COO. "We're removing the need for a customer checking a bag to present their passport up to four times per departure – which means we're giving customers the option of moving through the airport with one less thing to worry about, while empowering our employees with more time for meaningful interactions with customers."

Dubai International Airport's Terminal 3 will soon replace its security-clearance counter with a walkway tunnel filled with 80 face-scanning cameras disguised as a distracting immersive video. The airport has an artsy, colorful video security and customs tunnel that scans your face, adds you to a database, indexes you with artificial intelligence and decides if you're free to leave -- or not.

Potential dangers surveillance tech could bring in

At first glance, the kiosk does seem really cool. But it should also serve as a warning as to what governments and companies can do with our data if left unchecked. After all, if an airport kiosk can identify Brennan in seconds and show him his travel plans, the Chinese government can clearly use facial recognition tech to identify citizens wherever they go. The government may record everyone's face and could automatically fine or punish someone if they break or bend the rules. As a matter of fact, it is already doing this via its social credit system. If you are officially designated as a "discredited individual," or laolai in Mandarin, you are banned from spending on "luxuries," whose definition includes air travel and fast trains. This class of people, most of whom have shirked their debts, sit on a public database maintained by China's Supreme Court. For them, daily life is a series of inflicted indignities – some big, some small – from not being able to rent a home in their own name, to being shunned by relatives and business associates.

Alibaba, China's equivalent of Amazon, already has control over the traffic lights in one Chinese city, Hangzhou. Alibaba is far from shy about its ambitions. It has 120,000 developers working on the problem and intends to commercialise and sell the data it gathers about citizens.

Surveillance technology is pervasive in our society, leading to fierce debate between proponents and opponents. Government surveillance, in particular, has been brought increasingly under public scrutiny, with proponents arguing that it increases security, and opponents decrying its invasion of privacy. Critics have loudly accused governments of employing surveillance technologies that sweep up massive amounts of information, intruding on the privacy of millions, but with little to no evidence of success. And yet, evaluating whether surveillance technology increases security is a difficult task.

From the War Resisters League to the Government Accountability Project, Data for Black Lives and 18 Million Rising, from Families Belong Together to the Electronic Frontier Foundation, more than 85 groups signed letters to corporate giants Microsoft, Amazon and Google, demanding that the companies commit not to sell face surveillance technology to the government.

Shoshana Zuboff, author of the book The Age of Surveillance Capitalism, notes that the technology companies insist their technology is too complex to be legislated, that they have poured billions into lobbying against oversight while building empires on publicly funded data and the details of our private lives, and that they have repeatedly rejected established norms of societal responsibility and accountability.

Causes more harm to minority groups and vulnerable communities

There has been a long history of surveillance technologies that particularly impact vulnerable communities and groups such as immigrants, communities of color, religious minorities, and even domestic violence and sexual assault survivors. Privacy is not only a luxury that many residents cannot afford; in surveillance-heavy precincts, for practical purposes, privacy cannot be bought at any price.

Privacy advocates have sometimes struggled to demonstrate the harms of government surveillance to the general public. Part of the challenge is empirical. Federal, state, and local governments shield their high-technology operations with stealth, obfuscation, and sometimes outright lies when obliged to answer questions. In many cases, perhaps most, these defenses defeat attempts to extract a full, concrete accounting of what the government knows about us, and how it puts that information to use. There is a lot less mystery for the poor and disfavored, for whom surveillance takes palpable, often frightening forms.

The question is, as many commenters pointed out after Brennan's tweet, do we want this kind of technology available? If so, how could it be kept in check and not abused by governments and other institutions? That's something we don't have an answer for yet – an answer we desperately need.

Alarming ways governments are using surveillance tech to watch you
Seattle government arrange public review on the city's surveillance tech systems
The Intercept says IBM developed NYPD surveillance tools that let cops pick targets based on skin color


Professors from MIT and Boston University discuss why you need to worry about the ‘wrong kind of AI’

Natasha Mathur
26 Mar 2019
4 min read
Daron Acemoglu, a professor at MIT (Massachusetts Institute of Technology), and Pascual Restrepo, a professor at Boston University, published a paper earlier this month titled "The Wrong Kind of AI? Artificial Intelligence and the Future of Labor Demand". In the paper, the professors discuss how recent technological advancements have been biased towards automation, shifting the focus away from creating new tasks that productively employ labor. They argue that the consequences of this choice have been stagnating labor demand, a declining labor share in national income, rising inequality, and low productivity growth.

Automation technologies do not increase labor's productivity

The professors state that there is a common preconceived notion that advancements in tech lead to an increase in productivity, which in turn leads to an increase in demand for labor, thereby impacting employment and wages. However, this is not entirely true, as automation tech does not boost labor's productivity. Instead, it replaces labor by finding a cheaper capital substitute for tasks previously performed by humans. In a nutshell, automation tech always reduces labor's share in value added. "In an age of rapid automation, labor's relative standing will deteriorate and workers will be particularly badly affected if new technologies are not raising productivity sufficiently", state the professors.

But the paper also poses a question: if automation tends to reduce the labor share, then why did the labor share remain roughly constant over the last two centuries? And why does productivity growth go hand-in-hand with commensurate wage growth? The professors state that in order to understand this relationship and find an answer, people need to recognize the different types of technological advances that contribute to productivity growth. They argue that labor demand has increased over the last two centuries not because of technologies that made labor more productive in the tasks it was already performing, but because of new technologies that reinstated labor into new tasks in which it could specialize.

The 'wrong kind of AI'

The professors state that economists put a great deal of trust in the market's ability to distribute resources efficiently. However, there are many who disagree. "Is there any reason to worry that AI applications with the promise of reinstating human labor will not be exploited and resources will continue to pour instead into the wrong kind of AI?" the professors ask.

The professors list several reasons for market failures in innovation, with some specific reasons that are important in terms of AI. A few of these reasons are as follows: innovation creates externalities, and markets do not perform well under such externalities; and markets struggle when there are alternative, competing technological paradigms, because if the wrong paradigm moves ahead of the other, it can become very difficult to reverse the trend and benefit from the possibilities offered by the alternative paradigm.

The research paper states that there are additional factors that can distort choices over what types of AI applications to develop. The first is that if employment creation has a social value beyond what is captured in GDP statistics, this social value gets ignored by the market. Recently, the US government has been frugal in its support for research and in its determination to change the direction of technological change. Part of this change is due to the reduction in government resources devoted to supporting innovation, and to the increasingly dominant role of the private sector in setting the agenda in high-tech areas. This shift discourages research tied to future promise and other social objectives.

To sum it up, the professors state that although there is no "definitive evidence" that research and corporate resources are being directed towards the wrong kind of AI, the market for innovation does not by itself provide a good enough reason to expect an efficient balance between different types of AI. Instead of contributing to productivity growth, employment, and shared prosperity, advancing automation alone would instead lead to anemic growth and inequality.

"Though many today worry about the security risks and other … consequences of AI, we have argued that there are prima facie reasons for worrying about the wrong kind of AI from an economic point of view becoming all the rage and the basis of future technological development", reads the paper.

Stanford University launches Institute of Human Centered Artificial Intelligence; receives public backlash for non-representative faculty makeup
Researchers at Columbia University use deep learning to translate brain activity into words for epileptic cases
Researchers discover Spectre like new speculative flaw, "SPOILER" in Intel CPU's


Trick or a treat: Telegram announces its new ‘delete feature’ that deletes messages on both the ends

Amrata Joshi
26 Mar 2019
4 min read
Just two days ago, Telegram announced a new 'delete feature' that allows users to delete messages in one-to-one and/or group private chats. This latest feature lets users selectively delete their own messages or the messages sent by others in the chat; they don't even need to be the author of the original message to delete it. The feature is available in Telegram 5.5.

So the next time you have a conversation with someone, you can delete all of the chat from your device and from the device of the person you are chatting with. To delete a message from both ends, a user needs to tap on the message and select delete. After that, the user is given the option of deleting it for themselves or for everyone.

Pavel Durov, founder of Telegram, justified the need for the feature. He writes, "Relationships start and end, but messaging histories with ex-friends and ex-colleagues remain available forever. It's getting worse. Within the next few decades, the volume of our private data stored by our chat partners will easily quadruple." According to him, users should have control of their digital conversation history. He further added, "An old message you already forgot about can be taken out of context and used against you decades later. A hasty text you sent to a girlfriend in school can come haunt you in 2030 when you decide to run for mayor. We have to admit: Despite all of our progress in encryption and privacy, we have very little actual control of our data. We can't go back in time and erase things for other people."

Telegram's 'delete' feature repercussions

This might sound like an exciting feature, but the bigger question is: will it be misused? If someone bullies a user in a Telegram chat or sends something abusive, the victim may not even have proof to show others if the attacker deletes the messages. Moreover, if a group chat involves a long series of messages and a user maliciously deletes a few of them, the other users in the group won't come to know. The conversation might get misinterpreted, its flow disturbed, and it might end up looking manipulated, causing more trouble. The traces criminals or attackers leave on the platform can be wiped away. The feature gives control to users, but it also quietly opens the way for malicious behavior.

WhatsApp's unsend feature seems better in this regard, because it only lets users delete their own messages. Also, when a message is deleted, users in the group chat or in the private chat get notified about it, unlike how it works in Telegram. The feature could also cause trouble if a user accidentally ends up deleting someone else's message in a group or private chat, as a deleted message cannot be recovered.

While talking about misuse, Durov writes, "We know some people may get concerned about the potential misuse of this feature or about the permanence of their chat histories. We thought carefully through those issues, but we think the benefit of having control over your own digital footprint should be paramount."

Some users are happy with this news and think the feature can save them if they accidentally share something sensitive. A user commented on Hacker News, "As a person who accidentally posted sensitive info on chats, I welcome this feature. I do wish they implemented an indication "message deleted" in the chat to show that the editing took place."

A few others think this feature could cause major trouble. Another user commented, "The problem I see with this is that it adds the ability to alter history. You can effectively remove messages sent by the other person, altering context and changing the meaning of the conversation. You can also remove evidence of transactions, work or anything else. I get that this is supposed to be a benefit, but it's also a very significant issue, especially where business is concerned."

To know more, check out Telegram's blog post.

Messaging app Telegram's updated Privacy Policy is an open challenge
The Indian government proposes to censor social media content and monitor WhatsApp messages
Facebook hires top EEF lawyer and Facebook critic as Whatsapp privacy policy manager

A Bitwise study presented to the SEC reveals that 95% of CoinMarketCap’s BTC trading volume report is fake

Savia Lobo
25 Mar 2019
2 min read
A research report presented last week by Bitwise Asset Management revealed that 95% of the Bitcoin trading volume reported by CoinMarketCap.com is fake and artificially created by unregulated exchanges. Surprisingly, this fake data comes from CoinMarketCap.com, the most widely cited source for bitcoin volume and the one used by most major media outlets. CoinMarketCap hasn't yet responded to the findings.

"Despite its widespread use, the CoinMarketCap.com data is wrong. It includes a large amount of fake and/or non-economic trading volume, thereby giving a fundamentally mistaken impression of the true size and nature of the bitcoin market", the Bitwise report states. The report also claims that only 10 cryptocurrency exchanges have actual volume, including major names like Binance, Coinbase, Kraken, Gemini, and Bittrex.

https://twitter.com/BitwiseInvest/status/1109114656944209921

Following are the key takeaways of the report:

95% of reported BTC spot volume is fake; the likely motive is listing fees (which can be $1-3M)
Real daily spot volume is ~$270M
10 exchanges make up almost all of the real trading volume
The majority of the 10 are regulated
Spreads are <0.10%; arbitrage is super efficient

CoinMarketCap.com (CMC) originally reported a combined $6 billion in average daily trading volume. However, the 226-slide presentation by Bitwise to the U.S. Securities and Exchange Commission (SEC) revealed that only $273 million of CMC's reported BTC trading volume was legitimate. The report also has a detailed breakdown of all the exchanges that report more than $1 million in daily trading volumes on CoinMarketCap.

Matthew Hougan, the global head of Bitwise's research division, said, "People looked at cryptocurrency and said this market is a mess; that's because they were looking at data that was manipulated".

Bitwise also posted on its official Twitter account, "Arbitrage between the 10 real exchanges has improved significantly. The avg price deviation of any one exchange from the aggregate price is now less than 0.10%! Well below the arbitrage band considering exchange-level fees (0.10-0.30%) & hedging costs." (A small illustrative sketch of this price-deviation metric appears after the related links below.)

https://twitter.com/BitwiseInvest/status/1109114686635687936

To know more about this in detail, head over to the complete Bitwise report.

200+ Bitcoins stolen from Electrum wallet in an ongoing phishing attack
Can Cryptocurrency establish a new economic world order?
Crypto-cash is missing from the wallet of dead cryptocurrency entrepreneur Gerald Cotten – find it, and you could get $100,000
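To make the "price deviation" figure concrete, here is a small sketch of how the average deviation of each exchange's price from a volume-weighted aggregate price could be computed. It illustrates the kind of metric being described, not Bitwise's actual methodology, and the exchange names, prices, and volumes in it are made up.

```python
# Sketch: average deviation of each exchange's BTC price from a
# volume-weighted aggregate price. Illustrative only; the figures below
# are invented, not data from the Bitwise report.

# (exchange, last trade price in USD, 24h volume in BTC)
quotes = [
    ("ExchangeA", 3990.0, 12000),
    ("ExchangeB", 3995.5, 9000),
    ("ExchangeC", 3988.0, 6500),
]

total_volume = sum(volume for _, _, volume in quotes)
aggregate_price = sum(price * volume for _, price, volume in quotes) / total_volume

for name, price, _ in quotes:
    deviation_pct = abs(price - aggregate_price) / aggregate_price * 100
    print(f"{name}: {deviation_pct:.3f}% from aggregate")
```

Deviations consistently below 0.10% across the real exchanges are what the report reads as evidence of efficient arbitrage.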


Zoom, the video conferencing company files to go public, possibly a profitable IPO

Amrata Joshi
25 Mar 2019
3 min read
Filing an initial public offering (IPO) has been the trend for quite some time now. Major companies like Lyft, Uber, Spotify, and Airbnb have already filed IPOs, with Pinterest a recent addition to the list. Last week, Zoom, the video conferencing company, filed on the Nasdaq Stock Market to go public by next month.

Zoom appears to be on the brighter side with respect to its revenue: the company has raised around $145 million from investors and this year holds $330 million in revenue, continuing a year-on-year increase. Zoom more than doubled its revenue from 2017 to 2018, from $60.8 million to $151.5 million. Losses have also shrunk, from $14 million in 2017 to $8.2 million last year and just $7.5 million in the year ending January 2019.

Eric S. Yuan, the CEO and founder of Zoom, will hold 22% of the company's shares. Li Ka-shing, a Chinese billionaire, holds 6.1%, while current directors and executives own 36.7%. Emergence Capital, which owns a 12.5% pre-IPO stake, has been backing Zoom, according to the IPO filing. The investors involved in Zoom also include Sequoia Capital, holding an 11.4% pre-IPO stake, and Digital Mobile Venture, holding 9.8%.

Eric Yuan, who previously was vice president of engineering at Cisco, came to that company through WebEx, which was sold to Cisco for $3.2 billion in 2007. In a statement to TechCrunch last month, he said that he "would never sell another company again." This clearly indicates that he preferred taking Zoom public rather than selling it.

Some users appreciate the effort taken by Eric Yuan. A user commented on Hacker News, "The founder from Cisco is Eric Yuan. He's been working this thing for a long time and it's awesome to see them get to this." Another user commented, "I'm impressed by their financials. $330m total revenues and $7m profit. Rarely do we hear these days about a tech company IPO who's profitable"

Others think Zoom is not good software and that their experience with it has been bad. A comment reads, "The whole experience has been bad enough that I get actively annoyed at seeing giant Zoom ads plastered all over 101, on buses, at T5 at JFK, etc. and think about what those cost compared to allocating some engineering time to fixing really basic bugs. Maybe it's a cheaper solution than the other options, I don't really know. But if the decision was up to me, Zoom would be basically last on my list if any significant portion of the company was using Linux."

Though the figures suggest the company is close to profitability, it will be interesting to see whether it can stay on that path while going public. To know more about this news, check out Zoom's official statement.

Pinterest files new IPO framework and reports revenue of roughly $756 million in 2018
Slack confidentially files to go public
SUSE is now an independent company after being acquired by EQT for $2.5 billion


Pinterest files new IPO framework and reports revenue of roughly $756 million in 2018

Sugandha Lahoti
25 Mar 2019
5 min read
Pinterest filed a new IPO paperwork on Friday, in preparation for an initial public offering, expected in April. Pinterest will go public on NYSE with the ticker symbol $PINS. S-1 IPO framework offers a comprehensive look at Pinterest’s business including the revenue generated, salary’s given to CXOs and other details. Pinterest Financials Pinterest has consistently reported revenue growth and falling losses. The company, says it earned more than $750 million in revenue last year, and it’s cut its losses from nearly $200 million in 2016 down to just under $75 million annually. Pinterest says it was, in fact, profitable in the fourth quarter of 2018. Pinterest grew 58.2 percent from 2016 to 2017, and 60.0 percent from 2017 to 2018. In total, Pinterest has posted $1.525 billion in revenue since 2016. Source:  Page 69 of S-1 Pinterest counted 265 million monthly active users, bringing in some $700 million in ad revenue in 2018, per reports, a 50 percent increase year-over-year. Source:  Page 67 of S-1 The company’s global average revenue per user (ARPU) in the year ended December 31, 2018, was $3.14, up 25 percent YoY. Its U.S. ARPU, meanwhile, sat at $9.04, a 47 percent increase from the previous estimation. Source:  Page 70 of S-1 Pinterest closed calendar 2018 with nearly $628 million in cash and equivalents. The company’s operating cash flow improved from -$102.9 million in 2017 to -$60.4 million in 2018, and the firm went from negative investing cash flow to positive last year. In the S-1 filing, “the company says that it will be well-set for future growth after the offering, presuming that it doesn’t manage to reach positive operating cash flow in short order.” The S1-filing also showed the salaries earned by CXOs. Co-founder and CEO Ben Silbermann earned a salary of $197,100. However, CFO Todd Morgenfeld earned a base salary of $360,500 with stock awards worth $22,028,696. The company did not break down its stock ownership so it’s not clear what would be the salaries of the execs, once Pinterest goes public.  The company will offer two classes of stock. Class A shares will receive one vote per share, while Class B shares will receive 20 votes per share. Pinterest raised a total of almost $1.5 billion from its investors. The company’s biggest funding round was a $186 million Series G in May 2015, led by Goldman Sachs Investment Partners, SV Angel, and Wellington Management. The company went on to raised $150 million in June 2017 from Sinai Ventures. Associated Risk factors Pinterest said “eight out of 10 moms” are on its platform, adding that “are often the primary decision-makers when it comes to buying products and services for their household.” In its S-1 filing, Pinterest acknowledges this as a risk and notes that they would now have to penetrate new demographics. The company noted that its reliance on other third-party platforms and services could be a risk factor for the business. In 2018, Facebook changed its login authentication systems, which negatively impacted Pinterest’s user growth in the second quarter of 2018. “If Facebook or Google discontinue single sign-on or experience an outage, then we may lose and be unable to recover users previously using this function, and our user growth or engagement could decline” noted the company in the filing. Pinterest also acknowledged ad-blocking tools as one of its risk factors which may, in the future, harm its profitability. 
The filing also discloses executive salaries. Co-founder and CEO Ben Silbermann earned a salary of $197,100, while CFO Todd Morgenfeld earned a base salary of $360,500 plus stock awards worth $22,028,696. The company did not break down its stock ownership, so it is not clear what the executives' stakes will be worth once Pinterest goes public. The company will offer two classes of stock: Class A shares will carry one vote per share, while Class B shares will carry 20 votes per share.

Pinterest has raised a total of almost $1.5 billion from investors. Its biggest funding round was a $186 million Series G in May 2015, led by Goldman Sachs Investment Partners, SV Angel, and Wellington Management. The company went on to raise $150 million in June 2017 from Sinai Ventures.

Associated risk factors

Pinterest said "eight out of 10 moms" are on its platform, adding that they "are often the primary decision-makers when it comes to buying products and services for their household." In the S-1 filing, Pinterest acknowledges this concentration as a risk and notes that it will have to reach new demographics.

The company also flagged its reliance on third-party platforms and services. In 2018, Facebook changed its login authentication systems, which negatively impacted Pinterest's user growth in the second quarter of that year. "If Facebook or Google discontinue single sign-on or experience an outage, then we may lose and be unable to recover users previously using this function, and our user growth or engagement could decline," the company noted in the filing.

Pinterest further acknowledged ad-blocking tools as a risk factor that may harm its profitability in the future. "Existing ad blocking technologies that have not been effective on our service may become effective as we make certain product changes, and new ad blocking technologies may be developed," the company writes in the filing.

US and EU regulations covering privacy and sensitive content may also hold Pinterest accountable for failing to comply with possible content removal requirements in the future; the S-1 identifies this as another risk. Pinterest took steps to reduce the spread of misinformation about vaccines on its platform last month, when it suspended search results, including pins and boards, for terms related to vaccinations, whether in favor of or against them.

The company notes, "We are in the early stages of our monetization efforts and are still growing and scaling our revenue model. Our growth strategy depends on, among other things, attracting more advertisers (including serving more mid-market and unmanaged advertisers and expanding our sales efforts to reach advertisers in additional international markets), scaling our business with existing advertisers and expanding our advertising product offerings, such as self-serve tools. There is no assurance that this revenue model will continue to be successful or that we will generate increased revenue."

Pinterest's public offering joins a list of highly valued technology firms going public this year, including Lyft, Zoom and, soon, Uber and Slack. Here's what the Twitterati had to say.

https://twitter.com/alexeheath/status/1109190664439414784
https://twitter.com/GavinSBaker/status/1109838015764090882
https://twitter.com/DanaWollman/status/1109192413963304960
https://twitter.com/joeloskarr/status/1109553172400562176

Facebook and Google pressurized to work against 'Anti-Vaccine' trends after Pinterest blocks anti-vaccination content from its pinboards
Slack confidentially files to go public
SUSE is now an independent company after being acquired by EQT for $2.5 billion

Stanford University launches Institute of Human Centered Artificial Intelligence; receives public backlash for non-representative faculty makeup

Fatema Patrawala
22 Mar 2019
5 min read
On Monday, Stanford University launched the new Institute for Human-Centered Artificial Intelligence (HAI) to augment humanity with AI. The institute aims to study, guide and develop human-centered artificial intelligence technologies and applications, and to advance the goal of a better future for humanity through AI, according to the announcement. Its co-leaders are John Etchemendy, professor of philosophy and a former Stanford University provost, and Fei-Fei Li, a computer science professor and former Chief Scientist for Google Cloud AI and ML.

"So much of the discussion about AI is focused narrowly around engineering and algorithms... We need a broader discussion: something deeper, something linked to our collective future. And even more importantly, that broader discussion and mindset will bring us a much more human-centered technology to make life better for everyone," Li explains in a blog post.

The institute was launched at a symposium on campus. It will include faculty members from all seven schools at Stanford, including the School of Medicine, and will work closely with companies in a variety of sectors, including health care, and with organizations such as AI4All. "Its biggest role will be to reach out to the global AI community, including universities, companies, governments and civil society to help forecast and address issues that arise as this technology is rolled out," said Etchemendy in the announcement. "We do not believe we have answers to the many difficult questions raised by AI, but we are committed to convening the key stakeholders in an informed, fact-based quest to find those answers."

The symposium featured a star-studded speaker lineup that included industry titans Bill Gates, Reid Hoffman, Demis Hassabis, and Jeff Dean, as well as dozens of professors in fields as diverse as philosophy and neuroscience. Even California Governor Gavin Newsom made an appearance, giving the final keynote speech. The audience included former Secretaries of State Henry Kissinger and George Shultz, former Yahoo CEO Marissa Mayer, and Instagram co-founder Mike Krieger.

Any AI initiative that government, academia, and industry all jointly support is good news for the future of the tech field. HAI differs from many other AI efforts in that its goal is not to create AI rivaling humans in intelligence, but rather to find ways for AI to augment human capabilities and enhance human productivity and quality of life. If you missed the event, you can view a video recording here.

Institute aims to be representative of humanity but is criticized as exclusionary

While the institute's mission states that "the creators and designers of AI must be broadly representative of humanity," observers noticed that of the 121 faculty members listed on its website, not a single member of Stanford's new AI faculty is black.

https://twitter.com/chadloder/status/1108588849503109120

There were questions as to why so many of the most influential people in the Valley decided to align with this center and publicly support it, and why the center aims to raise $1 billion to further its efforts. What does the center offer such a powerful group of people?

https://twitter.com/annaeveryday/status/1108594937145114625

Once such comments were made on Twitter, the institute's website was quickly updated to include one previously unlisted faculty member, Juliana Bidadanure, an assistant professor of philosophy.
Bidadanure was not previously listed among the institute's staff, according to a version of the page preserved on the Internet Archive's Wayback Machine; she also spoke at the institute's opening event.

We live in an age where predictive policing is real and can disproportionately hit minority communities, and where hiring is increasingly handled by AI that can discriminate against women. Google's and Facebook's algorithms decide what information we see and which conspiracy theory YouTube serves up next, yet the algorithms making those decisions are closely guarded company secrets with global impact.

In Silicon Valley and the broader Bay Area, the conversation and the speakers have shifted. It is no longer a question of whether technology can discriminate. The questions now include who can be impacted, how we can fix it, and what we are even building in the first place. When a group of mostly white engineers gets together to build these systems, the impact on marginalized groups is particularly stark. Algorithms can reinforce racism in domains like housing and policing. Facebook recently announced that it has removed ad targeting based on protected classes such as race, ethnicity, sexual orientation, and religion.

Algorithmic bias mirrors what we see in the real world: artificial intelligence mirrors its developers and the data sets it is trained on. Where there used to be a popular mythology that algorithms were just technology's way of serving up objective knowledge, there is now a loud and increasingly global argument about just who is building the tech and what it is doing to the rest of us.

The stated goal of Stanford's new human-AI institute is admirable. But to get to a group that is truly "broadly representative of humanity," they've got miles to go.

Facebook and Microsoft announce Open Rack V3 to address the power demands from artificial intelligence and networking
So, you want to learn artificial intelligence. Here's how you do it.
What can happen when artificial intelligence decides on your loan request

Microsoft announces: Microsoft Defender ATP for Mac, a fully automated DNA data storage, and revived office assistant Clippy

Natasha Mathur
22 Mar 2019
4 min read
Microsoft made a series of new announcements earlier this week. These include Microsoft Defender ATP for Mac, the first fully automated DNA data storage system, and the revived Microsoft Office Assistant, Clippy.

Microsoft Defender ATP for Mac

The Microsoft team announced yesterday that it is expanding the reach of the core components of its security platform (including the new Threat & Vulnerability Management) to Mac devices. The unified endpoint security platform has also been renamed from Windows Defender ATP to Microsoft Defender ATP (Advanced Threat Protection), reflecting its new cross-platform nature. "We've been working closely with industry partners to enable Windows Defender Advanced Threat Protection (ATP) customers to protect their non-Windows devices while keeping a centralized 'single pane of glass' experience," states the Microsoft team.

Users can install the Microsoft Defender ATP client on devices running macOS Mojave, macOS High Sierra, or macOS Sierra to manage and protect them. The app offers next-gen anti-malware protection and lets users review and configure that protection, including advanced settings such as enabling or disabling real-time protection, cloud-delivered protection, and automatic sample submission. Devices with alerts and detections will also surface in the Microsoft Defender ATP portal, where security analysts and admins can review them. Beyond that, Microsoft plans to add Microsoft Intune support in the future, which would let users configure and deploy the settings via alternative Mac and MDM management tools such as JAMF.

Fully automated DNA data storage system

Microsoft announced the first fully automated DNA data storage system yesterday. The system allows data to be stored in, and retrieved from, manufactured DNA. The move is aimed at taking DNA storage out of the research lab and into commercial data centers, says the Microsoft team. The team (Microsoft researchers and the University of Washington) successfully encoded the word "hello" in snippets of fabricated DNA and then converted it back to digital data with the help of a fully automated end-to-end system.

The system uses software developed by the Microsoft and UW team to convert the ones and zeros of digital data into the As, Ts, Cs, and Gs that are the building blocks of DNA. It then uses inexpensive, off-the-shelf lab equipment to flow the necessary liquids and chemicals into a synthesizer, which builds the manufactured snippets of DNA and pushes them into a storage vessel. To retrieve the information, the system adds other chemicals to properly prepare the DNA and uses microfluidic pumps to push the liquids into other parts of the system, which then "read" the DNA sequences and convert them back into information a computer can understand. According to the researchers, "the goal of the project was not to prove how fast or inexpensively the system could work, but simply to demonstrate that automation is possible." A toy sketch of the bit-to-base conversion idea is shown below.
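For illustration only, here is a minimal Python sketch of the two-bits-per-base idea described above. The mapping and helper names are my own assumptions for the sake of the example; Microsoft's actual system also adds addressing information and error correction that this toy version omits.

```python
# Toy bit-to-base mapping: 2 bits per DNA base (not Microsoft's real codec).
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {v: k for k, v in BITS_TO_BASE.items()}

def encode(text: str) -> str:
    """Convert a string's bytes into a DNA base sequence, 2 bits per base."""
    bits = "".join(f"{byte:08b}" for byte in text.encode("utf-8"))
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(sequence: str) -> str:
    """Convert a DNA base sequence back into the original string."""
    bits = "".join(BASE_TO_BITS[base] for base in sequence)
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    return data.decode("utf-8")

dna = encode("hello")          # 'CGGACGCCCGTACGTACGTT' under this toy mapping
assert decode(dna) == "hello"  # round-trips back to the stored word
```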
Revived Office Assistant Clippy

Microsoft revived its 90s Office Assistant, Clippy, earlier this week on Tuesday. The Microsoft Office team brought Clippy back as an app offering animated Clippy stickers in chats in Microsoft Teams, the company's group chat software. The stickers were also released on Microsoft's official Office developer GitHub page, allowing all Microsoft Teams users to import and use them for free.

However, Clippy was removed again the next day. The "brand police" within Microsoft were not happy with Clippy's reappearance in Microsoft Teams, reports The Verge, and the associated GitHub project has also been removed. Clippy fans, however, are not happy with the company's decision and have started a thread requesting that Microsoft bring Clippy back to Microsoft Teams.

Microsoft brings PostgreSQL extension and SQL Notebooks functionality to Azure Data Studio
Microsoft open-sources Project Zipline, its data compression algorithm and hardware for the cloud
Microsoft announces Game stack with Xbox Live integration to Android and iOS