
How-To Tutorials - Data

1204 Articles

Facebook's outgoing Head of communications and policy takes blame for hiring PR firm ‘Definers’ and reveals more

Melisha Dsouza
22 Nov 2018
4 min read
On 4th November, the New York Times published a scathing report on Facebook that threw the tech giant under scrutiny for its leadership ethics. The report pointed out how Facebook has been following a strategy of 'delaying, denying and deflecting' blame for the controversies surrounding it. One of the recent scandals it was involved in was hiring a PR firm called Definers, which did opposition research and shared content that criticized Facebook's rivals Google and Apple, diverting focus from the impact of Russian interference on Facebook. The firm also pushed the idea that liberal financier George Soros was behind a growing anti-Facebook movement.

Now, in a memo sent to Facebook employees and obtained by TechCrunch, Elliot Schrage, Facebook's outgoing Head of Communications and Policy, takes the blame for hiring Definers. Schrage, who announced in June, after the Cambridge Analytica scandal, that he was leaving, admitted that his team asked Definers to push negative narratives about Facebook's competitors. He also stated that Facebook asked Definers to conduct research on George Soros. His argument was that after Soros attacked Facebook in a speech at Davos, calling the company a "menace to society", they wanted to determine whether he had any financial motivation. According to the TechCrunch report, Schrage denied that the company asked the PR firm to create or distribute fake news.

"I knew and approved of the decision to hire Definers and similar firms. I should have known of the decision to expand their mandate," Schrage said in the memo. He further stressed that he was disappointed that so much of the company's internal discussion has become public. According to the memo, "This is a serious threat to our culture and ability to work together in difficult times."

Deflecting additional finger-pointing from Mark and Sheryl, Schrage added, "Over the past decade, I built a management system that relies on the teams to escalate issues if they are uncomfortable about any project, the value it will provide or the risks that it creates. That system failed here and I'm sorry I let you all down. I regret my own failure here."

In a follow-up note to the memo, Sheryl Sandberg (COO, Facebook) also shared accountability for hiring Definers: "I want to be clear that I oversee our Comms team and take full responsibility for their work and the PR firms who work with us." Conveniently enough, this memo comes after the announcement that Schrage is stepping down from his post at Facebook. His replacement, Facebook's new head of global policy and former U.K. Deputy Prime Minister Nick Clegg, will now be reviewing the company's work with all political consultants.

The entire scandal has drawn harsh criticism from media figures like Kara Swisher and academics like Scott Galloway. On an episode of Pivot with Kara Swisher and Scott Galloway, Swisher commented that "Sheryl Sandberg ... really comes off the worst in this story, although I still cannot stand the ability of people to pretend that this is not all Mark Zuckerberg's responsibility." She followed up with a jarring remark: "He is the CEO. He has 60 percent. He's an adult, and they're treating him like this sort of adult boy king who doesn't know what's going on. It's ridiculous. He knows exactly what's going on."

Galloway added that since Sandberg had "written eloquently on personal loss and the important discussion around gender equality", these accomplishments gave her "unfair" protection, and that it might also be true that she will be "unfairly punished." He raised questions about both Mark's and Sheryl's leadership, saying, "Can you think of any individuals who have made so much money doing so much damage? I mean, they make tobacco executives look like Mister Rogers." On 19th November, he tweeted a detailed theory on why Sandberg is still a part of Facebook: because "The Zuck can't be (fired)" and nobody wants to be the board that "fires the woman".
https://twitter.com/profgalloway/status/1064559077819326464

Here's another recent tweet thread from Scott, a sarcastic take on what a "Big Tech" company actually is:
https://twitter.com/profgalloway/status/1065315074259202048

Head over to CNBC to know more about this news.

What is Facebook hiding? New York Times reveals Facebook's insidious crisis management strategy
NYT Facebook exposé fallout: Board defends Zuckerberg and Sandberg; Media call and transparency report Highlights
BuzzFeed Report: Google's sexual misconduct policy "does not apply retroactively to claims already compelled to arbitration"


OpenCV 4.0 releases with experimental Vulkan, G-API module and QR-code detector among others

Natasha Mathur
21 Nov 2018
2 min read
Two months after the OpenCV team announced the alpha release of OpenCV 4.0, the final version 4.0 of OpenCV is here. OpenCV 4.0 was announced last week and is now available as a C++11 library that requires a C++11-compliant compiler. This new release brings features such as a G-API module, a QR code detector, performance improvements, and DNN improvements, among others.

OpenCV is an open source library of programming functions mainly aimed at real-time computer vision. OpenCV is cross-platform and free for use under the open-source BSD license. Let's have a look at what's new in OpenCV 4.0.

New features

G-API: OpenCV 4.0 comes with a completely new module, opencv_gapi. G-API is an engine for very efficient image processing, based on lazy evaluation and on-the-fly construction of the processing graph.

QR code detector and decoder: OpenCV 4.0 includes a QR code detector and decoder, added to the opencv/objdetect module along with a live sample. The decoder is currently built on top of the Quirc library.

Kinect Fusion algorithm: The popular Kinect Fusion algorithm has been implemented, optimized for CPU and GPU (OpenCL), and integrated into the opencv_contrib/rgbd module. Kinect 2 support has also been updated in the opencv/videoio module to make the live samples work.

DNN improvements

Support has been added for the Mask-RCNN model, and a new integrated ONNX parser has been added. Support was also added for popular networks such as the YOLO object detection network. The performance of the DNN module in OpenCV 4.0 improves when built with Intel DLDT support, by utilizing more layers from DLDT. OpenCV 4.0 also comes with an experimental Vulkan backend for platforms where OpenCL is not available.

Performance improvements

In OpenCV 4.0, hundreds of basic kernels have been rewritten using "wide universal intrinsics". Wide universal intrinsics map to SSE2, SSE4, AVX2, NEON or VSX intrinsics, depending on the target platform and the compile flags. This leads to better performance, even for already optimized functions. Support has also been added for IPP 2019 via the IPPICV component upgrade.

For more information, check out the official release notes.

Image filtering techniques in OpenCV
3 ways to deploy a QT and OpenCV application
OpenCV and Android: Making Your Apps See


The US Department of Commerce wants to regulate export of AI and related products

Prasad Ramesh
21 Nov 2018
4 min read
This Monday, the Department of Commerce's Bureau of Industry and Security (BIS) published a proposal to control the export of AI from the US. The move leans towards restricting AI tech from going out of the country to protect US national security.

The areas that come under the licensing proposal

Artificial intelligence, as we've seen in recent years, has great potential for both good and harm, and the DoC is not taking any chances with it. The proposal lists many areas of AI that could potentially require a license to be exported to certain countries. Besides computer vision and natural language processing, military-specific products like adaptive camouflage and faceprint surveillance are also listed. The major areas listed in the proposal are:

Biotechnology, including genomic and genetic engineering
Artificial intelligence (AI) and machine learning, including neural networks, computer vision, and natural language processing
Position, Navigation, and Timing (PNT) technology
Microprocessor technology, like stacked memory on chip
Advanced computing technology, like memory-centric logic
Data analytics technology, like visualization and analysis algorithms
Quantum information and sensing technology, like quantum computing, encryption, and sensing
Logistics technology, like mobile electric power
Additive manufacturing, like 3D printing
Robotics, like micro drones and molecular robotics
Brain-computer interfaces, like mind-machine interfaces
Hypersonics, like flight control algorithms
Advanced materials, like adaptive camouflage
Advanced surveillance technologies, like faceprint and voiceprint technologies

David Edelman, a former adviser to ex-US president Barack Obama, said: "This is intended to be a shot across the bow, directed specifically at Beijing, in an attempt to flex their muscles on just how broad these restrictions could be."

Countries that could be affected by regulation on the export of AI

To determine the level of export controls, the department will consider the potential end-uses and end-users of the technology. The list of countries is not clear, but countries to which exports are already restricted, like embargoed countries, will be considered; China could be one of them.

What does this mean for companies?

If your organization creates products in 'emerging technologies', there will be restrictions on the countries you can export to, and also on disclosure of the technology to foreign nationals in the United States. Depending on the criteria, non-US citizens might even need licenses to participate in research and development of such technology. This would prevent non-US citizens from participating in, and taking anything back from, say, an advanced AI research project. If the new regulations go into effect, they will affect the security review of foreign investments across these areas. When the list of technologies is finalized, many types of foreign investments will be subject to review, and deals could be halted or undone.

Public views on academic research

In addition to commercial applications and products, this regulation could also be bad news for academic research.
https://twitter.com/jordanbharrod/status/1065047269282627584
https://twitter.com/BryanAlexander/status/1064941028795400193
Even Google Home, Amazon Alexa, and iRobot Roomba could be affected.
https://twitter.com/R_D/status/1064511113956655105
But it does not look like research papers will really be affected. The document states that Commerce does not intend to expand jurisdiction over 'fundamental research' for 'emerging technologies' that is intended to be published and is not currently subject to EAR as per § 734.8. But will this affect open-source technologies? We really hope not.

Deadline for comments is less than 30 days away

BIS has invited comments on the proposal for defining and categorizing emerging technologies, and on the impact of the controls on US technology leadership, among other topics. However, the short deadline of December 19, 2018 indicates their haste to implement export licensing for AI quickly. For more details, and to know where you can submit your comments, read the proposal.

The US Air Force lays groundwork towards artificial general intelligence based on hierarchical model of intelligence
Google open sources BERT, an NLP pre-training technique
Teaching AI ethics – Trick or Treat?


#GoogleWalkout demanded a ‘truly equitable culture for everyone’; Pichai shares a “comprehensive” plan for employees to safely report sexual harassment

Melisha Dsouza
09 Nov 2018
4 min read
Last week, 20,000 Google employees, along with temps, vendors, and contractors, walked out to protest the discrimination, racism, and sexual harassment that they encountered at Google's workplace. This global walkout by Google workers was a response to the New York Times report published last month on Google shielding senior executives accused of sexual misconduct. Yesterday, Google addressed these demands in a note written by Sundar Pichai to employees. He admits that Google has "not always gotten everything right in the past" and is "sincerely sorry" for the same. This supposedly 'comprehensive' plan will provide more transparency into how employees raise concerns and how Google will handle them.

Here are some of the major changes that caught our attention:

Following suit after Uber and Microsoft, Google has eliminated forced arbitration in cases of sexual harassment.
To make reporting a sexual harassment case more transparent, employees can now be accompanied by support persons to meetings with HR.
Google is planning to update and expand its mandatory sexual harassment training, which will now be conducted annually instead of once every two years. Employees who fail to complete the training will receive a one-rating dock in the performance review system. This applies to senior management as well, who could be downgraded from 'exceeds expectation' to 'meets expectation'.
Google will increase its focus on diversity, equity and inclusion in 2019, through hiring, progression and retention, in order to create a more inclusive culture for everyone.
Google found that one of the most common factors among the harassment complaints was that the perpetrator had been under the influence of alcohol (~20% of cases). Restating the policy, the plan mentions that excessive consumption of alcohol is not permitted when an employee is at work, performing Google business, or attending a Google-related event, whether onsite or offsite. Going forward, all leaders at the company will be expected to create teams, events, offsites and environments in which excessive alcohol consumption is strongly discouraged, and to follow the two-drink rule.

Although the plan is a step towards making workplace conditions stable, it leaves out some of the more inherent concerns related to structural changes, as stated by the organizers of the Google walkout. For example, the structural inequity that separates 'full time' employees from contract workers. Contract workers make up more than half of Google's workforce and perform essential roles across the company, yet they receive few of the benefits associated with tech company employment. They are also largely women, people of color, immigrants, and people from working-class backgrounds.

"We demand a truly equitable culture, and Google leadership can achieve this by putting employee representation on the board and giving full rights and protections to contract workers, our most vulnerable workers, many of whom are Black and Brown women." -Google Walkout organizer Stephanie Parker

Google's plan to bring transparency to the workplace looks like a positive step towards improving its workplace culture. It will be interesting to see how the plan works out for Google's employees, as well as for other organizations using it as an example to maintain a peaceful workplace environment for their workers. You can head over to Medium.com to read the #GoogleWalkout organizers' response to the update. Head over to Pichai's blog post for details on the announcement itself.

Technical and hidden debts in machine learning – Google engineers give their perspective
90% Google Play apps contain third-party trackers, share user data with Alphabet, Facebook, Twitter, etc: Oxford University Study
OK Google, why are you ok with mut(at)ing your ethos for Project DragonFly?


UN on Web Summit 2018: How we can create a safe and beneficial digital future for all

Bhagyashree R
07 Nov 2018
4 min read
On Monday, at the opening ceremony of Web Summit 2018, Antonio Guterres, the secretary-general of the United Nations (UN), spoke about the benefits and challenges that come with cutting-edge technologies. Guterres highlighted that the pace of change is so quick that trends such as blockchain, IoT, and artificial intelligence can move from the cutting edge to the mainstream in no time.

Guterres was quick to pay tribute to technological innovation, detailing some of the ways it is helping UN organizations improve the lives of people all over the world. For example, UNICEF is now able to map internet connectivity for schools in remote areas, and the World Food Programme is using blockchain to make transactions more secure, efficient and transparent. But these innovations nevertheless pose risks and create new challenges that we need to overcome.

Three key technological challenges the UN wants to tackle

Guterres identified three key challenges for the planet. Together they help inform a broader plan of what needs to be done.

The social impact of the third and fourth industrial revolutions

With the introduction of new technologies, in the next few decades we will see the creation of thousands of new jobs. These will be very different from what we are used to today, and will likely require retraining and upskilling; this will be critical as many traditional jobs are automated. Guterres believes that the unemployment caused by automation could be incredibly disruptive, maybe even destructive, for societies, and added that we are not preparing fast enough to match the speed of these growing technologies. As a solution, Guterres said: "We will need to make massive investments in education but a different sort of education. What matters now is not to learn things but learn how to learn things." While many professionals will be able to acquire the skills to remain employable in the future, some will inevitably be left behind. To minimize the impact of these changes, safety nets will be essential to help millions of citizens transition into this new world, and bring new meaning and purpose into their lives.

Misuse of the internet

The internet has connected the world in ways people wouldn't have thought possible a generation ago. But it has also opened up a whole new channel for hate speech, fake news, censorship and control. The internet certainly isn't creating many of the challenges facing civic society on its own, but it won't be able to solve them on its own either. On this, Guterres said: "We need to mobilise the government, civil society, academia, scientists in order to be able to avoid the digital manipulation of elections, for instance, and create some filters that are able to block hate speech to move and to be a factor of the instability of societies."

The problem of control

Automation and AI pose risks that exceed the challenges of the third and fourth industrial revolutions. They also create urgent ethical dilemmas, forcing us to ask exactly what artificial intelligence should be used for. Smarter weapons might be a good idea if you're an arms manufacturer, but there needs to be a wider debate that takes in broader concerns and issues. "The weaponization of artificial intelligence is a serious danger and the prospects of machines that have the capacity by themselves to select and destroy targets is creating enormous difficulties or will create enormous difficulties," Guterres remarked. His solution might seem radical but it's also simple: ban them. He went on to explain: "To avoid the escalation in conflict and guarantee that international military laws and human rights are respected in the battlefields, machines that have the power and the discretion to take human lives are politically unacceptable, are morally repugnant and should be banned by international law."

How we can address these problems

Typical forms of regulation can help to a certain extent, as in the case of weaponization, but such cases are limited. In most circumstances technologies move so fast that legislation simply cannot keep up in any meaningful way. This is why we need to create platforms where governments, companies, academia, and civil society can come together to discuss and find ways that allow digital technologies to be "a force for good". You can watch Antonio Guterres' full talk on YouTube.

Tim Berners-Lee is on a mission to save the web he invented
MEPs pass a resolution to ban "Killer robots"
In 5 years, machines will do half of our job tasks of today; 1 in 2 employees need reskilling/upskilling now – World Economic Forum survey


Technical and hidden debts in machine learning - Google engineers give their perspective

Prasad Ramesh
06 Nov 2018
6 min read
In a paper, Google engineers have pointed out the various costs of maintaining a machine learning system. The paper, Hidden Technical Debt in Machine Learning Systems, talks about technical debt and other ML-specific debts that are hidden or hard to detect. They found that it is common to incur massive maintenance costs in real-world machine learning systems, and looked at several ML-specific risk factors to account for in system design. These factors include boundary erosion, entanglement, hidden feedback loops, undeclared consumers, data dependencies, configuration issues, changes in the external world, and a number of system-level anti-patterns.

Boundary erosion in complex models

In traditional software engineering, setting strict abstraction boundaries helps enforce logical consistency between the inputs and outputs of a given component. It is difficult to set these boundaries in machine learning systems. Yet machine learning is needed precisely in areas where the desired behavior cannot be effectively expressed with traditional software logic without depending on data. This results in boundary erosion in a couple of areas.

Entanglement: Machine learning systems mix signals together, entangling them and making isolated improvements impossible. A change to one input can change the influence of all the other inputs, so no improvement is truly isolated. This is referred to as the CACE principle: Change Anything Changes Everything. There are two possible ways to mitigate this. One is to isolate models and serve ensembles, useful in situations where the sub-problems decompose naturally; in many cases ensembles work well because the errors in the component models are not correlated, but relying on this combination creates a strong entanglement of its own, and improving an individual model may make the overall system less accurate. Another strategy is to focus on detecting changes in prediction behavior as they occur.

Correction cascades: There are cases where a problem is only slightly different from another which already has a solution. It can be tempting to reuse the existing model, learning a small correction as a fast way to solve the newer problem. But the correction model creates a new system dependency on the original model, making it significantly more expensive to analyze improvements to the models in the future. The cost increases when correction models are cascaded, and a correction cascade can create an improvement deadlock.

Visibility debt caused by undeclared consumers

Often a model is made widely accessible and is later consumed by other systems. Without access controls, these consumers may be undeclared, silently using the output of a given model as an input to another system. These issues are referred to as visibility debt. Undeclared consumers may also create hidden feedback loops.

Data dependencies cost more than code dependencies

Data dependencies carry a similar capacity for building debt as code dependencies, but are more difficult to detect. Without proper tooling to identify them, data dependencies can form large chains that are difficult to untangle. They come in two types.

Unstable data dependencies: To move quickly, it is often convenient to consume signals from other systems as input to your own. But some input signals are unstable: they can qualitatively or quantitatively change behavior over time, either implicitly as the other system updates, or explicitly. A mitigation strategy is to create versioned copies of the signals.

Underutilized data dependencies: These are input signals that provide little incremental modeling benefit, yet make an ML system vulnerable to change where it is not necessary. Underutilized data dependencies can enter a model in several ways: via legacy, bundled, epsilon, or correlated features.

Feedback loops

Live ML systems often end up influencing their own behavior as they are updated over time, which leads to analysis debt: it is difficult to predict the behavior of a given model before it is released. These feedback loops are hard to detect and address if they occur gradually over time, as may happen when the model is not updated frequently. In a direct feedback loop, a model directly influences the selection of its own future training data. In a hidden feedback loop, two systems influence each other indirectly.

Machine learning system anti-patterns

It is common for systems that incorporate machine learning methods to end up with high-debt design patterns:

Glue code: Using generic packages results in a glue code system design pattern, in which a massive amount of supporting code is written to get data into and out of general-purpose packages.
Pipeline jungles: Pipeline jungles often appear in data preparation as a special case of glue code. They can evolve organically as new sources are added, and the result can become a jungle of scrapes, joins, and sampling steps.
Dead experimental codepaths: Glue code commonly becomes increasingly attractive in the short term because none of the surrounding structures need to be reworked. Over time, these accumulated codepaths create a growing debt due to the increasing difficulty of maintaining backward compatibility.
Abstraction debt: There is a lack of support for strong abstractions in ML systems.
Common smells: A smell may indicate an underlying problem in a component or system. These can be data smells, multiple-language smells, or prototype smells.

Configuration debt

Debt can also accumulate when configuring a machine learning system. A large system has a wide range of configurations with respect to features, data selection, verification methods and so on. It is common for configuration to be treated as an afterthought. In a mature system, the number of config lines can exceed the number of code lines, and each configuration line has potential for mistakes.

Dealing with external world changes

ML systems interact directly with the external world, and the external world is rarely stable. Some measures that can be taken to deal with this instability are:

Fixing thresholds in dynamic systems: It is necessary to pick a decision threshold for a given model to perform some action: to predict true or false, to mark an email as spam or not spam, to show or not show a given advertisement.
Monitoring and testing: Unit testing and end-to-end testing cannot ensure the complete proper functioning of an ML system. For long-term system reliability, comprehensive live monitoring and automated response are critical. That raises the question of what to monitor; the authors point out three starting points: prediction bias, limits for actions, and upstream producers.

Other related areas in ML debt

In addition to the areas mentioned, an ML system may also face debt from other areas, including data testing debt, reproducibility debt, process management debt, and cultural debt.

Conclusion

Moving quickly often introduces technical debt. The most important insight from the paper, according to the authors, is that technical debt is an issue both engineers and researchers need to be aware of. Paying down machine-learning-related technical debt requires commitment, which can often only be achieved by a shift in team culture. Prioritizing and rewarding this effort is important for the long-term health of successful machine learning teams. For more details, you can read the paper on the NIPS website.

Uses of Machine Learning in Gaming
Julia for machine learning. Will the new language pick up pace?
Machine learning APIs for Google Cloud Platform
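The prediction-bias monitoring idea above can be sketched in a few lines of Python. This is a toy illustration, not code from the paper; the function name, the sample data, and the 5% tolerance are all illustrative assumptions:

```python
import numpy as np

def prediction_bias(predictions, labels, tolerance=0.05):
    """Compare the mean of model scores with the mean of observed labels.

    A persistent gap between the two distributions can signal an upstream
    data problem worth alerting on. The tolerance is an arbitrary choice.
    """
    bias = float(abs(np.mean(predictions) - np.mean(labels)))
    return bias, bias > tolerance

# Toy data: predicted click probabilities vs. observed click outcomes.
preds = np.array([0.9, 0.8, 0.85, 0.95])
obs = np.array([1, 1, 0, 1])
bias, alert = prediction_bias(preds, obs)
print(f"bias={bias:.3f}, alert={alert}")  # bias=0.125, alert=True
```

In a live system this comparison would run continuously over sliding windows of traffic, with the alert wired into an automated response rather than a print statement.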

Facebook's CEO, Mark Zuckerberg summoned for hearing by UK and Canadian Houses of Commons

Bhagyashree R
01 Nov 2018
2 min read
Yesterday, the chairs of the UK and Canadian Houses of Commons issued a letter calling for Mark Zuckerberg, Facebook’s CEO to appear before them. The primary aim of this hearing is to get a clear idea of what measures Facebook is taking to avoid the spreading of disinformation on the social media platform and to protect user data. It is scheduled to happen at the Westminster Parliament on Tuesday 27th November. The committee has already gathered evidence regarding several data breaches and process failures including the Cambridge Analytica scandal and is now seeking answers from Mark Zuckerberg on what led to all of these incidents. Mark last attended a hearing in April with the Senate's Commerce and Judiciary committees this year in which he was asked about the company’s failure to protect its user data, its perceived bias against conservative speech, and its use for selling illegal material like drugs. After which he has not attended any of the hearings and instead sent other senior representatives such as Sheryl Sandberg, COO at Facebook. The letter pointed out: “You have chosen instead to send less senior representatives, and have not yourself appeared, despite having taken up invitations from the US Congress and Senate, and the European Parliament.” Throughout this year we saw major security and data breaches involving Facebook. The social media platform faced a security issue last month which impacted almost 50 million user accounts. Its engineering team discovered that hackers were able to find a way to exploit a series of bugs related to the View As Facebook feature. Earlier this year, Facebook witnessed a backlash for the Facebook-Cambridge Analytica data scandal. It was a major political scandal about Cambridge Analytica using personal data of millions of Facebook users for political purposes without their permission. The reports of this hearing will be shared in December if at all Zuckerberg agrees to attend it. 
The committee has requested his response by 7th November. Read the full letter issued by the committee.

Facebook is at it again. This time with Candidate Info where politicians can pitch on camera
Facebook finds ‘no evidence that hackers accessed third party Apps via user logins’, from last week’s security breach
How far will Facebook go to fix what it broke: Democracy, Trust, Reality


Google employees ‘Walkout for Real Change’ today. These are their demands.

Natasha Mathur
01 Nov 2018
5 min read
More than 1,500 Google employees around the world are planning to walk out of their respective Google offices today to protest against Google's handling of sexual misconduct within the workplace, according to the New York Times. This is part of the “women's walkout” organized earlier this week by more than 200 Google engineers, in response to Google's handling of sexual misconduct in the recent past, which employees found inadequate.

The planning for the walkout began last Friday, when Claire Stapleton, a product marketing manager at Google's YouTube, created an internal mailing list to organize it, according to the New York Times. More than 200 employees had joined in over the weekend, a number that has since grown to more than 1,500. The organizers took to Twitter yesterday to lay out five demands for change within the workplace. The protest has already started at Google's Tokyo and Singapore offices. Google employees and contractors across the globe will be leaving work at 11:10 AM in their respective time zones.

Here are some glimpses from the walkout:
https://twitter.com/GoogleWalkout/status/1058199862502612993
https://twitter.com/EmmaThomson2/status/1058180157804994562
https://twitter.com/GoogleWalkout/status/1058018104930897920
https://twitter.com/GoogleWalkout/status/1058010748444700672
https://twitter.com/GoogleWalkout/status/1058003099581853697

The demands laid out by the Google employees are as follows:

An end to forced arbitration in cases of harassment and discrimination for all current and future employees. This means that Google should no longer require people to waive their right to sue. In fact, every employee should be given the right to bring a co-worker, representative, or supporter of their choice when meeting with HR to file a harassment claim.

A commitment to end pay and opportunity inequity. This includes making sure that there are women of color at all levels of the organization.
There should also be transparent data on the gender, race, and ethnicity compensation gap, across both level and years of industry experience. The methods and techniques used to aggregate such data should also be transparent.

A publicly disclosed sexual harassment transparency report. This includes the number of harassment claims at Google over time, the types of claims submitted, how many victims and accused have left Google, and details about exit packages and their worth.

A clear, uniform, and globally inclusive process for reporting sexual misconduct safely and anonymously. This is because the current process in place is not working: HR's performance is assessed by senior management and directors, which forces HR to put management's interests ahead of the employees who report harassment and discrimination. Accountability, safety, and the ability to report unsafe working conditions should not be dictated by employment status.

Elevation of the Chief Diversity Officer to answer directly to the CEO and make recommendations directly to the Board of Directors, and the appointment of an employee representative to the Board.

The frustration among Google employees surfaced after the New York Times report brought to light shocking allegations of sexual misconduct at Google against Andy Rubin, the creator of Android. As per the report, Rubin was accused of misbehavior in 2014 and the allegations were confirmed by Google. He was asked to leave by former Google CEO Mr. Page, but what's discreditable is the fact that Google paid him a $90 million exit package. Moreover, he also received a high-profile, well-respected farewell from Google in October 2014.
There's also the fact that senior executives such as Drummond, Alphabet's Chief Legal Officer, who was mentioned in the NY Times report for indulging in “inappropriate relationships” within the organization, continues to work in a highly placed position at Google and hasn't faced any real punitive action for his past behavior. “We don't want to feel that we're unequal or we're not respected anymore. Google's famous for its culture. But in reality, we're not even meeting the basics of respect, justice, and fairness for every single person here”, Stapleton told the NY Times.

Google CEO Sundar Pichai had sent an email to all Google employees last Thursday, clarifying that the company has fired 48 people over the last two years for sexual harassment, of whom 13 were “senior managers and above”. He also mentioned that none of them received any exit packages. Pichai further apologized in an email obtained by Axios this Tuesday, saying that the “apology at TGIF didn't come through, and it wasn't enough”. Pichai also mentioned that he supports the engineers at Google who have organized the walkout. “I am taking in all your feedback so we can turn these ideas into action. We will have more to share soon. In the meantime, Eileen will make sure managers are aware of the activities planned for Thursday and that you have the support you need”, wrote Pichai.

The very same day, news came to light that Richard DeVaul, a director at X, a unit of Alphabet (Google's parent company), whose name was also mentioned in the New York Times report, had resigned from the company. DeVaul had been accused of sexually harassing Star Simpson, a hardware engineer. DeVaul did not receive any exit package on his resignation.
Public response to the walkout has been largely positive:
https://twitter.com/lizthegrey/status/1057859226100355072
https://twitter.com/amrtgaber/status/1057822987527761920
https://twitter.com/sparker2/status/1057846019122069508
https://twitter.com/LisaIronTongue/status/1057852658948595712

Ex-googler who quit Google on moral grounds writes to Senate about company’s “Unethical” China censorship plan
OK Google, why are you ok with mut(at)ing your ethos for Project DragonFly?
Google takes steps towards better security, introduces new API policies for 3rd parties and a Titan Security system for mobile devices


‘We are not going to withdraw from the future’ says Microsoft’s Brad Smith on the ongoing JEDI bid, Amazon concurs

Prasad Ramesh
29 Oct 2018
5 min read
The Pentagon has been trying to get hold of AI and related technologies from tech giants. Google employees quit over it, and Microsoft employees asked their company to withdraw from the JEDI project. Last Friday, Microsoft President Brad Smith wrote about Microsoft and the US military and the company's visions in this area.

Amazon, Microsoft, IBM, and Oracle are the companies that have bid for the Joint Enterprise Defense Infrastructure (JEDI) project. JEDI is a department-wide cloud computing infrastructure that will give the Pentagon access to weapons systems enhanced with artificial intelligence and cloud computing.

Microsoft believes in defending the USA

“We are not going to withdraw from the future, in the most positive way possible, we are going to work to help shape it,” said Brad Smith, President at Microsoft, indicating that Microsoft intends to provide its technology to the Pentagon. Microsoft did not shy away from bidding on the Pentagon's JEDI project. This is in contrast to Google, which opted out of the same program earlier this month citing ethical concerns. Smith expressed Microsoft's intent to provide AI and related technologies to the US defense department, saying, “we want the people who defend USA to have access to the nation's best technology, including from Microsoft”.

Smith stated that Microsoft's work in this area is based on three convictions:

Microsoft believes in the strong defense of the USA and wants the defenders to have access to the nation's best technology, including from Microsoft.
It wants to use its ‘knowledge and voice’ to address ethical AI issues via the nation's ‘civic and democratic processes’.
It gives its employees the option to opt out of work on these projects, given that, as a global company, it employs people from many different countries.

Smith shared that Microsoft has had a long-standing history with the US Department of Defense (DOD).
Their tech has been used throughout the US military, from the front office to field operations, including bases, ships, aircraft, and training facilities.

Amazon shares Microsoft's visions

Amazon shares these visions with Microsoft in empowering US law enforcement and defense institutions with the latest technology. Amazon already provides cloud services to power the Central Intelligence Agency (CIA). Amazon CEO Jeff Bezos said: “If big tech companies are going to turn their back on the Department of Defense, this country is going to be in trouble.” Amazon also provides US law enforcement with its facial recognition technology, called Rekognition. This has been a bone of contention not just for civil rights groups but also for some of Amazon's employees. Rekognition is meant to help identify and apprehend suspects, but it does not work with much accuracy: in a study by the ACLU, Rekognition incorrectly identified 28 members of the US Congress. The American Civil Liberties Union (ACLU) has now filed a Freedom of Information Act (FOIA) request which demands that the Department of Homeland Security (DHS) disclose how DHS and Immigration and Customs Enforcement (ICE) use Rekognition for law enforcement and immigration checks.

Google's rationale for withdrawing from the JEDI project

Last week, in an interview with the Fox Network, Oracle founder Larry Ellison stated that it was shocking how Google viewed this matter. Google withdrew from the JEDI project following strong backlash from many of its employees. In its official statement, Google gave as its reasons for dropping out of the JEDI contract bidding an ethical value misalignment and the fact that it doesn't fully have all the necessary clearance to work on government projects. However, Google is open to launching a customized search engine in China that complies with China's rules of censorship, including the potential to surveil Chinese citizens.

Should AI be used in weapons?
This question is at the heart of the contentious topic of the tech industry working with the military. It is a serious topic that has been debated over the years by scientists and experienced leaders. Elon Musk, researchers from DeepMind, and others have even pledged not to build lethal AI.

Personally, I side with the researchers and believe AI should be used exclusively for the benefit of mankind, to enhance human lives and solve problems that improve them, and not against each other in a race to build weapons or to become a superpower. But then again, what would I know? Leading nations are in an AI arms race as we speak, with sophisticated national AI plans and agendas.

For more details on Microsoft's interest in working with the US military, visit the Microsoft website.

‘Employees of Microsoft’ ask Microsoft not to bid on US Military’s Project JEDI in an open letter
Google employees quit over company’s continued Artificial Intelligence ties with the Pentagon
Oracle’s bid protest against U.S. Defense Department’s (Pentagon) $10 billion cloud contract


What we learnt from the GitHub Octoverse 2018 Report

Amey Varangaonkar
24 Oct 2018
8 min read
Highlighting key accomplishments over the last year, GitHub, Microsoft's recent major acquisition, released its yearly Octoverse report. The last 365 days have seen GitHub go from strength to strength as the world's leading source code management platform. The Octoverse report highlights how developers work and learn on GitHub. It also gives us some interesting insights into the way developers and even organizations are collaborating across geographies and time zones on a variety of interesting projects. The report is based on data collected from October 1, 2017 to September 30, 2018, exactly 365 days from the publication of the last Octoverse report. In this article, we look at some of the key takeaways from the Octoverse 2018 report.

Asia is home to GitHub's fastest growing community

GitHub developers who are currently based in Asia can feel proud of themselves. Octoverse 2018 states that more open source projects have been created in Asia than anywhere else in the world. While developers all over the world are joining and using GitHub, most new signups over the last year have come from countries such as China, India, and Japan. At the same time, GitHub usage is also growing quite rapidly in Asian countries such as Hong Kong, Singapore, Bangladesh, and Malaysia. This is quite interesting, considering the growth of AI has become part of national policy in countries such as China, Hong Kong, and Japan. We can expect these trends to continue, and developing countries such as India and Bangladesh to contribute even more going forward.

An ever-growing developer community squashes doubts about GitHub's credibility

When Microsoft announced its plans to buy GitHub in a deal worth $7.5 billion, many eyebrows were raised. Given Microsoft's earlier stance against open source projects, some developers were skeptical of this move.
They feared that Microsoft would exploit GitHub's popularity and inject some kind of subscription model into GitHub in order to recover the huge investment. Many even migrated their projects from GitHub to rival platforms such as BitBucket and GitLab in protest. However, the numbers presented in the Octoverse report suggest otherwise. According to the report, the number of new registrations last year alone was more than the number of registrations in GitHub's first six years, which is quite impressive. The number of active contributors on GitHub has increased by more than 1.5 times over the last year, suggesting GitHub is still the undisputed leader when it comes to code management and collaboration. With more than 1.1 billion contributions across private and public projects over one year, I think we all know where most developers' loyalty lies.

Not just developers, organizations love GitHub too

The Octoverse report states that 2.1 million organizations are using GitHub in some capacity, across public and private repositories. This number is a staggering 40% increase from 2017, indicating the huge reliance on GitHub for effective code management and collaboration between developers. Not just that, over 150,000 developers and organizations are using the apps and tools available on the GitHub Marketplace for quick, efficient, and seamless code development and management.

GitHub had also launched a feature called security alerts back in November 2017. This feature alerts developers to vulnerabilities in their project dependencies, and also suggests fixes for them from the community. Many organizations have found this feature to be an invaluable offering, as it allows for the development of secure, bug-free applications. Their faith in GitHub will be reinforced even more now that the report has revealed that, over the last year, more than 5 million vulnerabilities were detected and communicated to developers.
The report also suggests that members of an organization make substantial contributions to projects and are twice as active when they install and use their company's app on GitHub. This suggests that GitHub offers them the right environment to develop apps just as they want. All these insights point towards one simple fact: organizations and businesses trust GitHub.

Microsoft is walking the talk with active open source contribution

Microsoft joined the Linux Foundation after its initial (and vehement) opposition to the open source movement. With a change in leadership and long-term vision came the realization that open source is essential for them, and the world, to progress. Eventually, Microsoft declared its support for the cause by becoming a platinum member of the Open Source Initiative. That is now clearly reflected in its achievements of the past year. Probably the most refreshing takeaway from the Octoverse report was seeing Microsoft leading the pack when it comes to active open source contribution. The report states that Microsoft's VSCode was the top open source project, with 19,000 contributors. It also states that the open source documentation of Azure was the fastest growing project on GitHub.

Top open source projects on GitHub (Image courtesy: GitHub State of Octoverse 2018 Report)

If this was not enough evidence that Microsoft has backed up its claims of supporting the open source movement wholeheartedly, there's more. Over 7,000 Microsoft employees have contributed to various open source projects over the past year, making it the organization with the most open source contributors.

Open source contribution by organization (Image source: GitHub State of Octoverse 2018 Report)

When we said that Microsoft's acquisition of GitHub was a good move, we were right!

React Native and Machine Learning are red hot right now

React Native has been touted by many to be the future of mobile development.
This claim is corroborated by some strong activity on its GitHub repository over the last year. With over 10,000 contributors, React Native is one of the most active open source projects right now. With JavaScript continuing to rule the roost for the fifth straight year as the top programming language, it comes as no surprise that the cross-platform framework for building native apps is getting a lot of traction.

Top languages over time (Image source: GitHub State of Octoverse 2018 Report)

With the rise in popularity of artificial intelligence, and specifically machine learning, the report also highlighted the continued rise of TensorFlow and PyTorch. While TensorFlow is the third most popular open source project right now, with over 9,000 contributors, PyTorch is one of the fastest growing projects on GitHub. The report also showed that Google's and Facebook's experimental frameworks for machine learning, called Dopamine and Detectron respectively, are getting deserved attention thanks to how they simplify machine learning. Given the scale at which AI is being applied in the industry right now, these tools are expected to make developers' lives easier going forward. Hence, it is not surprising to see developer interest centered around these tools.

GitHub's Student Developer Pack to promote learning is a success

According to the Octoverse report, over 1 million developers have honed their skills by learning best coding practices on GitHub. With over 600,000 active student developers learning how to write effective code through the Student Developer Pack, GitHub continues to give free access to the best development tools so that students learn by doing and get valuable hands-on experience. In academia, yet another fact that points to GitHub's usefulness for learning is how teachers use the platform to implement real-world workflows for teaching.
Over 20,000 teachers in over 18,000 schools and universities have used GitHub to create over 200,000 assignments to date. It is safe to say that this number is only going to grow in the near future. You can read more about how GitHub is promoting learning in the GitHub Education Classroom Report.

GitHub's competition has some serious catching up to do

Since Google's parent company Alphabet lost out to Microsoft in the race to buy GitHub, it has diverted its attention to GitHub's competitor GitLab. Alphabet has even gone on to suggest that GitLab can surpass GitHub. According to the Octoverse report, Google is only behind Microsoft when it comes to the most open source contributions by any organization. With GitLab joining forces with Google by moving its operations from Azure to Google Cloud Platform, we might see Google's contributions to GitHub reduce significantly over the next few years. Who knows, the next Octoverse report might not feature Google at all!

That said, the size of the GitHub community, along with the volume of activity that happens on the platform each day, is staggering, and no other platform comes even close. This fact is supported by the enormity of some of the numbers the report presented, such as:

There are over 31 million developers on the platform to date.
More than 96 million repositories are currently hosted on GitHub.
There have been 65 million pull requests created in the last year alone, contributing almost 33% of the total number of pull requests created to date.

These numbers dwarf other platforms such as GitLab and BitBucket in comparison. Not only is GitHub the world's most popular code collaboration and version control platform, it is currently the #1 choice of tool for most developers in the world. It will take some catching up for the likes of GitLab and others to come even close to GitHub.
In 5 years, machines will do half of our job tasks of today; 1 in 2 employees need reskilling/upskilling now – World Economic Forum survey
Survey reveals how artificial intelligence is impacting developers across the tech landscape
What the IEEE 2018 programming languages survey reveals to us

EPIC’s Public Voice Coalition announces Universal Guidelines for Artificial Intelligence (UGAI) at ICDPPC 2018

Natasha Mathur
23 Oct 2018
5 min read
The Public Voice Coalition, an organization that promotes public participation in decisions regarding the future of the Internet, came out with guidelines for AI today, namely the Universal Guidelines for Artificial Intelligence (UGAI). The UGAI were announced at the currently ongoing 40th International Data Protection and Privacy Commissioners Conference (ICDPPC) in Brussels, Belgium. The ICDPPC is a worldwide forum where independent regulators from around the world come together to explore high-level recommendations regarding privacy, freedom, and the protection of data. These recommendations are addressed to governments and international organizations. The 40th ICDPPC features speakers such as Tim Berners-Lee (inventor of the World Wide Web), Tim Cook (CEO, Apple Inc.), Giovanni Buttarelli (European Data Protection Supervisor), and Jagdish Singh Khehar (44th Chief Justice of India), among others.

The UGAI combine elements of human rights doctrine, data protection law, and ethical guidelines. “We propose these Universal Guidelines to inform and improve the design and use of AI. The Guidelines are intended to maximize the benefits of AI, to minimize the risk, and to ensure the protection of human rights. These guidelines should be incorporated into ethical standards, adopted in national law and international agreements, and built into the design of systems”, reads the announcement page. The UGAI comprise twelve principles for AI governance that haven't previously been covered in similar policy frameworks. Let's have a look at these principles.

Transparency principle

The Transparency principle emphasizes an individual's right to know the basis of a particular AI decision concerning them. This means all individuals affected by a particular AI decision should have access to the factors, the logic, and the techniques that produced the outcome.
Right to human determination

The right to human determination focuses on the fact that individuals, and not machines, should be responsible for automated decision-making. For instance, during the operation of an autonomous vehicle, it is impractical to insert a human decision before the machine makes an automated decision. However, if an automated system fails, this principle should be applied and a human assessment of the outcome should be made to ensure accountability.

Identification Obligation

This principle establishes the foundation of AI accountability by making clear the identity of an AI system and of the institution responsible for it. This matters because an AI system usually knows a lot about an individual, but the individual might not even be aware of the operator of the AI system.

Fairness Obligation

The Fairness Obligation emphasizes that assessing the objective outcomes of an AI system is not sufficient to evaluate it. It is important for institutions to ensure that AI systems do not reflect unfair bias or make discriminatory decisions.

Assessment and Accountability Obligation

This principle focuses on assessing an AI system based on factors such as its benefits, purpose, objectives, and the risks involved, before and during its deployment. An AI system should be deployed only after this evaluation is complete. If the assessment reveals substantial risks concerning public safety or cybersecurity, the AI system should not be deployed. This, in turn, ensures accountability.

Accuracy, Reliability, and Validity Obligations

This principle sets out the key responsibilities related to the outcomes of automated decisions. Institutions must ensure the accuracy, reliability, and validity of the decisions made by their AI systems.

Data Quality Principle

This emphasizes the need for institutions to establish data provenance.
It also includes assuring the quality and relevance of the data fed into AI algorithms.

Public Safety Obligation

This principle ensures that institutions assess the public safety risks arising from AI systems that control devices in the physical world. These institutions must implement the necessary safety controls within such AI systems.

Cybersecurity Obligation

This principle follows up on the Public Safety Obligation and ensures that institutions developing and deploying AI systems take cybersecurity threats into account.

Prohibition on Secret Profiling

This principle states that no institution shall establish a secret profiling system. This is to preserve the possibility of independent accountability.

Prohibition on Unitary Scoring

This principle states that no national government shall maintain a general-purpose score on its citizens or residents. “A unitary score reflects not only a unitary profile but also a predetermined outcome across multiple domains of human activity,” reads the guideline page.

Termination Obligation

The Termination Obligation states that an institution has an affirmative obligation to terminate an AI system it has built if human control of that system is no longer possible.

For more information, check out the official UGAI documentation.

The ethical dilemmas developers working on Artificial Intelligence products must consider
Sex robots, artificial intelligence, and ethics: How desire shapes and is shaped by algorithms
Introducing Deon, a tool for data scientists to add an ethics checklist


Following Linux, GNU publishes ‘Kind Communication Guidelines’ to benefit members of ‘disprivileged’ demographics

Sugandha Lahoti
23 Oct 2018
5 min read
The GNU Project published its Kind Communication Guidelines yesterday to encourage contributors to be kinder in their communication with fellow contributors, especially women and other members of 'disprivileged' demographics. This news follows the recent changes to the Code of Conduct of the Linux community. Last month, Linux maintainers revised their Code of Conflict, moving instead to a Code of Conduct. The change was committed by Linus Torvalds, who shortly after the change took a self-imposed leave from the project to work on his behavior. By switching to a Code of Conduct, Linux placed emphasis on how contributors and maintainers work together to cultivate an open and safe community that people want to be involved in.

However, Linux's move was not well received by many of its developers. Some even threatened to pull their blocks of code, important to the project, out of it in revolt against the change. The main concern was that the new CoC could be used randomly or selectively as a tool to punish or remove anyone from the community. Read the summary of developers' views on the Code of Conduct that, according to them, justifies their decision.

GNU is taking a different approach from Linux in evolving its community into a more welcoming place for everyone. As opposed to a stricter code of conduct, which forces people to follow rules or suffer punishments, the Kind Communication Guidelines will guide people towards kinder communication rather than ordering them to be kind.

What do Stallman's 'kindness' guidelines say?

In a post, Richard Stallman, President of the Free Software Foundation, said, “People are sometimes discouraged from participating in GNU development because of certain patterns of communication that strike them as unfriendly, unwelcoming, rejecting, or harsh.
This discouragement particularly affects members of disprivileged demographics, but it is not limited to them.” He further adds, “Therefore, we ask all contributors to make a conscious effort, in GNU Project discussions, to communicate in ways that avoid that outcome—to avoid practices that will predictably and unnecessarily risk putting some contributors off.”

Stallman encourages contributors to lead by example and apply the following guidelines in their communication:

Do not give heavy-handed criticism

Do not criticize people for wrongs that you only speculate they may have done; try to understand their work. Respond to what people actually said, not to exaggerations of their views. Your criticism will not be constructive if it is aimed at a target other than their real views. It is helpful to show contributors that being imperfect is normal and to politely help them fix their problems. Reminders about problems should be gentle and not too frequent.

Avoid discrimination based on demographics

Treat other participants with respect, especially when you disagree with them. Stallman requests people to address contributors by the names they use and to acknowledge their gender identity. Avoid presuming or commenting on a person's typical desires, capabilities, or actions based on some demographic group; these are off-topic in GNU Project discussions.

Personal attacks are a big no-no

Avoid making personal attacks or adopting a harsh tone towards a person. Go out of your way to show that you are criticizing a statement, not a person. Conversely, if someone attacks or offends your personal dignity, please don't “hit back” with another personal attack. “That tends to start a vicious circle of escalating verbal aggression. A private response, politely stating your feelings as feelings, and asking for peace, may calm things down.” Avoid arguing unceasingly for your preferred course of action when a decision for some other course has already been made.
That tends to block the activity's progress.

Avoid indulging in political debates

Contributors are asked not to raise unrelated political issues in GNU Project discussions. The only political positions that the GNU Project endorses are that users should have control of their own computing (for instance, through free software) and support for basic human rights in computing.

Stallman hopes that these guidelines will encourage more contributions to GNU projects, and that the subsequent discussions will be friendlier and reach conclusions more easily. Read the full guidelines on the GNU blog.

People's reactions to GNU's move have been mostly positive.

https://twitter.com/MatthiasStrubel/status/1054406791088562177
https://twitter.com/0xUID/status/1054506057563824130
https://twitter.com/haverdal76/status/1054373846432673793
https://twitter.com/raptros_/status/1054415382063316993

Linus Torvalds and Richard Stallman have been the fathers of the open source movement since its inception over twenty years ago. These moves underline that open source does have a toxic culture problem, but that it is evolving and sincerely working to become more open and welcoming, so that everyone can easily contribute to projects. We'll be watching this space closely to see which approach to inclusion works more effectively, and whether there are other approaches to making this transition smooth for everyone involved.

Stack Overflow revamps its Code of Conduct to explain what 'Be nice' means – kindness, collaboration, and mutual respect.
Linux drops Code of Conflict and adopts new Code of Conduct.
Mozilla drops "meritocracy" from its revised governance statement and leadership structure to actively promote diversity and inclusion
OK Google, why are you ok with mut(at)ing your ethos for Project DragonFly?

Aarthi Kumaraswamy
16 Oct 2018
8 min read
Wired has managed to do what Congress couldn't: bring together tech industry leaders in the US and ask the pressing questions of our times, in a safe and welcoming space. Just for this, they deserve applause. Yesterday at the Wired 25 summit, Sundar Pichai, Google's CEO, among other things, opened up to Backchannel's editor in chief, Steven Levy, about Project Dragonfly for the first time in public. Project Dragonfly is the secretive search engine that Google is allegedly developing to comply with the Chinese rules of censorship. The following is my analysis of why Google is deeply invested in Project Dragonfly.

Google's mission since its inception has been to organize the world's information and to make it universally accessible, as Steven puts it. When asked if this has changed in 2018, Pichai responded that Google's mission remains the same, and so do its founding values. However, what has changed is the scale of their operation, their user base, and their product portfolio. In effect, this means the company now views everything it does from a wider lens instead of just thinking about its users.

https://www.facebook.com/wired/videos/vb.19440638720/178516206400033/?type=2&theater

For Google, China is an untapped source of information

"We are compelled by our mission [to] provide information to everyone, and [China is] 20 percent of the world's population," said Pichai. He believes China is a highly innovative and underserved market that is too big to be ignored. For this reason, according to Pichai at least, Google is obliged to take a long-term view on the subject. But there are a number of specific reasons that make China compelling to Google right now. China is a huge social experiment at scale, with wide-scale surveillance and monitoring - in other words, data.
But with the Chinese government keen to tightly control information about the country and its citizens, it's not necessarily well understood by businesses from outside the country. This means moving into China could be an opportunity for Google to gain a real competitive advantage in a number of different ways. Pichai confirmed that internal tests show that Google can serve well over 99 percent of search queries from users in China. This means they probably have a good working product prototype to launch soon, should a window of opportunity arise. These lessons can then directly inform Google's decisions about what to do next in China.

What can Google do with all that exclusive knowledge?

Pichai wrote earlier last week to some Senate members who wanted answers on Project Dragonfly that Google could have "broad benefits inside and outside of China." He did not go into detail, but these benefits are clear. Google would gain insight into a huge country that tightly controls information about itself and its citizens.

Helping Google to expand into new markets

By extension, this will then bring a number of huge commercial advantages when it comes to China. It would place Google in a fantastic position to make China another huge revenue stream. Secondly, the data harvested in the process could provide a massive and critical boost to Google's AI research, products, and tooling ecosystems that others like Facebook don't have access to. The less obvious but possibly even bigger benefits for Google are the wider applications of its insights. These will be particularly useful as it seeks to make inroads into other rapidly expanding markets such as India, Brazil, and the African subcontinent.

Helping Google to consolidate its strength in western nations

As well as helping Google expand, it's also worth noting that Google's Chinese venture could support the company as it seeks to consolidate and reassert itself in the west.
Here, markets are not growing quickly, but Google could do more to advance its position within these areas using what it learns from business and product innovations in China.

The caveat: Moral ambivalence is a slippery slope

Let's not forget that the first step into moral ambiguity is always the hardest. Once Google enters China, the route into murky and morally ambiguous waters actually gets easier. Arguably, this move could change the shape of Google as we know it. While the company may not care if it makes a commercial impact, the wider implications for how tech companies operate across the planet could be huge.

How is Google rationalizing the decision to re-enter China?

Letting a billion flowers bloom and wither to grow a global forest seems to be at the heart of Google's decision to deliberately pursue China's market. Following are some of the ways Google has been justifying its decision.

We never left China

When asked why Google has decided to go back to China after exiting the market in 2010, Pichai clarified that Google never left China; they only stopped providing search services there. Android, for example, has become one of the most popular mobile OSes in China over the years. He might as well have said, 'I already have a leg in the quicksand, might as well dip the other one.' Instead of assessing the reasons to stay in China through the lens of their AI principles, Google is jumping into the state censorship agenda.

Being legally right is morally right

"Any time we are working in countries around the world, people don't understand fully, but you're always balancing a set of values... Those values include providing access to information, freedom of expression, and user privacy… But we also follow the rule of law in every country," said Pichai in the Wired 25 interview. This seems to imply that Google sees legal compliance as analogous to ethical practice.
While the AI principles at Google should have guided them in situations precisely like this one, they have been reduced to an oversimplified 'don't create killer AI' tenet. Just this Tuesday, China passed a law that is explicit about how it intends to use technology to implement extreme measures to suppress free expression and violate human rights. Google is choosing to turn a blind eye to how its technology could be used to indirectly achieve such nefarious outcomes in an efficient manner.

We aren't the only ones doing business in China

Another popular line of reasoning, though not one voiced by Google, is that it is unfair to single out Google and ask them not to do business in China when others like Apple have been benefiting from such a relationship for years. Just because everyone is doing something, that does not make it intrinsically right. For a company known for challenging the status quo and for standing by its values, this marks the day when Google lost its credentials to talk about doing the right thing.

Time and tech wait for none. If we don't participate, we will be left behind

Pichai said, "Technology ends up progressing whether we want it to or not. I feel on every important technology it is important that you work aggressively to make sure the outcome is good." Now that is a typical engineering response to a socio-philosophical problem. It reeks of the hubris that most tech executives in Silicon Valley wear as a badge of honor.

We're making information universally accessible and thus enriching lives

Pichai observed that in China there are many areas, such as cancer treatment options, where Google can provide better and more authentic information than the products and services currently available. I don't know about you, but when an argument leans on cancer to win its case, I typically disregard it.

All things considered, in the race for AI domination, China's data is the holy grail.
An invitation to watch and learn from close quarters is an offer too good to refuse, even for Google. Even as current and former employees, human rights advocacy organizations, and Senate members continue to voice their dissent strongly, Google is sending a clear message that it isn't going to back down on Project Dragonfly. The only way to stop this downward moral spiral at this point appears to be us, the current Google users, as the last line of defense to protect human rights, freedom of speech, and other democratic values. That gives me a sinking feeling as I type this post in Google Docs, using Chrome and Google search to gather information just the way I have been doing for years now. Are we doomed to a dystopian future, locked in by tech giants that put growth over stability, viral ads over community, and censorship and propaganda over truth and free speech? Welcome to 1984.
Is Mozilla the most progressive tech organization on the planet right now?

Richard Gall
16 Oct 2018
7 min read
2018, according to The Economist, has been the year of the techlash. Scandals, protests, resignations, congressional testimonies - many of the largest companies in the world have been in the proverbial dock for a distinct lack of accountability. Together, these stories have created a narrative where many are starting to question the benefits of unbridled innovation. But Mozilla is one company that seems to have bucked that trend. In recent weeks there have been a series of news stories that suggest Mozilla is a company thinking differently about its place in the world, as well as the wider challenges technology poses to society. All of these come together to present Mozilla in a new light. Cynics might suggest that much of this is little more than some smart PR work, but it would be a little unfair to dismiss what has been some impressive work. So much has been happening across the industry that deserves scepticism at best and opprobrium at worst. To see a tech company stand out from the tiresome pattern of stories this year can only be a good thing.

Mozilla on education: technology, ethical code, and the humanities

Code ethics has become a big topic of conversation in 2018. And rightly so - with innovation happening at an alarming pace, it has become easy to make the mistake of viewing technology as a replacement for human agency, rather than something that emerges from it. When we talk about code ethics, it reminds us that technology is built from the decisions and actions of thousands of different people. It's for this reason that last week's news that Mozilla has teamed up with a number of organizations, including the Omidyar Network, to announce a brand new prize for computer science students feels so important. At a time when the likes of Mark Zuckerberg dance around any notion of accountability, peddling a narrative where everything is just a little bit beyond Facebook's orbit of control, the 'Responsible Computer Science Challenge' stands out.
With $3.5 million up for grabs for smart computer science students, it's evidence that Mozilla is putting its money where its mouth is and making ethical decision making something which, for once, actually pays.

Mitchell Baker on the humanities and technology

Mitchell Baker's comments to the Guardian that accompanied the news also demonstrate a refreshingly honest perspective from a tech leader. "One thing that's happened in 2018," Baker said, "is that we've looked at the platforms, and the thinking behind the platforms, and the lack of focus on impact or result. It crystallised for me that if we have STEM education without the humanities, or without ethics, or without understanding human behaviour, then we are intentionally building the next generation of technologists who have not even the framework or the education or vocabulary to think about the relationship of STEM to society or humans or life."

Baker isn't, however, a crypto-luddite or an elitist who wants full stack developer classicists. Instead she's looking forward at the ways in which different disciplines can interact and inform one another. It's arguably an intellectual form of DevOps: a way of bridging the gap between STEM skills and practices and those rooted in the tradition of the humanities. The significance of this intervention shouldn't be understated. It opens up a dialogue within society and the tech industry that might get us to a place where ethics is simply part and parcel of what it means to build and design software, not an optional extra.

Mozilla's approach to internal diversity: dropping meritocracy

The respective cultures of organizations and communities across tech have been in the spotlight over the last few months. Witness the bitter furore over Linux's change to its community guidelines to see just how important definitions and guidelines are to the people within them.
That's why Mozilla's move to drop meritocracy from its guidelines of governance and leadership structures was a small yet significant one. It's simply another statement of intent from a company eager to help develop a culture more open and inclusive than the tech world has managed over the last three decades. In a post published on the Mozilla blog at the start of October, Emma Irwin (D&I Strategy, Mozilla Contributors and Communities) and Larissa Shapiro (Head of Global Diversity & Inclusion at Mozilla) wrote that "Meritocracy does not consider the reality that tech does not operate on a level playing field." The new governance proposal actually reflects Mozilla's apparent progressiveness pretty well. In it, it states that "the project also seeks to debias this system of distributing authority through active interventions that engage and encourage participation from diverse communities." While there has been some criticism of the change, it's important to note that the words used by organizations of this size do have an impact on how we frame and focus problems. From this perspective, Mozilla's decision could well be a vital small step in making tech more accessible and diverse.

The tech world needs to engage with political decision makers

Mozilla isn't just a 'progressive' tech company because of the content of its political beliefs. Instead, what's particularly important is how it appears to recognise that the problems technology faces and engages with are, in fact, much bigger than technology itself. Just consider the actions of other tech leaders this year. Sundar Pichai didn't attend his congressional hearing, Jack Dorsey assured us that Twitter has safety at its heart while verifying neo-Nazis, and Mark Zuckerberg suggested that AI can fix the problems of election interference and fake news. The hubris has been staggering. Mozilla's leadership appears to be trying hard to avoid the same pitfalls.
We shouldn't be surprised that Mozilla actually embraced the idea of 2018's 'techlash.' The organization used the term in the title of a post directed at G20 leaders in August. Written alongside The Internet Society and the Web Foundation, it urged global leaders to "reinject hope back into technological innovation." Implicit in the post is an acknowledgement that the aims and goals of much of the tech industry - improving people's lives, making infrastructure more efficient - can't be solved purely by the industry itself. It is a subtle stab at what might be considered hubris.

Taking on government and regulation

But this isn't to say Mozilla is completely in thrall to government and regulation. Most recently (16 October), Mozilla voiced its concerns about the decryption laws currently being debated in the Australian Parliament. The organization was clear, saying "this is at odds with the core principles of open source, user expectations, and potentially contractual license obligations." At the beginning of September, Mozilla also spoke out against EU copyright reform. The organization argued that "article 13 will be used to restrict the freedom of expression and creative potential of independent artists who depend upon online services to directly reach their audience and bypass the rigidities and limitations of the commercial content industry." While opposition to EU copyright reform came from a range of voices - including those huge corporations that have come under scrutiny during the 'techlash' - Mozilla is, at least, consistent.

The key takeaway from Mozilla: let's learn the lessons of 2018's techlash

The techlash has undoubtedly caused a lot of pain for many this year. But the worst thing that could happen is for the tech industry to fail to learn the lessons that are emerging. Mozilla deserves credit for trying hard to properly understand the implications of what's been happening and for developing a deliberate vision for how to move forward.
Privacy experts urge the Senate Commerce Committee for a strong federal privacy bill "that sets a floor, not a ceiling"

Sugandha Lahoti
11 Oct 2018
9 min read
The Senate Commerce Committee held a hearing yesterday on consumer data privacy, focused on the perspective of privacy advocates and other experts. These advocates encouraged federal lawmakers to create strict data protection rules, giving consumers more control over their personal data. The major focus was on implementing a strong common federal consumer privacy bill "that sets a floor, not a ceiling." Representatives included Andrea Jelinek, the chair of the European Data Protection Board; Alastair Mactaggart, the advocate behind California's Consumer Privacy Act; Laura Moy, executive director of the Georgetown Law Center on Privacy and Technology; and Nuala O'Connor, president of the Center for Democracy and Technology.

The Goal: Protect user privacy, allow innovation

John Thune, the Committee Chairman, said in his opening statement, "Over the last few decades, Congress has tried and failed to enact comprehensive privacy legislation. Also in light of recent security incidents, including Facebook's Cambridge Analytica breach and the recent data breach in Google+, it is increasingly clear that industry self-regulation in this area is not sufficient. A national standard for privacy rules of the road is needed to protect consumers."

Senator Edward Markey, in his opening statement, spoke on data protection and privacy, saying "Data is the oil of the 21st century." He added that this has come at an unexpected cost to users, with data-driven websites treating their customers' personal information as a commodity, collecting and selling it without permission. He said that the goal of this hearing was to give users meaningful control over their personal information while maintaining a thriving, competitive data ecosystem in which entrepreneurs can continue to develop.

What did the industry tell the Senate Commerce Committee in the last hearing on the topic of consumer privacy?
A few weeks ago, the Commerce Committee held a discussion with Google, Facebook, Amazon, AT&T, and other industry players to understand their perspective on the same topic. The industry unanimously agreed that privacy regulations need to be put in place. However, these companies pushed for the committee to make online privacy policy at the federal level rather than at the state level, to avoid a nightmarish patchwork of policies for businesses to comply with. They also shared that complying with GDPR has been quite resource intensive. While they acknowledged that it was too soon to assess the impact of GDPR, they cautioned the Senate Commerce Committee that policies like the GDPR and CCPA could be detrimental to growth and innovation and thereby eventually cost the consumer more. As such, they expressed interest in being part of the team that formulates the new federal privacy policy. They also believed that the FTC was the right body to oversee the implementation of the new privacy laws. Overall, the last hearing's meta-conversation between the committee and the industry was heavy with defensive stances and scripted, almost collusive recommendations. The telcos wanted tech companies to do better. The message was that user privacy and tech innovation are too interlinked, and there is a need to strike a delicate balance to make privacy work practically.

The key message from yesterday's Senate Commerce Committee hearing with privacy advocates and the EU regulator

This time, the hearing was focused solely on establishing strict privacy laws and drafting clear guidelines regarding definitions of 'sensitive' data, prohibited uses of data, and limits on how long corporations can hold on to consumer data for various uses. A focal point of the hearing was to give users the key elements of Knowledge, Notice, and No.
Consumers need knowledge that their data is being shared and how it is used, notice when their data is compromised, and the ability to say no to the entities that want their personal information. The advocates argued that the bill should also include:

- Limits on how companies can use consumers' information.
- A prohibition on companies giving financial incentives to users in exchange for their personal information. Privacy must not become a luxury good that only the fortunate can afford.
- A ban on "take it or leave it" offerings, in which a company requires a consumer to forfeit their privacy in order to consume a product. Companies should not be able to coerce users into providing their personal information by threatening to deprive them of a service.
- Individual rights such as the ability to access, correct, delete, and remove information.
- A requirement that companies only collect the user data which is absolutely necessary to carry out the service, and keep that private information safe and secure.
- Special protections for children and teenagers.
- Strong enforcement powers and robust rule-making authority for the federal government, to ensure rules keep pace with changing technologies.

Some of the witnesses believed that the FTC may not be the right body to do this, and that a new entity focused on this aspect may do a better and more agile job.

"We can't be shy about data regulation", Laura Moy

Laura Moy, Deputy Director of the Privacy and Technology Center at Georgetown University Law Center, talked at length about data regulation. "This is not a time to be shy about data regulation," Moy said. "Now is the time to intervene." She emphasized that information should not in any way be used for discrimination, nor should it be used to amplify hate speech, be sold to data brokers, or be used to target misinformation or disinformation.
She also talked about robust enforcement, saying she plans to call for legislation to "enable robust enforcement both by a federal agency and state attorneys general and foster regulatory agility." She also addressed the question of whether companies should be able to tell consumers that if they don't agree to share non-essential data, they cannot receive products or services. She disagreed, saying that companies that do so have violated the idea of free choice. She likewise questioned whether companies should be allowed to offer financial incentives in exchange for users' personal information.

"GDPR was not a revolution, but just an evolution of a law [that existed for 20 years]", Andrea Jelinek

Andrea Jelinek, Chairperson of the European Data Protection Board, highlighted the key concepts of GDPR and how it can be an inspiration for a policy in the U.S. at the federal level. In her opening statement, she said, "The volume of digital information doubles every two years and deeply modifies our way of life. If we do not modify the roots of data processing gains with legislative initiatives, it will turn into a losing game for our economy, society, and each individual." She addressed the issue of how GDPR is going to be enforced in the investigation of Facebook by Ireland's data protection authority. She also gave statistics on the number of GDPR investigations opened in the EU so far: as of October 1st, there were 272 cases regarding identifying the lead supervisory authority and concerned supervisory authorities, 243 issues on mutual assistance according to Article 61 of the GDPR, and 223 opinions regarding data protection impact assessments. The company practices that have generated the most complaints and concerns from consumers revolved around user consent.
She explained why GDPR went the "regulation route", choosing one data privacy policy for the entire continent instead of letting each member country have its own. Jelinek countered Google's point about compliance taking too much time and effort by noting that, given Google's size, it would have taken around 3.5 hours per employee to implement compliance. She also observed that this could have been reduced a lot had they followed good data practices to begin with. She also clarified that GDPR was not a really new or disruptive regulatory framework: in addition to the two years companies were given to comply with the new rules, a 20-year-old data protection directive was already in place in Europe in various forms. In that sense, she said, GDPR was not a revolution, but just an evolution of a law that had existed for 20 years.

Californians for Consumer Privacy Act

Alastair Mactaggart, Chairman of Californians for Consumer Privacy, talked about the CCPA's two main elements: first, the right to know, which allows Californians to learn what information corporations have collected about them; and second, the right to say no, allowing them to stop businesses from selling their personal information. He said, "CCPA puts the focus on giving choice back to the consumer and enforced data security, a choice which is sorely needed." He also addressed questions such as whether he believes federal law should grant the same permissions to 13-, 14-, and 15-year-olds.

What should the new federal privacy law look like, according to CDT's O'Connor?

Center for Democracy and Technology (CDT) President and CEO Nuala O'Connor said, "As with many new technological advancements and emerging business models, we have seen exuberance and abundance, and we have seen missteps and unintended consequences.
International bodies and US states have responded by enacting new laws, and it is time for the US federal government to pass omnibus federal privacy legislation to protect individual digital rights and human dignity, and to provide certainty, stability, and clarity to consumers and companies in the digital world." She also highlighted five important pointers that should be kept in mind while designing the new federal privacy law:

- A comprehensive federal privacy law should apply broadly to all personal data and unregulated commercial entities, not just to tech companies.
- The law should include individual rights like the ability to access, correct, delete, and remove information.
- Congress should prohibit the collection, use, and sharing of certain types of data when not necessary for the immediate provision of the service.
- The FTC should be expressly empowered to investigate data abuses that result in discriminatory advertising and other practices.
- A federal privacy law should be clear on its face and provide specific guidance to companies and markets about legitimate data practices.

It is promising to see the Senate Commerce Committee sincerely taking notes from both industry and privacy advocates to enable building strict privacy standards. The hope is that this new legislation will be more focused on protecting consumer data than the businesses that profit from it. Only time will tell if a bipartisan consensus on this important initiative will be reached. For a detailed version of this story, it is recommended to listen to the full Senate Commerce Committee hearing.

Consumer protection organizations submit a new data protection framework to the Senate Commerce Committee.
Google, Amazon, AT&T met the U.S Senate Committee to discuss consumer data privacy.
Facebook, Twitter open up at Senate Intelligence hearing, the committee does 'homework' this time.