
Tech News


Apache Software Foundation finally joins the GitHub open source community

Amrata Joshi
30 Apr 2019
3 min read
In 2016, Apache decided to start integrating GitHub's repositories and tooling with its own services. After refining that integration over the years, the foundation moved to simplify how it works and migrate all Git projects to GitHub. By February this year, Apache had completed the migration, giving all of its projects a single platform to host and review code, collaborate on projects, and build software alongside developers around the world.

Greg Stein, ASF Infrastructure Administrator, said, "In 2016, the Foundation started integrating GitHub's repository and tooling, with our own services. This enabled selected projects to use GitHub's excellent tools. Over time, we improved, debugged, and solidified this integration. In late 2018, we asked all projects to move away from our internal git service, to that provided by GitHub. This shift brought all of their tooling to our projects, while we maintain a backup mirror on our infrastructure."

Yesterday, the Apache Software Foundation (ASF) finally joined the GitHub open source community. The foundation manages more than 200 million lines of code through an all-volunteer community of 730 individuals.

Nat Friedman, Chief Executive Officer of GitHub, said of the announcement, "We're proud to have such a long standing member of the Open Source community migrate to GitHub. Whether we're working with individual Open Source maintainers and contributors or some of the world's largest Open Source foundations like Apache, GitHub's mission is to be the home for all developers by supporting Open Source communities, addressing their unique needs, and helping Open Source projects thrive."

Initially, Apache projects used two version control services, Apache Subversion and Git. As the number of projects grew, ASF communities wanted their source code to be available on GitHub, but the GitHub copies were read-only mirrors, and the ability to use GitHub's tools around those repositories was very limited.
This is what led Apache to join GitHub fully. Greg Stein added, "We continue to experiment and expand the set of services that GitHub can provide to our communities, given our own needs and requirements. The Foundation has started working closely with GitHub management to explore ways to make this happen, and what will be possible in the future."

Many users think the reason for Apache's migration to GitHub was the rising cost of managing code and infrastructure. A user commented on HackerNews, "Apparently, one of the big motivating reasons for this was 'cost'. The foundation's 2018 five-year strategic plan noted that infrastructure services account for more than 80 percent of the total ASF expense budget, adding: Increasingly, project communities have infrastructure requirements that strain the capabilities of the ASF. The report noted that, given burgeoning costs, encouraging the use of more externally provided services was its best option."

Another comment reads, "Holy shit, they're spending $800k a year on infrastructure! Honestly, it's difficult to understand why they haven't sooner moved to GitHub, or even GitLab or the like - it feels reckless. That money could be put to far greater use - as an Apache supporter who hasn't ever felt the need to look at their costs, I have to say that I'm very disappointed."

To know more about this news, check out Apache's blog post.

Related reads:
Microsoft and GitHub employees come together to stand with the 996.ICU repository
'Developers' lives matter': Chinese developers protest over the "996 work schedule" on GitHub
GitHub releases Vulcanizer, a new Golang Library for operating Elasticsearch


Riot Games is struggling with sexism and a lack of diversity; employees plan a walkout in protest

Sugandha Lahoti
30 Apr 2019
7 min read
Update, 23rd August 2019: Riot Games has finally settled a class-action lawsuit filed by Riot workers over the sexual harassment and discrimination they faced at their workplace. "This is a very strong settlement agreement that provides meaningful and fair value to class members for their experiences at Riot Games," said Ryan Saba of Rosen Saba, LLP, the attorney representing the plaintiffs. "This is a clear indication that Riot is dedicated to making progress in evolving its culture and employment practices. A number of significant changes to the corporate culture have been made, including increased transparency and industry-leading diversity and inclusion programs. The many Riot employees who spoke up, including the plaintiffs, significantly helped to change the culture at Riot."

"We are grateful for every Rioter who has come forward with their concerns and believe this resolution is fair for everyone involved," said Nicolo Laurent, CEO of Riot Games. "With this agreement, we are honoring our commitment to find the best and most expeditious way for all Rioters, and Riot, to move forward and heal."

Update, 6 May 2019: Riot Games announced early Friday that it will soon start giving new employees the option to opt out of some mandatory arbitration requirements when they are hired. The catch: the opt-out will initially be narrowly focused on a specific set of employees and a specific set of causes.

Riot Games employees are planning a walkout to protest the company's sexist culture and lack of diversity. Riot has been in the spotlight since Kotaku published a detailed report highlighting how five current and former Riot employees filed lawsuits against the company, citing the sexist culture fostered at Riot. Two of the five employees were women. Per Kotaku, last Thursday Riot filed a motion to force those two women, whose lawsuits revolve around the California Equal Pay Act, into private arbitration.
In its motions, Riot's lawyers argue that these employees waived their right to a jury trial when they signed arbitration agreements upon being hired. Private arbitration makes these employees less likely to win against Riot.

In November last year, 20,000 Google employees, along with temps, vendors, and contractors, walked out to protest the discrimination, racism, and sexual harassment encountered at Google's workplace. That walkout led to Google ending forced arbitration for its full-time employees. Google employees are also organizing a phone drive, announced in a letter published on Medium, to press lawmakers to legally end forced arbitration. Per The Verge, "The employees are organizing a phone bank for May 1st and asking for people to make three calls to lawmakers — two to the caller's senators and one to their representative — pushing for the FAIR Act, which was recently reintroduced in the House of Representatives."

https://twitter.com/endforcedarb/status/1122864987243012097

Following Google, Facebook also made changes to its forced arbitration policy for sexual harassment claims. Beyond sexual harassment, game developers also face unfair treatment in terms of work conditions, job instability, and inadequate pay. In February, the American Federation of Labor and Congress of Industrial Organizations (AFL-CIO) published an open letter on Kotaku urging video game industry workers to unionize and voice their support for better treatment in the workplace.

Following this momentum, Riot employees have organized a walkout demanding that Riot leadership end forced arbitration against the two current employees. The walkout is planned for Monday, May 6.
An internal document from Riot employees, seen by Kotaku, describes the demands laid out by walkout organizers: a clear intention to end forced arbitration; a precise deadline (within six months) by which to end it; and a commitment not to force arbitration on the women involved in the ongoing litigation against Riot.

Riot's sexist culture and lack of diversity

The investigation conducted by Kotaku last year unveiled some major flaws in Riot's culture, and in gaming companies in general. Over 28 current and former Riot employees spoke to Kotaku, with stories that echoed one another: Riot's female employees being treated unfairly and being on the receiving end of gender discrimination. An employee named Lucy told Kotaku that when she considered hiring a woman for a leadership role, she heard plenty of excuses for why her female job candidates weren't Riot material. Some were "ladder climbers." Others had "too much ego." Most weren't "gamer enough." A few were "too punchy," or didn't "challenge convention," she told Kotaku.

She also shared her personal experiences of discrimination. Her manager would often imply that her position was a direct result of her appearance. Every few months, she said, a male boss of hers would comment in public meetings about how her kids and husband must really miss her while she was at work.

Women are often told they don't fit the company's "bro culture"; an astonishing eighty percent of Riot employees are men, according to data Riot collected from employees' driver's licenses. "The 'bro culture' there is so real," said one female source to Kotaku, who said she'd left the company due to sexism. "It's agonizingly real. It's like working at a giant fraternity." Among other people Kotaku interviewed, stories were told of women being groomed for promotions, doing jobs above their title and pay grade, until men were suddenly brought in to replace them.
Another woman told Kotaku how a colleague once informed her, apparently as a compliment, that she was on a list passed around by senior leaders detailing who they'd sleep with. Two former employees also added that they "felt pressure to leave after making their concerns about gender discrimination known."

Many former Riot employees refused to come forward to share their stories and refrained from participating in the walkout. For some, this was in fear of retaliation from Riot's fanbase; Riot is the creator of the popular game League of Legends. Others said they were restricted from talking on the record by non-disparagement agreements they signed before leaving the company.

The walkout threat spread far enough that it prompted a response from Riot's chief diversity officer, Angela Roseboro, in the company's private Slack over the weekend, reports Waypoint. In a copy of the message obtained by Waypoint, Roseboro says, "We're also aware there may be an upcoming walkout and recognize some Rioters are not feeling heard. We want to open up a dialogue on Monday and invite Rioters to join us for small group sessions where we can talk through your concerns, and provide as much context as we can about where we've landed and why. If you're interested, please take a moment to add your name to this spreadsheet. We're planning to keep these sessions smaller so we can have a more candid dialogue."

Riot CEO Nicolo Laurent also acknowledged the talk of a walkout in a statement: "We're proud of our colleagues for standing up for what they believe in. We always want Rioters to have the opportunity to be heard, so we're sitting down today with Rioters to listen to their opinions and learn more about their perspectives on arbitration. We will also be discussing this topic during our biweekly all-company town hall on Thursday.
Both are important forums for us to discuss our current policy and listen to Rioter feedback, which are both important parts of evaluating all of our procedures and policies, including those related to arbitration."

Tech worker groups, including Game Workers Unite and Googlers for Ending Forced Arbitration, have stood up in solidarity with Riot employees: "Forced arbitration clauses are designed to silence workers and minimize the options available to people hurt by these large corporations."

https://twitter.com/GameWorkers/status/1122933899590557697
https://twitter.com/endforcedarb/status/1123005582808682497

"Employees at Riot Games are considering a walkout, and the organization efforts has prompted an internal response from company executives," tweeted Coworker.org.

https://twitter.com/teamcoworker/status/1122936953698160640

Others have also joined in support.

https://twitter.com/theminolaur/status/1122931099057950720
https://twitter.com/LuchaLibris/status/1122929166037471233
https://twitter.com/floofyscorp/status/1122955992268967937

Related reads:
#NotOkGoogle: Employee-led town hall reveals hundreds of stories of retaliation at Google
DataCamp reckons with its #MeToo movement; CEO steps down from his role indefinitely
Microsoft's #MeToo reckoning: female employees speak out against workplace harassment and discrimination


AI can now help speak your mind: UC researchers introduce a neural decoder that translates brain signals to natural-sounding speech

Bhagyashree R
29 Apr 2019
4 min read
In research published in the journal Nature on Monday, a team of neuroscientists from the University of California, San Francisco (UCSF) introduced a neural decoder that can synthesize natural-sounding speech from brain activity. The research was led by Gopala Anumanchipalli, a speech scientist, and Josh Chartier, a bioengineering graduate student in the Chang lab, and was developed in the laboratory of Edward Chang, a professor of neurological surgery at UCSF.

Why is this neural decoder being introduced?

Many people lose their voice to stroke, traumatic brain injury, or neurodegenerative diseases such as Parkinson's disease, multiple sclerosis, and amyotrophic lateral sclerosis. Assistive devices that track very small eye or facial muscle movements do exist, letting people with severe speech disabilities spell out their thoughts letter by letter. However, generating text or synthesized speech with such devices is often time-consuming, laborious, and error-prone. These devices also permit generating at most 10 words per minute, compared with the 100 to 150 words per minute of natural speech.

This research shows that it is possible to generate a synthesized version of a person's voice that is controlled by their brain activity. The researchers believe that, in the future, such a device could enable individuals with severe speech disabilities to communicate fluently. It could even reproduce some of the "musicality" of the human voice that conveys the speaker's emotions and personality. "For the first time, this study demonstrates that we can generate entire spoken sentences based on an individual's brain activity," said Chang. "This is an exhilarating proof of principle that with technology that is already within reach, we should be able to build a device that is clinically viable in patients with speech loss."

How does this system work?
This research builds on an earlier study by Josh Chartier and Gopala K. Anumanchipalli showing how the speech centers of the brain choreograph the movements of the lips, jaw, tongue, and other vocal tract components to produce fluent speech. In the new study, Anumanchipalli and Chartier asked five patients being treated at the UCSF Epilepsy Center to read several sentences aloud. These patients had electrodes implanted in their brains to map the source of their seizures in preparation for neurosurgery; simultaneously, the researchers recorded activity from a brain region known to be involved in language production.

The researchers used the audio recordings of the volunteers' voices to work out the vocal tract movements needed to produce those sounds. With this detailed map from sound to anatomy in hand, the scientists created a realistic virtual vocal tract for each volunteer that could be controlled by their brain activity. The system comprises two neural networks:

A decoder that transforms brain activity patterns produced during speech into movements of the virtual vocal tract.
A synthesizer that converts these vocal tract movements into a synthetic approximation of the volunteer's voice.

Here's a video showing the system at work: https://www.youtube.com/watch?v=kbX9FLJ6WKw&feature=youtu.be

The researchers observed that the synthetic speech produced by this two-stage system was much better than synthetic speech decoded directly from the volunteers' brain activity. The generated sentences were also understandable to hundreds of human listeners in crowdsourced transcription tests conducted on the Amazon Mechanical Turk platform.

The system is still in its early stages. Explaining its limitations, Chartier said, "We still have a ways to go to perfectly mimic spoken language.
We're quite good at synthesizing slower speech sounds like 'sh' and 'z' as well as maintaining the rhythms and intonations of speech and the speaker's gender and identity, but some of the more abrupt sounds like 'b's and 'p's get a bit fuzzy. Still, the levels of accuracy we produced here would be an amazing improvement in real-time communication compared to what's currently available."

Read the full report on UCSF's official website.

Related reads:
OpenAI introduces MuseNet: A deep neural network for generating musical compositions
Interpretation of Functional APIs in Deep Neural Networks by Rowel Atienza
Google open-sources GPipe, a pipeline parallelism Library to scale up Deep Neural Network training
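The decoder-then-synthesizer split described in this article can be sketched as a toy pipeline. This is a hypothetical TypeScript illustration only: the real system uses recurrent neural networks trained on cortical recordings, whereas every function and weight below is invented purely to show the two-stage data flow (brain features → vocal-tract parameters → audio samples).

```typescript
type Vector = number[];

// Stage 1: "decoder" stand-in. Maps a brain-activity feature vector to
// kinematic parameters of a virtual vocal tract (lips, jaw, tongue, ...).
// A real decoder would be a trained neural network; here we use fixed
// pseudo-weights so the example is self-contained.
function decodeToVocalTract(brainFeatures: Vector, nArticulators: number): Vector {
  const out: Vector = [];
  for (let i = 0; i < nArticulators; i++) {
    let acc = 0;
    for (let j = 0; j < brainFeatures.length; j++) {
      acc += brainFeatures[j] * Math.sin(i + j + 1); // stand-in for learned weights
    }
    out.push(Math.tanh(acc)); // articulator positions bounded in [-1, 1]
  }
  return out;
}

// Stage 2: "synthesizer" stand-in. Maps vocal-tract movements to a frame
// of audio samples (here: a crude sum of sinusoids, one per articulator).
function synthesize(articulators: Vector, samplesPerFrame: number): Vector {
  const audio: Vector = [];
  for (let t = 0; t < samplesPerFrame; t++) {
    let s = 0;
    for (let k = 0; k < articulators.length; k++) {
      s += articulators[k] * Math.sin((2 * Math.PI * (k + 1) * t) / samplesPerFrame);
    }
    audio.push(s / articulators.length);
  }
  return audio;
}

// One frame of "brain activity" in, one frame of audio out.
const brainFrame: Vector = [0.2, -0.5, 0.9, 0.1];
const tract = decodeToVocalTract(brainFrame, 6);
const audioFrame = synthesize(tract, 160); // e.g. one 10 ms frame at 16 kHz
```

The point of the two-stage design, per the study, is that going through the intermediate vocal-tract representation produced markedly better speech than decoding audio directly from brain activity.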


UK lawmakers to social media: “You’re accessories to radicalization, accessories to crimes”, hearing on spread of extremist content

Fatema Patrawala
29 Apr 2019
10 min read
Representatives from Facebook, YouTube, and Twitter were grilled and admonished on Tuesday, 23rd April, by UK lawmakers over the spread of extremist and criminal content on social media platforms. Facebook's Public Policy Officer, Neil Potts; Twitter's Head of UK Government, Public Policy and Philanthropy, Katy Minshall; and YouTube's Public Policy Director, Marco Pancini represented their companies before the UK Home Affairs Committee to discuss why social media companies are "actively" pushing their users to consume extremist content in order to drive up profits. The hearing was chaired by Yvette Cooper; other committee members included MP Stephen Doughty, MP Tim Loughton, MP Stuart McDonald, and other select members of home affairs.

The hearing was spurred by the spread of the graphic Christchurch shooting video, which the platforms struggled to contain. The shooter, who killed 50 people and injured 50 more at two mosques in New Zealand, live-streamed the attack on Facebook. Multiple versions of the attack then spread across various platforms, which the companies failed to take down. All three companies said in their responses that they identified and quickly removed the video only when the original was uploaded and shared on their platforms. The committee also acknowledged the preceding weekend's ban of social media in Sri Lanka in the wake of coordinated terrorist attacks that claimed over 250 lives and left many more seriously injured.

On extremist content takedown rates

The committee members slammed the companies for allowing hateful content to proliferate, and, especially in the case of YouTube, for actively promoting its visibility through recommendation algorithms. "You are making these crimes possible, you are facilitating these crimes," Chairwoman Yvette Cooper said. "Surely that is a serious issue." "What on Earth are you doing!?
You're accessories to radicalization, accessories to crimes," MP Stephen Doughty charged.

https://twitter.com/MarkDiStef/status/1121009074307465216
https://twitter.com/MarkDiStef/status/1121003221227646976

Facebook's Neil Potts repeated, on every point, the defense that the company now has 30,000 staff working on safety and security, including engineers building best-in-class AI algorithms, language and subject matter experts, and 15,000 content moderators. But when asked whether people spreading terrorist propaganda had been reported to the police, Mr Potts said proactive referrals were only made to law enforcement when there was an "imminent threat". In addition to removing the original live-streamed video, Facebook said it removed 1.5 million instances of the video within 24 hours of the attack, with 1.2 million of those blocked at upload.

Katy Minshall, representing Twitter, said 1.4 million tweets had been removed for promoting terrorism and that the social network actively enforces its rules rather than relying on reports. Twitter has 1,500 people working on policy enforcement and moderation around the world, and is removing more content but is "never going to get a 100% success rate", she said. She added: "There is a likely risk in the next few years that the better our tools get, the more users are removed, the more they will migrate to parts of the internet where nobody is looking."

Facebook's Neil Potts said he could not rule out that there were still versions of the Christchurch shooting on the platform. And YouTube's Marco Pancini acknowledged that the platform's recommendation algorithms were driving people towards more extremist content, even if that's not what they "intended."

On reporting crimes to law enforcement

Chairwoman Cooper was particularly upset after Facebook said it does not report all crimes to the police.
Potts said that Facebook reports crimes when there is a threat to life, and assesses crimes committed on the platform on a "case by case basis." Twitter and YouTube said they had similar policies. "There are different scales of crimes," Potts said. To which Cooper responded: "A crime is a crime... who are you to decide what's a crime that should be reported, and what crime shouldn't be reported?"

On algorithms recommending extremist or hateful content

MPs further took it upon themselves to test how YouTube's algorithm promotes extremist content. Prior to the hearing, they had searched terms like "British news," and in each case were directed to far-right, inflammatory content by the recommendation engine. "You are maybe being gamed by extremists, you are effectively providing a platform for extremists, you are enabling extremism on your platforms," Cooper said. "Yet you are continuing to provide platforms for this extremism, you are continuing to show that you are not keeping up with it, and frankly, in the case of YouTube, you are continuing to promote it. To promote radicalization that has huge damaging consequences to families, lives and to communities right across the country."

One committee member also accused YouTube, Facebook, and Twitter of "not giving a damn" about fuelling radicalisation in the wake of the massacres in Sri Lanka and New Zealand. MPs took particular aim at YouTube over the way its algorithms promote videos and create playlists for viewers, which they accused of becoming increasingly extreme. The site has been repeatedly criticised for showing a variety of inflammatory content in the recommendations pane next to videos; MPs said this could easily radicalize young people who begin by watching innocent videos.

On promoting radicalization being embedded into platform success

MP Tim Loughton said tests showed a benign search could end with "being signposted to a Nazi sympathiser group".
He added: "There seems to be a systemic problem that you are actively signposting and promoting extremist sites."

YouTube's representative responded that YouTube uses an algorithm to surface related and engaging content, so that users will stay on the site by clicking through videos. He did not reveal the details of that algorithm, but mentioned that it allows YouTube to generate profits by showing more advertising the longer users stay on the site. MPs described how that chain of related videos would become more and more extreme, even if the first video had been relatively innocuous. Ms Cooper described clicking through videos and finding that "with each one the next one being recommended for me was more extreme", going from right-wing sites to racist and radical accounts. "The algorithms that you profit from are being used to poison debate," she said.

Can prioritizing authoritative content for breaking news offset the effects of radicalization?

Marco Pancini explained that the logic behind the algorithms "works for 90 per cent of experience of users on the platform". He also said YouTube is "aware of the challenge this represents for breaking news and political speech", and is working to prioritise authoritative content and reduce the visibility of extremists. He pointed to the work the company has done to prioritise authoritative sources when people search for political speech or breaking news. Some of that has led to controversies of its own, such as when YouTube accidentally linked a video of the Notre Dame fire to video of the 9/11 attacks.

Mr Doughty accused YouTube of becoming "accessories to radicalisation" and crime, but Mr Pancini replied: "That is not our intention … we have changed our policies." He said the company was working with non-governmental organisations in 27 European countries to improve detection of offensive content.
On continuing to platform known extremist accounts and sites

MP Stephen Doughty said he had found links to the websites of "well-known international organisations", and videos calling for the stoning of gay people, on YouTube and other platforms. "Your systems are simply not working and, quite frankly, it's a cesspit," he added. "It feels like your companies really don't give a damn. You give a lot of words, you give a lot of rhetoric, you don't actually take action … all three of you are not doing your jobs."

Representatives of Facebook, Twitter, and YouTube said they had increased efforts against all kinds of extremism, using both automated technology and human moderators. MPs further pointed out that the Islamist militant group that carried out the church and hotel bombings that left more than 300 people dead in Sri Lanka still has a Twitter account, and that its YouTube channel was not deleted until two days after one of the world's deadliest terror attacks. Ms Cooper also showed reports that clips of the Christchurch shooter's Facebook Live video were still circulating, and said she had been recommended Tommy Robinson videos on YouTube after a supposed crackdown.

MP Stephen Doughty also revealed that, overnight, he had been alerted to weeks-old posts in a closed Facebook group with 30,000 members saying that she and her family should be shot as "criminals". "Kill them all, every f***ing one, military coup National Socialism year one – I don't care as long as they are eradicated," read another post that remained online. Ms Cooper accused the social media giants of "providing safe places to hide for individuals and organisations who spread hate".
On inaction, delay, and deflection tactics employed by social media

Yvette Cooper said the committee is "raising the same issues again and again over several years and it feels like now the government has lost trust and confidence in you that you are doing anything to sort this in anyway." When representatives of YouTube, Facebook, and Twitter outlined action taken against extremist content earlier this month, MPs countered by providing fresh examples of neo-Nazi, Islamist, and far-right posts on their platforms during the hearing. "We have taken evidence from your representatives several times over several years, and we feel like we are raising the same issues again and again," the former shadow home secretary added. "We recognise you have done some additional work but we are coming up time and again with so many examples of where you are failing, where you may be being gamed by extremists or where you are effectively providing a platform for extremism ... very little has changed." Ms Cooper said material on Facebook, Twitter, and YouTube "leads to people being hurt and killed".

The UK government has proposed the creation of an independent regulator to create a code of practice for tech companies and enforce it with fines and blocks. Representatives of Facebook, Twitter, and YouTube said they supported measures proposed in the Online Harms white paper, which is currently under consultation. They repeatedly insisted that they are working to increase both automated and human content moderation, building new tools and hiring thousands of employees. But lawmakers asserted that these are band-aids on systemic problems, and that extremists are using the services exactly as they were meant to be used: to spread and share content, ignite passions, and give everyone a platform.
Related reads:
Jack Dorsey engages in yet another tone deaf "public conversation" to better Twitter
Online Safety vs Free Speech: UK's "Online Harms" white paper divides the internet and puts tech companies in government crosshairs
Tech companies in EU to face strict regulation on Terrorist content: One hour take down limit; Upload filters and private Terms of Service


Uber introduces Base Web, an open source “unified” design system for building websites in React

Bhagyashree R
27 Apr 2019
2 min read
Uber's design and engineering team has introduced Base Web, a universal design system that was open sourced in 2018. Base Web is a suite of React components implementing the "base" design language, for quickly and easily creating web applications.

At Uber, developers, product managers, operations teams, and other employees interact with many different web applications on a daily basis. Because all of these applications work differently, there is extra overhead in learning how to interact with each of them effectively. To reduce this time and effort, Uber wanted a universal system that would act as "a foundation, a basis for initiating, evolving, and unifying web products". A universal design system helps teams of engineers, designers, and product managers work together easily. It also helps new engineers and designers quickly get the hang of the components and design tokens used by a given engineering organization.

One of the key reasons for introducing Base Web was to make it easy for developers to reuse components. After talking to its engineers, Uber's design and engineering team determined that they mainly needed access to:

Style customizations
The ability to modify the rendering of a component

So the team introduced a unified overrides API, which comes with the following benefits:

It eliminates top-level properties API overload.
There are no longer extra properties proxying inconsistently across the composable components.
It allows you to completely replace components.

Uber is now using Base Web across teams to create its web applications. "Open sourced in 2018 to enable others to experience the benefits of this solution, Base Web is now used across Uber, ensuring a seamless development experience across our web applications," reads the announcement. To read the official announcement, visit Uber's official website.
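The overrides idea (one object that can restyle or entirely replace a component's subcomponents, instead of many ad hoc top-level props) can be sketched without React. This is a hypothetical TypeScript illustration of the pattern, not Base Web's actual API; the component, prop, and function names below are invented for the example.

```typescript
// A "renderer" for a subcomponent: given props, produce markup.
type Renderer = (props: { label: string; style: Record<string, string> }) => string;

// The single overrides object: each subcomponent entry can carry a
// style customization, a full replacement component, or both.
interface Overrides {
  Label?: {
    style?: Record<string, string>; // style customization
    component?: Renderer;           // full component replacement
  };
}

// Default rendering of the Label subcomponent.
const defaultLabel: Renderer = ({ label, style }) =>
  `<span style="${Object.entries(style).map(([k, v]) => `${k}:${v}`).join(";")}">${label}</span>`;

// The host component takes ONE overrides prop rather than a pile of
// top-level props proxied down to its subcomponents.
function renderButton(label: string, overrides: Overrides = {}): string {
  const style = { color: "black", ...(overrides.Label?.style ?? {}) };
  const Label = overrides.Label?.component ?? defaultLabel;
  return `<button>${Label({ label, style })}</button>`;
}

// Default rendering:
const plain = renderButton("Save");
// Restyle a subcomponent without touching its logic:
const styled = renderButton("Save", { Label: { style: { color: "red" } } });
// Replace the subcomponent entirely:
const replaced = renderButton("Save", {
  Label: { component: ({ label }) => `<b>${label}</b>` },
});
```

The design point is that every subcomponent is reachable through one uniform surface, so the host component's prop list stays small no matter how deeply customizable it is.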
Uber open-sources Peloton, a unified Resource Scheduler Introducing ‘Quarkus’, a Kubernetes native Java framework for GraalVM & OpenJDK HotSpot Uber and Lyft drivers strike in Los Angeles  

Facebook open-sources F14 algorithm for faster and memory-efficient hash tables

Amrata Joshi
27 Apr 2019
7 min read
On Thursday, the team at Facebook open sourced F14, an algorithm for faster and more memory-efficient hash tables. Hash tables provide a fast way to maintain a set of keys, or to map keys to values, even when the keys are objects such as strings. With F14, the team at Facebook also aimed to simplify the process of selecting the right hash table. The name refers to the 14-way probing hash table within Folly, Facebook's open source library of C++ components. These F14 hash tables perform better than the previous tools the team had, and according to Facebook, F14 can be used as a default for all sorts of use cases.

There are many factors to consider when choosing a C++ hash table: whether long-lived references or pointers to the entries will be kept, the size of the keys, the size of the tables, and so on. The team at Facebook suggests that developers who aren't planning to keep long-lived references to entries start with folly::F14FastMap/Set; otherwise, they can opt for folly::F14NodeMap/Set.

Challenges with the existing hash table algorithms

A hash table usually starts by computing a numeric hash code for each key and uses that number to index into an array. The hash code for a key remains the same, and hash codes for different keys are likely to differ, so keys end up distributed randomly across the slots in the array. Still, there is a chance of collisions between keys that map to the same array index.

Using the chaining method

Most hash table algorithms handle these collisions by chaining: a secondary data structure, such as a linked list, stores all the keys that hash to a given slot. Probing-based tables instead store the keys directly in the main array and check further slots if there is a collision.
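As a minimal illustration of the chaining idea (a toy Python sketch, not folly's C++ implementation; the class name and slot count are arbitrary):

```python
# Toy chained hash table: each array slot holds a bucket (list) of the keys
# that collide on that slot. Illustration of chaining only, not F14's code.

class ChainedSet:
    def __init__(self, nslots=8):
        self.buckets = [[] for _ in range(nslots)]

    def _bucket(self, key):
        # hash code -> slot index; different keys may collide on one slot
        return self.buckets[hash(key) % len(self.buckets)]

    def add(self, key):
        bucket = self._bucket(key)
        if key not in bucket:
            bucket.append(key)

    def __contains__(self, key):
        # a lookup only scans the one bucket the key hashes to
        return key in self._bucket(key)

s = ChainedSet()
for word in ["foo", "bar", "baz"]:
    s.add(word)
```

Lookups stay fast as long as buckets stay short, which is exactly what the load factor discussed next controls.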
Dividing the number of keys by the size of the main array gives a number called the load factor, a measure of the hash table's fullness. Decreasing the load factor by making the main array larger reduces the number of collisions, but the trade-off is wasted memory.

Using the STL method

The standard template library (STL) for C++ provides hash tables via std::unordered_map and std::unordered_set. These guarantee reference stability: references and pointers to the keys and values in the hash table remain valid until the corresponding key is removed. In practice this means the entries must be indirect and individually allocated, which adds substantial CPU overhead. Folly comes with a fast C++ class without reference stability and a slower C++ class that allocates each entry in a separate node. According to Facebook, the node-based version is not fully standard compliant, but it is still compatible with the standard version for all code.

F14 reduces collisions with vector instructions

F14 provides practical improvements in both performance and memory by using the vector instructions available on modern CPUs. Instead of mapping a key to a single slot, F14 uses the hash code to map the key to a chunk of slots and then searches within the chunk in parallel. For this intra-chunk search, F14 uses vector instructions (SSE2 or NEON) that filter all the slots of the chunk at the same time; the team at Facebook named the algorithm F14 because it filters 14 slots at once. Collision resolution is only needed when a chunk overflows or when two keys both pass the filtering step, and this chunking strategy keeps the collision rate low.
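To make the chunk-and-filter idea concrete, here is a rough Python model. In folly the per-slot tag comparison is a single vector instruction over all 14 slots; here it is an ordinary list scan, and the way the tag byte is derived from the hash is invented for illustration:

```python
# Sketch of F14-style chunked lookup: a key maps to a chunk of 14 slots, and a
# one-byte "tag" derived from the hash filters the chunk's slots in one pass.
# In folly, the filtering step is done with SSE2/NEON vector instructions.

CHUNK_SIZE = 14

def split_hash(key, nchunks):
    h = hash(key)
    return (h % nchunks,          # which chunk the key belongs to
            (h >> 16) & 0xFF)     # illustrative tag byte stored per slot

def find_in_chunk(chunk, key, tag):
    # Step 1: "vector" filter -- keep only slots whose stored tag matches.
    candidates = [slot_key for slot_tag, slot_key in chunk if slot_tag == tag]
    # Step 2: confirm with full key comparisons (rarely more than one candidate).
    return key in candidates

# Build a tiny table of (tag, key) slots grouped into chunks.
nchunks = 4
chunks = [[] for _ in range(nchunks)]
for k in ["alpha", "beta", "gamma"]:
    c, tag = split_hash(k, nchunks)
    chunks[c].append((tag, k))
```

Because the tag filter eliminates almost all non-matching slots cheaply, the expensive full key comparison usually runs at most once per lookup.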
To explain this better: the chance that 15 of the table's keys would map to a chunk with 14 slots is much lower than the chance that two keys would map to one slot. For instance, imagine you are in a room with 180 people. The chance that one other person has the same birthday as you is about 40 percent, but the chance that 14 people were born in the same fortnight as you is much lower than 1 percent. Chunking keeps the collision rate low even for load factors above 80 percent. Even if there were 300 people in the room, the chance of a fortnight "overflow" is still less than 5 percent.

F14 uses reference-counted tombstones for empty slots

Most strategies for reducing collisions keep looking at slots until they find an empty one, which makes deletion tricky. The algorithm must either leave a tombstone (an empty slot that doesn't terminate the probe search) or slide the later keys in the probe sequence down, which is complex and difficult to execute. In workloads that mix inserts and erases, tombstones accumulate, and accumulated tombstones increase the effective load factor from a performance perspective.

F14 uses a strategy that acts like reference-counted tombstones. It is based on an auxiliary bit per slot suggested by Amble and Knuth in their 1974 article "Ordered hash tables": the bit is set whenever the insertion routine passes a slot that is already full, recording that the slot has overflowed. A tombstone then corresponds to an empty slot with the overflow bit set. The overflow bit makes searches faster, because a search for a key can stop at a full slot whose overflow bit is clear, even if the following slot is not empty. The team at Facebook extended this to counting the number of active overflows: the overflow bits are set when a displaced key is inserted, rather than cleared when the key that did the displacing is removed.
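A toy model of the overflow-counting idea described above (illustrative Python, not folly's C++; the chunk layout and probe sequence are heavily simplified):

```python
# Toy model of F14-style overflow counting. Keys map to a "chunk" of up to 14
# slots; when a chunk is full, the key is displaced to a later chunk, and every
# full chunk it passed records the overflow so searches can stop early.

class ToyF14:
    CHUNK = 14

    def __init__(self, nchunks=8):
        self.chunks = [[] for _ in range(nchunks)]   # each holds up to 14 keys
        self.overflow = [0] * nchunks                # keys displaced past here

    def _probe(self, key):
        # deterministic probe sequence over chunks, starting at the home chunk
        h = hash(key)
        for i in range(len(self.chunks)):
            yield (h + i) % len(self.chunks)

    def insert(self, key):
        for c in self._probe(key):
            if len(self.chunks[c]) < self.CHUNK:
                self.chunks[c].append(key)
                return
            self.overflow[c] += 1   # a displaced key passed this full chunk

    def contains(self, key):
        for c in self._probe(key):
            if key in self.chunks[c]:
                return True
            # stop early: no key was ever displaced past a chunk with room,
            # or past a full chunk whose overflow count is zero
            if len(self.chunks[c]) < self.CHUNK or self.overflow[c] == 0:
                return False
        return False

    def erase(self, key):
        passed = []
        for c in self._probe(key):
            if key in self.chunks[c]:
                self.chunks[c].remove(key)
                for p in passed:
                    self.overflow[p] -= 1   # no key relies on these any more
                return True
            if len(self.chunks[c]) < self.CHUNK or self.overflow[c] == 0:
                return False
            passed.append(c)
        return False
```

The early-exit test in `contains` is the payoff: a search never scans past a chunk that no displaced key ever crossed.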
This scheme makes it easy to keep track of the number of keys relying on an overflow bit. Each F14 chunk uses one byte of metadata to count the keys that wanted to be placed in the chunk but are currently stored in a different chunk. When a key is erased, it decrements the overflow counter on every chunk along its probe sequence, cleaning them up.

F14 optimizes memory effectively

Reducing memory waste improves performance by letting more of a program's data fit in cache. The two common strategies for hash table memory layout are indirect storage, where the main array holds a pointer, and direct storage, where the memory of the keys and values is incorporated directly into the main hash array. F14 offers pointer-indirect (F14Node) and direct-storage (F14Value) versions, as well as an index-indirect one (F14Vector). The STL container std::unordered_set never wastes any data space, since it waits until the last moment to allocate nodes. F14NodeSet performs a separate memory allocation for every value, like std::unordered_set, but stores pointers to the values in chunks and uses F14's probing collision resolution, so there are no chaining pointers and no per-insert metadata. F14ValueSet stores the values inline and has less data waste because it can use a higher maximum load factor, so it achieves memory efficiency easily.

While testing, the team encountered unit test failures and production crashes in code that inadvertently depended on entry order, so they randomized the behavior in debug builds: F14 now randomly chooses among all the empty slots in a chunk when inserting a new key. The entry order isn't completely randomized, but according to the team, the shuffle is good enough to catch a regressing test within only a few runs. To know more about this news, check out Facebook's post.
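As a footnote to the chunking discussion, Facebook's birthday-and-fortnight analogy is easy to check numerically. This standalone sketch computes both probabilities for a room of 180 people with an exact binomial sum (assumption: a 365-day year and independent, uniformly distributed birthdays):

```python
# Checking the collision analogy: with 179 other people in the room, how likely
# is it that (a) at least one shares your exact birthday, versus (b) at least
# 14 of them land in your 14-day fortnight (a chunk "overflow")?
from math import comb

OTHERS = 179            # a room of 180 people, minus you
P_DAY = 1 / 365
P_FORTNIGHT = 14 / 365

# (a) at least one of 179 people lands on your exact birthday
p_same_birthday = 1 - (1 - P_DAY) ** OTHERS

# (b) binomial tail: at least 14 of 179 land in your fortnight
p_fortnight_overflow = 1 - sum(
    comb(OTHERS, k) * P_FORTNIGHT**k * (1 - P_FORTNIGHT) ** (OTHERS - k)
    for k in range(14)
)

print(f"shared birthday:    {p_same_birthday:.1%}")       # roughly 40%
print(f"fortnight overflow: {p_fortnight_overflow:.2%}")  # around 1%
```

The overflow probability is an order of magnitude smaller than the single-slot collision probability, which is the intuition behind F14's 14-slot chunks.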
New York AG opens investigation against Facebook as Canada decides to take Facebook to Federal Court for repeated user privacy violations Facebook shareholders back a proposal to oust Mark Zuckerberg as the board’s chairperson Facebook sets aside $5 billion in anticipation of an FTC penalty for its “user data practices”

Python in Visual Studio Code released with enhanced Variable Explorer, Data Viewer, and more!

Amrata Joshi
27 Apr 2019
3 min read
This week, the team behind the Python extension for Visual Studio Code announced a new release, which comes with an enhanced Variable Explorer and Data Viewer and improvements to the Python Language Server.

What's new in Python in Visual Studio Code?

Enhanced Variable Explorer and Data Viewer

This release comes with a built-in Variable Explorer and a Data Viewer, which help users easily view, inspect, and filter the variables in their application, including lists, NumPy arrays, pandas data frames, and more. A variables section now appears when running code and cells in the Python Interactive window. On expanding it, users see a list of the variables in the current Jupyter session; more variables show up automatically as they get used in the code, and the list can be sorted by clicking on each column header. Users can double-click on a row, or use the "Show variable in Data Viewer" button, to view the full data of a variable in the newly added Data Viewer, and can perform a simple search over its values.

Improvements to debug configuration

In this release, the process of configuring the debugger has been simplified. If a user starts debugging through the Debug Panel and no debug configuration exists, they are now prompted to create one. Instead of manually configuring the launch.json file, users can now create a debug configuration through a set of menus.

Improvements to the Python Language Server

This release also comes with fixes and improvements to the Python Language Server. The team has added back the features that were removed in the 0.2 release, including "Rename Symbol", "Go to Definition", and "Find All References". Loading time and memory usage have also been improved when importing scientific libraries such as pandas, Plotly, and PyQt5, especially when running in full Anaconda environments.
Read Also: Visualizing data in R and Python using Anaconda [Tutorial]

Major changes

The debugger's default behavior has been changed to display return values.
"Unit Test" has been renamed to "Test" or "Testing".
The debugStdLib setting has been replaced with justMyCode.
This release adds a setting to enable/disable the data science code lens.
The reliability of test discovery when using pytest has been improved.

Bug fixes

Issues with cell spacing have been resolved.
Problems with errors not showing up for imports have been fixed.
Issues with tabs in the comments section have been fixed.

To know more about this news, check out Microsoft's official blog post.

Mozilla introduces Pyodide, a Python data science stack compiled to WebAssembly Microsoft introduces Pyright, a static type checker for the Python language written in TypeScript Debugging and Profiling Python Scripts [Tutorial]


AKQA, a global innovation agency, introduces Speedgate, an AI-designed outdoor sport

Bhagyashree R
26 Apr 2019
2 min read
Earlier this month, AKQA, a global innovation agency, introduced a new outdoor sport called Speedgate, created by an AI system the agency built. This AI system was trained on more than 400 sports, including rugby, soccer, and football, to form the rules and regulations for Speedgate.

In Speedgate, each team has six players, consisting of forwards and defenders. Teams score goals by kicking the ball through two consecutive gates: kicking the ball through the center gate unlocks the goal gate, and once the center gate is unlocked, the team in possession can score by kicking the ball through the end gate in either direction. Here's a video showing how the game actually works: https://www.youtube.com/watch?v=Uj4CQiuX8GM&feature=youtu.be

Developers at AKQA trained a recurrent neural network and a deep convolutional generative adversarial network on over 400 sports, using NVIDIA Tesla GPUs for both training and inference. Additionally, a neural network was trained on 10,400 logos to generate the official Speedgate logo. The model generated over 1,000 different sport concepts. Though many of them were interesting, the team wanted the AI system to come up with a game that, in addition to being fun and easy to understand, was also active and accessible. Speedgate checked all those boxes for them.

Kathryn Webb, AI Practice Lead at AKQA, said, "GPU technology helped us to condense training and generation phases down to a fraction of what they would've been. We would not have been able to achieve so many unique ML contributions in the project without that speed. It gave us more time to test, learn and adapt, and ultimately helped to produce the best final result." To read more in detail, visit AKQA's official website.
OpenAI Five beats pro Dota 2 players; wins 2-1 against the gamers Google announces Stadia, a cloud-based game streaming service, at GDC 2019 Microsoft announces Game stack with Xbox Live integration to Android and iOS  


OpenAI introduces MuseNet: A deep neural network for generating musical compositions

Bhagyashree R
26 Apr 2019
4 min read
OpenAI has built a new deep neural network called MuseNet for composing music, the details of which it shared in a blog post yesterday. The research organization has made a prototype of a MuseNet-powered co-composer available for users to try until May 12th. https://twitter.com/OpenAI/status/1121457782312460288

What is MuseNet?

MuseNet uses the same general-purpose unsupervised technology as OpenAI's GPT-2 language model: the Sparse Transformer. This transformer allows MuseNet to predict the next note from a given set of notes. To make this efficient, the Sparse Transformer uses something called "sparse attention", where each output position computes weightings from only a subset of input positions. For audio pieces, a 72-layer network with 24 attention heads is trained using the recompute and optimized kernels of the Sparse Transformer. This gives the model a long context, which enables it to remember long-term structure in a piece.

For training the model, the researchers collected data from various sources. The dataset includes MIDI files donated by ClassicalArchives and BitMidi, as well as online collections covering Jazz, Pop, African, Indian, and Arabic styles. The model is capable of generating 4-minute musical compositions with 10 different instruments, and it is aware of different music styles from composers like Bach, Mozart, the Beatles, and more. It can also convincingly blend different music styles to create a completely new piece.

The MuseNet prototype made available for users to try only comes with a small subset of options. It supports two modes:

In simple mode, users can listen to uncurated samples generated by OpenAI. To generate a music piece, you just choose a composer or style and an optional start of a famous piece.
In advanced mode, users can interact directly with the model. Generating music in this mode takes much longer but gives an entirely new piece.
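The "sparse attention" idea, where each output position attends to only a subset of input positions, can be sketched with a toy strided mask (illustrative Python in the spirit of the Sparse Transformer's strided pattern; the window and stride values are arbitrary, and the real kernels are far more involved):

```python
# Toy strided sparse-attention mask: position i may attend to position j only
# if j is recent (within a local window) or lies on a fixed stride, instead of
# attending to every previous position as a dense causal mask would.

def strided_mask(n, window=4, stride=4):
    mask = [[False] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):                      # causal: past positions only
            if i - j < window or j % stride == stride - 1:
                mask[i][j] = True
    return mask

m = strided_mask(8)
# each row allows far fewer positions than the dense causal mask's i+1 entries
```

Because each position only attends to O(window + n/stride) others rather than O(n), attention cost grows much more slowly with sequence length, which is what makes the long musical context practical.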
Here's what the advanced mode looks like: Source: OpenAI

What are its limitations?

The music generation tool is still a prototype, so it does have some limitations:

To generate each note, MuseNet calculates the probabilities across all possible notes and instruments. Though the model gives more priority to your instrument choices, there is a possibility that it will choose something else.
MuseNet finds it difficult to generate a music piece for odd pairings of styles and instruments. The generated music sounds more natural if you pick instruments closest to the composer or band's usual style.

Many users have already started testing the model. While some are quite impressed by the AI-generated music, others think it is evident that the music is machine generated and lacks an emotional quality. Here's an opinion on different music styles shared by a Redditor: "My take on the classical parts of it, as a classical pianist. Overall: stylistic coherency on the scale of ~15 seconds. Better than anything I've heard so far. Seems to have an attachment to pedal notes. Mozart: I would say Mozart's distinguishing characteristic as a composer is that every measure "sounds right". Even without knowing the piece, you can usually tell when a performer has made a mistake and deviated from the score. The Mozart samples sound... wrong. There are parallel 5ths everywhere. Bach: (I heard a bach sample in the live concert) - It had roughly the right consistency in the melody, but zero counterpoint, which is Bach's defining feature. Conditioning maybe not strong enough? Rachmaninoff: Known for lush musical textures and hauntingly beautiful melodies. The samples got the texture approximately right, although I would describe them more as murky more than lush. No melody to be heard." Another user commented, "This may be academically interesting, but the music still sounds fake enough to be unpleasant (i.e.
there's no way I'd spend any time listening to this voluntarily)."

Though this model is in its early stages, an important question that comes to mind is who will own the generated music. "When discussing this with my friends, an interesting question came up: Who owns the music this produces? Couldn't one generate music and upload that to Spotify and get paid based off the number of listens?" another user added.

To know more in detail, visit OpenAI's official website. Also, check out an experimental concert by MuseNet that was live-streamed on Twitch. OpenAI researchers have developed Sparse Transformers, a neural network which can predict what comes next in a sequence OpenAI Five bots destroyed human Dota 2 players this weekend OpenAI Five beats pro Dota 2 players; wins 2-1 against the gamers


GitLab 11.10 releases with enhanced operations dashboard, pipelines for merged results and much more!

Amrata Joshi
26 Apr 2019
3 min read
Yesterday, the team at GitLab released GitLab 11.10, a web-based DevOps lifecycle tool. This release comes with new features including pipelines on the operations dashboard, pipelines for merged results, and much more.

What's new in GitLab 11.10?

Enhanced operations dashboard

GitLab 11.10 enhances the operations dashboard with an overview of pipeline status. This is useful when looking at a single project's pipeline as well as for multi-project pipelines. With this release, users get instant visibility at a glance into the health of all of their pipelines from the operations dashboard.

Run pipelines against merged results

Users can now run pipelines against the merged result prior to merging. This catches errors earlier, for much quicker resolution of pipeline failures and more efficient usage of GitLab Runners. The merge request pipeline automatically creates a new ref containing the combined merge result of the source and target branch, so the pipeline validates that the combined result is actually valid.

Scoped labels

Scoped labels allow teams to apply mutually exclusive labels to issues, merge requests, and epics, enabling custom workflow states. These labels are configured using a special double-colon syntax in the label title.

Some users think the new updates won't be as successful as the team expects, and that there is still a long way to go for GitLab. A user commented on Hacker News, "Can't help but feel that their focus on moving all operation insights into gitlab itself will not be as successful as they want it to be (as far as I read, their goal was to replace their operations and monitoring tools with gitlab itself[1]). I've worked with the ultimate edition for a year and the kubernetes integration is nowhere close to the insight you would get from google, amazon or azure in terms of insight and integration with ops-land.
I wish all these hours were spent on improving the developer lifecycle instead." Others are happy with this news and think that GitLab has progressed well. A comment reads, "GitLab has really come a long way in the past few years. The days of being a github-alike are long gone. Very happy to see them continue to find success." To know more about this news, check out GitLab's post. GitLab considers moving to a single Rails codebase by combining the two existing repositories Introducing GitLab Serverless to deploy cloud-agnostic serverless functions and applications GitLab 11.7 releases with multi-level child epics, API integration with Kubernetes, search filter box and more

Google announces new policy changes for employees to report misconduct amid complaints of retaliation and harassment

Sugandha Lahoti
26 Apr 2019
4 min read
Google has announced new policy changes to address employee concerns about misconduct and harassment, following a series of massive protests. The changes were announced in an email sent to Googlers by Melonie Parker, Google's global director of diversity, equity, and inclusion. Later yesterday, a blog post was published publicly introducing the new policy updates. "We want every Googler to walk into a workplace filled with dignity and respect," Parker wrote in the email.

Google's misconduct-related policy updates

First, the company is building a new dedicated website to help employees raise issues of misconduct and harassment in a simpler and clearer way. It will also provide a similar site for temp and vendor workers, scheduled to go live by June. Google has also internally published its fifth annual Investigations Report, a summary of employee-related misconduct investigations, along with a new Investigations Practice Guide outlining how concerns are handled within Employee Relations, to explain what employees can expect during the investigations process. What is shared publicly is Google's workplace policies on harassment, discrimination, retaliation, standards of conduct, and workplace conduct.

Google is also expanding its Support Person Program, where Googlers can bring a trusted colleague to their harassment and discrimination investigations, and has rolled out a new Investigations Care Program to provide better care to Googlers during and after an investigation.

Google's workplace issues since the past year

In November last year, 20,000 Google employees, along with temps, vendors, and contractors, walked out to protest the discrimination, racism, and sexual harassment they encountered at Google's workplace. The walkout was planned after The New York Times brought to light shocking allegations of sexual misconduct at Google against Andy Rubin, the creator of Android.
In the six months since the walkout, Google's workplace issues have steadily continued. Just last week, two walkout organizers accused the company of retaliating against them over the protest. The two employees, YouTube marketing manager Claire Stapleton and Meredith Whittaker, the head of Google's Open Research group, were told their roles would change dramatically, including calls to abandon AI ethics work, demotion, and more. Yesterday, an unidentified individual filed a complaint with the National Labor Relations Board accusing Google of violating federal law by retaliating against an employee. According to Bloomberg, the case involves an alleged violation of a New Deal-era ban on punishing employees for involvement in collective action related to working conditions.

Google has so far partially acknowledged only one of the walkout organizers' original demands: ending forced arbitration for all its full-time employees, though not for Google's temporary and contract workers. It has also lifted the ban on class action lawsuits for employees.

The complete demands laid out by the Google employees are as follows:

An end to forced arbitration in cases of harassment and discrimination for all current and future employees.
A commitment to end pay and opportunity inequity.
A publicly disclosed sexual harassment transparency report.
A clear, uniform, and globally inclusive process for reporting sexual misconduct safely and anonymously.
Elevate the Chief Diversity Officer to answer directly to the CEO and make recommendations directly to the Board of Directors, and appoint an employee representative to the Board.

Yet the fight is not over for the walkout organizers. Two of their original demands, putting an employee representative on the company's board of directors and having the chief diversity officer report directly to the CEO, have received no response from Google.
Currently, Melonie Parker reports to VP of people operations Eileen Naughton instead of CEO Sundar Pichai. There were some scathing comments about Naughton from an ex-Googler who quit earlier this year on ethical grounds. https://twitter.com/mcmillen/status/1121418157313409024 https://twitter.com/mcmillen/status/1121420553787723776 People have also raised questions about how Google is going to make up for its misconduct toward ex-employees. https://twitter.com/VidaVakil/status/1121581294972878848 The Tech Workers Coalition added, "What we see here is Google management scrambling to placate workers in the face of serious claims that the company retaliates." Google employees 'Walkout for Real Change' today. These are their demands. #GoogleWalkout organizers face backlash at work, tech workers show solidarity Google TVCs write an open letter to Google's CEO; demands for equal benefits and treatment.


Stripe updates its product stack to prepare European businesses for SCA-compliance

Bhagyashree R
26 Apr 2019
3 min read
On Tuesday, Stripe, the online payments platform provider, announced that it has upgraded its products to be compliant with Strong Customer Authentication (SCA) under the second Payment Services Directive (PSD2). The announcement comes just after Stripe confirmed its acquisition of Touchtech Payments, a Dublin-based payments start-up that provides advanced SCA-compliant authentication technology for Europe's fintechs and challenger banks, like N26, TransferWise, and many more.

From 14 September 2019, online payments in Europe will be required to comply with SCA, a new European regulation introduced to reduce fraud and make online payments safer. It applies to customer-initiated online payments within Europe, which includes most card payments and all bank transfers. To be SCA-compliant, an online payments platform needs an additional authentication step in its payment flow, verifying at least two of the following:

Something the customer knows, like a password or PIN
Something the customer has, like a phone or hardware token
Something the customer is, like a fingerprint or face recognition

Making online payment platforms compliant with this regulation will not be an easy task for individual banks and payment providers across Europe. Additionally, a new authentication step can add friction to payments and hinder the user experience. To ease this process, the Stripe payments platform will take on the responsibility of analyzing each transaction to check whether additional authentication is required. If it is, Stripe will authenticate the transaction with the appropriate technology. Updates have been made to the following products:

The Payment Intents API

The new Payment Intents API enables businesses to easily build SCA-compliant, fully customized, dynamic payment flows.
This API tracks the state of a payment and triggers additional authentication when needed.

Upgraded Stripe Checkout

Stripe Checkout, a smart payments page, enables businesses to start accepting payments with just a few lines of code. The latest version of Stripe Checkout dynamically detects when SCA is required and triggers authentication when necessary. Dynamic 3D Secure provides this additional layer of authentication for credit card transactions.

3D Secure 2 support

Stripe supports 3D Secure 2 on the new Payment Intents API and Checkout. 3D Secure 2 aims to address the limitations of 3D Secure 1 by introducing "less disruptive authentication and better user experience." With this authentication process, businesses and their payment providers can send more data elements on each transaction to the cardholder's bank. This data may include payment-specific information like the shipping address, the customer's device ID, or previous transaction history. The cardholder's bank can then use this data to calculate the risk level of the transaction and respond accordingly.

Upgraded Stripe Billing

Billing makes the recurring billing process smoother for SaaS and subscription-based companies. Along with SCA compliance, the company announced that the product is now available to all businesses in Europe. Tara Seshan, product manager for Stripe Billing, said in a press release, "With Stripe Billing, companies of all sizes now have access to advanced invoicing tools that will also help them comply with SCA and VAT requirements." In the next few weeks, the company plans to roll out tools in the Stripe Dashboard to make businesses already using Stripe ready for SCA. Read the official announcement on Stripe's website.
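To illustrate where the extra SCA step fits, here is a rough, hypothetical Python sketch of the server-side decision a Payment Intents integration makes. The status strings mirror Stripe's documented values at the time, but treat the function and flow as illustrative rather than as Stripe's SDK:

```python
# Illustrative sketch (not Stripe's SDK): deciding the next step from a
# PaymentIntent-style status dict, as an integration using an API such as
# Stripe's Payment Intents API would after each create/confirm call.

def next_action(intent):
    """Map a payment intent's status to what the integration should do next."""
    status = intent["status"]
    if status == "requires_action":
        # SCA kicked in: send the customer through 3D Secure authentication
        return "authenticate_customer"
    if status == "requires_payment_method":
        # the attempted payment method failed; collect another from the customer
        return "collect_new_payment_method"
    if status == "succeeded":
        return "fulfill_order"
    return "wait"   # e.g. "processing": poll or rely on webhooks

print(next_action({"status": "requires_action"}))   # -> authenticate_customer
```

The point of the API design is that the same loop handles both frictionless payments (straight to "succeeded") and SCA-triggered ones (a "requires_action" detour) without separate code paths.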
Former Google Cloud CEO joins Stripe board just as Stripe joins the global Unicorn Club Stripe open sources ‘Skycfg’, a configuration builder for Kubernetes 3D Secure v2: a new authentication protocol supported by Stripe for frictionless authentication and better user experience


Facebook sets aside $5 billion in anticipation of an FTC penalty for its “user data practices”

Savia Lobo
25 Apr 2019
4 min read
Yesterday, in its first-quarter financial report, Facebook revealed that it has set aside $3 billion toward an anticipated fine of up to $5 billion from the US Federal Trade Commission (FTC). The charge is "in connection with the inquiry of the FTC into our platform and user data practices", the company said. The company noted in its report that the expense resulted in a 51% year-over-year decline in net income, to just $2.4bn. Without this one-time expense, Facebook's earnings per share would have beaten analyst expectations, and its operating margin (22%) would have been 20 points higher. Facebook said, "We estimate that the range of loss in this matter is $3.0bn to $5.0bn. The matter remains unresolved, and there can be no assurance as to the timing or the terms of any final outcome."

In the wake of the Cambridge Analytica scandal, the FTC commenced its investigation into Facebook's privacy practices in March last year. The investigation focused on whether the data practices that allowed Cambridge Analytica to obtain Facebook user data violated the company's 2011 agreement with the FTC. "Facebook and the FTC have reportedly been negotiating over the settlement, which will dwarf the prior largest penalty for a privacy lapse, a $22.5m fine against Google in 2012", The Guardian reports. Read Also: European Union fined Google 1.49 billion euros for antitrust violations in online advertising

"Levying a sizable fine on Facebook would go against the reputation of the United States of not restraining the power of big tech companies", The New York Times reports. Justin Brookman, a former official at the regulator who is currently a director of privacy at Consumers Union, a nonprofit consumer advocacy group, said, "The F.T.C.
is really limited in what they can actually do in enforcing a consent decree, but in the case of Facebook, they had public pressure on their side.” Christopher Wylie, a Research director at H&M and the Cambridge Analytica Whistleblower, voiced against Facebook by tweeting, “Facebook, you banned me for whistleblowing. You threatened @carolecadwalla and the Guardian. You tried to cover up your incompetent conduct. You thought you could simply ignore the law. But you can’t. Your house of cards will topple.” https://twitter.com/chrisinsilico/status/1121150233541525505 Senator Richard Blumenthal, Democrat of Connecticut, mentioned in a tweet, “Facebook must be held accountable — not just by fines — but also far-reaching reforms in management, privacy practices, and culture.” Debra Aho Williamson, an e-marketer analyst, warned that the expectation of an FTC fine may portend future trouble. “This is a significant development, and any settlement with the FTC may impact the ways advertisers can use the platform in the future,” she said. Jessica Liu, a marketing analyst for Forrester said that Facebook has to show signs that it’s improving on user data practices and content management. “Its track record has been atrocious. No more platitudes. What action is Facebook Inc actually taking?” “For Facebook, a $5 billion fine would amount to a fraction of its $56 billion in annual revenue. Any resolution would also alleviate some of the regulatory pressure that has been intensifying against the company over the past two and a half years”, the New York Times reports. To know more about this news in detail visit Facebook’s official press release. 
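The reported figures are consistent with simple back-of-the-envelope arithmetic. A minimal sketch, using only the numbers stated in the report (the function name is invented for illustration):

```python
# Back-of-the-envelope check of the figures reported above:
# Q1 net income of $2.4bn after a 51% year-over-year decline.

def implied_prior_year_income(current_income, decline_pct):
    """Infer last year's net income from this year's figure and the decline."""
    return current_income / (1 - decline_pct)

prior = implied_prior_year_income(2.4, 0.51)
print(round(prior, 1))  # roughly 4.9 (billion USD)
```

This puts Facebook's prior-year quarterly net income at roughly $4.9bn, which is the baseline against which the one-time expense is being measured.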
Facebook hires a new general counsel and a new VP of global communications even as it continues with no Chief Security Officer
Facebook shareholders back a proposal to oust Mark Zuckerberg as the board’s chairperson
“Is it actually possible to have a free and fair election ever again?,” Pulitzer finalist, Carole Cadwalladr on Facebook’s role in Brexit
Read more
  • 0
  • 0
  • 2088
article-image-brave-launches-its-brave-ads-platform-sharing-70-of-the-ad-revenue-with-its-users
Bhagyashree R
25 Apr 2019
4 min read
Save for later

Brave launches its Brave Ads platform sharing 70% of the ad revenue with its users

Bhagyashree R
25 Apr 2019
4 min read
In January this year, Brave announced that it was previewing its new advertising feature, Brave Ads. Yesterday, it opened this feature to all users of its desktop browser for macOS, Windows, and Linux. Brave Ads is an opt-in digital advertising feature built with user privacy in mind.

https://twitter.com/brave/status/1121081425254473728

We have seen many pay-to-surf sites before, but most of them disappeared with the dot-com bubble. However, Brendan Eich, the CEO and co-founder of Brave Software, is confident about his plan. He said, “With Brave Ads, we are launching a digital ad platform that is the first to protect users’ data rights and to reward them for their attention.” He further added, “Brave Ads also aims to improve the economics and conversion of the online advertising industry, so that publishers and advertisers can thrive without the intermediaries that collect huge fees and that contribute to web-wide surveillance. Privacy by design and no tracking are integral to our mission to fix the Web and its funding model.”

Brave is working with various ad networks and brands to create the Brave Ads catalog inventory. These catalogs are pushed to available devices on a recurring basis. The ads for these catalogs are supplied by Vice, Home Chef, Ternio BlockCard, MyCrypto, eToro, BuySellAds, TAP Network, AirSwap, Fluidity, and Uphold.

How do Brave Ads work?

Brave is based on Chromium and blocks tracking scripts and other technologies that spy on your online activity, so advertisements are generally not shown by default in the Brave browser. Brave Ads puts users in control by letting them decide how many ads they would like to see. It protects user privacy by doing ad matching directly on the user’s device, so that personal data is not leaked to anyone.

Of the revenue generated by viewing these ads, users get a 70% share and the remaining 30% goes to Brave. This 70% cut is estimated to be about $5 per month, according to Eich. Users are paid in Brave’s bitcoin-style “cryptocurrency”, Basic Attention Tokens (BAT), which they can claim at the close of every Brave Rewards monthly cycle.

To view Brave Ads, users need to enable Brave Rewards from the Brave Rewards settings page (brave://rewards/). Those already using Brave Rewards will get a notification screen to enable the feature. Once a user opts into Brave Rewards, they are presented with offers in the form of notifications. Clicking on a notification opens a full-page ad in a new ad tab.

Right now, users can auto-contribute their earned rewards to their favorite websites or content creators. The browser will soon allow users to spend BAT on premium content and redeem it for real-world rewards such as hotel stays, restaurant vouchers, and gift cards. Brave also plans to add an option to convert BAT into local fiat currency through exchange partners.

Brave Ads has received a mixed reaction from users. While some compare its advertising model to that of YouTube, others think the implementation is unethical. One user on Reddit commented, “This idea is very interesting. It reminds me of how YouTube shares their ad revenue with content creators, and that in turn grows YouTube's network and business...The more one browsed or shared of their data, the more one would get paid. It's simple business.” A skeptical user said, “I'm a fan of Brave's mission, and the browser itself is great (basically Chromium but faster), but the practice of hiding publisher's ads but showing their own, which may or may not end up compensating the publisher, seems fairly unethical.”

For more details, check out the official announcement from Brave.
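The 70/30 split is straightforward arithmetic. A minimal sketch (the function name and the $100 figure are invented for illustration; Brave's actual payouts are made in BAT, not dollars):

```python
# Illustration of Brave's stated 70/30 ad-revenue split.
# Figures are hypothetical; real payouts are denominated in BAT.

def split_ad_revenue(total_revenue, user_share=0.70):
    """Return (user_payout, brave_payout) for a given ad-revenue total."""
    user_payout = total_revenue * user_share
    brave_payout = total_revenue - user_payout
    return user_payout, brave_payout

user, brave = split_ad_revenue(100.0)
print(user, brave)  # 70.0 30.0
```

At Eich's estimate of about $5 per user per month, that $5 is the user's 70% share, implying roughly $7 of total ad revenue generated per user.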
Brave introduces Brave Ads that share 70% revenue with users for viewing ads
Brave Privacy Browser has a ‘backdoor’ to remotely inject headers in HTTP requests: HackerNews
Brave 0.55, ad-blocking browser with 22% faster load time and is generally available and works on Chromium
Read more
  • 0
  • 0
  • 4625

article-image-the-first-release-candidate-of-rails-6-0-0-is-now-out
Vincy Davis
25 Apr 2019
2 min read
Save for later

The first release candidate of Rails 6.0.0 is now out!

Vincy Davis
25 Apr 2019
2 min read
The first release candidate for Rails 6.0.0 came out yesterday. Rails 6.0.0.rc1 is a polished version of the previous beta releases. Its main features include Action Mailbox, Action Text, multiple-database support, parallel testing, and Webpacker handling JavaScript by default. The latest beta release, Rails 6.0.0.beta3, was released last month, and the first beta of Rails 6 was announced in early January.

Rails 6.0 adds two new major frameworks, Action Mailbox and Action Text, along with two scalability upgrades in the form of multiple-database support and parallel testing. Action Mailbox routes incoming emails to controller-like mailboxes for processing in Rails. Action Text brings rich text content and editing to Rails.

Though the Rails team couldn't meet their aspirational release schedule, they did manage to include around 1,000 commits in Rails 6.0.0.rc1. The crew at The Pragmatic Programmers, particularly Sam Ruby and David Bryant Copeland, has also released a beta of Agile Web Development with Rails 6 to coincide with the release of rc1.

For more information on the release, check out their official announcement.

GitLab considers moving to a single Rails codebase by combining the two existing repositories
Uber releases AresDB, a new GPU-powered real-time Analytics Engine
Niantic, of the Pokemon Go fame, releases a preview of its AR platform
Read more
  • 0
  • 0
  • 2696