
Tech News


Spotify forsakes indie artists by shutting down its popular upload beta program

Vincy Davis
04 Jul 2019
4 min read
Earlier this week, Spotify announced that it is closing its upload beta program at the end of this month, less than a year after it was launched. This means that individual musicians will no longer be able to upload their songs to the streaming service directly. Spotify has notified artists that they will have to move their already released content to another distributor and will only be paid for streams on their uploaded content through July 2019.

Launched in September 2018, the upload beta program enabled independent artists to upload their music to the Spotify streaming service by adding tracks and their accompanying metadata with just a few clicks. Many users, especially artists, are upset with Spotify’s decision.

https://twitter.com/SherLibraLady59/status/1146680962715213824
https://twitter.com/burning_lillies/status/1146493960594042882

There is much speculation about what might have led Spotify to stop supporting independent artists directly, something the company has long been known for.

Spotify pleasing its distribution partners

Spotify says that, based on artists’ feedback, “The most impactful way we can improve the experience of delivering music to Spotify for as many artists and labels as possible, is to lean into the great work our distribution partners are already doing to serve the artist community.” Even after uploading music to Spotify, artists still had to turn to other distribution tools to get their music on other streaming services. In a statement to TechCrunch, Spotify states that, “There were a few hundred artists who had actually uploaded music to the beta, and a few hundred more that had been invited to the test but hadn’t yet uploaded. And all those who had used the Direct Upload Beta did have to use another distribution service to get their music on other platforms.”

Another reason for this move could be that Spotify now wants to support its “preferred” distribution partners: DistroKid (in which Spotify has a small investment), CD Baby, and EmuBands. As DistroKid supports cross-platform uploads, it would have overlapped with, and made redundant, Spotify’s own upload beta program. This could be Spotify’s way of avoiding a conflicting scenario.

Spotify bowing down to giant music labels

For decades, music labels like Universal, Sony, and Warner have controlled the way artists gain stardom in the music industry, which obviously came at a cost. Spotify’s way of doing business has always garnered hostile reactions from many. Spotify offered two main advantages to independent artists: a bigger financial cut and ownership of their recordings. Spotify’s deals also gave artists the freedom to license their songs to other streaming companies, like Apple Music and Amazon, and to choose when their new tracks should “go live” on Spotify. According to the New York Times, music labels have signaled their disapproval of Spotify’s initiatives in many ways in the past. Shutting down the upload beta program could thus also be Spotify’s way of pleasing the music label giants.

A user shared an insightful opinion on Hacker News: “These companies are not against independent artists. The big rights holders are. - what happens when an artist who is signed with Warner Music starts uploading their music directly? - what happens when the Big Three start viewing Spotify (or Google, or anyone, really) as their competitor in music distribution? Somehow almost literally no one focuses on how Warner, Sony, and Universal have a death grip on both artists and distribution companies. But everyone is willing to vilify Spotify.”

For more details, head over to Spotify’s blog.

Spotify files an EU antitrust complaint against Apple; Apple says Spotify’s aim is to make more money off others’ work
Spotify acquires Gimlet and Anchor to expand its podcast services
Spotify releases Chartify, a new data visualization library in python for easier chart creation


React Native 0.60 releases with accessibility improvements, AndroidX support, and more

Bhagyashree R
04 Jul 2019
4 min read
Yesterday, the team behind React Native announced the release of React Native 0.60. This release brings accessibility improvements, a new app screen, AndroidX support, CocoaPods in iOS by default, and more. Following are some of the updates introduced in React Native 0.60:

Accessibility improvements

This release ships with several improvements to accessibility APIs on both Android and iOS. As the new features directly use APIs provided by the underlying platform, they’ll easily integrate with native assistive technologies. Here are some of the accessibility updates in React Native 0.60: a number of missing roles have been added for various components; there’s a new Accessibility States API for better web support in the future; AccessibilityInfo.announceForAccessibility is now supported on Android; extended accessibility actions now include callbacks that deal with accessibility around user-defined actions; accessibility flags and reduced motion are now supported on iOS; and a clickable prop and an onClick callback have been added for invoking actions via keyboard navigation.

A new start screen

React Native 0.60 comes with a new app screen, which is more user-friendly. It shows useful instructions like editing App.js, links to the documentation, and how to start the debug menu, and it also aligns with the upcoming website redesign.

https://www.youtube.com/watch?v=ImlAqMZxveg

CocoaPods are now part of React Native's iOS project

React Native for iOS now comes with CocoaPods by default, an application-level dependency manager for Swift and Objective-C Cocoa projects. Developers are now recommended to open the iOS platform code using the ‘xcworkspace’ file. Additionally, the Pod specifications for the internal packages have been updated to make them compatible with the Xcode projects, which will help with troubleshooting and debugging.

Lean Core removals

In order to bring the React Native repository to a manageable state, the team started the Lean Core project. As part of this project, they have extracted WebView and NetInfo into separate repositories. With React Native 0.60, the team has finished migrating them out of the React Native repository. Geolocation has also been extracted, based on community feedback about the new App Store policy.

Autolinking for iOS and Android

React Native libraries often consist of platform-specific or native code. The autolinking mechanism enables your project to discover and use this code. With this release, the React Native CLI team has made major improvements to autolinking. Developers using React Native before version 0.60 are advised to unlink native dependencies from a previous install.

Support for AndroidX (breaking change)

With this release, React Native has been migrated to AndroidX (Android Extension library). As this is a breaking change, developers need to migrate all their native code and dependencies as well. The React Native community has come up with a temporary solution for this called “jetifier”, an AndroidX transition tool in npm format, with a react-native compatible style.

Many users are excited about the release and consider it the biggest React Native release yet.

https://twitter.com/cipriancaba/status/1146411606076792833

Other developers shared tips for migrating to AndroidX, which is an open source project that maps the original support library API packages into the androidx namespace. You cannot use both AndroidX and the old support library together, which means “you are either all in or not in at all.” Here’s a piece of good advice shared by a developer on Reddit: “Whilst you may be holding off on 0.60.0 until whatever dependency you need supports X you still need to make sure you have your dependency declarations pinned down good and proper, as dependencies around the react native world start switching over if you automatically grab a version with X when you are not ready your going to get fun errors when building, of course this should be a breaking change worthy of a major version number bump but you never know. Much safer to keep your versions pinned and have a googlePlayServicesVersion in your buildscript (and only use libraries that obey it).”

Considering this release has major breaking changes, others suggest waiting until 0.60.2 comes out. “After doing a few major updates, I would suggest waiting for this update to cool down. This has a lot of breaking changes, so I would wait for at least 0.60.2 to be sure that all the major requirements for third-party apps are fulfilled ( AndroidX changes),” a developer commented on Reddit.

Along with these exciting updates, the team and community have introduced a new tool named Upgrade Helper to make the upgrade process easier. To know more in detail, check out the official announcement.

React Native VS Xamarin: Which is the better cross-platform mobile development framework?
Keeping animations running at 60 FPS in a React Native app [Tutorial]
React Native development tools: Expo, React Native CLI, CocoaPods [Tutorial]


Facebook, Instagram and WhatsApp suffered a major outage yesterday; people had trouble uploading and sending media files

Sugandha Lahoti
04 Jul 2019
3 min read
Facebook and its sibling platforms Instagram and WhatsApp suffered a major outage for most of yesterday, relating to image display. The issues started around 3:04 pm PT on Wednesday. Users were unable to send and receive images, videos, and other files over these social media platforms. This marks the third major outage of Facebook and its family of apps this year.

Source: Down Detector

Instagram users reported that their feed might load, but they were unable to post anything new to it. Doing so brought up an error message indicating that "Photo Can't Be Posted", according to users experiencing the problems. On WhatsApp, texts were going through, but for videos and images users saw a message reading "download failed" and the content did not arrive.

https://twitter.com/Navid_kh/status/1146419297385713665

Issues were particularly concentrated on the east coast of the US, according to the tracking website Down Detector, but they were reported across the world, with significant numbers of reports from Europe, South America, and East Asia. More than 14,000 users reported issues with Instagram, while more than 7,500 and 1,600 users complained about Facebook and WhatsApp respectively, Down Detector noted.

What was the issue?

According to Ars Technica, the issue was caused by bad timestamp data being fed to the company's CDN in some image tags. All broken images had different timestamp arguments embedded in the same URLs. Loading an image from fbcdn.net with bad "oh=" and "oe=" arguments, or no arguments at all, results in an HTTP 403 "Bad URL timestamp".

Interestingly, because of this image outage people were able to see how Facebook's AI automatically tags photos behind the scenes. The outage stopped social media images from loading and left in their place descriptions like "image may contain: table, plant, flower, and outdoor" and "image may contain: tree, plant, sky."

https://twitter.com/zackwhittaker/status/1146456836998144000
https://twitter.com/jfruh/status/1146460397009924101

According to Reuters, who spoke to Facebook representatives, the company said: “During one of our routine maintenance operations, we triggered an issue that is making it difficult for some people to upload or send photos and videos.” Around 6 pm PT services were restored, with Facebook and Instagram both tweeting that the problems were resolved. WhatsApp’s Twitter account did not acknowledge or confirm resolution of the outage.

https://twitter.com/instagram/status/1146565551520534528
https://twitter.com/facebook/status/1146571015872552961

Twitter also suffered an unexplained downtime in its direct messaging service.

https://twitter.com/TwitterSupport/status/1146447958952439809

The latest string of outages follows a recurring trend of issues hitting social media over the past six months. It started in March, when Facebook's family of apps was hit with a 14-hour outage, the longest in its history. Then in June, Google Cloud went offline, taking with it YouTube, Snapchat, Gmail, and a number of other web services. This month Verizon caused a major internet outage affecting Amazon, Facebook, and Cloudflare, among others. In the same week, Cloudflare suffered its second major internet outage.

Cloudflare suffers 2nd major internet outage in a week. This time due to globally deploying a rogue regex rule.
Why did Slack suffer an outage on Friday?
Facebook tweet explains ‘server config change’ for 14-hour outage on all its platforms
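As background to the "Bad URL timestamp" error mentioned above, the sketch below illustrates the general pattern of signed, expiring CDN URLs: an expiry parameter plus an HMAC over the path and expiry, which an edge server validates before serving the object. The parameter names oh and oe are borrowed from the article; the signing scheme, secret, and helper names are illustrative assumptions, not Facebook's actual implementation.

```python
# Illustrative sketch of signed, expiring CDN URLs: an "oe" expiry and an "oh" HMAC
# over the path + expiry. A stale or mismatched timestamp invalidates the signature,
# so the edge returns 403 ("Bad URL timestamp"). Scheme and names are assumptions.
import hashlib
import hmac
import time

SECRET = b"cdn-signing-secret"  # hypothetical signing key

def sign_url(path: str, ttl: int = 3600) -> str:
    oe = int(time.time()) + ttl  # expiry timestamp
    oh = hmac.new(SECRET, f"{path}|{oe}".encode(), hashlib.sha256).hexdigest()[:16]
    return f"{path}?oh={oh}&oe={oe}"

def validate(url: str) -> int:
    path, _, query = url.partition("?")
    params = dict(kv.split("=") for kv in query.split("&")) if query else {}
    if "oh" not in params or "oe" not in params:
        return 403  # missing timestamp/signature arguments
    expected = hmac.new(SECRET, f"{path}|{params['oe']}".encode(),
                        hashlib.sha256).hexdigest()[:16]
    if not hmac.compare_digest(expected, params["oh"]):
        return 403  # signature does not match path + expiry
    if int(params["oe"]) < time.time():
        return 403  # timestamp has expired
    return 200

good = sign_url("/v/photo123.jpg")
print(validate(good))                                # 200
print(validate("/v/photo123.jpg?oh=deadbeef&oe=1"))  # 403
```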


Npm Inc, after a third try, settles claims of former employees who were fired for being pro-union, The Register reports

Fatema Patrawala
04 Jul 2019
5 min read
Yesterday, reports from The Register confirmed that the JavaScript package registry npm Inc and three former employees who were fired have agreed on a settlement. npm, which stands for Node Package Manager, is the company behind the widely used npm JavaScript package repository. In March, the company laid off five employees in an unprofessional and unethical manner. In April, three of the five former staffers – Graham Carlson, Audrey Eschright, and Frédéric Harper – formally accused npm Inc of union busting in a complaint to the US National Labor Relations Board (NLRB).

https://twitter.com/bram_parsons/status/1146230097617178625

According to The Register, the deal was settled after the third round of negotiations between the two parties. In a filing posted on the NLRB website, administrative law judge Gerald Etchingham said he had received a letter from one of the attorneys involved in the dispute stating that both sides had agreed to settle. The terms of the deal were not disclosed, but NLRB settlements in such cases usually involve back pay, job restoration, or additional compensation. However, it is highly unlikely that any of the former employees will accept job restoration and return to npm. Other than this, npm Inc is also required to share a letter with current employees acknowledging the ways in which it violated the law, but there are no reports of this action from npm Inc yet.

https://twitter.com/techworkersco/status/1146255087968239616

Audrey Eschright, one of the claimants, had earlier complained on Twitter about the company's behavior and its previous refusals to settle. "I'm amazed that NPM has rejected their latest opportunity to settle the NLRB charges and wants to take it to court," she wrote. "Doing so continues the retaliation I and my fellow claimants experienced. We're giving up our own time, making rushed travel plans, and putting in a lot of effort because we believe our rights as workers are that important." According to Eschright, npm Inc refused to settle because the CEO had taken the legal challenge personally. "Twice their lawyers have spent hours to negotiate an agreement with the NLRB, only to withdraw their offer," she elaborated on Twitter. "The only reason we've heard has been about Bryan Bogensberger's hurt feelings."

The Register also mentioned that last week npm Inc had tried to push back a hearing scheduled for 8th July, citing management travel for extensive fundraising. The NLRB denied the request, saying the reason was not justified, and added that npm Inc "ignores the seriousness of these cases, which involve three nip-in-the-bud terminations at the onset of an organizing drive."

npm Inc not only ignores the seriousness of this case but also overlooks the fact that the npm registry coordinates the distribution of hundreds of thousands of modules used by some 11 million JavaScript developers around the world. Amid these management troubles, the code for the npm command-line interface (CLI) suffers from neglect, with unfixed bugs piling up and pull requests languishing.

https://twitter.com/npmstatus/status/1146055266057646080

On Monday, there were reports of an npm 6.9.1 bug caused by a .git folder present in the published tarball. Kat Marchán, CLI and Community Architect at npm at the time, had to release npm 6.9.2 to fix the issue. Shortly after, Marchán quit the company, making the announcement yesterday on Twitter and adding that she is no longer a maintainer on the npm CLI or its components.

https://twitter.com/maybekatz/status/1146208849206005760

Commenting on Marchán’s resignation, another ex-npm employee noted that every modern web framework depends on npm, and npm is inseparable from Kat’s passionate brilliance.

https://twitter.com/cowperthwait/status/1146209348135161856

npm Inc now needs to fix not only its bugs but also its relationship with, and reputation in, the JavaScript community.

Update on 20th September: npm Inc CEO resigns

News sources report that npm CEO Bryan Bogensberger has resigned, effective immediately, to pursue new opportunities. npm's board of directors has commenced a search for a new CEO, and the company's leadership will be managed collaboratively by a team of senior npm executives. "I am proud of the complete transformation we have been able to make in such a short period of time," said Bogensberger. "I wish this completely revamped, passionate team monumental success in the years to come!" Before joining npm Inc, Bogensberger spent three years as CEO and co-founder of Inktank, a leading provider of scale-out, open source storage systems that was acquired by Red Hat, Inc. for $175 million in 2014. He has also served as vice president of business strategy at DreamHost, vice president of marketing at Joyent, and CEO and co-founder of Reasonablysmart, which Joyent acquired in 2009. To know more, check out the PR Newswire website.

Is the Npm 6.9.1 bug a symptom of the organization’s cultural problems?
Surprise NPM layoffs raise questions about the company culture


Samsung speeds up on-device AI processing with a 4x lighter and 8x faster algorithm

Vincy Davis
03 Jul 2019
4 min read
Yesterday, Samsung announced a lightweight on-device AI algorithm that delivers low-power, high-speed computation. It uses an NPU (Neural Processing Unit) solution to make computation 4 times lighter and 8 times faster than existing algorithms that process 32-bit deep learning data on servers.

Last month, Samsung Electronics announced its goal of expanding its proprietary NPU technology development in order to strengthen Samsung’s leadership in the global system semiconductor industry by 2030. Recently, the company also delivered an update on this goal at the Conference on Computer Vision and Pattern Recognition (CVPR), with a paper titled “Learning to Quantize Deep Networks by Optimizing Quantization Intervals With Task Loss”.

A Neural Processing Unit (NPU) is a processor optimized for deep learning algorithm computation, designed to efficiently process thousands of computations simultaneously. Chang-Kyu Choi, Vice President and head of the Computer Vision Lab at the Samsung Advanced Institute of Technology, says, “Ultimately, in the future we will live in a world where all devices and sensor-based technologies are powered by AI. Samsung’s On-Device AI technologies are lower-power, higher-speed solutions for deep learning that will pave the way to this future. They are set to expand the memory, processor and sensor market, as well as other next-generation system semiconductor markets.”

Last year, Samsung introduced the Exynos 9 (9820), which featured a Samsung NPU inside the mobile System on Chip (SoC). This product allows mobile devices to perform AI computations independently of any external cloud server.

Samsung uses Quantization Interval Learning (QIL) to retain data accuracy

The Samsung Advanced Institute of Technology (SAIT) developed the on-device AI lightweight technology by adjusting data into groups of under 4 bits while maintaining accurate data recognition. The technology uses ‘Quantization Interval Learning (QIL)’ to retain data accuracy. QIL allows the quantized networks to maintain the accuracy of the full-precision (32-bit) networks with bit-widths as low as 4-bit, and to minimize the accuracy degradation with further bit-width reduction to 3-bit and 2-bit. The 4-bit networks preserve the accuracy of the full-precision networks with various architectures, the 3-bit networks yield comparable accuracy to the full-precision networks, and the 2-bit networks suffer only minimal accuracy loss. The quantizer also achieves good quantization performance that outperforms existing methods even when trained on a heterogeneous dataset and applied to a pretrained network.

When the data of a deep learning computation is presented in bit groups lower than 4 bits, ‘and’ and ‘or’ computations are possible on top of the simpler arithmetic calculations of addition and multiplication. By using the QIL process, 4-bit computation gives the same results as existing processes while using only 1/40 to 1/120 of the transistors. As the system requires less hardware and less electricity, it can be mounted directly in the device, at the place where the data from an image or fingerprint sensor is being obtained.

Benefits of Samsung’s on-device AI technology

A large amount of data can be computed at high speed without consuming excessive amounts of electricity. Samsung’s system semiconductor capacity will be developed and strengthened by computing data directly within the device itself. By reducing the cost of cloud infrastructure for AI operations, Samsung’s on-device AI technology will provide quick and stable performance for use cases such as virtual reality and autonomous driving. It will also keep personal biometric information used for device authentication, such as fingerprint, iris, and face scans, safely on the device.

Earlier this month, Samsung Electronics announced a multi-year strategic partnership with AMD. The strategic alliance covers ultra-low-power, high-performance mobile graphics IP based on AMD Radeon graphics technologies.

Surprisingly though, many users are not impressed with Samsung’s new technology, due to the poor performance of Samsung’s previous devices.

https://twitter.com/Wayfarerathome/status/1146013820051218433
https://twitter.com/JLP20/status/1146279124408971264
https://twitter.com/ronEgee/status/1146052914315706368

This technology is not yet implemented in Samsung phones. It remains to be seen whether the new on-device AI technology can make users change their opinion about Samsung. Visit the Samsung Newsroom site for more details.

Samsung AI lab researchers present a system that can animate heads with one-shot learning
Facebook app is undeletable on Samsung phones and can possibly track your movements, reports Bloomberg
Samsung opens its AI based Bixby voice assistant to third-party developers
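To make the bit-width discussion above concrete, here is a minimal, illustrative sketch of uniform 4-bit quantization of weights within a fixed clipping interval. It is not Samsung's QIL algorithm, which learns the quantization intervals jointly with the task loss; the function name, clipping interval, and bit-width below are assumptions for illustration only.

```python
# A minimal sketch of uniform low-bit quantization within a clipping interval.
# This illustrates the general idea of representing weights with 4-bit codes;
# it is NOT Samsung's QIL method, which learns the intervals during training.
import numpy as np

def quantize(weights: np.ndarray, bits: int = 4, clip: float = 1.0):
    levels = 2 ** bits - 1                      # e.g. 15 steps for 4-bit codes
    w = np.clip(weights, -clip, clip)           # restrict to the interval [-clip, clip]
    step = 2 * clip / levels                    # width of one quantization bin
    codes = np.round((w + clip) / step)         # integer codes in [0, levels]
    dequantized = codes * step - clip           # values the 4-bit codes represent
    return codes.astype(np.uint8), dequantized

rng = np.random.default_rng(0)
w = rng.normal(scale=0.5, size=5).astype(np.float32)
codes, w_hat = quantize(w, bits=4)
print("original  :", np.round(w, 3))
print("4-bit code:", codes)
print("restored  :", np.round(w_hat, 3))
```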


Introducing Vector, a high-performance data router, written in Rust

Amrata Joshi
03 Jul 2019
3 min read
Yesterday, the team at Timber.io, a cloud-based logging platform, released Vector, a high-performance observability data router that makes collecting, transforming, and sending logs, metrics, and events easy. One of the reasons behind building Vector was to integrate mtail's functionality into a bigger project; mtail is a tool used for extracting metrics from application logs. Licensed under the Apache License, Version 2.0, Vector decouples data collection and routing from user services, which gives users control and data ownership. Vector, which is written in Rust, compiles to a single static binary and has been designed to be deployed across the entire infrastructure.

Concepts of Vector

Following is a diagram depicting the basic concepts that Vector comprises.

Image source: Vector

Sources: When Vector ingests data, it normalizes that data into a record, which sets the stage for easy and consistent processing of the data. Examples of sources include syslog, tcp, file, and stdin.

Transforms: A transform modifies an event or the stream as a whole, acting as a filter, parser, sampler, or aggregator.

Sinks: A sink is a destination for events; its design and transmission method are determined by the downstream service it interacts with. For instance, the TCP sink will stream individual records, while the S3 sink will buffer and flush data.

Features of Vector

Memory efficient and fast: Vector is fast and memory-efficient and doesn't have a runtime or garbage collector.

Test cases: Vector includes performance and correctness tests; the performance tests measure performance and capture detailed performance data, whereas the correctness tests verify behavior. The team behind Vector has also invested in a robust test harness that provides a data-driven testing environment. Here are the test results:

Image source: GitHub

Processing data: Vector is used for collecting data from various sources in various shapes. It also sets the stage for easy and consistent processing of the data.

Serves as a single tool: It works both as a lightweight agent and as a service, giving users a single tool.

Guarantee support matrix: It features a guarantee support matrix that helps users understand their tradeoffs.

Easy deployment: Vector cross-compiles to a single static binary without any runtime.

Users seem happy about this news and think Vector will be useful to them. A user commented on Hacker News, "I'm learning Rust and eventually plan to build such a solution but I think a lot of this project can be repurposed for what I asked much faster than building a new one. Cheers on this open source project. I will contribute whatever I can. Thanks!!"

It seems more metrics-focused sources and sinks are expected in Vector in the future. A member of the Vector project commented, "It's still slightly rough around the edges, but Vector can actually ingest metrics today in addition to deriving metrics from log events. We have a source component that speaks the statsd protocol which can then feed into our prometheus sink. We're planning to add more metrics-focused sources and sinks in the future (e.g. graphite, datadog, etc), so check back soon!"

To know more about this news, check out Vector's page.

Implementing routing with React Router and GraphQL [Tutorial]
TP-Link kept thousands of vulnerable routers at risk of remote hijack, failed to alert customers
Amazon buys ‘Eero’ mesh router startup, adding fuel to its in-house Alexa smart home ecosystem ambitions
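To make the source, transform, and sink terminology above concrete, here is a toy model of that data flow. Vector itself is a Rust binary driven by a configuration file; the Python functions and names below are purely illustrative and are not Vector's actual API.

```python
# Toy illustration of the source -> transform -> sink pipeline concepts described
# above. This is a conceptual model only, not Vector's implementation or API.
from typing import Iterable, Iterator

def stdin_source(lines: Iterable[str]) -> Iterator[dict]:
    # Source: ingest raw data and normalize each line into a record.
    for line in lines:
        yield {"message": line.rstrip("\n")}

def filter_transform(records: Iterator[dict], needle: str) -> Iterator[dict]:
    # Transform: modify or drop events in the stream (here, a simple filter).
    return (r for r in records if needle in r["message"])

def console_sink(records: Iterator[dict]) -> None:
    # Sink: deliver events to a downstream destination (here, stdout).
    for r in records:
        print(r)

raw = ["GET /health 200", "GET /login 500", "POST /orders 500"]
console_sink(filter_transform(stdin_source(raw), needle="500"))
```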

Cloudflare suffers 2nd major internet outage in a week. This time due to globally deploying a rogue regex rule.

Savia Lobo
03 Jul 2019
4 min read
For the second time in less than a week, Cloudflare was at the center of a major internet outage, affecting many websites for about an hour yesterday due to a software glitch. Last week, Cloudflare users faced a major outage when Verizon accidentally rerouted IP packets after it wrongly accepted a network misconfiguration from a small ISP in Pennsylvania, USA.

Cloudflare’s CTO John Graham-Cumming wrote that yesterday’s outage was due to a massive spike in CPU utilization in the network.

Source: Cloudflare

Many users complained of seeing "502 errors" displayed in their browsers when they tried to visit Cloudflare clients' sites. Downdetector, the website that tracks ongoing outages and service interruptions, itself flashed a 502 error message.

https://twitter.com/t_husoy/status/1146058460141772802

Graham-Cumming wrote, “This CPU spike was caused by a bad software deploy that was rolled back. Once rolled back the service returned to normal operation and all domains using Cloudflare returned to normal traffic levels”.

A single misconfigured rule, the actual cause of the outage

The cause of the outage was a single misconfigured rule within the Cloudflare Web Application Firewall (WAF), deployed during a routine deployment of new Cloudflare WAF Managed Rules. Though the company has automated systems to run test suites and a procedure for deploying progressively to prevent incidents, these WAF rules were deployed globally in one go and caused yesterday’s outage.

https://twitter.com/mjos_crypto/status/1146168236393807872

The new rules were intended to improve the blocking of inline JavaScript used in attacks. “Unfortunately, one of these rules contained a regular expression that caused CPU to spike to 100% on our machines worldwide. This 100% CPU spike caused the 502 errors that our customers saw. At its worst traffic dropped by 82%”, Graham-Cumming writes.

After finding the actual cause of the issue, Cloudflare issued a ‘global kill’ on the WAF Managed Rulesets, which instantly dropped CPU back to normal and restored traffic at 1409 UTC. The team then ensured that the problem was fixed correctly and re-enabled the WAF Managed Rulesets at 1452 UTC.

https://twitter.com/SwiftOnSecurity/status/1146260831899914247

“Our testing processes were insufficient in this case and we are reviewing and making changes to our testing and deployment process to avoid incidents like this in the future”, the Cloudflare blog states. A user said Cloudflare should have staged the rollout instead of deploying the rules globally at once.

https://twitter.com/copyconstruct/status/1146199044965797888

Cloudflare confirms the outage was ‘a mistake’ and not an attack

Cloudflare also faced speculation that the outage was caused by a DDoS from China, Iran, North Korea, and so on, which Graham-Cumming tweeted was untrue: “It was not an attack by anyone from anywhere”. Cloudflare’s CEO, Matthew Prince, also confirmed that the outage was not the result of an attack but a “mistake on our part.”

https://twitter.com/jgrahamc/status/1146078278278635520

Many users have applauded Cloudflare for accepting that this was an organizational and engineering management issue and not an individual’s fault.

https://twitter.com/GossiTheDog/status/1146188220268470277

Prince told Inc., “I'm not an alarmist or a conspiracy theorist, but you don't have to be either to recognize that it is ultimately your responsibility to have a plan. If all it takes for half the internet to go dark for 20 minutes is some poorly deployed software code, imagine what happens when the next time it's intentional.”

To know more about this news in detail, read Cloudflare’s official blog.

A new study reveals how shopping websites use ‘dark patterns’ to deceive you into buying things you may not want
OpenID Foundation questions Apple’s Sign In feature, says it has security and privacy risks
Email app Superhuman allows senders to spy on recipients through tracking pixels embedded in emails, warns Mike Davidson
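Cloudflare attributed the CPU spike to a regular expression that backtracked catastrophically. The snippet below is a generic demonstration of catastrophic backtracking in Python's re module, using a textbook nested-quantifier pattern rather than the actual WAF rule; match time grows roughly exponentially with input length.

```python
# Generic demonstration of catastrophic regex backtracking (not Cloudflare's
# actual WAF rule). The nested quantifier in (a+)+$ forces the engine to try
# exponentially many ways to split the run of 'a's once the trailing 'b'
# makes the overall match fail.
import re
import time

pattern = re.compile(r"(a+)+$")

for n in range(14, 23, 2):
    text = "a" * n + "b"              # the trailing 'b' guarantees a failed match
    start = time.perf_counter()
    pattern.match(text)               # anchored attempt; fails after heavy backtracking
    elapsed = time.perf_counter() - start
    print(f"n={n:2d}  {elapsed:.3f}s")  # time roughly quadruples per two extra 'a's
```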


Google suffers another outage as Google Cloud servers in the us-east1 region are cut off

Amrata Joshi
03 Jul 2019
3 min read
Yesterday, Google Cloud servers in the us-east1 region were cut off from the rest of the world as an issue was reported with Cloud Networking and Load Balancing within us-east1. The issues with Google Cloud Networking and Load Balancing were traced to physical damage to multiple concurrent fiber bundles that serve network paths in us-east1.

At 10:25 am PT yesterday, the status was updated to say that “Customers may still observe traffic through Global Load-balancers being directed away from back-ends in us-east1 at this time.” It was later posted on the status dashboard that mitigation work was underway to address the issue with Google Cloud Networking and Load Balancing in us-east1. The rate of errors was decreasing at the time, but some users still faced elevated latency.

Around 4:05 pm PT, the status was updated again: “The disruptions with Google Cloud Networking and Load Balancing have been root caused to physical damage to multiple concurrent fiber bundles serving network paths in us-east1, and we expect a full resolution within the next 24 hours. In the meantime, we are electively rerouting traffic to ensure that customers' services will continue to operate reliably until the affected fiber paths are repaired. Some customers may observe elevated latency during this period. We will provide another status update either as the situation warrants or by Wednesday, 2019-07-03 12:00 US/Pacific tomorrow.”

This outage appears to be the second major one to hit Google's services in recent times. Last month, Google Calendar was down for nearly three hours around the world, and Google Cloud suffered a major outage that took down a number of Google services including YouTube, G Suite, and Gmail.

According to a person who works on Google Cloud, the team is experiencing an issue with a subset of the fiber paths that supply the region and is working to resolve it. They have removed most Google.com traffic out of the region to prefer GCP customers. A Google employee commented on the Hacker News thread, “I work on Google Cloud (but I'm not in SRE, oncall, etc.). As the updates to [1] say, we're working to resolve a networking issue. The Region isn't (and wasn't) "down", but obviously network latency spiking up for external connectivity is bad. We are currently experiencing an issue with a subset of the fiber paths that supply the region. We're working on getting that restored. In the meantime, we've removed almost all Google.com traffic out of the Region to prefer GCP customers. That's why the latency increase is subsiding, as we're freeing up the fiber paths by shedding our traffic.”

Google Cloud users are anxious about this outage and are waiting for services to be restored.

https://twitter.com/IanFortier/status/1146079092229529600
https://twitter.com/beckynagel/status/1146133614100221952
https://twitter.com/SeaWolff/status/1146116320926359552

Ritiko, a cloud-based EHR company, is also experiencing issues because of the Google Cloud outage, as it hosts its services there.

https://twitter.com/ritikoL/status/1146121314387857408

As of now there is no further update from Google on whether the outage is resolved, but it expects a full resolution within the next 24 hours. Check this space for updates.

Google Calendar was down for nearly three hours after a major outage
Do Google Ads secretly track Stack Overflow users?
Google open sources its robots.txt parser to make Robots Exclusion Protocol an official internet standard


Email app Superhuman allows senders to spy on recipients through tracking pixels embedded in emails, warns Mike Davidson

Bhagyashree R
03 Jul 2019
6 min read
Update: Added response from Rahul Vohra, CEO of Superhuman.

Last week, Mike Davidson, the former VP of design at Twitter and founder of Newsvine, questioned the ethics and responsibility of Superhuman, one of Silicon Valley’s most talked about email apps, in a blog post. He called the app a “surveillance tool” that embeds tracking pixels inside emails sent by its customers.

https://twitter.com/mikeindustries/status/1146092247437340672

Superhuman was founded in 2017 by Rahul Vohra with the aim of reinventing the email experience. It is an invitation-only service, mainly targeted at business users, that costs $30/month. Last month, the startup raised a $33 million investment round led by Andreessen Horowitz and is now valued at $260 million.

https://twitter.com/Superhuman/status/1144380806036516864

“Superhuman teaches its users to surveil by default”

The email app bundles many modern features like snoozing, scheduling, undo send, insights from social networks, and more. The feature Davidson focused on is “Read Receipts”, a common opt-in feature in many messaging and email clients that indicates read/unread status. Davidson highlights that Superhuman provides this read/unread status in a very detailed way: it embeds tracking pixels, small hidden images, in the emails its customers send. When the recipient opens the email, the image reports a running log of every single time the recipient has opened the mail, including their location, regardless of the email client the recipient is using. The worst part is that it is on by default, and many users do not bother to change the default settings. Here’s a log that Davidson shared in his post:

Source: Mike Davidson’s blog post

What do people think of this feature?

Many people felt that sharing the number of times an email was read, the geolocation of the recipient, and other such information is intrusive and violates user privacy. In his post, Davidson described several “bad things” people can do with this technology that the developers might not even have intended. Some users agreed and pointed out that sharing such personal information can prove very dangerous for recipients.

https://twitter.com/liora_/status/1146122407737876481

Others argued that many email clients do the same thing, including Gmail, Apple Mail, and Outlook, and that embedding tracking pixels in email is also very commonly done by email marketing platforms.

https://twitter.com/nickabouzeid/status/1144296483778228224
https://twitter.com/bentruyman/status/1146137938121543680
https://twitter.com/chrisgrayson/status/1146319066493313024

In response, Davidson rightly said, “The main point here is: just because technology is being used unethically by others does not mean you should use it unethically yourself. Harmful pesticides have also been around for years. That doesn’t mean you should use them yourself.”

Davidson further explained what making such unethical decisions means for a company in the long run. In the early days of a company, there are no set principles for its people to make decisions; it is basically what the founders think is right for the company. At that time, every decision the company makes, whether good or bad, forms the foundation of what Davidson calls the “decision genome”. He adds, “With each decision a company makes, its “decision genome” is established and subsequently hardened.” He says the decisions that seem small in the beginning become the basis of many other big decisions made in the future, and this ultimately affects the company’s ethical trajectory. “The point here is that companies decide early on what sort of companies they will end up being. The company they may want to be is often written in things like “core values” that are displayed in lunch rooms and employee handbooks, but the company they will be is a product of the actual decisions they make — especially the tough decisions,” he adds.

Many agreed with the point Davidson makes here and think this is not limited to a single company but in fact applies to the entire ecosystem. David Heinemeier Hansson, the creator of Ruby on Rails, believes that Silicon Valley in particular is in serious need of recalibration.

https://twitter.com/dhh/status/1146403794214883328

What some possible solutions could be

One workaround could be disabling images in email by default, since tracking pixels are sent as images. However, Superhuman does not even allow that. “Superhuman doesn’t even let its own customers turn images off. So merely by using Superhuman, you are vulnerable to the exact same spying that Superhuman enables you to do to others,” Davidson mentions. The next step for Superhuman, Davidson suggests, is to apologize and remove the feature. He further recommends that Superhuman should, in fact, protect its users from emails that contain tracking pixels. Another mitigation he suggests is adding a “Sent via Superhuman” signature so that the receiver is aware that their data will be sent to the sender.

https://twitter.com/mikeindustries/status/1144360664275673088

If these do not suffice, Davidson gave a harsher suggestion: publicly post surveilled email on Twitter or other websites.

https://twitter.com/mikeindustries/status/1144315861919883264

How Superhuman has responded to this criticism

Yesterday, Rahul Vohra, the CEO of Superhuman, responded that the company understands the severity of sharing such personal information, especially state- or country-level location. He shared the steps the company is taking to address the concerns raised against the feature, listing the following changes: we have stopped logging location information for new email, effective immediately; we are releasing new app versions today that no longer show location information; we are deleting all historical location data from our apps; we are keeping the read status feature, but turning it off by default, so users who want it will have to explicitly turn it on; and we are prioritizing building an option to disable remote image loading.

Many Twitter users appreciated Vohra’s quick response:

https://twitter.com/chadloder/status/1146564393884254209
https://twitter.com/yuvalb/status/1146542900559405056
https://twitter.com/kmendes/status/1146569165211234304

Read Davidson’s post to know more in detail.

Google announces the general availability of AMP for email, faces serious backlash from users
A security researcher reveals his discovery on 800+ Million leaked Emails available online
VFEMail suffers complete data wipe out!
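For background on the read-receipt mechanism described above, here is a minimal, generic sketch of how a tracking pixel works on the sender's side: a tiny HTTP endpoint serves a 1x1 transparent GIF and logs each request. The server, port, and path scheme are assumptions for illustration; this is not Superhuman's implementation.

```python
# Minimal illustration of a tracking pixel: a 1x1 transparent GIF served by a tiny
# HTTP endpoint that logs each request. Any mail client that loads remote images
# reveals the open time, IP address, and User-Agent to whoever runs this server.
# This is a generic sketch, not Superhuman's implementation.
from datetime import datetime, timezone
from http.server import BaseHTTPRequestHandler, HTTPServer

# A minimal 1x1 transparent GIF (43 bytes).
PIXEL = (b"GIF89a\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00\xff\xff\xff"
         b"!\xf9\x04\x01\x00\x00\x00\x00,\x00\x00\x00\x00\x01\x00\x01\x00\x00"
         b"\x02\x02D\x01\x00;")

class PixelHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The path (e.g. /open/<message-id>.gif) identifies which email was opened.
        print(f"{datetime.now(timezone.utc).isoformat()} open path={self.path} "
              f"ip={self.client_address[0]} ua={self.headers.get('User-Agent')}")
        self.send_response(200)
        self.send_header("Content-Type", "image/gif")
        self.send_header("Content-Length", str(len(PIXEL)))
        self.end_headers()
        self.wfile.write(PIXEL)

if __name__ == "__main__":
    # The email body would embed: <img src="http://localhost:8000/open/msg-123.gif">
    HTTPServer(("", 8000), PixelHandler).serve_forever()
```

Disabling remote image loading in the mail client, as Davidson suggests, prevents the pixel from ever being requested, which is why it is the standard mitigation.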


Azure DevOps report: How a bug caused ‘sqlite3 for Python’ to go missing from Linux images

Vincy Davis
03 Jul 2019
3 min read
Yesterday, Youhana Naseim, the Group Engineering Manager at Azure Pipelines, provided a post-mortem of the bug that caused the sqlite3 module to go missing from the Python versions in the Ubuntu 16.04 image from May 14th. The Azure DevOps team identified the bug on May 31st and fixed it on June 26th. Naseim apologized to all the affected customers for the delay in detecting and fixing the issue.

https://twitter.com/hawl01475954/status/1134053763608530945
https://twitter.com/ProCode1/status/1134325517891411968

How the Azure DevOps team detected and fixed the issue

The Azure DevOps team upgraded the versions of Python included in the Ubuntu 16.04 image with the M151 payload. Python's build scripts treat sqlite3 as an optional module, so the builds completed successfully despite the missing sqlite3 module. Naseim says, “While we have test coverage to check for the inclusion of several modules, we did not have coverage for sqlite3 which was the only missing module.”

The issue was first reported, via the Azure Developer Community, by a user who received the M151 deployment containing the bug on May 20th, but the Azure support team escalated it only after receiving more reports during the M152 deployment on May 31st. The team then proceeded with the M153 deployment after posting a workaround for the issue, as the M152 deployment would take at least 10 days. Further, due to an internal miscommunication, the team didn't start the M153 deployment to Ring 0 until June 13th. To safeguard the production environment, Azure DevOps rolls out changes in a progressive and controlled manner via the ring model of deployments. The team resumed deployment to Ring 1 on June 17th and reached Ring 2 by June 20th. Finally, after a few failures, the team fully completed the M153 deployment by June 26th.

Azure's plans to deliver timely fixes

The Azure team has set out plans to improve its deployment and hotfix processes with the aim of delivering timely fixes. The long-term plan is to give customers the ability to revert to a previous image as a quick workaround for issues introduced in new images. The medium- and short-term plans are given below; a sketch of the kind of module check mentioned in the short-term plans follows this list.

Medium-term plans: add the ability to better compare what changed on the images, to catch any unexpected discrepancies that the test suite might miss; and increase the speed and reliability of the deployment process.

Short-term plans: build a full CI pipeline for image generation to verify images daily; add test coverage for all modules in the Python standard library, including sqlite3; improve communication with the support team so issues are escalated more quickly; add telemetry so issues can be detected and diagnosed more quickly; and implement measures that enable quickly reverting to prior image versions to mitigate issues faster.

Visit the Azure DevOps status site for more details.

Ubuntu has decided to drop i386 (32-bit) architecture from Ubuntu 19.10 onwards
Xubuntu 19.04 releases with latest Xfce package releases, new wallpapers and more
Ubuntu 19.04 Disco Dingo Beta releases with support for Linux 5.0 and GNOME 3.32
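The sketch below shows the kind of image-verification check implied by the short-term plans above: import a list of optional standard library modules, including sqlite3, and fail the build if any are missing. The module list and function name are illustrative assumptions, not the Azure DevOps team's actual test suite.

```python
# A minimal sketch of an image-verification check: fail fast if an optional
# stdlib module such as sqlite3 was silently dropped from a Python build.
# The module list below is illustrative, not Azure's actual coverage.
import importlib
import sys

REQUIRED_STDLIB_MODULES = ["sqlite3", "ssl", "zlib", "bz2", "lzma", "ctypes"]

def verify_stdlib(modules=REQUIRED_STDLIB_MODULES) -> int:
    missing = []
    for name in modules:
        try:
            importlib.import_module(name)
        except ImportError:
            missing.append(name)
    if missing:
        print(f"Python {sys.version.split()[0]} is missing stdlib modules: {missing}")
        return 1  # non-zero exit code fails the CI job
    print("All required stdlib modules are present.")
    return 0

if __name__ == "__main__":
    raise SystemExit(verify_stdlib())
```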

Facebook fined $2.3 million by Germany for providing incomplete information about hate speech content

Sugandha Lahoti
03 Jul 2019
4 min read
Yesterday, German authorities said they have imposed a 2 million euro ($2.3 million) fine on Facebook under a law designed to combat hate speech. They said Facebook had provided "incomplete" information in mandatory transparency reports about illegal content, such as hate speech. Facebook received 1,704 complaints and removed 362 posts between January 2018 and June 2018; in the second half of 2018, the company received 1,048 complaints.

In a statement to Reuters, Germany’s Federal Office of Justice said that by tallying only certain categories of complaints, the web giant had created a skewed picture of the extent of violations on its platform: "The report lists only a fraction of complaints about illegal content which created a distorted public image about the size of the illegal content and the way the social network deals with the complaints.” The agency said Facebook’s report did not include complaints relating to anti-Semitic insults and material designed to incite hatred against persons or groups based on their religion or ethnicity.

Germany’s NetzDG law has been criticized by experts

The NetzDG law, under which Facebook was fined, is Germany's internet transparency law, passed in 2017 to combat agitation and fake news in social networks. Under this law, commercial social networks are obliged to establish a transparent procedure for dealing with complaints about illegal content and are subject to a reporting and documentation obligation. Per the law, social media platforms must check complaints immediately, delete "obviously illegal" content within 24 hours, delete any other illegal content within 7 days of checking, and block access to it. The deleted content must be stored for at least ten weeks for evidence purposes. In addition, providers must appoint a point of contact in Germany, both for the authorities and for civil proceedings, and submit a six-monthly report on complaints received and how they have been dealt with.

However, the law has faced constant criticism from experts, journalists, social networks, the UN, and the EU. Experts said that short, rigid deletion periods and the threat of high fines would compromise individuals' freedom of speech: social networks would be forced to remove posts when in doubt, even where context-dependent consideration is required. Facebook had also criticized the NetzDG draft. In a statement sent to the German Bundestag at the end of May 2017, the company stated, "The constitutional state must not pass on its own shortcomings and responsibility to private companies. Preventing and combating hate speech and false reports is a public task from which the state must not escape."

In response to the fine, Facebook said, "We want to remove hate speech as quickly and effectively as possible and work to do so. We are confident our published NetzDG reports are in accordance with the law, but as many critics have pointed out, the law lacks clarity.” It added, “We will analyze the fine notice carefully and reserve the right to appeal.”

Facebook is also facing privacy probes over its policies and data breaches, and was fined by the EU for failing to give correct information during the regulatory review of its WhatsApp takeover. Last week, Italy's privacy regulator fined Facebook €1 million for violations connected to the Cambridge Analytica scandal. The agency said 57 Italians had downloaded a personality test app called ThisIsYourDigitalLife, which was used to collect Facebook information on both themselves and their Facebook friends. The app was then used to provide data to Cambridge Analytica for targeting voters during the 2016 U.S. presidential election.

Facebook content moderators work in filthy, stressful conditions and experience emotional trauma daily
Mark Zuckerberg is a liar, fraudster, unfit to be the C.E.O. of Facebook, alleges Aaron Greenspan
YouTube’s new policy to fight online hate and misinformation misfires due to poor execution


LLVM WebAssembly backend will soon become Emscripten’s default backend, V8 announces

Bhagyashree R
02 Jul 2019
3 min read
Yesterday, the team behind V8, an open source JavaScript engine, shared the work they and the community have been doing to make the LLVM WebAssembly backend the default backend for Emscripten. LLVM is a compiler framework and Emscripten is an LLVM-to-Web compiler.

https://twitter.com/v8js/status/1145704863377981445

The LLVM WebAssembly backend will be the third backend in Emscripten. The original compiler was written in JavaScript and parsed LLVM IR in text form. In 2013, a new backend called Fastcomp was written by forking LLVM; it was designed to emit asm.js and was a big improvement in code quality and compile times. According to the announcement, the LLVM WebAssembly backend beats the old Fastcomp backend on most metrics. Here are the advantages the new backend brings:

Much faster linking

The LLVM WebAssembly backend allows incremental compilation using WebAssembly object files. Fastcomp uses LLVM Intermediate Representation (IR) in bitcode files, which means that at link time the IR still has to be compiled by LLVM; this is why it shows slower link times. WebAssembly object files (.o), on the other hand, already contain compiled WebAssembly code, which accounts for much faster linking.

Faster and smaller code

The new backend shows significant code size reduction compared to Fastcomp. “We see similar things on real-world codebases that are not in the test suite, for example, BananaBread, a port of the Cube 2 game engine to the Web, shrinks by over 6%, and Doom 3 shrinks by 15%!,” shared the team in the announcement. The faster and smaller code comes from LLVM's better IR optimizations and a smarter backend codegen that can do things like global value numbering (GVN). The team has also put effort into tuning the Binaryen optimizer, which further helps make the code smaller and faster compared to Fastcomp.

Support for all LLVM IR

While Fastcomp could handle the LLVM IR generated by clang, it often failed on other sources. The LLVM WebAssembly backend, by contrast, can handle any IR, as it uses the common LLVM backend infrastructure.

New WebAssembly features

Fastcomp generates asm.js before running asm2wasm, which makes it difficult to handle new WebAssembly features like tail calls, exceptions, SIMD, and so on. “The WebAssembly backend is the natural place to work on those, and we are in fact working on all of the features just mentioned!,” the team added.

To test the WebAssembly backend, you just have to run the following commands:
emsdk install latest-upstream
emsdk activate latest-upstream

Read more in detail on V8’s official website.

V8 7.2 Beta releases with support for public class fields, well-formed JSON.stringify, and more
V8 7.5 Beta is now out with WebAssembly implicit caching, bulk memory operations, and more
Google’s V8 7.2 and Chrome 72 gets public class fields syntax; private class fields to come soon


Microsoft will not support Windows registry backup by default, to reduce disk footprint size from Windows 10 onwards

Vincy Davis
02 Jul 2019
3 min read
After the release of the Windows 10 October 2018 update, it was speculated that Windows 10 might have a bug preventing the successful execution of the registry backup task, which is usually enabled by default on PCs running the operating system. After eight months, Microsoft has now come back with an answer to this speculation, stating that it was not a bug but a change in "design" that prevented the execution of registry backups. Throughout these eight months, Microsoft did not notify users about this change. Around 800 million Windows 10 users could have lost data if the Windows System Restore point had failed.

Last week, Microsoft released a support document stating that from Windows 10 version 1803 onwards, Windows no longer backs up the system registry to the RegBack folder by default. The document says this change is “intended to help reduce the overall disk footprint size of Windows.” If you browse the Windows\System32\config\RegBack folder, all registry hives are still present, but each has a file size of 0 KB. Registry backups are extremely important for users, as they are the only option available if the Windows System Restore point fails.

How to manually switch automatic registry backups back on

Though Windows no longer performs registry backups by default, Microsoft has not entirely removed the feature; users can still have registry backups created automatically. Windows 10 users can change the new default behavior using the following steps, and a scripted version of the same change follows below. First, configure a new REG_DWORD registry entry at HKLM\System\CurrentControlSet\Control\Session Manager\Configuration Manager\EnablePeriodicBackup and assign it the value 1. After restarting the system, Windows will back up the registry to the RegBack folder, and a RegIdleBackup task will be created to manage subsequent backups. Windows stores the task information in the Scheduled Task Library, in the Microsoft\Windows\Registry folder. The task has the following properties:

Image Source: Microsoft Document

Users are skeptical of Microsoft's claim that registry backups were removed to save disk space. A user on Hacker News comments, “50-100MB seems like a miniscule amount of space to warrant something like this. My WinSxS folder alone is almost 10GB. If they wanted to save space, even a modest improvement in managing updates would yield space saving results orders of magnitude greater than this.” Another user adds, “Of all the stuff crammed automatically on Windows 10 install .. they can't be serious about saving space.” Another user wrote, “This sort of thinking might have been understandable back during the '90's. However, today, people have plenty of free space on their hard disk. The track record of Windows 10 has been so poor lately that it's surprising that MS got so overconfident that they decided that they didn't need safeguards like this any longer.”

Read the Microsoft support document for more details.

Microsoft is seeking membership to Linux-distros mailing list for early access to security vulnerabilities
Docker and Microsoft collaborate over WSL 2, future of Docker Desktop for Windows is near
Microsoft finally makes Hyper-V Server 2019 available, after a delay of more than six months
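For readers who prefer to script the registry change above rather than edit it by hand, here is a minimal sketch using Python's standard winreg module. It mirrors the documented EnablePeriodicBackup value; running it requires Windows and administrator privileges, and it is not an official Microsoft tool.

```python
# A minimal sketch (Windows only, run as administrator) of setting the
# EnablePeriodicBackup value described above so Windows resumes backing up
# registry hives to the RegBack folder. Not an official Microsoft tool.
import winreg

KEY_PATH = r"System\CurrentControlSet\Control\Session Manager\Configuration Manager"

def enable_periodic_registry_backup() -> None:
    # Open (or create) the key under HKLM and set EnablePeriodicBackup = 1 (REG_DWORD).
    with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                            winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, "EnablePeriodicBackup", 0, winreg.REG_DWORD, 1)
    print("EnablePeriodicBackup set to 1; reboot for the RegIdleBackup task to run.")

if __name__ == "__main__":
    enable_periodic_registry_backup()
```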

Google open sources its robots.txt parser to make Robots Exclusion Protocol an official internet standard

Bhagyashree R
02 Jul 2019
3 min read
Yesterday, Google announced that it has teamed up with the creator of the Robots Exclusion Protocol (REP), Martijn Koster, and other webmasters to make the 25-year-old protocol an internet standard. The REP, better known as robots.txt, has now been submitted to the IETF (Internet Engineering Task Force). Google has also open sourced its robots.txt parser and matcher as a C++ library.

https://twitter.com/googlewmc/status/1145634145261051906

REP was created back in 1994 by Martijn Koster, a software engineer known for his contributions to internet searching. Since its inception, it has been widely adopted by websites to indicate whether web crawlers and other automatic clients are allowed to access the site. When an automatic client wants to visit a website, it first checks for a robots.txt file, which looks something like this (a short sketch of how a crawler evaluates it appears at the end of this piece):

User-agent: *
Disallow: /

The User-agent: * statement means the rules apply to all robots, and Disallow: / means that a robot is not allowed to visit any page of the site. Despite being widely used on the web, REP is still not an internet standard. With no rules set in stone, developers have interpreted the "ambiguous de-facto protocol" differently over the years. It has also not been updated since its creation to address modern corner cases. The proposed draft is a standardized and extended version of REP that gives publishers fine-grained controls to decide what they would like to be crawled on their site and potentially shown to interested users. The following are some of the important updates in the proposed REP:

- It is no longer limited to HTTP and can be used by any URI-based transfer protocol, for instance FTP or CoAP.
- Developers need to parse at least the first 500 kibibytes of a robots.txt. This ensures that connections are not held open too long, avoiding unnecessary strain on servers.
- It defines a new maximum caching time of 24 hours, after which crawlers cannot reuse a cached robots.txt. This lets website owners update their robots.txt whenever they want and prevents crawlers from overloading servers with robots.txt requests.
- It also defines a provision for cases when a previously accessible robots.txt file becomes inaccessible because of server failures. In such cases, the disallowed pages will not be crawled for a reasonably long period of time.

The updated REP standard is currently in its draft stage, and Google is now seeking feedback from developers. It wrote, "we uploaded the draft to IETF to get feedback from developers who care about the basic building blocks of the internet. As we work to give web creators the controls they need to tell us how much information they want to make available to Googlebot, and by extension, eligible to appear in Search, we have to make sure we get this right."

To know more, check out the official announcement by Google, as well as the proposed REP draft.

Do Google Ads secretly track Stack Overflow users?
Curl’s lead developer announces Google’s “plan to reimplement curl in Libcrurl”
Google rejects all 13 shareholder proposals at its annual meeting, despite protesting workers
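Google's newly open-sourced parser is a C++ library, but the behavior of the robots.txt example above can be illustrated with Python's standard-library robotparser. This is only a sketch of REP semantics, not Google's implementation; the user agents and URLs are placeholders.

```python
# Minimal illustration of REP semantics using Python's standard library,
# not Google's open-sourced C++ parser. The robots.txt content mirrors the
# example in the article: all robots are disallowed from every page.
from urllib.robotparser import RobotFileParser

ROBOTS_TXT = """\
User-agent: *
Disallow: /
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# can_fetch() answers the question a crawler asks before visiting a URL.
print(parser.can_fetch("Googlebot", "https://example.com/"))         # False
print(parser.can_fetch("SomeOtherBot", "https://example.com/page"))  # False
```

Both calls return False because Disallow: / blocks every path for every user agent. The proposed draft's 500-kibibyte rule would simply mean a crawler truncates the fetched file (for example, raw[:500 * 1024]) before handing it to a parser like this.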


GitLab faces backlash from users over performance degradation issues tied to redis latency

Vincy Davis
02 Jul 2019
4 min read
Yesterday, GitLab.com suffered major performance degradation: a fivefold increase in error rates and a general site slowdown. The degradation was identified and rectified within a few hours of its discovery.

https://twitter.com/gabrielchuan/status/1145711954457088001

https://twitter.com/lordapo_/status/1145737533093027840

The GitLab engineers promptly started investigating the slowdown on GitLab.com and notified users that it lay in the redis and LRU clusters, impacting all web requests serviced by the Rails front-end. What followed was a comprehensive running account of the issue, its causes, who was handling which part of it, and more. GitLab's step-by-step response looked like this:

- First, they investigated slow response times on GitLab.com.
- Next, they added more workers to alleviate the symptoms of the incident.
- Then, they investigated jobs on shared runners that were being picked up at a low rate or appeared to be stuck.
- Next, they tracked CI issues and observed the performance degradation as one incident.
- Over time, they continued to investigate the degraded performance and CI pipeline delays.

After a few hours, all services were restored to normal operation, and CI pipelines caught up from the earlier delays to nearly normal levels. David Smith, the Production Engineering Manager at GitLab, also updated users that the performance degradation was due to a few issues tied to redis latency (a rough client-side sketch of measuring such latency appears at the end of this piece). Smith added, "We have been looking into the details of all of the network activity on redis and a few improvements are being worked on. GitLab.com has mostly recovered."

Many users on Hacker News wrote about their unpleasant experience with GitLab.com. One user states, "I recently started a new position at a company that is using Gitlab. In the last month I've seen a lot of degraded performance and service outages (especially in Gitlab CI). If anyone at Gitlab is reading this - please, please slow down on chasing new markets + features and just make the stuff you already have work properly, and fill in the missing pieces." Another user comments, "Slow down, simplify things, and improve your user experience. Gitlab already has enough features to be competitive for a while, with the Github + marketplace model."

Later, a GitLab employee with the username kennyGitLab commented that GitLab is not losing sight of this and is simply following the company's new strategy of 'Breadth over depth'. He further added, "We believe that the company plowing ahead of other contributors is more valuable in the long run. It encourages others to contribute to the polish while we validate a future direction. As open-source software we want everyone to contribute to the ongoing improvement of GitLab."

Users were indignant at this response. One user commented, ""We're Open Source!" isn't a valid defense when you have paying customers. That pitch sounds great for your VCs, but for someone who spends a portion of their budget on your cloud services - I'm appalled. Gitlab is a SaaS company who also provides an open source set of software. If you don't want to invest in supporting up time - then don't sell paid SaaS services." Another comment read, "I think I understand the perspective, but the messaging sounds a bit like, 'Pay us full price while serving as our beta tester; sacrifice the needs of your company so you can fulfill the needs of ours'."

A few users also praised GitLab for its prompt action and for the in-depth detail it provided about the investigation.
One such user wrote, "This is EXACTLY what I want to see when there's a service disruption. A live, in-depth view of who is doing what, any new leads on the issue, multiple teams chiming in with various diagnostic stats, honestly it's really awesome. I know this can't be expected from most businesses, especially non-open sourced ones, but it's so refreshing to see this instead of the typical "We're working on a potential service disruption" that we normally get."

GitLab goes multicloud using Crossplane with kubectl
Atlassian Bitbucket, GitHub, and GitLab take collective steps against the Git ransomware attack
Attackers wiped many GitHub, GitLab, and Bitbucket repos with ‘compromised’ valid credentials leaving behind a ransom note
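The root cause GitLab pointed to was redis latency. As a purely illustrative sketch, unrelated to GitLab's own diagnostics, the snippet below samples client-observed round-trip times of redis commands using the redis-py package (pip install redis); sustained spikes in numbers like these are what slow every request that touches the cache. The host, port, and sample count are placeholder values.

```python
# Illustrative only: sample client-observed redis round-trip latency.
# This is not GitLab's tooling; it just shows what "redis latency" means
# from an application server's point of view.
import statistics
import time

import redis

def sample_redis_latency(host="localhost", port=6379, samples=100):
    client = redis.Redis(host=host, port=port)
    timings_ms = []
    for _ in range(samples):
        start = time.perf_counter()
        client.ping()  # one round trip to the redis server
        timings_ms.append((time.perf_counter() - start) * 1000)
    return statistics.median(timings_ms), max(timings_ms)

if __name__ == "__main__":
    median_ms, worst_ms = sample_redis_latency()
    print(f"median: {median_ms:.2f} ms, worst: {worst_ms:.2f} ms")
```

On a healthy local instance the median is typically well under a millisecond; the point is simply that command round-trip time as seen by clients, not just server load, is the signal an incident like this shows up in.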