
Tech News


Eclipse Foundation releases updates on its Jakarta EE rights to Java trademarks

Vincy Davis | 07 May 2019 | 3 min read
Last week, the Eclipse Foundation announced an update on Jakarta EE's rights to Java trademarks. The announcement also covers the complex and confidential negotiations between the Eclipse Foundation and Oracle, including a summary of the progress made to date and the implications of the agreement for the use of Java trademarks and the javax namespace.

In 2017, Oracle announced the migration of Java EE to the Eclipse Foundation, though the process since then has been slow. The mutual intention of the Eclipse Foundation and Oracle was to allow the javax package namespace to evolve within Jakarta EE specifications. Unfortunately, they could not reach an agreement on this point.

Read More: Jakarta EE: Past, Present, and Future

It has now been decided that the javax package namespace and Java trademarks, such as the existing specification names, cannot be evolved or used by the Jakarta EE community. The Eclipse Foundation and Oracle believe this is the best possible outcome for the community. In its official post, the Eclipse Foundation states that Oracle's Java trademarks are Oracle's property alone, and the Eclipse Foundation therefore has no rights to use them. The implications include:

- The javax package namespace may be used within Jakarta EE specifications, but only "as is". No modification to the javax package namespace is permitted within Jakarta EE component specifications.
- Jakarta EE specifications that continue to use the javax package namespace must remain TCK compatible with the corresponding Java EE specifications.
- Jakarta EE component specifications using the javax package namespace may be omitted entirely from future Jakarta EE Platform specifications.
- Specification names must be changed from a "Java EE" naming convention to a "Jakarta EE" naming convention. This includes acronyms such as EJB, JPA, or JAX-RS.

Additionally, any specification that uses the javax namespace will continue to carry the certification and container requirements that Java EE has had in the past. The Jakarta EE Working Group, together with Oracle, will continue to work on the Jakarta EE 8 specification and is looking forward to future versions of the Jakarta EE specifications. The team is also confident that many application servers will be certified as Jakarta EE 8 compatible. After Jakarta EE 8, the main aim of Jakarta EE 9 will be to maximize compatibility with future versions without suppressing innovation.

There have been mixed reactions to the announcement. Some feel it is a welcome move toward openness that avoids confusion, whereas others believe that tech companies' lawyers are making it harder for software to get developed. A Reddit user commented, "With these changes, it is more likely that developer would stop using it and switch to other frameworks."

To know more about this news in detail, visit the Eclipse Foundation's official blog post.


Uber and Lyft drivers go on strike a day before Uber IPO roll-out

Fatema Patrawala | 07 May 2019 | 4 min read
The New York Times reported last Friday that Uber and Lyft drivers are planning a two-hour strike in several major cities around the world on Wednesday, a coordinated effort timed to coincide with Uber's forthcoming IPO. The labor groups organizing the strike are protesting the companies' poor payment and labor practices. They plan to switch off their apps during the crucial morning rush hour on the day before Uber is expected to roll out its public offering, valued at around $90 billion.

Uber drivers in New York City, Philadelphia, Boston, and Los Angeles are scheduled to go on strike from 7AM to 9AM on Wednesday, May 8th, according to the New York Taxi Workers Alliance. Drivers in several UK cities, including London, Birmingham, Nottingham, and Glasgow, are also taking part in the work stoppage, as reported by The Independent, and Premium Times reports that Uber drivers went on strike in Abuja, the capital of Nigeria.

According to the alliance, workers are demanding fewer driver deactivations, an end to upfront pricing, and a cap on the per-fare commission taken by ride-hail companies. Sonam Lama, an Uber driver and a member of the alliance, says, "The gig economy is all about exploiting workers by taking away our rights. And it has to stop."

Uber knows it has a driver problem. In its filing with the US Securities and Exchange Commission declaring its intention to go public last month, Uber said that driver dissatisfaction was likely to increase as the company sought to reduce the amount of money it spends on driver incentives. "Further, we are investing in our autonomous vehicle strategy, which may add to Driver dissatisfaction over time, as it may reduce the need for Drivers," the company notes.

Uber has a complicated history with driver strikes. In January 2017, The Verge reported on the New York Taxi Workers Alliance announcing a strike at JFK Airport in protest of President Donald Trump's ban on refugees from six Muslim-majority countries. Uber was then accused of breaking the strike, sparking a backlash from riders who tweeted photos of themselves deleting the Uber app with the hashtag #DeleteUber. Again this year, on March 25th, Uber and Lyft drivers went on strike across Los Angeles in opposition to Uber's decision to cut rates by 25% in the Los Angeles area.

Tomorrow's strike, however, appears more organized and geographically diverse than those earlier, more localized protests. It is organized by labor groups such as Rideshare Drivers United, which is building an organization to fight for the dignity of drivers' work and better lives, aided by national advocacy groups such as Gig Workers Rising. It all makes sense, considering the IPO is expected to be the largest since Alibaba's in 2014. One of the Democratic presidential front-runners has also tweeted in favor of the drivers, saying it is reasonable for drivers to ask for higher wages when the CEO of the company gets paid $50 million in a year.

https://twitter.com/BernieSanders/status/1124385385252040705

An Uber spokesperson responded to The New York Times listing some of the perks available to drivers, including higher earnings and free four-year college, while a spokesperson for Lyft said that driver wages have gone up over the last two years. Neither commented on whether they plan to use cash incentives to entice drivers to break the strike.

While a significant chunk of drivers is sure to log off their apps during the strike, it remains likely that others will see it as an opportunity to cash in on the disruption. According to one user on Hacker News, "If a bunch of drivers go on strike then any of the remaining ones automatically get paid more because the supply is lower (surge pricing). The higher pay attracts new drivers, or gets the ones who only drive during surge pricing to come out and work full shifts as long as the good money is there. This is basically the same result for the company as conceding to the drivers' demands, except that it ends as soon as the striking drivers give up and reenter the labor pool."

As of now, we can only note that drivers are classified as independent contractors and, as such, tend to act in their own best self-interest.


.NET 5 arriving in 2020!

Amrata Joshi | 07 May 2019 | 4 min read
Yesterday, on the first day of Microsoft Build 2019, the team behind .NET Core announced that the release following .NET Core 3.0 will be called .NET 5, the next big release in the .NET family. There will be just one .NET going forward, and users will be able to use it to target Linux, macOS, Windows, iOS, Android, tvOS, watchOS, WebAssembly and more. The team will also introduce new .NET APIs, runtime capabilities and language features as part of .NET 5, which is scheduled to ship in November 2020.

.NET 5 takes .NET Core and the best of Mono, the original cross-platform .NET runtime, to create a single platform that you can use for all your modern .NET code. The release will be supported with future updates to Visual Studio 2019, Visual Studio Code and Visual Studio for Mac.

What is expected in .NET 5?

Switchable built-in runtimes
.NET Core has two main runtimes: Mono, the original cross-platform implementation of .NET, and CoreCLR, which is primarily targeted at supporting cloud applications, including the largest services at Microsoft. The two runtimes have a lot in common, so the team has decided to make CoreCLR and Mono drop-in replacements for one another and plans to make it easier for users to choose between the different runtime options.

.NET 5 applications
All .NET 5 applications will use the CoreFX framework, which will work smoothly with Xamarin and client-side Blazor workloads. These applications will be buildable with the .NET CLI, ensuring that users have common command-line tooling across projects.

Naming
The team decided to simplify the naming: since there is only one .NET going forward, there is no need for a clarifying term like "Core". According to the team, .NET 5 is a shorter name that also communicates uniform capabilities and behaviors.

Other ways in which the .NET 5 project will improve things:

- The release will produce a single .NET runtime and framework with uniform runtime behavior and developer experiences that can be used everywhere.
- It will expand the capabilities of .NET by reflecting the best of .NET Core, .NET Framework, Xamarin and Mono.
- It will help in building projects out of a single code-base that developers can work on and expand together.
- Code and project files will look and feel the same no matter which type of app is being built, and users will continue to get access to the same runtime, API and language capabilities with each app.
- Users will have more choice of runtime experiences.
- The release will come with Java interoperability on all platforms, and Objective-C and Swift interoperability will be supported on multiple operating systems.

What won't change?

.NET will continue to be open source and community-oriented on GitHub, and it will remain a cross-platform implementation. The release will continue to support platform-specific capabilities, such as Windows Forms and WPF on Windows, as well as side-by-side installation, high performance, small (SDK-style) project files, and the command-line interface (CLI).

A glimpse at the future roadmap (roadmap image source: Microsoft)

The blog reads, "The .NET 5 project is an important and exciting new direction for .NET. You will see .NET become simpler but also have a broader and more expansive capability and utility. All new development and feature capabilities will be part of .NET 5, including new C# versions. We see a bright future ahead in which you can use the same..."

To know more about this news, check out Microsoft's blog post.


Microsoft Build 2019: Introducing WSL 2, the newest architecture for the Windows Subsystem for Linux

Amrata Joshi | 07 May 2019 | 3 min read
Yesterday, on the first day of Microsoft Build 2019, the team at Microsoft introduced WSL 2, the newest architecture for the Windows Subsystem for Linux. With WSL 2, file system performance increases and users will be able to run more Linux apps. The initial builds of WSL 2 will be available by the end of June this year.

https://twitter.com/windowsdev/status/1125484494616649728
https://twitter.com/poppastring/status/1125489352795201539

What's new in WSL 2?

Run Linux binaries
WSL 2 powers the Windows Subsystem for Linux to run ELF64 Linux binaries on Windows. The new architecture changes how these Linux binaries interact with Windows and the computer's hardware, but it still provides the same user experience as WSL 1.

Linux distros
With this release, individual Linux distros can be run either as a WSL 1 distro or as a WSL 2 distro, can be upgraded or downgraded at any time, and WSL 1 and WSL 2 distros can run side by side. WSL 2 uses an entirely new architecture built around a real Linux kernel.

Increased speed
With this release, file-intensive operations like git clone, npm install, apt update, apt upgrade, and more get faster. In the team's initial tests, WSL 2 ran up to 20x faster than WSL 1 when unpacking a zipped tarball, and around 2-5x faster when using git clone, npm install and cmake on various projects. (A rough do-it-yourself timing sketch follows at the end of this piece.)

A Linux kernel shipped with Windows
The team will ship an open source, real Linux kernel with Windows, which makes full system call compatibility possible. This is the first time a Linux kernel will be shipped with Windows. The team is building the kernel in house, and the initial builds will ship version 4.19 of the kernel. The kernel has been designed in tune with WSL 2 and optimized for size and performance. The team will service this Linux kernel through Windows updates, so users will get the latest security fixes and kernel improvements without needing to manage it themselves. The configuration for this kernel will be available on GitHub once WSL 2 is released, and the WSL kernel source will consist of links to a set of patches in addition to the long-term stable source.

Full system call compatibility
Linux binaries use system calls to perform functions such as accessing files, requesting memory, and creating processes. For WSL 1, the team created a translation layer that interprets most of these system calls and allows them to work on the Windows NT kernel. Implementing all of these system calls is challenging, however, which is why some apps don't run properly in WSL 1. WSL 2 includes its own Linux kernel and therefore has full system call compatibility.

To know more about this news, check out Microsoft's blog post.
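Because the claimed speedups concern file-intensive workloads, they are easy to probe yourself. The script below is a rough, illustrative Python sketch (not Microsoft's benchmark); you could run it unchanged inside a WSL 1 distro and a WSL 2 distro and compare the reported times. The file count and sizes are arbitrary choices.

```python
# wsl_fs_bench.py -- illustrative only: time many small file writes and reads,
# the kind of workload (git clone, npm install) the WSL 2 announcement cites.
import os
import shutil
import tempfile
import time

def timed_small_file_io(n_files=2000, size=4096):
    """Write and read back n_files small files, returning elapsed seconds."""
    workdir = tempfile.mkdtemp(prefix="wslbench-")
    payload = os.urandom(size)
    start = time.perf_counter()
    for i in range(n_files):
        with open(os.path.join(workdir, f"file_{i}.bin"), "wb") as f:
            f.write(payload)
    for i in range(n_files):
        with open(os.path.join(workdir, f"file_{i}.bin"), "rb") as f:
            f.read()
    elapsed = time.perf_counter() - start
    shutil.rmtree(workdir)  # clean up the temporary directory
    return elapsed

if __name__ == "__main__":
    print(f"{timed_small_file_io():.2f}s for 2000 small file writes and reads")
```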


OpenAI: Two new versions and the output dataset of GPT-2 out!

Vincy Davis | 07 May 2019 | 3 min read
Today, OpenAI released new versions of GPT-2, its AI model capable of generating coherent paragraphs of text without needing any task-specific training. The release includes the medium 345M-parameter version and the small 117M version of GPT-2. OpenAI has also shared the 762M and 1.5B versions with partners in the AI and security communities who are working to improve societal preparedness for large language models. The original GPT was released in 2018, and in February 2019 OpenAI announced GPT-2 along with many samples and a discussion of policy implications.

Read More: OpenAI's new versatile AI model, GPT-2 can efficiently write convincing fake news from just a few words

The team at OpenAI has decided on a staged release of GPT-2, meaning the gradual release of a family of models over time. The reason behind the staged release is to give people time to assess the properties of these models, discuss their societal implications, and evaluate the impacts of release after each stage. The 345M parameter version of GPT-2 has improved performance relative to the 117M version, though it still does not generate coherent text with great ease, and the team also judged the 345M version difficult to misuse. Many factors, such as the ease of generating coherent text, the role of humans in the text generation process, the likelihood and timing of future replication and publication by others, evidence of use in the wild, and expert-informed inferences about unobservable uses, were considered while releasing this staged 345M version.

The team hopes that ongoing research on bias, detection, and misuse will support the publication of larger models, and in six months it will share a fuller analysis of language models' societal implications and its heuristics for release decisions. OpenAI is looking for partnerships with academic institutions, non-profits, and industry labs focused on increasing societal preparedness for large language models. It is also open to collaborating with researchers working on language model output detection, bias, and publication norms, and with organizations potentially affected by large language models.

The output dataset contains GPT-2 outputs from all four model sizes, with and without top-k truncation, as well as a subset of the WebText corpus used to train GPT-2. The dataset features approximately 250,000 samples per model/hyperparameter pair, which should be sufficient to help a wider range of researchers perform quantitative and qualitative analysis.

To know more about the release, head over to the official release announcement.
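OpenAI's own release ships TensorFlow code and instructions on GitHub. As a quick illustration of what the newly released 345M checkpoint does, here is a hedged sketch that samples a continuation using the Hugging Face transformers package (an assumption on our part, not part of OpenAI's release), where the 345M model is published under the name "gpt2-medium". The prompt text and sampling settings are arbitrary.

```python
# Illustrative only: load the publicly released 345M ("gpt2-medium") weights
# via the Hugging Face transformers package and sample a short continuation.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2-medium")
model = GPT2LMHeadModel.from_pretrained("gpt2-medium")
model.eval()

prompt = "OpenAI has released the medium-sized GPT-2 model, which"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

with torch.no_grad():
    output = model.generate(
        input_ids,
        max_length=80,
        do_sample=True,   # sample instead of greedy decoding
        top_k=40,         # top-k truncation, as in the released output dataset
        temperature=0.8,
    )

print(tokenizer.decode(output[0], skip_special_tokens=True))
```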


DuckDuckGo proposes “Do-Not-Track Act of 2019” to require sites to respect DNT browser setting

Sugandha Lahoti | 07 May 2019 | 3 min read
DuckDuckGo, the privacy-focused search company, has proposed draft legislation that would require sites to respect the Do Not Track browser setting. Called the "Do-Not-Track Act of 2019", the legislation would mandate that websites not track people who have enabled the DNT signal in their browsers. Per a recent study conducted by DuckDuckGo, a quarter of people have turned on this setting, and most were unaware that big sites do not respect it.

A "Do-Not-Track Signal" means a signal sent by a web browser or similar user agent that conveys a user's choice regarding online tracking, reflects a deliberate choice by the user, and complies with the latest Tracking Preference Expression (DNT) specification published by the World Wide Web Consortium (W3C).

DuckDuckGo's proposal comes just days after Google announced more privacy controls for its users. Last week, Google launched a new feature allowing users to manually delete all or part of their location history and web and app activity data. It also offers a time limit for how long activity data is saved, 3 or 18 months, before deleting it automatically. However, it does not offer an option to not store history at all.

DuckDuckGo's proposed Do-Not-Track Act of 2019 details the following points:

- No third-party tracking by default. Data brokers would no longer be legally able to use hidden trackers to slurp up your personal information from the sites you visit. And the companies that deploy the most trackers across the web, led by Google, Facebook, and Twitter, would no longer be able to collect and use your browsing history without your permission.
- No first-party tracking outside what the user expects. For example, if you use WhatsApp, its parent company (Facebook) wouldn't be able to use your data from WhatsApp in unrelated situations (like for advertising on Instagram, also owned by Facebook). As another example, if you go to a weather site, it could give you the local forecast, but not share or sell your location history.
- The legislation would have exceptions for debugging, auditing, security, non-commercial research, and journalism. However, each of these exceptions would only apply if a site adopts strict data-minimization practices, such as using the least amount of personal information needed and anonymizing it when possible.

These restrictions would come into play only if a consumer has turned on the Do Not Track setting in their browser. For violations of the Do-Not-Track Act of 2019, DuckDuckGo proposes fines of no less than $50,000 and no more than $10,000,000 or 2% of an organization's annual revenue, whichever is greater. If the act passes into law, sites would be required to cease certain user tracking methods, which means less data available to inform marketing and advertising campaigns. The proposal is still quite far from turning into law, but presidential candidate Elizabeth Warren's recent proposal to regulate "big tech companies" may give it a much-needed boost.

Twitter users complimented the act.

https://twitter.com/Bendineliot/status/1123579280892538881
https://twitter.com/jmhaigh/status/1123574469950414848
https://twitter.com/n0ahrabbit/status/1123572013153439745

For the full text, download the proposed Do-Not-Track Act of 2019.
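To make the mechanism concrete: the DNT signal is just an HTTP request header, so "respecting" it is ultimately a server-side decision. The snippet below is a minimal, illustrative Flask sketch (not taken from the proposal or from DuckDuckGo) showing how a site could check the header before enabling any tracking code.

```python
# Minimal sketch: honor the "DNT: 1" request header before loading trackers.
from flask import Flask, request

app = Flask(__name__)

def tracking_allowed(req) -> bool:
    # Browsers with the setting enabled send the header "DNT: 1".
    return req.headers.get("DNT") != "1"

@app.route("/")
def index():
    if tracking_allowed(request):
        # Analytics / third-party tracking scripts could be injected here.
        return "Welcome (tracking enabled)"
    # Serve the page without any tracking scripts or identifiers.
    return "Welcome (Do Not Track honored)"

if __name__ == "__main__":
    app.run()
```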

An unsupervised deep neural network cracks 250 million protein sequences to reveal biological structures and functions

Vincy Davis | 07 May 2019 | 4 min read
One of the goals for artificial intelligence in biology is the creation of controllable predictive and generative models that can read and generate biology in its native language. Artificial neural networks, with their proven pattern recognition capabilities, have been utilized in many areas of bioinformatics. Accordingly, research is needed into methods that can learn intrinsic biological properties directly from protein sequences, which can then be transferred to prediction and generation.

Last week, Alexander Rives and Rob Fergus of the Department of Computer Science at New York University, together with Siddharth Goyal, Joshua Meier, Demi Guo, Myle Ott, C. Lawrence Zitnick and Jerry Ma of the Facebook AI Research team, published a paper titled "Biological Structure And Function Emerge From Scaling Unsupervised Learning to 250 Million Protein Sequences". The paper investigates scaling high-capacity neural networks to extract general and transferable information about proteins from raw sequences.

Next-generation sequencing (NGS) has revolutionized the biological field, enabling a wide variety of applications and the study of biological systems at a detailed level. Recently, due to reductions in the cost of this technology, there has been exponential growth in the size of biological sequence datasets. Data sampled across diverse sequences makes it possible to study predictive and generative techniques for biology using artificial intelligence. In this paper, the team investigates deep learning across evolution at the scale of the largest available protein sequence databases.

What does the research involve?

The researchers apply self-supervision to the problem of understanding protein sequences and explore what information the learned representations capture. They train a neural network by predicting masked amino acids, using a dataset of 250 million protein sequences containing 86 billion amino acids (a toy sketch of this training objective follows at the end of this piece). The resulting model maps raw sequences to representations of biological properties without any prior domain knowledge. The neural network represents the identity of each amino acid in its input and output embeddings. The space of representations learned from sequences provides biological structure information at many levels, including that of amino acids, proteins, groups of orthologous genes, and species. Information about secondary and tertiary structure is internalized and represented within the network in a generalizable form.

Observations from the research

The paper concludes that networks trained on evolutionary data can be adapted to give results using only features learned from sequences, that is, without any prior knowledge. It was also observed that even the highest-capacity models trained in the study underfit the 250M sequences, suggesting that model capacity is still a limiting factor. The researchers believe that using the trained network architectures along with predictive models will help in generating and optimizing new sequences for desired functions, including sequences that have not been seen before in nature but that are biologically active. They have sought to use unsupervised learning to recover representations that map multiple levels of biological granularity.

https://twitter.com/soumithchintala/status/1123236593903423490

The result of the paper does not satisfy the community completely. Some are of the opinion that the paper is hard to follow and leaves some information unarticulated; for example, it is not specified which representations of biological properties the model maps. A user on Reddit commented, "Like some of the other ML/AI posts that made it to the top page today, this research too does not give any clear way to reproduce the results. I looked through the pre-print page as well as the full manuscript itself. Without reproducibility and transparency in the code and data, the impact of this research is ultimately limited. No one else can recreate, iterate, and refine the results, nor can anyone rigorously evaluate the methodology used". Another user added, "This is cool, but would be significantly cooler if they did some kind of biological follow up. Perhaps getting their model to output an "ideal" sequence for a desired enzymatic function and then swapping that domain into an existing protein lacking the new function".
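The core training signal described above, masking amino acids and asking the network to predict them, is straightforward to write down. The following is a toy PyTorch sketch of that objective for illustration only; the tiny Transformer, vocabulary handling, and random batch are stand-ins, not the architecture or data pipeline used in the paper.

```python
# Toy masked-amino-acid prediction objective: hide ~15% of residues and train
# the network to recover them. Illustrative only.
import torch
import torch.nn as nn

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
MASK = 20                      # extra token id used to hide residues
VOCAB = len(AMINO_ACIDS) + 1

class TinyProteinLM(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, dim)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, VOCAB)

    def forward(self, tokens):
        return self.head(self.encoder(self.embed(tokens)))

def mask_tokens(tokens, mask_prob=0.15):
    masked, targets = tokens.clone(), tokens.clone()
    mask = torch.rand(tokens.shape) < mask_prob
    masked[mask] = MASK
    targets[~mask] = -100      # ignore unmasked positions in the loss
    return masked, targets

# One illustrative training step on a random "protein" batch.
model = TinyProteinLM()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
batch = torch.randint(0, 20, (8, 100))   # 8 sequences of length 100
inputs, targets = mask_tokens(batch)
logits = model(inputs)
loss = nn.functional.cross_entropy(
    logits.view(-1, VOCAB), targets.view(-1), ignore_index=-100
)
loss.backward()
optimizer.step()
print(f"masked-LM loss: {loss.item():.3f}")
```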


Firefox releases v66.0.4 and 60.6.2 to fix the expired certificate problem that ended up disabling add-ons

Bhagyashree R | 06 May 2019 | 3 min read
Last Friday, Firefox users were left infuriated when all of their extensions were abruptly disabled. Fortunately, Mozilla has fixed the issue in yesterday's releases, Firefox 66.0.4 and Firefox 60.6.2.

https://twitter.com/mozamo/status/1124484255159971840

This is not the first time Firefox users have encountered this type of problem. A similar issue was reported back in 2016, and it seems proper steps were not taken to prevent it from recurring.

https://twitter.com/Theobromia/status/1124791924626313216

Multiple users reported that all add-ons were disabled on Firefox because of failed verification. Users were also unable to download any new add-ons and were shown a "Download failed. Please check your connection" error despite having a working connection. This happened because the certificate with which the add-ons were signed had expired. The timestamps in the certificate were (a small validity-check sketch follows at the end of this piece):

Not Before: May 4 00:09:46 2017 GMT
Not After : May 4 00:09:46 2019 GMT

Mozilla shared a temporary hotfix ("hotfix-update-xpi-signing-intermediate-bug-1548973") before releasing a build with the issue permanently fixed.

https://twitter.com/mozamo/status/1124627930301255680

To apply this hotfix automatically, users needed to enable Studies, a feature through which Mozilla tries out new features before releasing them to general users. The Studies feature is enabled by default, but if you have previously opted out of it, you can enable it by navigating to Options | Privacy & Security | Allow Firefox to install and run studies.

https://twitter.com/mozamo/status/1124731439809830912

Mozilla released Firefox 66.0.4 for desktop and Android users and Firefox 60.6.2 for ESR (Extended Support Release) users yesterday with a permanent fix for the issue. These releases repair the certificate to re-enable web extensions that were disabled because of it. There are still some issues that Mozilla is currently working to resolve:

- A few add-ons may appear unsupported or may not appear in about:addons. Mozilla assures users that add-on data will not be lost, as it is stored locally and can be recovered by re-installing the add-ons.
- Themes will not be re-enabled and will switch back to the default.
- If a user's home page or search settings were customized by an add-on, they will be reset to the defaults.
- Users might see that Multi-Account Containers and Facebook Container are reset to their default state. Containers is a feature that lets you segregate your browsing activities within different profiles. As an aftereffect of this certificate issue, data that might be lost includes the configuration data on which containers to enable or disable, container names, and icons.

Many users depend on Firefox's extensibility to get their work done, and this issue has clearly left many of them sour. "This is pretty bad for Firefox. I wonder how much people straight up & left for Chrome as a result of it," a user commented on Hacker News.

Read the Mozilla Add-ons Blog for more details.
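The outage ultimately came down to the validity window printed above. As an illustration (not Mozilla's tooling), the sketch below uses a recent version of the Python cryptography package to read a PEM-encoded certificate and report whether the current time falls inside its Not Before / Not After window; the file path is a placeholder, not a real Mozilla artifact.

```python
# Illustrative only: check whether a PEM certificate is inside its validity
# window, the condition the expired add-on signing intermediate failed.
from datetime import datetime

from cryptography import x509

with open("intermediate_cert.pem", "rb") as f:   # placeholder path
    cert = x509.load_pem_x509_certificate(f.read())

now = datetime.utcnow()  # pyca/cryptography reports naive UTC datetimes here
print("Not Before:", cert.not_valid_before)
print("Not After :", cert.not_valid_after)

if cert.not_valid_before <= now <= cert.not_valid_after:
    print("Certificate is currently within its validity window.")
else:
    print("Certificate expired or not yet valid; signatures will not verify.")
```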


Amazon S3 is retiring support for path-style API requests; sparks censorship fears

Fatema Patrawala | 06 May 2019 | 5 min read
Last Tuesday, Amazon announced that Amazon S3 will no longer support path-style API requests. Currently Amazon S3 supports two request URI styles in all regions: path-style (also known as V1), which includes the bucket name in the path of the URI (example: //s3.amazonaws.com/<bucketname>/key), and virtual-hosted style (also known as V2), which uses the bucket name as part of the domain name (example: //<bucketname>.s3.amazonaws.com/key). (A short SDK example follows at the end of this piece.)

The Amazon team mentions in the announcement, "In our effort to continuously improve customer experience, the path-style naming convention is being retired in favor of virtual-hosted style request format." It has asked customers to update their applications to use the virtual-hosted style request format when making S3 API requests, and to do so before September 30th, 2020 to avoid service disruptions. Customers using the AWS SDK can upgrade to the most recent version of the SDK to ensure their applications use the virtual-hosted style request format.

The announcement further states, "Virtual-hosted style requests are supported for all S3 endpoints in all AWS regions. S3 will stop accepting requests made using the path-style request format in all regions starting September 30th, 2020. Any requests using the path-style request format made after this time will fail."

Users on Hacker News see this as a poor move by Amazon and have noted one of its implications: collateral freedom techniques using Amazon S3 will no longer work. One of them commented strongly, "One important implication is that collateral freedom techniques [1] using Amazon S3 will no longer work. To put it simply, right now I could put some stuff not liked by Russian or Chinese government (maybe entire website) and give a direct s3 link to https:// s3 .amazonaws.com/mywebsite/index.html. Because it's https — there is no way man in the middle knows what people read on s3.amazonaws.com. With this change — dictators see my domain name and block requests to it right away. I don't know if they did it on purpose or just forgot about those who are less fortunate in regards to access to information, but this is a sad development. This censorship circumvention technique is actively used in the wild and loosing Amazon is no good."

The Amazon team suggests that customers whose applications cannot use the virtual-hosted style request format, or who have questions or concerns, reach out to AWS Support. To know more about this news, check out the official announcement from Amazon.

Update from the Amazon team on 8th May

Amazon's Chief Evangelist for AWS, Jeff Barr, sat down with the S3 team to understand the change in detail. After getting a better understanding, he posted an update on why the team plans to deprecate the path-based model. Here is his comparison of the old and the new. S3 currently supports two different addressing models: path-style and virtual-hosted style.

The path-style model looks either like this (the global S3 endpoint):

https://s3.amazonaws.com/jbarr-public/images/ritchie_and_thompson_pdp11.jpeg
https://s3.amazonaws.com/jeffbarr-public/classic_amazon_door_desk.png

Or this (one of the regional S3 endpoints):

https://s3-us-east-2.amazonaws.com/jbarr-public/images/ritchie_and_thompson_pdp11.jpeg
https://s3-us-east-2.amazonaws.com/jeffbarr-public/classic_amazon_door_desk.png

Here, jbarr-public and jeffbarr-public are bucket names, while /images/ritchie_and_thompson_pdp11.jpeg and /classic_amazon_door_desk.png are object keys. Even though the objects are owned by distinct AWS accounts and are in different S3 buckets, and possibly in distinct AWS regions, both of them sit in the DNS subdomain s3.amazonaws.com. Hold that thought while we look at the equivalent virtual-hosted style references:

https://jbarr-public.s3.amazonaws.com/images/ritchie_and_thompson_pdp11.jpeg
https://jeffbarr-public.s3.amazonaws.com/classic_amazon_door_desk.png

These URLs reference the same objects, but the objects are now in distinct DNS subdomains (jbarr-public.s3.amazonaws.com and jeffbarr-public.s3.amazonaws.com, respectively). The difference is subtle, but very important. When you use a URL to reference an object, DNS resolution is used to map the subdomain name to an IP address. With the path-style model, the subdomain is always s3.amazonaws.com or one of the regional endpoints; with the virtual-hosted style, the subdomain is specific to the bucket. This additional degree of endpoint specificity is the key that opens the door to many important improvements to S3.

A few in the community are in favor of the revised plan, as one Hacker News comment puts it: "Thank you for listening! The original plan was insane. The new one is sane. As I pointed out here https://twitter.com/dvassallo/status/1125549694778691584 thousands of printed books had references to V1 S3 URLs. Breaking them would have been a huge loss. Thank you!"

For others, though, the Amazon team has still failed to address the domain-censorship issue, as another user points out: "Still doesn't help with domain censorship. This was discussed in-depth in the other thread from yesterday, but TLDR, it's a lot harder to block https://s3.amazonaws.com/tiananmen-square-facts than https://tiananmen-square-facts.s3.amazonaws.com because DNS lookups are made before HTTPS kicks in."

Read about this update in detail here.
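For applications built on the AWS SDK, the addressing style is a client configuration rather than something baked into each call. The snippet below is a small boto3 sketch of the two styles; the bucket name is a placeholder and valid AWS credentials are assumed.

```python
# Compare the two S3 addressing styles via boto3's client configuration.
import boto3
from botocore.client import Config

BUCKET = "my-example-bucket"   # placeholder bucket name

# Virtual-hosted style (the format S3 is standardizing on):
# requests go to https://<bucket>.s3.amazonaws.com/<key>
s3_virtual = boto3.client("s3", config=Config(s3={"addressing_style": "virtual"}))

# Path-style (being retired): requests go to https://s3.amazonaws.com/<bucket>/<key>
s3_path = boto3.client("s3", config=Config(s3={"addressing_style": "path"}))

# The API calls themselves are unchanged -- only the request URL differs,
# which the presigned URLs below make visible.
for client in (s3_virtual, s3_path):
    url = client.generate_presigned_url(
        "get_object",
        Params={"Bucket": BUCKET, "Key": "index.html"},
        ExpiresIn=300,
    )
    print(url)
```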


RStudio 1.2 releases with improved testing and support for Python chunks, R scripts, and much more!

Amrata Joshi | 06 May 2019 | 3 min read
Last week, the team behind RStudio released RStudio 1.2, which includes dozens of new productivity enhancements and capabilities. RStudio 1.2 is compatible with projects in SQL, Stan, Python, and D3. With this release, testing R code integrations for shinytest and testthat is easier, and users can create, test, and publish APIs in R with Plumber and run R scripts as background jobs.

What's new in RStudio 1.2?

Python sessions
This release uses a shared Python session for executing Python chunks and comes with simple bindings to access R objects from Python chunks and vice versa.

Keyring
In RStudio 1.2, passwords and secrets are stored securely with keyring by calling rstudioapi::askForSecret(). Users can install keyring directly from a dialog prompt.

Run R scripts
Users can now run any R script as a background job in a clean R session and watch the script output in real time.

Testing with RStudio 1.2
Users can use the Run Tests command in testthat R scripts to run them directly. The testthat output in the Build pane now comes with a navigable issue list.

PowerPoint
Users can now create PowerPoint presentations with R Markdown.

Package management
With RStudio 1.2, users can specify a primary CRAN URL and secondary CRAN repos from the package preferences pane, and link to a package's primary CRAN page from the Packages pane. CRAN repos can also be configured with a repos.conf configuration file and the r-cran-repos-file option.

Plumber
Users can now easily create Plumber APIs in RStudio 1.2 and execute them within RStudio to view Swagger documentation and make test calls to the APIs.

Bug fixes in RStudio 1.2
In this release, the "invalid byte sequence" issue has been fixed, incorrect Git status displays have been rectified, and issues with low- or no-contrast colors in HTML widgets have been resolved.

Most users seem excited about this release and think it will make Python more accessible to R users. A user commented on Hacker News, "I'm personally an Emacs Speaks Statistics fan myself, but RStudio has been huge boon to the R community. I expect that this will go a long ways towards making Python more accessible to R users."

Some are less happy with the release, finding it offers fewer options for graphics. Another comment reads, "I wish rstudio would render markdown in-line. It also tends to forget graphics in output after many open and closes of rmd. I'm intrigued by .org mode but as far as I can tell, there are not options for graphical output while editing."

To know more about this news, check out the post by RStudio.

Palantir’s software was used to separate families in a 2017 operation, reveals Mijente

Savia Lobo | 06 May 2019 | 4 min read
Documents released this week reveal that the data mining firm Palantir was responsible for a 2017 operation that targeted and arrested family members of children crossing the border alone. The documents stand in sharp contrast to what Palantir had said its software was doing, a discrepancy first identified by Mijente, an advocacy organization that has closely tracked Palantir's murky role in immigration enforcement. The documents confirm "the role Palantir technology played in facilitating hundreds of arrests, only a small fraction of which led to criminal prosecutions," The Intercept reports.

Palantir, a software firm founded by Peter Thiel, one of President Trump's most vocal supporters in Silicon Valley, develops software that helps agents analyze massive amounts of personal data and build profiles for prosecution and arrest. In May 2018, Amazon employees, in a letter to Jeff Bezos, protested against the sale of the company's facial recognition tech to Palantir, saying they "refuse to contribute to tools that violate human rights" and citing the mistreatment of refugees and immigrants by ICE.

Read Also: Amazon addresses employees dissent regarding the company's law enforcement policies at an all-staff meeting, in a first

Palantir had earlier said it was not involved with the part of ICE strictly devoted to deportations and the enforcement of immigration laws. Its $38 million contract, however, is with Homeland Security Investigations (HSI), a component of ICE with a far broader criminal enforcement mandate.

https://twitter.com/ConMijente/status/1124056308943138834

The 2017 ICE operation was designed to dissuade children from joining family members in the United States by targeting parents and sponsors for arrest. According to The Intercept, "Documents obtained through Freedom of Information Act litigation and provided to The Intercept show that this claim, that Palantir software is strictly involved in criminal investigations as opposed to deportations, is false." As part of the operation, ICE arrested 443 people solely for being undocumented. Palantir's software was used throughout, helping agents build profiles of immigrant children and their family members for the prosecution and arrest of any undocumented person they encountered in their investigation.

https://twitter.com/ConMijente/status/1124056314106322944

"The operation was underway as the Trump administration detained hundreds of children in shelters throughout the country. Unaccompanied children were taken by border agents, sent to privately-run facilities, and held indefinitely. Any undocumented parent or family member who came forward to claim children were arrested by ICE for deportation. More children were kept in detention longer, as relatives stopped coming forward," Mijente reports.

Mijente further states, "Mijente is urging Palantir to drop its contract with ICE and stop providing software to agencies that aid in tracking, detaining, and deporting migrants, refugees, and asylum seekers. As Palantir plans its initial public offering, Mijente is also calling on investors not to invest in a company that played a key role in family separation."

The seven-page document, titled "Unaccompanied Alien Children Human Smuggling Disruption Initiative," details how one of Palantir's software solutions, Investigative Case Management (ICM), can be used by agents stationed at the border to build cases against unaccompanied children and their families. Mijente adds, "This document is further proof that Palantir's software directly aids in prosecutions for deportation carried out by HSI agents. Not only are HSI agents involved in deportations in the interior, but they are also actively aiding border agents by investigating and prosecuting relatives of unaccompanied children hoping to join their families."

Jesse Franzblau, senior policy analyst for the National Immigrant Justice Center, said in an email to The Intercept, "The detention and deportation machine is not only driven by hate, but also by profit. Palantir profits from its contract with ICE to help the administration target parents and sponsors of children, and also pays Amazon to use its servers in the process. The role of private tech behind immigration enforcement deserves more attention, particularly with the growing influence of Silicon Valley in government policymaking."

"Yet, Palantir's executives have made no move to cancel their work with ICE. Its founder, Alex Karp, said he's 'proud' to work with the United States government. Last year, he reportedly ignored employees who 'begged' him to end the firm's contract with ICE," the Mijente report mentions.

To know more about this news in detail, head over to the official report.


Microsoft introduces Remote Development extensions to make remote development easier on VS Code

Bhagyashree R | 03 May 2019 | 3 min read
Yesterday, Microsoft announced a preview of the Remote Development extension pack for VS Code, which enables developers to use a container, a remote machine, or the Windows Subsystem for Linux (WSL) as a full-featured development environment.

https://twitter.com/code/status/1124016109076799488

For now, developers need to use the Insiders build for remote development until the stable version is available. Insiders builds are shipped daily with the latest features and bug fixes.

Why are these VS Code extensions needed?

Developers often choose containers or remote virtual machines configured with specific development and runtime stacks as their development environment, because configuring such environments locally can be too difficult or sometimes even impossible. Data scientists also require remote environments to do their work efficiently: building and training data models means analyzing large datasets, which demands massive storage and compute that a local machine can hardly provide.

One way to solve this problem is Remote Desktop, but it can sometimes be laggy. Developers also use Vim and SSH, or local tools with file synchronization, but these can be slow and error-prone. There are browser-based tools that can be used in some scenarios, but they lack the richness and familiarity of desktop tools.

The VS Code Remote Development extension pack

Looking at these challenges, the VS Code team came up with a solution: VS Code should run in two places at once. One instance runs the developer tools locally, and the other connects to a set of development services running remotely in the context of a physical or virtual machine. The pack contains three extensions for working with remote workspaces:

Remote - WSL
Remote - WSL allows you to use WSL as a full development environment directly from VS Code. It runs commands and extensions directly in WSL, so developers don't have to think about pathing issues, binary compatibility, or other cross-OS challenges. With this extension, developers can edit files located in WSL or the mounted Windows filesystem, and run and debug Linux-based applications on Windows.

Remote - SSH
Remote - SSH allows you to open folders or workspaces hosted on any remote machine, VM, or container with a running SSH server. It runs commands and other extensions directly on the remote machine, so you don't need to have the source code on your local machine. It lets you use larger, faster, or more specialized hardware than your local machine, quickly switch between different remote development environments, and safely make updates.

Remote - Containers
Remote - Containers allows you to use a Docker container as your development container. It starts or attaches to a development container running a well-defined tool and runtime stack. All your workspace files are copied or cloned into the container, or mounted from the local file system, and the development container is configured with a devcontainer.json file.

To read more in detail, visit Microsoft's official website.


Mozilla’s updated policies will ban extensions with obfuscated code

Bhagyashree R | 03 May 2019 | 3 min read
Yesterday, Mozilla announced that, under its updated policies, extensions with obfuscated code will no longer be accepted on its add-ons platform. It is also becoming much stricter about blocking extensions that fail to abide by its policies. The policies come into effect in June 2019. Last October, Google announced a similar policy, which came into effect at the start of this year, to prevent malicious extensions from reaching its extensions store.

Obfuscated code is, essentially, code written to be difficult for a human to understand. Common obfuscation practices include replacing function or variable names with weird but allowed characters, using reversed array indexing, and using look-alike characters. "Generally speaking, just try to find good coding guidelines and to try to violate them all," said a developer on Stack Overflow. Obfuscated code should not be confused with minified, concatenated, or otherwise machine-generated code, which remains acceptable. Minification refers to removing unnecessary or redundant data that has no effect on the output, such as whitespace and code comments, and shortening variable names.

"We will no longer accept extensions that contain obfuscated code. We will continue to allow minified, concatenated, or otherwise machine-generated code as long as the source code is included. If your extension is using obfuscated code, it is essential to submit a new version by June 10th that removes it to avoid having it rejected or blocked," Caitlin Neiman said in a blog post. If your code contains transpiled, minified or otherwise machine-generated code, you are required to submit a copy of the human-understandable source code along with instructions on how to reproduce the build.

Here is a snippet from Mozilla's policies: "Add-ons are not allowed to contain obfuscated code, nor code that hides the purpose of the functionality involved. If external resources are used in combination with add-on code, the functionality of the code must not be obscured. To the contrary, minification of code with the intent to reduce file size is permitted."

Mozilla also plans to act more firmly against extensions found to violate its policies. Neiman said, "We will be blocking extensions more proactively if they are found to be in violation of our policies. We will be casting a wider net, and will err on the side of user security when determining whether or not to block." For users who already have extensions containing obfuscated code installed, those extensions will be disabled once the policies take effect.

Many developers support the decision. One Redditor commented, "This is great, obfuscated code doesn't really belong anywhere in the frontend, since you have access to the code and can figure out what the program does given enough time, so why not just make it readable."

Read the announcement on the Mozilla blog, and to go through the policies, visit the MDN web docs.
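Extension code is typically JavaScript, but the distinction the policy draws is language-agnostic. The contrived Python snippet below (not Mozilla's review tooling) shows the same logic written readably and then obfuscated with look-alike identifiers and indirection; minifying the first version for size would still be acceptable, while the second hides its purpose from a reviewer.

```python
# Readable version: what reviewers can audit meaningfully.
def average(values):
    """Return the arithmetic mean of a list of numbers."""
    return sum(values) / len(values)

# "Obfuscated" version: look-alike names and indirection hide the purpose,
# which is what the updated add-on policy disallows.
O0O, OO0 = sum, len
def lI1l(l1Il):
    return (lambda a, b: a(l1Il) / b(l1Il))(O0O, OO0)

print(average([1, 2, 3]), lI1l([1, 2, 3]))  # both print 2.0
```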

Deeplearning4J 1.0.0-beta4 released with full multi-datatype support, new attention layers, and more!

Vincy Davis | 03 May 2019 | 3 min read
Yesterday, Deep Learning for Java (DL4J) released a new beta version, DL4J 1.0.0-beta4. The main highlight of this version is full multi-datatype support for ND4J and DL4J, which past releases lacked. The previous version, deeplearning4j-1.0.0-beta3, was released last year. The 1.0.0-beta4 version also adds MKL-DNN support, new attention layers, and more, along with optimizations and bug fixes.

What's new in DL4J 1.0.0-beta4?

Full multi-datatype support
In past releases, all N-dimensional arrays in ND4J were limited to a single datatype, set globally. Now, arrays of all datatypes may be used simultaneously. The supported datatypes are Double, Float, Half, Long, Int, Short, Ubyte, Byte, Bool and UTF8.

CUDA support
CUDA 10.1 support has been added and CUDA 9.0 support has been dropped; DL4J 1.0.0-beta4 supports CUDA versions 9.2, 10.0 and 10.1. Mac (OSX) CUDA binaries are no longer provided, but support for Linux and Windows CUDA, and for OSX CPU (x86_64), is still available.

Memory management changes
In DL4J 1.0.0-beta4, periodic garbage collection is disabled by default; instead, garbage collection (GC) is called only when it is required to reclaim memory from arrays allocated outside of workspaces.

Deeplearning4j: bug fixes and optimizations
- cuDNN helpers no longer attempt to fall back on built-in layer implementations if an out-of-memory exception is thrown.
- Batch normalization's global variance is reparameterized to avoid underflow and zero or negative variance in some cases during distributed training.
- A bug where dropout instances were incorrectly shared between layers when using transfer learning with dropout has been fixed.
- An issue where tensorAlongDimension could result in an incorrect array order for edge cases, and hence exceptions in LSTMs, has been fixed.

ND4J and SameDiff: features and enhancements
- Removed reliance on periodic garbage collection calls for handling memory management of out-of-workspace (detached) INDArrays.
- New additions include the TensorFlowImportValidator tool, the INDArray.close() method, the Nd4j.createFromNpzFile method, support for importing BERT models into SameDiff, SameDiff GraphTransformUtil, and more.
- Evaluation, RegressionEvaluation, and related classes now support 4d (CNN segmentation) data formats.

Bug fixes and optimizations
- The bug in InvertMatrix.invert() with [1,1]-shaped matrices has been fixed.
- An edge-case bug for Updater instances with length-1 state arrays has been fixed.
- In SameDiff, gradients are no longer defined for non-floating-point variables, or for variables that aren't required to calculate loss or parameter gradients, which improves gradient calculation performance.

To know more about the release, check the detailed release notes.


Facebook bans six toxic extremist accounts and a conspiracy theory organization

Fatema Patrawala | 03 May 2019 | 5 min read
In the wake of real-world hate crimes and violent terror attacks, social media giants have recently been admonished by lawmakers around the world for allowing their platforms to amplify the voices of extremists, to the extent that UK lawmakers called them accessories to radicalization and accessories to crime. Democratic lawmakers on Thursday also slammed the "vague explanations" offered by tech companies responding to questions about extremist content on their platforms.

Against this backdrop, multiple news outlets reported on Thursday that six particularly toxic extremists and one conspiracy theory organization have been banned from Facebook and Instagram. Stories by CNN, The Verge, The Atlantic, and The Washington Post laud Facebook for blocking the accounts of inflammatory online figureheads: religious leader Louis Farrakhan, known for sharing anti-Semitic views; Paul Nehlen, a white nationalist who ran for Congress in 2018; far-right figures Milo Yiannopoulos and Laura Loomer; and conspiracy theorists Paul Joseph Watson and Infowars' Alex Jones.

For years, Jones had used his Facebook channel to spread the idea that the Sandy Hook shooting, in which 20 children died, was a hoax, and his followers took it upon themselves to harass the parents of murdered children. He is being sued for defamation by 10 of those families. Jones was temporarily suspended from Facebook last summer, and his official fan page was previously banned, though he was allowed to operate a personal account; now that has been prohibited on Facebook's sites as well. Twitter permanently blocked InfoWars and Jones in September last year for violating its harassment policies. Yiannopoulos, a former Breitbart News editor and right-wing provocateur, and Loomer, a far-right activist also known for spreading conspiracy theories, have both previously been banned from Twitter Inc.'s social media service.

"We've always banned individuals or organizations that promote or engage in violence and hate, regardless of ideology," a Facebook representative said in a statement on Thursday. "The process for evaluating potential violators is extensive and it is what led us to our decision to remove these accounts today."

Still, the company's enforcement of its rules has long been inconsistent. Just last July, Facebook had tweeted, "We just don't think banning Pages for sharing conspiracy theories or false news is the right way to go."

https://twitter.com/chrisinsilico/status/1124031309066788865
https://twitter.com/facebook/status/1017530220520194048

Then in August, Facebook took down four Jones-related pages, saying it did so not for spreading conspiracy theories but for "glorifying violence."

"They have rules, but enforcement is completely random," Roger McNamee, a high-profile Silicon Valley investor and a sharp critic of Facebook, told Wired. "They don't do anything about it until massive harm has been done and they can no longer find a dodge. Facebook is clearly feeling pressure." McNamee said Facebook's business model depends on amplifying content that stimulates fear and outrage, and banning a few influential figures doesn't change that. "It is sacrificing a handful of the most visible extreme voices in order to protect a much larger number of users it needs to maximize profits," he said.

The company didn't say what specific posts or actions led to the bans, though a spokesperson said that Jones, Yiannopoulos and Loomer have all recently promoted Gavin McInnes, founder of the violence-prone far-right group the Proud Boys, whom Facebook banned in October. In March, two weeks after a gunman went live on Facebook before marching into a mosque in Christchurch, New Zealand, and killing dozens, Facebook removed his account and banned content that references white nationalism and white separatism.

News of the ban leaked out before the company had actually removed the controversial accounts. That gave Yiannopoulos a chance to notify his followers of the ban and promote his email newsletter, according to screenshots captured by BuzzFeed. Wired also reported that the four still had control of their Instagram accounts for nearly an hour after the bans were announced, and Jones' Facebook page, "Infowars Is Back," was still online and live-streaming for nearly two hours after the ban was disclosed. Loomer also took advantage of the advance notice, posting photos to Instagram about being banned that included captions directing her fans to follow her elsewhere. The pages were eventually removed, but the time lag and the staggered media rollout made the episode another example of the company's struggles with content moderation.

https://twitter.com/RMac18/status/1124020589021270017

"Our work against organized hate is ongoing," Facebook said in the statement. "We will continue to review individuals, pages, groups and content against our community standards."