
Tech News


UK’s data protection regulator ICO releases report concluding that the adtech industry operates illegally

Sugandha Lahoti
21 Jun 2019
6 min read
UK’s data protection regulator ICO (Information Commissioner’s Office) has published a report highlighting how thousands of companies share the personal data of hundreds of millions of people every day without a legal basis. The report also concludes that most of today’s online advertising is illegal at a “general, systemic” level. It was produced in response to a series of complaints made in the UK about the security and legality of the adtech ecosystem. These complaints were made by Michael Veale, an academic, and Jim Killock, executive director of the Open Rights Group, as well as the campaign group Privacy International.

Adtech is a term used to describe tools that analyze and manage information (including personal data) for online advertising campaigns and automate the processing of advertising transactions. RTB (real-time bidding) uses adtech to enable the buying and selling of advertising inventory in real time on an impression-by-impression basis, typically involving an auction pricing mechanism. It is the type of online advertising most commonly used at present for selling visual inventory online, either on the website of a publisher or via a publisher’s app.

RTB relies on the potential advertiser seeing information about you. That information can be as basic as the device you’re using to view the webpage, or where in the country you are. But it can paint a much more detailed picture, including the websites you have visited, what your perceived interests are, even what health condition you’ve been searching for information about.

The complexity of this type of online advertising poses a number of risks to data protection compliance, so the ICO investigated the issue and summarized how the adtech sector should comply with GDPR. In the report, the ICO prioritized two areas: the processing of special category data, and the issues caused by relying solely on contracts for data sharing across the supply chain. The report highlights: “Under data protection law, using people’s sensitive personal data to serve adverts requires their explicit consent, which is not happening right now. Sharing people’s data with potentially hundreds of companies, without properly assessing and addressing the risk of these counterparties, raises questions around the security and retention of this data.”

Key findings from ICO’s report

Adtech is disregarding special and non-special category data

Non-special category data is being processed unlawfully at the point of collection. Online advertisers believe that legitimate interests can be used for placing and/or reading a cookie or other technology (rather than obtaining the consent PECR requires). Even if an argument could be made for reliance on legitimate interests, participants within the ecosystem are unable to demonstrate that they have properly carried out the legitimate interests tests and implemented appropriate safeguards.

Special category data, relating to especially sensitive information such as ethnic origin, health background, religion, political views and sexual orientation, is also being processed unlawfully, because the explicit consent it requires is not being collected.

DPIAs (data protection impact assessments) are tools that organizations can use to identify and minimize the data protection risks of any processing operation. Article 35 of the GDPR specifies several circumstances that require DPIAs, including the large-scale processing of special category data. The ICO states that there appears to be a lack of understanding of, and potentially compliance with, the DPIA requirements of data protection law. This increases the risks associated with RTB, which are probably not being fully assessed and mitigated.

Individuals have no control over their privacy

The ICO claims that the privacy information provided to individuals lacks clarity because it is overly complex. Individuals have no guarantees about the security of their personal data within the ecosystem. Moreover, individual profiles are extremely detailed and are repeatedly shared among organizations for any one bid request, all without the individuals’ knowledge. On top of that, organizations are processing these bid requests with inadequate technical and organizational measures to secure the data in transit and at rest, and there is little to no consideration of the requirements of data protection law concerning international transfers of personal data. The ICO says organizations must understand, document and be able to demonstrate: how their processing operations work; what they do; who they share any data with; and how they can enable individuals to exercise their rights.

The contract-only approach to data protection should stop

The adtech industry currently uses contractual controls to provide a level of guarantee about data protection-compliant processing of personal data. However, this contract-only approach does not satisfy the requirements of data protection legislation. Organizations cannot rely on standard terms and conditions by themselves, without undertaking appropriate monitoring and ensuring technical and organizational controls back up those terms. The ICO says that controllers must: assess that the processor is competent to process personal data in line with the GDPR; put in place a contract or other legal act meeting the requirements in Article 28(3); and ensure a processor’s compliance on an ongoing basis, in order to comply with the accountability principle and demonstrate due diligence (such as audits and inspections).

What’s next for ICO

The ICO states that its report requires further analysis and exploration. Starting in July 2019, it will undertake targeted information-gathering activities related to the data supply chain and profiling aspects, the controls in place, and the DPIAs that have been undertaken. It will also continue targeted engagement with key stakeholders, including bilateral engagement with IAB Europe and Google, and may undertake a further industry review in six months’ time. The scope and nature of such an exercise will depend on its findings over the forthcoming months.

As expected, the report was well received by netizens.

https://twitter.com/mark_barratt/status/1141702170334695424
https://twitter.com/jason_kint/status/1141881508619313154
https://twitter.com/DataEthicsEU/status/1141943677687926784

However, some people took issue with it being just a guidance report, lacking real enforcement effort.

https://twitter.com/neil_neilzone/status/1141769209778778113

They also criticized the next-steps section.

https://twitter.com/WolfieChristl/status/1141698725015937024
https://twitter.com/mikarv/status/1141643837712080898

Another point raised was that, in spite of its problems, the adtech industry is also responsible for generating a large percentage of online revenues.

https://twitter.com/jonmundy/status/1141960501485867009

The ICO gave its reply: “RTB is an innovative means of ad delivery, but one that lacks data protection maturity in its current implementation. Whilst it is more the practices than the underlying technology that concerns us, it’s also the case that, if an online service is looking to generate revenue from digital advertising, there are a number of different ways available to do this. RTB is just one of these. Whatever form organizations choose, if it involves either accessing or storing information on user devices, and/or the processing of personal data, there are laws that they have to comply with.”

Read the full report here.

GDPR complaint in EU claims billions of personal data records leaked via online advertising bids
European Union fined Google 1.49 billion euros for antitrust violations in online advertising
GDPR complaint claims Google and IAB leaked ‘highly intimate data’ of web users for behavioral advertising


Oracle releases emergency patches to fix a critical vulnerability in its WebLogic servers

Savia Lobo
21 Jun 2019
2 min read
On Tuesday, Oracle published an out-of-band security update containing a patch for a critical code-execution vulnerability in its WebLogic server. “This remote code execution vulnerability is remotely exploitable without authentication, i.e., may be exploited over a network without the need for a username and password,” the Oracle update warned.

The vulnerability, tracked as CVE-2019-2729, has received a Common Vulnerability Scoring System score of 9.8 out of 10. It is a deserialization attack targeting two web applications that WebLogic appears to expose to the Internet by default: wls9_async_response and wls-wsat.war. The flaw in Oracle’s WebLogic Java application servers came to light as a zero-day four days earlier, when it was reported by security firm KnownSec404.

“This isn’t the first, or even second, deserialization attack that has been used to target these services. The wls-wsat component was successfully exploited in a similar fashion in 2017, and KnownSec404 reported another one in April. The 2017 vulnerability was largely used to install bitcoin miners; April’s vulnerability was exploited in cryptojacking and ransomware campaigns,” Ars Technica reported. John Heimann, Oracle’s Security Program Vice President, said this was an incorrect assessment, and that the new attacks exploit a separate vulnerability that has nothing to do with the zero-day from April.

If patching is not possible right away, the researchers propose two mitigation options:

- delete “wls9_async_response.war” and “wls-wsat.war”, then restart the WebLogic service
- enforce access policy controls for URL access to the paths “/_async/*” and “/wls-wsat/*”
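A rough sketch of the first option in Python, for illustration only: DOMAIN_HOME and the stop/start script names are assumptions that vary by installation, so treat this as an outline rather than a drop-in script.

```python
import pathlib
import subprocess

# Placeholder; the real domain directory depends on your install.
DOMAIN_HOME = pathlib.Path("/u01/oracle/user_projects/domains/base_domain")

# Remove every deployed copy of the two vulnerable web applications.
for name in ("wls9_async_response.war", "wls-wsat.war"):
    for war in DOMAIN_HOME.rglob(name):
        print(f"removing {war}")
        war.unlink()

# Restart WebLogic so the removal takes effect (script names assumed).
subprocess.run([str(DOMAIN_HOME / "bin" / "stopWebLogic.sh")], check=True)
subprocess.run([str(DOMAIN_HOME / "bin" / "startWebLogic.sh")], check=True)
```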
According to Johannes Ullrich of the SANS Technology Institute, Oracle has been patching this series of deserialization vulnerabilities by individually blacklisting the deserialization of very specific classes as exploits are published. “Oracle has been using a ‘blacklist’ approach in patching these deserialization vulnerabilities, blocking the deserialization of very specific classes, which has led to similar bypass/patch cat and mouse games in the past,” Ullrich mentions.

To know more about this in detail, head over to Oracle’s blog post.

Oracle does “organizational restructuring” by laying off 100s of employees
IBM, Oracle under the scanner again for questionable hiring and firing policies
RedHat takes over stewardship for the OpenJDK 8 and OpenJDK 11 projects from Oracle


FTC to investigate YouTube over mishandling children’s data privacy

Savia Lobo
20 Jun 2019
5 min read
The Federal Trade Commission (FTC) has launched an investigation into YouTube over mishandling children’s private data and may levy a fine on the popular video-sharing website. The probe has already prompted the tech giant to reevaluate some of its business practices. Google, which owns YouTube, declined to comment on the investigation.

A report from the Washington Post says the investigation was triggered by complaints from children’s health and privacy groups. The complaints allege that YouTube improperly collected data from kids using the video service, thus violating the Children’s Online Privacy Protection Act, a 1998 law known as COPPA that forbids the tracking and targeting of users younger than age 13. According to consumer advocates cited by the Washington Post, “some of the problems highlighted by the YouTube investigation are shared by many of the most popular online services, including social media sites, such as Instagram and Snapchat, and games such as Fortnite”.

YouTube has come under scrutiny for exposing children to dangerous conspiracy theories, hate speech, violence, sexual content and even for catering to pedophiles, the New York Times reported. “The companies say their services are intended for adults and that they take action when they find users who are underage. Still, they remain widely popular with children, especially preteens, according to surveys and other data, raising concerns that the companies’ efforts — and federal law — have not kept pace with the rapidly evolving online world,” the Washington Post reports.

In February, YouTube received major criticism from companies and individuals for recommending videos of minors and allowing pedophiles to comment on these posts, with specific timestamps pointing to the moments in a video where a child’s exposed private parts were visible. YouTube was also condemned for monetizing these videos, allowing advertisements for major brands like Alfa Romeo, Fiat, Fortnite, Grammarly, L’Oreal, Maybelline, Metro: Exodus, Peloton and SingleMuslims.com to be displayed on them.

Read Also: YouTube disables all comments on videos featuring children in an attempt to curb predatory behavior and appease advertisers

According to The Verge, “The YouTube app, although generally safer than the main platform, has faced an array of moderation challenges, including graphic discussions about pornography and suicide, explicit sexual language in cartoons, and modeling unsafe behaviors like playing with lit matches.” “One of the biggest requests that YouTube executives have received from policymakers, critics, and even some employees is to stop recommending videos that contain children,” The Verge reports. A YouTube spokesperson told The New York Times earlier this month that doing so would hurt creators. Instead, the company has limited “recommendations on videos that it deems as putting children at risk,” according to the Times.

Marc Groman, a privacy lawyer who previously worked for the FTC and the White House, said: “YouTube is a really high-profile target, and for obvious reasons because all of our kids are on it. But the issues on YouTube that we’re all grappling with are elsewhere and everywhere.”

In a statement to the Washington Post, YouTube spokesperson Andrea Faville emphasized that not all discussions about product changes come to fruition. “We consider lots of ideas for improving YouTube and some remain just that — ideas,” she said. “Others, we develop and launch, like our restrictions to minors live-streaming or updated hate speech policy.”

The Wall Street Journal reported that YouTube was planning to migrate all children’s content off the service into a separate app, YouTube Kids, to better protect younger viewers from problematic material, “a change that would be difficult to implement because of the sheer volume of content on YouTube, and potentially could be costly to the company in lost advertising revenue.”

David Monahan of the Campaign for a Commercial-Free Childhood, a Boston-based advocacy group, told the Post: “YouTube’s business model puts profits first, and kids’ well-being last. When we filed a COPPA complaint with the FTC a year ago, Google’s response was ridiculous — that YouTube is not a site for kids, when it’s actually the most popular children’s site on the Internet. We hope the FTC will act soon, and require YouTube to move all kids’ content to YouTube Kids with no marketing, no autoplay or recommendations, and strong protections for children’s privacy.”

https://twitter.com/CBSThisMorning/status/1141690074909892608

U.S. Senator Ed Markey said in a statement to Gizmodo: “In the coming weeks, I will introduce legislation that will combat online design features that coerce children and create bad habits, commercialization, and marketing that manipulate kids and push them into consumer culture, and the amplification of inappropriate and harmful content on the internet. It’s time for the adults in the room to step in and ensure that corporate profits no longer come before kids’ privacy.”

To know more about this news in detail, head over to The Washington Post.

YouTube CEO, Susan Wojcicki says reviewing content before upload isn’t a good idea
YouTube’s new policy to fight online hate and misinformation misfires due to poor execution, as usual
YouTube demonetizes anti-vaccination videos after Buzzfeed News reported that it is promoting medical misinformation


Kubernetes 1.15 releases with extensibility around core Kubernetes APIs, cluster lifecycle stability, and more!

Vincy Davis
20 Jun 2019
5 min read
Update: On July 23rd, Kenny Coleman, the Kubernetes 1.15 Enhancements Lead at VMware, published a “What’s New in Kubernetes 1.15” video with the Cloud Native Computing Foundation (CNCF). In the video, he explains in detail the three major new features in Kubernetes 1.15: Dynamic HA Clusters with kubeadm, Volume Cloning, and CustomResourceDefinitions (CRDs), highlighting each feature and why it matters to users. Watch the video below for the full talk.

https://www.youtube.com/watch?v=eq7dgHjPpzc

On June 19th, the Kubernetes team announced the release of Kubernetes 1.15, which consists of 25 enhancements: 2 moving to stable, 13 in beta, and 10 in alpha. The key themes of this release are extensibility around core Kubernetes APIs, cluster lifecycle stability, and usability improvements. This is Kubernetes’ second release this year. The previous version, Kubernetes 1.14, released three months ago, shipped 10 stable enhancements, the most stable features delivered in a single release. In an interview with The New Stack, Claire Laurence, the Kubernetes release team lead, said of this release, “We’ve had a fair amount of features progress to beta. I think what we’ve been seeing a lot with these alpha and beta features as they progress is a lot of continued focus on stability and overall improvement before indicating that those features are stable.”

Let’s have a brief look at the new features and updates.

#1 Extensibility around core Kubernetes APIs

The theme of the new developments around CustomResourceDefinitions is data consistency and native behavior: a user should not notice whether they are interacting with a CustomResource or with a Golang-native resource. From v1.15 onwards, Kubernetes therefore checks each CRD schema against a restriction called a “structural schema”, which enforces non-polymorphic and complete typing of each field in a CustomResource.

Of the five enhancements in this area, CustomResourceDefinition Defaulting is an alpha release. Defaults are specified using the default keyword in the OpenAPI validation schema, and defaulting will be available as alpha in Kubernetes 1.15 for structural schemas (a sketch follows at the end of this section). The other four enhancements are in beta:

- CustomResourceDefinition Webhook Conversion: CustomResourceDefinitions gain the ability to convert between different versions on the fly, just as users are used to from native resources.
- CustomResourceDefinition OpenAPI Publishing: OpenAPI publishing for CRDs will be available with Kubernetes 1.15 as beta, but only for structural schemas.
- CustomResourceDefinitions Pruning: Pruning is the automatic removal of unknown fields in objects sent to a Kubernetes API. A field is unknown if it is not specified in the OpenAPI validation schema. Pruning enforces that only data structures specified by the CRD developer are persisted to etcd. This is the behavior of native resources, and it will be available for CRDs as well, starting as beta in Kubernetes 1.15.
- Admission Webhook Reinvocation and Improvements: In earlier versions, mutating webhooks were called only once, in alphabetical order, so an earlier webhook could not react to the output of webhooks called later in the chain. With Kubernetes 1.15, mutating webhooks can opt in to at least one re-invocation by specifying reinvocationPolicy: IfNeeded. If a later mutating webhook modifies the object, the earlier webhook gets a second chance.
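To make the structural-schema and defaulting rules concrete, here is a sketch of a CRD expressed as a Python dict mirroring the YAML manifest; the “widgets.example.com” group and the “replicas” field are invented for illustration, and only the shape of the schema matters.

```python
import json

# Every level declares a type and its properties: that is what makes
# the schema "structural". The "default" keyword is the new alpha
# defaulting feature described above.
crd = {
    "apiVersion": "apiextensions.k8s.io/v1beta1",
    "kind": "CustomResourceDefinition",
    "metadata": {"name": "widgets.example.com"},
    "spec": {
        "group": "example.com",
        "names": {"kind": "Widget", "plural": "widgets"},
        "scope": "Namespaced",
        "versions": [{"name": "v1", "served": True, "storage": True}],
        "validation": {
            "openAPIV3Schema": {
                "type": "object",
                "properties": {
                    "spec": {
                        "type": "object",
                        "properties": {
                            # Omitted on write? The API server fills in 1.
                            "replicas": {"type": "integer", "default": 1},
                        },
                    },
                },
            }
        },
    },
}

print(json.dumps(crd, indent=2))
```

With pruning enabled, any field not declared under openAPIV3Schema would be dropped before the object is persisted to etcd.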
#2 Cluster Lifecycle Stability and Usability Improvements

kubeadm, the cluster lifecycle building block, continues to receive features and stability work needed for bootstrapping production clusters efficiently. kubeadm has promoted its high availability (HA) capability to beta, allowing users to use the familiar kubeadm init and kubeadm join commands to configure and deploy an HA control plane. Certificate management has also become more robust in 1.15, with kubeadm seamlessly rotating all certificates before expiry. The kubeadm configuration file API moves from v1beta1 to v1beta2 in 1.15, and kubeadm now has its own new logo.

Continued Improvement of CSI

In Kubernetes 1.15, the Storage Special Interest Group (SIG Storage) continues the migration of in-tree volume plugins to the Container Storage Interface (CSI). SIG Storage worked on bringing CSI to feature parity with in-tree functionality, including capabilities like resizing and inline volumes. It also introduces new alpha functionality in CSI that doesn’t exist in the Kubernetes storage subsystem yet, such as volume cloning. Volume cloning enables users to specify another PVC as a “DataSource” when provisioning a new volume. If the underlying storage system supports this functionality and implements the “CLONE_VOLUME” capability in its CSI driver, the new volume becomes a clone of the source volume, as sketched below.
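The volume-cloning flow, again as a Python dict mirroring the manifest; the names “pvc-source” and “csi-storage” are placeholders.

```python
# The key part is "dataSource" pointing at an existing PVC in the
# same namespace and storage class. If the CSI driver advertises
# CLONE_VOLUME, the new claim starts out as a copy of pvc-source.
clone = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "pvc-clone"},
    "spec": {
        "storageClassName": "csi-storage",
        "accessModes": ["ReadWriteOnce"],
        "resources": {"requests": {"storage": "10Gi"}},
        "dataSource": {"kind": "PersistentVolumeClaim", "name": "pvc-source"},
    },
}

print(clone)
```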
Additional feature updates

- Support for Go modules in Kubernetes core.
- Continued preparation for cloud provider extraction and code organization: the cloud provider code has been moved to kubernetes/legacy-cloud-providers for easier removal later and for external consumption.
- kubectl get and describe now work with extensions.
- Nodes now support third-party monitoring plugins.
- A new scheduling framework for scheduler plugins is now alpha.
- The ExecutionHook API, designed to trigger hook commands in containers for different use cases, is now alpha.
- The extensions/v1beta1, apps/v1beta1, and apps/v1beta2 APIs continue on their deprecation path and will be retired in version 1.16.

To know about the additional features in detail, check out the release notes.

https://twitter.com/markdeneve/status/1141135440336039936
https://twitter.com/IanColdwater/status/1141485648412651520

For more details on Kubernetes 1.15, check out the Kubernetes blog.

HAProxy 2.0 released with Kubernetes Ingress controller, layer 7 retries, polyglot extensibility, gRPC support and more
Red Hat releases OpenShift 4 with adaptability, Enterprise Kubernetes and more!
Linkerd 2.3 introduces Zero-Trust Networking for Kubernetes


Debian GNU/Linux port for RISC-V 64-bits: Why it matters and roadmap

Amrata Joshi
20 Jun 2019
7 min read
Last month, Manuel A. Fernandez Montecelo, a Debian contributor and developer, talked about the Debian GNU/Linux riscv64 port at the RISC-V workshop. Debian is a Unix-like operating system consisting of free software, supported by a community of individuals who care about free and open-source software. The goal of the Debian GNU/Linux riscv64 port project has been to have Debian ready for installation and running on systems that implement a variant of the RISC-V instruction set architecture. The feedback on his presentation at the workshop was positive. Earlier this week, Montecelo announced an update on the status of the port. The announcement comes weeks before the release of buster, which will bring another set of changes to benefit the port.

What is RISC-V used for and why is Debian interested in building this port?

According to the Debian wiki page: “RISC-V (pronounced ‘risk-five’) is an open source instruction set architecture (ISA) based on established reduced instruction set computing (RISC) principles. In contrast to most ISAs, RISC-V is freely available for all types of use, permitting anyone to design, manufacture and sell RISC-V chips and software. While not the first open ISA, it is significant because it is designed to be useful in modern computerized devices such as warehouse-scale cloud computers, high-end mobile phones and the smallest embedded systems. Such uses demand that the designers consider both performance and power efficiency. The instruction set also has a substantial body of supporting software, which fixes the usual weakness of new instruction sets.

In this project the goal is to have Debian ready to install and run on systems implementing a variant of the RISC-V ISA:
- Software-wise, this port will target the Linux kernel
- Hardware-wise, the port will target the 64-bit variant, little-endian

This ISA variant is the ‘default flavour’ recommended by the designers, and the one that seems to attract more interest for planned implementations that might become available in the next few years (development boards, possible consumer hardware or servers).”

Update on Debian GNU/Linux riscv64 port

[Graph: percentage of arch-dependent packages built for riscv64 over time. Image source: Debian]

As the graph above shows, the percentage of arch-dependent packages built for riscv64 (grey line) has been at or above 80% since mid-2018. Arch-dependent packages make up almost half of Debian’s [main, unstable] archive; arch-independent packages can be used by all ports, provided the software they rely on is present. In total, around 90% of packages from the whole archive are available for this architecture.

[Graph: build percentages across all architectures over time. Image source: Debian]

The second graph shows that the percentages are very stable for all architectures. Montecelo writes, “This is in part due to the freeze for buster, but it usually happens at other times as well (except in the initial bring-up or in the face of severe problems).” Even the second-class ports appear to be stable. He writes, “Together, both graphs are also testament that there are people working on ports at all times, keeping things working behind the scenes, and that’s why from a high level view it seems that things just work.” According to him, apart from the work of the porters themselves, there are people working on bootstrapping issues who make it easier to bring up ports than in the past, and who make it easier to cope when toolchain support or other port-related issues blow up. He further added, “And, of course, all other contributors of Debian help by keeping good tools and building rules that work across architectures, patching the upstream software for the needs of several architectures at the same time (endianness, width of basic types), many upstream projects are generic enough that they don't need specific porting, etc.”

Future scope and improvements yet to come

Getting Debian running on RISC-V will not be easy, for various reasons, including the limited availability of hardware able to run the port and the limited options for bootloaders. According to Montecelo, this is an area where they want to improve. He added, “Additionally, it would be nice to have images publicly available and ready to use, for both Qemu and hardware available like the HiFive Unleashed (or others that might show up in time), but although there's been some progress on that, it's still not ready and available for end users.”

Presently, more than 500 packages from the Rust ecosystem in the archive (about 4%) cannot be built and used until Rust gains support for the architecture. Rust requires LLVM, and there is no Rust compiler based on GCC or other toolchains. Montecelo writes, “Firefox is the main high-level package that depends on Rust, but many packages also depend on librsvg2 to render SVG images, and this library has been converted to Rust. We're still using the C version for that, but it cannot be sustained in the long term.” Apart from Rust, other packages use LLVM to some extent, but LLVM is not yet fully working for riscv64; its support for the architecture is expected to be completed this year.

On other programming languages, he writes, “There are other programming language ecosystems that need attention, but they represent a really low percentage (only dozens of packages, of more than 12 thousand; and with no dependencies outside that set). And then, of course, there is a long tail of packages that cannot be built due to a missing dependency, lack of support for the architecture or random failures -- together they make a substantial number of the total, but they need to be looked at and solved almost on a case-by-case basis.”

Why are people excited about this?

Many users are excited about the news, one reason being that there won’t be a need to bootstrap from scratch: once riscv64 support lands in LLVM, Rust will be able to cross-compile easily. A user commented on Hacker News, “Debian Rust maintainer here. We don't need to bootstrap from scratch, Rust (via LLVM) can cross-compile very easily once riscv64 support is added.” This is also good news for Debian, as cross-compiling has come a long way on the distribution. Others are waiting for more pieces to land in RISC-V. Another user commented, “I am waiting until the Bitmanip extension lands to get excited about RISC-V: https://github.com/riscv/riscv-bitmanip”

A few others see LLVM support for riscv64 as the key missing piece. One user commented, “The lack of LLVM backend surprises me. How much work is it to add a backend with 60 instructions (and few addressing modes)? It's clearly far more than I would have guessed.” Another comment reads, “Basically LLVM is now a dependency of equal importance to GCC for Debian. Hopefully this will help motivate expanding architecture-support for LLVM, and by proxy Rust.”

According to users, the port currently misses out on two major fronts: LLVM compiler support for riscv64, and a Rust toolchain that does not depend on LLVM (for example, one based on GCC, which would also bring the many language extensions GCC provides). A user commented on Reddit, “The main blocker to finish the port is having a working Rust toolchain. This is blocked on LLVM support, which only supports RISCV32 right now, and RISCV64 LLVM support is expected to be finished during 2019.” Another comment reads, “It appears that enough people in academia are working on RISCV for LLVM to accept it as a mainstream backend, but I wish more stakeholders in LLVM would make them reconsider their policy.”

To know more about this news, check out Debian’s official post.

Debian maintainer points out difficulties in Deep Learning Framework Packaging
Debian project leader elections goes without nominations. What now?
Are Debian and Docker slowly losing popularity?


Rust’s original creator, Graydon Hoare on the current state of system programming and safety

Bhagyashree R
20 Jun 2019
4 min read
Back in July 2010, Graydon Hoare showcased the Rust programming language for the very first time at the Mozilla Annual Summit. Rust is an open-source systems programming language created with speed, memory safety, and parallelism in mind. Thanks to Rust’s memory and thread safety guarantees, a supportive community, and a quickly evolving toolchain, many major projects are being rewritten in it. One of the biggest is Servo, an HTML rendering engine that will eventually replace Firefox’s rendering engine. Mozilla is also using Rust to rewrite many other key parts of Firefox under Project Quantum. Fastly chose Rust to implement Lucet, its native WebAssembly compiler and runtime. More recently, Facebook also chose Rust to implement its controversial Libra blockchain.

With the 9th anniversary of the day Hoare first presented Rust to a large audience approaching, The New Stack published a very interesting interview with him. In it, he talks about the current state of systems programming, how safe he considers our current complex systems to be, how they can be made safer, and more. Here are the key highlights from the interview:

Hoare on a brief history of Rust

Hoare started working on Rust as a side project in 2006. Mozilla, his employer at the time, got interested in the project and provided him with a team of engineers to help with the further development of the language. In 2013, he experienced burnout and decided to step down as technical lead. After working on some less time-sensitive projects, he quit Mozilla and worked for the payment network Stellar. In 2016, he got a call from Apple to work on the Swift programming language.

Rust is now developed by its core teams and an active community of volunteer coders. The language he once described as a “spare-time kinda thing” is being used by many developers to create a wide range of new software applications, from operating systems to simulation engines for virtual reality. It was also “the most loved programming language” in the Stack Overflow Developer Survey for four years in a row (2016-2019).

Hoare was very humble about the hard work and dedication he put into creating Rust. Asked to summarize Rust’s history, he simply said that “we got lucky”: “that Mozilla was willing to fund such a project for so long; that Apple, Google, and others had funded so much work on LLVM beforehand that we could leverage; that so many talented people in academia, industry and just milling about on the internet were willing to volunteer to help out.”

The current state of system programming and safety

Hoare considers the state of systems programming “healthy” compared to the first couple of decades of his career. It is now far easier to sell a language focused on performance and correctness, and we are seeing more good languages come to market because of the increasing interaction between academia and industry.

On safety, Hoare believes that though we are slowly taking steps toward better safety, the overall situation is not getting better. He attributes this to the number of new, complex computing systems being built: “complexity beyond comprehension means we often can’t even define safety, much less build mechanisms that enforce it.” Another reason, according to him, is the huge amount of vulnerable software already in the field that can be exploited at any time by a bad actor. For instance, on Tuesday, a zero-day vulnerability was fixed in Firefox that was being “exploited in the wild” by attackers. “Like much of the legacy of the 20th century, there’s just a tremendous mess in software that’s going to take generations to clean up, assuming humanity even survives that long,” he adds.

How system programming can be made safer

Hoare designed Rust with safety in mind: its rich type system and ownership model ensure memory and thread safety. However, he suggests that we can do a lot better when it comes to safety in systems programming. He lists a number of improvements we could implement: “information flow control systems, effect systems, refinement types, liquid types, transaction systems, consistency systems, session types, unit checking, verified compilers and linkers, dependent types.” Hoare believes many such features have already been proposed by academia; the main challenge is to implement them “in a balanced, niche-adapted language that’s palatable enough to industrial programmers to be adopted and used.”

You can read Hoare’s full interview on The New Stack.

Rust 1.35.0 released
Rust shares roadmap for 2019
Rust 1.34 releases with alternative cargo registries, stabilized TryFrom and TryInto, and more

Curl’s lead developer announces Google’s “plan to reimplement curl in libcrurl”

Amrata Joshi
20 Jun 2019
4 min read
Yesterday, Daniel Stenberg, the lead developer of curl, announced that Google is planning to reimplement curl in libcrurl, to be renamed libcurl_on_cronet.

https://twitter.com/bagder/status/1141588339100934149

The official blog post reads, “The Chromium bug states that they will create a library of their own (named libcrurl) that will offer (parts of) the libcurl API and be implemented using Cronet.”

Stenberg quotes the stated motivation for the reimplementation: “Implementing libcurl using Cronet would allow developers to take advantage of the utility of the Chrome Network Stack, without having to learn a new interface and its corresponding workflow. This would ideally increase ease of accessibility of Cronet, and overall improve adoption of Cronet by first-party or third-party applications.”

According to him, the team may also be hoping that third-party applications can switch to the new library without having to move to another API. If that works out, the team might additionally create a “crurl” command-line tool, their own version of the curl tool built on their own library. Stenberg notes that this “in itself is a pretty strong indication that their API will not be fully compatible, as if it was they could just use the existing curl tool…”

He writes, “As the primary author and developer of the libcurl API and the libcurl code, I assume that Cronet works quite differently than libcurl so there’s going to be quite a lot of wrestling of data and code flow to make this API work on that code.”

The libcurl API is versatile and has evolved over a period of almost 20 years. There is a lot of functionality, a lot of options, and plenty of subtle behavior that may or may not be easy to mimic, so even making a limited subset of the functions and options work exactly as documented could be difficult and time-consuming. He writes, “I don’t think applications will be able to arbitrarily use either library for a very long time, if ever. libcurl has 80 public functions and curl_easy_setopt alone takes 268 different options!”
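For a sense of the API surface that would have to be mimicked, here is a minimal transfer through libcurl’s “easy” API, shown via the pycurl binding, which mirrors the C API closely; the URL is a placeholder.

```python
from io import BytesIO

import pycurl

body = BytesIO()
handle = pycurl.Curl()
handle.setopt(pycurl.URL, "https://example.com/")  # curl_easy_setopt(CURLOPT_URL, ...)
handle.setopt(pycurl.FOLLOWLOCATION, True)         # one of the ~268 options
handle.setopt(pycurl.WRITEDATA, body)              # where the response body goes
handle.perform()                                   # curl_easy_perform()
print(handle.getinfo(pycurl.RESPONSE_CODE))        # curl_easy_getinfo()
handle.close()
```

Every setopt call here maps one-to-one onto a curl_easy_setopt option, which is exactly the surface a compatible libcrurl would need to reproduce.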
Read Also: Cisco merely blacklisted a curl instead of actually fixing the vulnerable code for RV320 and RV325

According to Stenberg, there is still no clarity on API/ABI stability, or on how Google plans to ship or version the library. He writes, “There’s this saying about imitation and flattery but getting competition from a giant like Google is a little intimidating. If they just put two paid engineers on their project they already have more dedicated man power than the original libcurl project does…”

On the upside, if Google’s team finds and fixes issues in the code and the API, curl improves; the effort also makes more users aware of libcurl and its API, furthering the curl project’s goal of making safe and solid Internet transfers easy for users and applications.

According to Stenberg, applications will need to be aware of which API they work with in order to avoid confusion. He added, “Since I don’t think ‘libcrurl’ will be able to offer a compatible API without a considerable effort, I think applications will need to be aware of which of the APIs they work with and then we have a ‘split world’ to deal with for the foreseeable future and that will cause problems, documentation problems and users misunderstanding or just getting things wrong.” He also pointed out that the chosen names may themselves confuse users: “Their naming will possibly also be the reason for confusion since ‘libcrurl’ and ‘crurl’ look so much like typos of the original names,” he said.

To know more about this news, check out the blog post by Daniel Stenberg.

Google Calendar was down for nearly three hours after a major outage
How Genius used embedded hidden Morse code in lyrics to catch plagiarism in Google search results
Google, Facebook and Twitter submit reports to EU Commission on progress to fight disinformation


Facebook content moderators work in filthy, stressful conditions and experience emotional trauma daily, reports The Verge

Fatema Patrawala
20 Jun 2019
5 min read
Yesterday, The Verge published a gut-wrenching investigative report about the terrible working conditions of Facebook moderators at one of its contract vendor sites in North America: Tampa, Florida. The Tampa site is operated by the professional services firm Cognizant. It is one of the lowest-performing sites in North America and has never consistently enforced Facebook’s policies with 98 percent accuracy, as stipulated in Cognizant’s contract.

In February, The Verge had published a similar report on the deplorable working conditions of content moderators at Facebook’s Arizona site. Both reports were investigated and written by the acclaimed tech reporter Casey Newton. Yesterday’s article is based on interviews with 12 current and former moderators and managers at the Tampa site. In most cases, pseudonyms are used to protect employees from potential retaliation from Facebook and Cognizant, but for the first time, three former moderators for Facebook agreed to break their nondisclosure agreements and discuss working conditions at the site on the record.

https://twitter.com/CaseyNewton/status/1141317045881069569

The working conditions for the content moderators are filthy and stressful, to the extent that one of them is reported to have died amid the emotional trauma the moderators go through every day. Keith Utley was a lieutenant commander in the military, and after his retirement he chose to work as a Facebook moderator at the Tampa site.

https://twitter.com/CaseyNewton/status/1141316396942602240

Utley worked the overnight shift, moderating the worst material users post on Facebook daily, including hate speech, murders, and child pornography. Last year, he had a heart attack at his desk and died. Senior management initially discouraged employees from discussing the incident and tried to hide the fact that Keith had died, for fear it would hurt productivity. But Keith’s father visited the site to collect his belongings and, breaking down, said, “My son died here.”

The moderators further mention that the Tampa site has only one bathroom for all 800 employees working there, and that it has repeatedly been found smeared with feces and menstrual blood. The office coordinators did not even care about cleaning the site; it was infested with bed bugs, and workers found fingernails and pubic hair on their desks. “Bed bugs can be found virtually every place people tend to gather, including the workplace,” Cognizant said in a statement. “No associate at this facility has formally asked the company to treat an infestation in their home. If someone did make such a request, management would work with them to find a solution.”

There have been instances of sexual harassment at the workplace as well; workers have filed two such cases since April, which are now before the US Equal Employment Opportunity Commission. Physical and verbal fights in the office are frequent, and thefts from the office premises are common.

One former moderator bluntly told The Verge that if anything needs to change, it is just one thing: Facebook needs to shut down.

https://twitter.com/jephjacques/status/1141330025897168897

Many significant voices have joined the call to break up Facebook. One of them is Elizabeth Warren, a 2020 US presidential candidate who wants to break up big tech. Another is Chris Hughes, one of the founders of Facebook, who published an op-ed on why he thinks it’s time to break up Facebook.

In response to this investigation, Facebook spokesperson Chris Harrison says the company will conduct an audit of its partner sites and make other changes to promote the well-being of its contractors. He said the company would consider making more moderators full-time employees in the future, and hopes to provide counseling for moderators after they leave.

The news garnered public anger toward Facebook; commenters have said that Facebook defecates on humanity and profits enormously while getting away with it easily.

https://twitter.com/pemullen/status/1141357359861645318

Another comment reads that Facebook’s mission of connecting the world has been an abject failure and the world is worse off from being connected in the ways Facebook has done it. Others note how this story is a reminder of how little these big tech firms care about people.

https://twitter.com/stautistic/status/1141424512736485376

Siva Vaidhyanathan, author of the book Antisocial Media and a columnist at the Guardian, applauds Casey Newton for bringing up this story, but also notes that Newton ignored the work of Sarah T. Roberts, who has written an entire book on this topic, Behind the Screen.

https://twitter.com/sivavaid/status/1141330295376863234

Check out the full story covered by The Verge on their official blog post.

Facebook releases Pythia, a deep learning framework for vision and language multimodal research
After refusing to sign the Christchurch Call to fight online extremism, Trump admin launches tool to defend “free speech” on social media platforms
How Genius used embedded hidden Morse code in lyrics to catch plagiarism in Google search results


Qt 5.13 releases with a fully-supported WebAssembly module, Chromium 73 support, and more

Bhagyashree R
20 Jun 2019
3 min read
Yesterday, the team behind Qt announced the release of Qt 5.13. This release comes with fully-supported Qt for WebAssembly, a Chromium 73-based Qt WebEngine, and many other updates. In this release, the Qt community and team focused on improving the tooling to make designing, developing, and deploying software with Qt more efficient.

https://twitter.com/qtproject/status/1141627444933398528

Following are some of the Qt 5.13 highlights:

Fully-supported Qt for WebAssembly

Qt for WebAssembly makes it possible to build Qt applications for web browsers. The team previewed this platform in Qt 5.12, and beginning with this release Qt for WebAssembly is fully supported. The module uses Emscripten, the LLVM-to-JavaScript compiler, to compile Qt applications for serving from a web server. This allows developers to run their native applications in any browser, provided it supports WebAssembly.

Updates in the Qt QML module

The Qt QML module enables you to write applications and libraries in the QML language. Qt 5.13 comes with improved support for enums declared in C++. With this release, JavaScript “null” as a binding value is optimized at compile time, and QML now generates function tables on 64-bit Windows, making it possible to unwind the stack through JITed functions.

Updates in Qt Quick and Qt Quick Controls 2

Qt Quick is the standard library for writing QML applications, providing all the basic types required for creating user interfaces. With this release, TableView gains support for hiding rows and columns. Qt Quick Controls 2 provides a set of UI controls for creating user interfaces. This release brings a new control named SplitView, with which you can lay out items horizontally or vertically with a draggable splitter between each item, as sketched below. Additionally, the team has added a cache property to icon.
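A minimal sketch of the new SplitView control, assuming a Python build of Qt 5.13 (PySide2) with the QML embedded as a byte string; sizes and colors are arbitrary.

```python
import sys

from PySide2.QtGui import QGuiApplication
from PySide2.QtQml import QQmlApplicationEngine

QML = b"""
import QtQuick 2.13
import QtQuick.Controls 2.13

ApplicationWindow {
    visible: true
    width: 640
    height: 480

    // SplitView (new in Controls 2.13) places a draggable handle
    // between its children; the last item fills the remaining space.
    SplitView {
        anchors.fill: parent
        Rectangle { implicitWidth: 200; color: "lightsteelblue" }
        Rectangle { color: "whitesmoke" }
    }
}
"""

app = QGuiApplication(sys.argv)
engine = QQmlApplicationEngine()
engine.loadData(QML)
sys.exit(app.exec_())
```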
Qt WebEngine

Qt WebEngine provides a web browser engine that makes it easier to embed content from the web into applications on platforms that do not have a native web engine. The engine uses code from the open-source Chromium project and is now based on Chromium 73. This latest version supports PDF viewing via an internal Chromium extension, the Web Notifications API, and thread-safe and page-specific URL request interceptors. It also comes with an application-local client certificate store and client certificate support from QML.

Lars Knoll, Qt’s CTO, and Tuukka Turunen, Qt’s Head of R&D, will hold a webinar on July 2 to summarize all the news around Qt 5.13.

Read the official announcement on Qt’s official website to know more in detail.

Qt Creator 4.9.0 released with language support, QML support, profiling and much more
Qt installation on different platforms [Tutorial]
Qt Creator 4.9 Beta released with QML support, programming language support and more!


‘I code in my dreams too’, say developers in JetBrains State of Developer Ecosystem 2019 Survey

Fatema Patrawala
19 Jun 2019
5 min read
Last week, JetBrains published the results of its annual survey, The State of Developer Ecosystem 2019. More than 19,000 people participated in the survey, but only the responses of 7,000 developers from 17 countries were included in the report. The survey had over 150 questions; the key results have been published, and the complete results along with the raw data will be shared later. JetBrains also prepared an infographic based on the survey answers. Let us take a look at the key takeaways:

Key takeaways from the survey

- Java is the most popular primary programming language.
- Python is the most studied language in 2019.
- Cloud services are getting more popular. The share of local and private servers dropped 8% and 3%, respectively, compared to 2018.
- Machine learning professionals have less fear that AI will replace developers one day.
- 44% of JavaScript developers use TypeScript regularly. In total, a quarter of all developers are using it in 2019, compared to 17% last year.
- The use of containerized environments by PHP developers is growing steadily, by 12% per year.
- 73% of Rust devs use a Unix/Linux development environment, though Linux is not a primary environment for most of them.
- Go Modules appeared recently, but already 40% of Go developers use it and 17% want to migrate to it.
- 71% of Kotlin developers use Kotlin for work, mainly for new projects (96%), but more than a third are also migrating their existing projects to it.
- The popularity of Vue.js is growing year on year: it gained 11 percentage points since last year and has almost doubled its share since 2017.
- The most frequently used tools among developers involved in infrastructure development are Docker + Terraform + Ansible.
- The more people code at work, the more likely they are to code in their dreams.

Developers choose Java as their primary language

Participants were asked three questions about their language preferences: which languages they used in the last year, which is their primary language, and how they would rank them. The most loved programming languages among developers are Java and Python; second place is a tie between C# and JavaScript. Common secondary languages include HTML, SQL, and shell scripting: many software developers have some practice with these, but very few work with them as their major language. For example, while 56% practice SQL, only 19% called it their primary language and only 1.5% rank it as their first language. Java, on the other hand, is the leading “solo” language: 44% of its users use only Java or use Java first. The next top solo language is JavaScript, with a mere 17%.

Android and React Native remain popular among mobile developers, Flutter gains momentum

Asked about mobile operating system preference, 83% of participants said they develop for Android, followed by 59% for iOS. Two thirds of mobile developers use native tools to develop for a mobile OS; every other developer uses cross-platform technologies or frameworks. Among cross-platform mobile frameworks, 42% said they use React Native; interestingly, Flutter came second, preferred by 30% of the audience. Others include Cordova, Ionic, Xamarin, and Unity.

Other takeaways from the survey and a few fun facts

The most interesting question asked in this year’s survey was whether developers code in their dreams: 52% responded yes, which suggests that the more people code at work (as a primary activity), the more likely they are to code in their dreams.

Another really interesting result came from the question of whether AI will replace developers in the future. 57% of participants responded that AI may partially replace programmers, though those who do machine learning professionally were more skeptical about AI than those who do it as a hobby. 27% think AI will never replace developers, 6% agreed that it will fully replace programmers, and another 11% were not sure.

Other questions covered the preferred operating system for the development environment: 57% of participants said they prefer Windows, followed by 49% for macOS and 48% for Unix/Linux. Asked what types of applications they prefer to develop, most named web back-end applications, followed by web front-end, mobile applications, libraries and frameworks, and desktop applications. 41% responded no when asked whether they contribute to open-source projects on a regular basis; only 11% said they contribute regularly, that is, every month. 71% have unit tests in their projects, while 16% of fully employed senior developers responded that they have no tests in their projects at all. Source code collaboration tools are used regularly by 80% of developers; other tools such as standalone IDEs, lightweight desktop editors, continuous integration/continuous delivery tools, and issue trackers are also in regular use.

Demographics of the survey

69% of respondents are fully employed with a company or organization, and 75% are developers/programmers/software engineers. One in 14 people polled occupies a senior leadership role. Two thirds of the developers practice pair programming. The survey also revealed that more experienced people spend less time learning new tools, technologies, and programming languages. The gender ratio of participants was not revealed.

Check out the infographic to know more about the survey results.

What the Python Software Foundation & Jetbrains 2017 Python Developer Survey had to reveal
Python Software foundation and JetBrains’ Python Developers Survey 2018
PyCon 2019 highlights: Python Steering Council discusses the changes in the current Python governance structure

MongoDB announces new cloud features, beta version of MongoDB Atlas Data Lake and MongoDB Atlas Full-Text Search and more!

Amrata Joshi
19 Jun 2019
3 min read
Yesterday, the team at MongoDB announced new cloud services and features that offer a better way to work with data. The beta versions of MongoDB Atlas Data Lake and MongoDB Atlas Full-Text Search will let users access new features in a fully managed MongoDB environment.

MongoDB Charts include embedded charts in web applications

The general availability of MongoDB Charts will help customers create charts and graphs, and further build and share dashboards. It also helps in embedding these charts, graphs, and dashboards directly into web apps for creating better user experiences. MongoDB Charts is generally available to Atlas as well as on-premise customers, enabling real-time visualization of MongoDB data. MongoDB Charts includes new features such as embedded charts in external web applications, geospatial data visualization with new map charts, and built-in workload isolation for eliminating the impact of analytics queries on an operational application.

Dev Ittycheria, CEO and President, MongoDB, said, "Our new offerings radically expand the ways developers can use MongoDB to better work with data." He further added, "We strive to help developers be more productive and remove infrastructure headaches --- with additional features along with adjunct capabilities like full-text search and data lake. IDC predicts that by 2025 global data will reach 175 Zettabytes and 49% of it will reside in the public cloud. It's our mission to give developers better ways to work with data wherever it resides, including in public and private clouds."

MongoDB Query Language added to MongoDB Atlas Data Lake

MongoDB Atlas Data Lake helps customers quickly query data on S3 in any format, such as BSON, CSV, JSON, TSV, Parquet, and Avro, with the help of the MongoDB Query Language (MQL). One of the major plus points of MQL is that it is expressive and allows developers to query their data in rich ways. Developers can now use the same query language across data on S3, making querying massive data sets easy and cost-effective. With MQL added to MongoDB Atlas Data Lake, users can run queries and explore their data by giving access to existing S3 storage buckets with a few clicks from the MongoDB Atlas console (a hedged query sketch follows at the end of this article). Since Atlas Data Lake is completely serverless, there is no infrastructure to set up or manage, and customers pay only for the queries they run when they are actively working with the data. The team plans to make MongoDB Atlas Data Lake available on Google Cloud Storage and Azure Storage in the future.

Atlas Full-Text Search offers rich text search capabilities

Atlas Full-Text Search offers rich text search capabilities based on Apache Lucene 8 against fully managed MongoDB databases, with no additional infrastructure or systems to manage. Full-Text Search helps end users filter, rank, and sort their data to bring out the most relevant results, so users are not required to pair their database with an external search engine.

To know more about this news, check out the official press release.

12,000+ unsecured MongoDB databases deleted by Unistellar attackers
MongoDB is going to acquire Realm, the mobile database management system, for $39 million
MongoDB withdraws controversial Server Side Public License from the Open Source Initiative's approval process
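To make the MQL-on-S3 point above concrete, here is a minimal sketch of issuing an MQL aggregation with PyMongo. This is an illustration, not code from the announcement: the connection URI, database name (sales), collection name (orders), and field names are all placeholders, on the assumption that Atlas Data Lake exposes a MongoDB-compatible endpoint you connect to like any other cluster.

```python
from pymongo import MongoClient

# Placeholder URI: substitute real credentials and your Data Lake host.
client = MongoClient("mongodb://user:password@your-data-lake-endpoint/?ssl=true")

# Hypothetical database/collection names mapped onto files in an S3 bucket.
orders = client["sales"]["orders"]

# Ordinary MQL: filter shipped orders, total them per customer, top 10.
pipeline = [
    {"$match": {"status": "shipped"}},
    {"$group": {"_id": "$customer_id", "total": {"$sum": "$amount"}}},
    {"$sort": {"total": -1}},
    {"$limit": 10},
]
for doc in orders.aggregate(pipeline):
    print(doc)
```

The point of the sketch is that the pipeline is ordinary MQL; whether the documents live in a MongoDB cluster or in S3 files behind Atlas Data Lake, the query itself does not change.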

Google Calendar was down for nearly three hours after a major outage

Amrata Joshi
19 Jun 2019
2 min read
Yesterday, Google Calendar was down for nearly three hours around the world. Calendar users trying to access the service faced a 404 error message in their browsers from around 10 AM ET to 12:40 PM ET. Google updated its service details, stating, "We're investigating reports of an issue with Google Calendar. We will provide more information shortly. The affected users are unable to access Google Calendar."

During the outage, Google services including Gmail and Google Maps appeared to be unaffected, but Hangouts Meet reportedly experienced some issues. Meanwhile, while Calendar was down, many users expressed their concerns via tweets. Here are a few of the reactions:

https://twitter.com/BestGaryEver/status/1141004879382700040
https://twitter.com/falcons3040/status/1141143090239090689
https://twitter.com/ola11king/status/1141012717144199169
https://twitter.com/thejacegoodwin/status/1140999161434689541
https://twitter.com/ChristinaAllDay/status/1140986268878286848

A few others were irritated. A user commented on Hacker News, "I guess it's time for all the Google engineers to put their LeetCode skills to the test." People also expected a quicker response from the company; another comment reads, "Over an hour into the outage, still no word at all from Google on the status page apart from 'We're investigating.'"

Such outages have been happening every now and then; earlier this month, Google Cloud suffered a major outage that took down a number of Google services including YouTube, G Suite, Gmail, etc. That outage also affected services dependent on Google, including Nest, Discord, Snapchat, Shopify, and more.

To know more about this news, check out the service details published by Google.

How Genius used embedded hidden Morse code in lyrics to catch plagiarism in Google search results
Google announces early access of 'Game Builder', a platform for building 3D games with zero coding
Google, Facebook and Twitter submit reports to EU Commission on progress to fight disinformation

Ubuntu has decided to drop i386 (32-bit) architecture from Ubuntu 19.10 onwards

Vincy Davis
19 Jun 2019
4 min read
Update: Five days after the announcement of dropping the i386 architecture, Steve Langasek has changed his stance. Yesterday, 23rd June, Langasek apologised to users and clarified that Ubuntu is not dropping 32-bit support entirely: it is only dropping updates to the i386 libraries, which will be frozen at the 18.04 LTS versions. He also mentioned that they are planning to keep i386 applications, including games, working on versions of Ubuntu later than 19.10.

This update came after Valve Linux developer Pierre-Loup Griffais tweeted on 21st June that Steam will not support Ubuntu 19.10 and its future releases, and recommended the same to its users. Griffais stated that Valve is planning to switch to a different distribution and is evaluating ways to minimize breakage for its users.

https://twitter.com/Plagman2/status/1142262103106973698

Amid all the uncertainty around i386, Wine developers have also raised concerns, because many 64-bit Windows applications still use a 32-bit installer or some 32-bit components. Rosanne DiMesio, one of the admins of Wine's Applications Database (AppDB) and Wiki, said in a mail archive that there are many possibilities, such as building pure 64-bit Wine packages for Ubuntu.

Yesterday, the Ubuntu engineering team announced its decision to discontinue i386 (32-bit) as an architecture from Ubuntu 19.10 onwards. In a post to the Ubuntu Developer Mailing List, Canonical's Steve Langasek explains that "i386 will not be included as an architecture for the 19.10 release, and we will shortly begin the process of disabling it for the eoan series across Ubuntu infrastructure."

Langasek also mentions that 32-bit software, libraries, and tools will no longer be built, packaged, or distributed for newer versions of Ubuntu, and that the Ubuntu team will be working on what remains of 32-bit support over the course of the 19.10 development cycle.

The topic of dropping i386 systems has been under discussion in the Ubuntu developer community since last year. One of the mails in the archive mentions, "Less and less non-amd64-compatible i386 hardware is available for consumers to buy today from anything but computer parts recycling centers. The last of these machines were manufactured over a decade ago, and support from an increasing number of upstream projects has ended."

Earlier this year, Langasek stated in one of his mail archives that running a 32-bit i386 kernel on recent 64-bit Intel chips carries a risk of weaker security than using a 64-bit kernel. The usage of i386 has also reduced broadly across the ecosystem, so it is "increasingly going to be a challenge to keep software in the Ubuntu archive buildable for this target", he adds. (If you are unsure whether your own system is 32-bit, a hedged check appears at the end of this article.)

Langasek also informed users that automated upgrades to 18.10 are disabled on i386. This has been done to let i386 users stay on the LTS, which will be supported until 2023, rather than being stranded on a non-LTS release that will be supported only until early 2021.

The general reaction to this news has been negative, with users expressing outrage at the discontinuation of the i386 architecture. A user on Reddit says, "Dropping support for 32-bit hosts is understandable. Dropping support for 32 bit packages is not. Why go out of your way to screw over your users?" Another user comments, "I really truly don't get it. I've been using ubuntu at least since 5.04 and I'm flabbergasted how dumb and out of sense of reality they have acted since the beginning, considering how big of a headstart they had compared to everyone else. Whether it's mir, gnome3, unity, wayland and whatever else that escapes my memory this given moment, they've shot themselves in the foot over and over again."

On Hacker News, a user commented, "I have a 64-bit machine but I'm running 32-bit Debian because there's no good upgrade path, and I really don't want to reinstall because that would take months to get everything set up again. I'm running Debian not Ubuntu, but the absolute minimum they owe their users is an automatic upgrade path."

A few think this step was needed, for the sake of good riddance. Another Redditor adds, "From a developer point of view, I say good riddance. I understand there is plenty of popular 32-bit software still being used in the wild, but each step closer to obsoleting 32-bit is one step in the right direction in my honest opinion."

Xubuntu 19.04 releases with latest Xfce package releases, new wallpapers and more
Ubuntu 19.04 Disco Dingo Beta releases with support for Linux 5.0 and GNOME 3.32
Chromium blacklists nouveau graphics device driver for Linux and Ubuntu users
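As referenced above, here is a small, hedged Python sketch (not from Canonical) for checking whether the machine and interpreter you are running are 32-bit, which is what the i386 deprecation would affect. It relies only on the standard library.

```python
import platform
import struct

# platform.machine() reports the kernel's architecture string,
# e.g. 'i686' on 32-bit x86 or 'x86_64' on 64-bit systems.
machine = platform.machine()

# struct.calcsize("P") is the size of a pointer in the running interpreter,
# so it distinguishes a 32-bit userland from a 64-bit one.
bits = struct.calcsize("P") * 8

print(f"Machine: {machine}, interpreter: {bits}-bit")

if machine in ("i386", "i486", "i586", "i686") or bits == 32:
    print("This looks like a 32-bit (i386-class) environment.")
```

Note that a 64-bit kernel can still run a 32-bit userland, which is why the sketch checks both the reported machine string and the pointer size.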

Mozilla releases Firefox 67.0.3 and Firefox ESR 60.7.1 to fix a zero-day vulnerability, being abused in the wild

Bhagyashree R
19 Jun 2019
2 min read
Yesterday, Mozilla released Firefox 67.0.3 and Firefox ESR 60.7.1 to fix an actively exploited vulnerability that can enable attackers to remotely execute arbitrary code on devices running vulnerable versions. So, if you are a Firefox user, it is recommended that you update right now.

This critical zero-day flaw was reported by Samuel Groß, a security researcher with the Google Project Zero security team, and the Coinbase Security team. It is a type confusion vulnerability, tracked as CVE-2019-11707, that occurs "when manipulating JavaScript objects due to issues in Array.pop. This can allow for an exploitable crash. We are aware of targeted attacks in the wild abusing this flaw."

Not much information has been disclosed about the vulnerability yet, apart from this short description on the advisory page. In general, type confusion happens when a piece of code fails to verify the type of the object that is passed to it and blindly uses it without type-checking (an illustrative sketch follows at the end of this article).

The US Cybersecurity and Infrastructure Security Agency (CISA) also issued an alert informing users and administrators to update Firefox as soon as possible: "The Cybersecurity and Infrastructure Security Agency (CISA) encourages users and administrators to review the Mozilla Security Advisory for Firefox 67.0.3 and Firefox ESR 60.7.1 and apply the necessary updates."

Users can install the patched Firefox versions by downloading them from Mozilla's official website. Alternatively, they can click the hamburger icon in the upper-right corner, type Update into the search box, and hit the Restart to update Firefox button to be sure.

This is not the first time a zero-day vulnerability has been found in Firefox. Back in 2016, a vulnerability was reported in Firefox that was exploited by attackers to de-anonymize Tor Browser users. The attackers collected user data that included IP addresses, MAC addresses, and hostnames. Mozilla then released an emergency fix in Firefox 50.0.2 and 45.5.1 ESR.

Firefox releases v66.0.4 and 60.6.2 to fix the expired certificate problem that ended up disabling add-ons
Firefox 67 enables AV1 video decoder 'dav1d', by default on all desktop platforms
Mozilla makes Firefox 67 "faster than ever" by deprioritizing least commonly used features
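To illustrate the general pattern described above, here is a short Python analogy. It is only an analogy: Python is memory-safe, so confusing types here cannot corrupt memory the way the Firefox bug can, and the class and function names are invented for the example. What it shows is the logic of type confusion: a callee that assumes its argument's type, never checks it, and ends up running behavior supplied by the caller.

```python
class EvilObject:
    def pop(self):
        # In a real exploit this step would be attacker-controlled behavior;
        # here it only demonstrates that the callee's assumption was wrong.
        print("attacker-controlled code ran")
        return None

def take_last_item(items):
    # BUG: assumes 'items' is a list and blindly calls .pop() on it --
    # the same shape of mistake as the Array.pop issue described above.
    return items.pop()

take_last_item([1, 2, 3])     # intended use: returns 3
take_last_item(EvilObject())  # type never verified; "confused" object accepted
```

In a JavaScript engine, the equivalent mistake happens at the level of internal object representations, where acting on the wrong type can lead to memory corruption and, as in this case, remote code execution.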

Lyft announces Envoy Mobile, an iOS and Android client network library for mobile application networking

Sugandha Lahoti
19 Jun 2019
3 min read
Yesterday, Lyft released the initial OSS preview of Envoy Mobile, an iOS and Android client network library that brings Lyft's Envoy Proxy to mobile platforms.

https://twitter.com/mattklein123/status/1140998175722835974

Envoy Proxy was initially built at Lyft to solve the networking and observability issues inherent in large polyglot server-side microservice architectures. It soon gained large-scale public appreciation and was adopted by major public cloud providers, end-user companies, and infrastructure startups. Now, Envoy is being brought to the iOS and Android platforms, providing an API and abstraction for mobile application networking.

Envoy Mobile is currently at a very early stage of development. The initial release brings the following features:

- The ability to compile Envoy on both Android and iOS: Envoy Mobile uses intelligent protobuf code generation and an abstract transport to help both iOS and Android provide similar interfaces and ergonomics for consuming APIs.
- The ability to run Envoy on a thread within an application, utilizing it effectively as an in-process proxy server (a conceptual sketch of this pattern appears at the end of this article).
- Swift/Obj-C/Kotlin demo applications that utilize the exposed Swift/Obj-C/Kotlin "raw" APIs to interact with Envoy and make network calls.

Long term goals

Envoy Mobile initially provides Swift APIs for iOS and Kotlin APIs for Android, but depending on community interest the team will consider adding support for additional languages in the future. In the long term, they are also planning to incorporate the gRPC Server Reflection Protocol into a streaming reflection service API. This API will allow both Envoy and Envoy Mobile to fetch generic protobuf definitions from a central IDL service, which can then be used to implement annotation-driven networking via reflection.

They also plan to bring xDS configuration to mobile clients, in the form of routing, authentication, failover, load balancing, and other policies driven by a global load balancing system. Envoy Mobile can also add cross-platform functionality when using strongly typed IDL APIs; some examples of annotations planned on the roadmap are caching, priority, streaming, marking an API as offline/deferred capable, and more.

Envoy Mobile is getting loads of appreciation from developers, with many happy that Lyft has open-sourced its development. A comment on Hacker News reads, "I really like how they're releasing this as effectively the first working proof of concept and committing to developing the rest entirely in the open - it's a great opportunity to see how a project of this scale plays out in real-time on GitHub."

https://twitter.com/omerlh/status/1141225499139682305
https://twitter.com/dinodaizovi/status/1141157828247347200

Currently, the project is in a pre-release stage: not all features are implemented, and it is not ready for production use. However, you can get started here. Also see the demo release and the roadmap, where the team plans to develop Envoy Mobile entirely in the open.

Related News
Uber and Lyft drivers go on strike a day before Uber IPO roll-out
Lyft introduces Amundsen, a data discovery and metadata engine for its researchers and data scientists
Lyft acquires computer vision startup Blue Vision Labs, in a bid to win the self driving car race
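As referenced in the feature list above, here is a conceptual Python sketch of the "in-process proxy on a background thread" pattern. This is emphatically not Envoy Mobile's actual API (which is Swift/Kotlin); it is a generic, minimal stand-in using only the standard library, with https://example.com as a placeholder upstream, to show what it means for an application to route its own traffic through a proxy running inside its own process.

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

UPSTREAM = "https://example.com"  # placeholder origin; requires network access

class ForwardingHandler(BaseHTTPRequestHandler):
    """Relays GET requests to UPSTREAM, standing in for a real proxy."""

    def do_GET(self):
        with urllib.request.urlopen(UPSTREAM + self.path) as resp:
            status, body = resp.status, resp.read()
        self.send_response(status)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# Run the "proxy" on a daemon thread inside the application process;
# port 0 asks the OS to pick any free port.
server = HTTPServer(("127.0.0.1", 0), ForwardingHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The application now sends its traffic through the in-process proxy.
port = server.server_address[1]
with urllib.request.urlopen(f"http://127.0.0.1:{port}/") as r:
    print(r.status, len(r.read()))
```

The appeal of the pattern is that everything a real proxy provides (retries, observability, routing policy) sits on the request path of every network call the app makes, without a separate process or device-level VPN.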