
Tech News - Programming

573 Articles

Glitch hits 2.5 million apps, secures $30M in funding, and is now available in VS Code

Sugandha Lahoti
10 Jul 2019
5 min read
Glitch, the web-app creation tool, made a series of major announcements yesterday. Glitch lets you code full-stack apps right in the browser, where they are instantly deployed. The company behind it, formerly known as Fog Creek Software, runs an online community where people can upload projects and let others remix them; creating web apps with Glitch is as easy as working in Google Docs.

The Glitch community reached a milestone by hitting 2.5 million free and open apps, more than the number of apps in Apple's App Store. Many apps on Glitch are decidedly smaller, simpler, and quicker to make, often focused on single-use purposes. Since all apps are open source, others can remix the projects into their own creations.

Glitch raises $30M with a vision of being a healthy, responsible company

Glitch has raised $30M in a Series A round from a single investor, Tiger Global. The round closed in November 2018, but Anil Dash, CEO of Glitch, said he wanted to be able to show people that the company did what it said it would do before disclosing the funding to the public; the company has doubled in size since.

Glitch is not your usual tech startup. Its policies, culture, and creative freedom are unique. The company's motto is to be a simple tool for creating web apps for people and teams of all skill levels, while fostering a friendly and creative community, and to be a different kind of company that sets the standard for thoughtful and ethical practices in tech. The company is on track to build one of the friendliest, most inclusive, and most welcoming social platforms on the internet. It is built with sustainability in mind, is independent and privately held, and is transparent and open in its business model and processes.
https://twitter.com/firefox/status/1148716282696601601

The company is building a healthy, responsible business and has shared its inclusion statistics and benefits, such as salary transparency, paid climate leave (up to 5 consecutive work days, taken at the employee's discretion, for extreme weather), full parental leave, and more in a public handbook. The handbook is open-sourced, so anyone, anytime, anywhere can see how the company runs day to day. Because the handbook is made in Glitch, users can remix it to get their own customizable copy.

https://twitter.com/Pinboard/status/1148645635173670913

As the community and the company have grown, they have also invested significantly in diversity, inclusion, and tech ethics. On gender, 47% of the company identifies as cisgender women, 40% as cisgender men, and 9% as non-binary/gender non-conforming/questioning, while 4% did not disclose. On race and ethnicity, the company is 65% white, 7% Asian, 11% black, 4% Latinx, and 11% two or more races, while 2% did not disclose. Meanwhile, 29% of the company identifies as queer and 11% of people reported having a disability.

Their social platform, Anil notes, has no wide-scale abuse, systematic misinformation, or surveillance-based advertising. The company wants to "prove that a group of people can still create a healthy community, a successful business, and have a meaningful impact on society, all while being ethically sound."

A lot of credit for Glitch and its inclusion policies goes to Anil Dash, the CEO. As pointed out by Kimberly Bryant, founder of BlackGirlsCode, "A big reason for Glitch's success and vision though is Anil. This 'inclusion mindset' starts at the top and I think that is evidenced by the companies and founders who get it right." Karla Monterroso, CEO of Code2040, says, "It becomes about operationalizing strategy. About creating actual inclusion.
About how you intentionally build a diverse team and an org that is just."

https://twitter.com/karlitaliliana/status/1148641017823764480

https://twitter.com/karlitaliliana/status/1148653580842196992

Dash notes, "It's the entire team working together. Buy-in at every level of the organization, people being brave enough to be vulnerable, all doing the hard work of self-reflection & not being defensive. And knowing we're only getting started." Other community members and tech experts have also appreciated Dash's persistence in building an open source, sustainable, inclusive platform.

https://twitter.com/TheSamhita/status/1148706941432225792

https://twitter.com/LeeTomson/status/1148655031308210176

People have also used Glitch for activist purposes and highly recommend it.

https://twitter.com/schep_/status/1148654037518168065

Glitch now on VS Code, offering real-time code collaboration

Glitch is also available in Visual Studio Code, allowing everyone from beginners to experts to code with it. The integration offers real-time collaboration, code rewind, and live previews. It is available in preview; users can download the Glitch VS Code extension from the Visual Studio Marketplace. Features include:

Rewind: Look back through code history, roll back changes, and see files as they were in the past, with a diff.
Console: Open the console and run commands directly on the Glitch container.
Logs: See output in logs just like on Glitch.
Debugger: Use the built-in Node debugger to inspect full-stack code.

Source: Medium

https://twitter.com/horrorcheck/status/1148635444218933250

For now, the company is dedicated solely to building out Glitch, and it will release specialized, more powerful features for businesses later this year.

How do AWS developers manage Web apps?
Introducing Voila that turns your Jupyter notebooks to standalone web applications
PayPal replaces Flow with TypeScript as their type checker for every new web app


OpenJDK Project Valhalla’s LW2 early access builds are now available for you to test

Bhagyashree R
09 Jul 2019
3 min read
Last week, the early access builds for OpenJDK Project Valhalla's LW2 phase were released; the phase was first proposed in October last year. LW2 is the next iteration of the L-World series, bringing further language and JDK API support for inline types.

https://twitter.com/SimmsUpNorth/status/1147087960212422658

Proposed in 2014, Project Valhalla is an experimental OpenJDK project under which the team is working on major new language features and enhancements for Java 10 and beyond. The new features and enhancements fall into the following focus areas:

Value types
Generic specialization
Reified generics
Improved 'volatile' support

The LW2 specifications

Javac source support

Starting from LW2, the prototype is based on the mainline JDK (currently version 14), which is why it requires a source level >= JDK 14. A class is declared as an inline type using the 'inline class' modifier or the '@__inline__' annotation. Interfaces, annotation types, and enums cannot be declared as inline types, but top-level, inner, and local classes may be. As inline types are implicitly final, they cannot be abstract; all instance fields of an inline class are also implicitly final. Inline types implicitly extend 'java.lang.Object', similar to enums, annotation types, and interfaces. LW2 supports "indirect" projections of inline types via the "?" operator, and javac now allows using the '==' and '!=' operators to compare inline types.

Java APIs

The new or modified APIs include 'isInlineClass()', 'asPrimaryType()', 'asIndirectType()', 'isIndirectType()', 'asNullableType()', and 'isNullableType()'. The 'getName()' method now reflects the Q or L type signatures for arrays of inline types. Calling 'newInstance()' on an inline type throws 'NoSuchMethodException', and 'setAccessible()' throws 'InaccessibleObjectException'. With LW2, initial core reflection and VarHandles support are in place.
Runtime

Attempting to synchronize on, or call wait*() or notify*() on, an inline type throws 'IllegalMonitorStateException'. 'ClassCircularityError' is thrown when loading an instance field of an inline type that declares its own type, either directly or indirectly. 'NotSerializableException' is thrown when attempting to serialize an inline type, and casting from an indirect type to an inline type may result in a 'NullPointerException'.

Download the early access binaries to test this prototype. These were some of the specifications of the LW2 iteration; check out the full list of specifications on OpenJDK's official website, and stay tuned to the current happenings in Project Valhalla.

Getting started with Z Garbage Collector (ZGC) in Java 11 [Tutorial]
Storm 2.0.0 releases with Java enabled architecture, new core and streams API, and more
Firefox 67 will come with faster and reliable JavaScript debugging tools


Linux 5.2 releases with inclusion of Sound Open Firmware project, new mount API, improved pressure stall information and more

Vincy Davis
09 Jul 2019
5 min read
Two days ago, Linus Torvalds, the principal developer of the Linux kernel, announced the release of Linux 5.2 in his usual humorous way, codenaming it 'Bobtail Squid'. The release has new additions like the inclusion of the Sound Open Firmware (SOF) project, improved pressure stall information, a new mount API, significant performance improvements in the BFQ I/O scheduler, new GPU drivers, optional support for case-insensitive names in ext4, and more. The previous version, Linux 5.1, was released exactly two months ago.

Torvalds says, "there really doesn't seem to be any reason for another rc, since it's been very quiet. Yes, I had a few pull requests since rc7, but they were all small, and I had many more that are for the upcoming merge window. So despite a fairly late core revert, I don't see any real reason for another week of rc, and so we have a v5.2 with the normal release timing." Linux 5.2 also kicks off the Linux 5.3 merge window.

What's new in Linux 5.2?

Inclusion of the Sound Open Firmware (SOF) project

Linux 5.2 includes the Sound Open Firmware (SOF) project, which was created to reduce firmware issues by providing an open source platform for creating open source firmware for audio DSPs. The SOF project is backed by Intel and Google. This will enable users to have open source firmware, personalize it, and use the power of the DSP processors in their sound cards in imaginative ways.

Improved pressure stall information

With this release, users can configure sensitive thresholds and use poll() and friends to be notified whenever a certain pressure threshold is breached within a user-defined time window. This allows Android to monitor and prevent mounting memory shortages before they cause problems for the user.

New mount API

With Linux 5.2, the kernel developers have redesigned the entire mount API, adding six new syscalls: fsopen(2), fsconfig(2), fsmount(2), move_mount(2), fspick(2), and open_tree(2).
With the previous mount(2) interface, it was not easy for applications and users to understand the returned errors, the specification of multiple sources (as overlayfs needs) was not supported, and it was not possible to mount a file system into another mount namespace.

Significant performance improvements in the BFQ I/O scheduler

BFQ is a proportional-share I/O scheduler, available for block devices since the 4.12 kernel release. It associates each process or group of processes with a weight and grants a fraction of the available I/O bandwidth proportional to that weight. In Linux 5.2, performance tweaks to the BFQ I/O scheduler have reduced application start-up time under load by up to 80%, drastically increasing performance and decreasing execution time.

New GPU drivers for ARM Mali devices

In the past, the Linux community had to create open source drivers for the Mali GPUs, as ARM has never been open source friendly with its GPU drivers. Linux 5.2 has two new community drivers for ARM Mali accelerators: lima covers the older t4xx series and panfrost the newer 6xx/7xx series.

More CPU bug protection, and a "mitigations" boot option

The Linux 5.2 release adds more bug infrastructure to deal with the Microarchitectural Data Sampling (MDS) hardware vulnerability, which allows access to data available in various CPU internal buffers. Also, to help users deal with the ever-increasing number of CPU bugs across different architectures, the kernel boot option mitigations= has been added. It is a set of curated, arch-independent options to enable or disable protections irrespective of the system they are running on.

clone(2) to return pidfds

Due to the design of Unix, sending signals to processes or gathering /proc information is not always safe because of the possibility of PID reuse.
With clone(2) returning pidfds, users can get pidfds at process creation time, which are usable with the pidfd_send_signal(2) syscall. pidfds help Linux avoid the PID-reuse problem, and the new clone(2) flag makes it even easier to get pidfds, providing a safe way to signal processes and work with PID metadata.

Optional support for case-insensitive names in ext4

This release implements support for case-insensitive file name lookups in ext4, based on a feature bit and the encoding stored in the superblock. This enables users to configure directories with the chattr +F (EXT4_CASEFOLD_FL) attribute. The attribute can only be set on empty directories on filesystems that support the encoding feature, preventing collisions of file names that differ only by case.

Freezer controller for cgroups v2 added

A freezer controller provides the ability to stop the workload in a cgroup and temporarily free up some resources (cpu, io, network bandwidth and, potentially, memory) for other tasks. Cgroup v2 lacked this functionality until this release. The functionality is always available and is represented by the cgroup.freeze and cgroup.events cgroup control files.

Device mapper 'dust' target added

Linux 5.2 adds a device mapper 'dust' target to simulate a device that has failing sectors and/or read failures, along with the ability to enable the emulation of read failures at an arbitrary time. The 'dust' target aims to help storage developers and sysadmins who want to test their storage stack.

Users are quite happy with the Linux 5.2 release.

https://twitter.com/ejizhan/status/1148047044864557057
https://twitter.com/konigssohne/status/1148014299484512256
https://twitter.com/YuzuSoftMoe/status/1148419200228179968

Linux 5.2 brings many other improvements in the file systems, memory management, block layer, and more. Visit the kernelnewbies page for more details.
“Don’t break your users and create a community culture”, says Linus Torvalds, Creator of Linux, at KubeCon + CloudNativeCon + Open Source Summit China 2019
Canonical, the company behind the Ubuntu Linux distribution, was hacked; Ubuntu source code unaffected
OpenWrt 18.06.4 released with updated Linux kernel, security fixes Curl and the Linux kernel and much more!


“Don’t break your users and create a community culture”, says Linus Torvalds, Creator of Linux, at KubeCon + CloudNativeCon + Open Source Summit China 2019

Sugandha Lahoti
09 Jul 2019
5 min read
At the Cloud Native Computing Foundation's flagship conference, KubeCon + CloudNativeCon + Open Source Summit China 2019, Linus Torvalds, creator of Linux and Git, was in conversation with Dirk Hohndel, VP and Chief Open Source Officer at VMware, on the past, present, and future of Linux. The conference gathers technologists from leading open source and cloud native communities; its next edition is scheduled to take place in San Diego, California, from November 18-21, 2019.

When he thinks about Linux, Linus says, he worries about the technology and doesn't care about the market. In a lot of areas of technology, being first is more important than being best, because if you get a huge community around yourself, you have already won. Linus says he and the Linux community and maintainers don't focus on individual features; what they focus on is the process of getting those features out and making releases. He doesn't believe in long-term planning; there are no plans that span more than roughly six months.

Top questions on security, gaming, and Linux's future, learnings, and expectations

Is the interest in Linux from people outside of the core Linux community declining?

Linus disputes this, stating that interest is still growing, albeit not at quite the same rate it used to. He says that people outside the Linux kernel community should care about Linux's consistency and the fact that there are people making sure that when you move to a new kernel, your processes will not break.

Where is the major focus for security in IT infrastructure? Is it in the kernel, or in user space?

When it comes to security, you should not focus on one particular area alone. You need secure hardware, software, kernels, and libraries at every stage. The true path to security is to have multiple layers, so that even if one layer gets compromised, there is another layer that picks up the problem.
The kernel, he says, is one of the more security-conscious projects, because if the kernel has a security problem, it's a problem for everybody.

What are some learnings that other projects like Kubernetes and the whole cloud native world can take from the kernel?

Linus acknowledges that he is not sure how much the kernel development model really translates to other projects. Linux has a different approach to maintenance compared with other projects, as well as a unified picture of where it is headed. However, other projects can take two learnings from Linux:

Don't break your users: Linus says this has been a mantra for the kernel for a long time, and it's something that a lot of other projects seem not to have learned. If you want your project to flourish long term, you shouldn't make your users worry about upgrades and versions; instead, make them aware that you are a stable platform.

Create a common culture: For a platform or project to have a long life, you should create a community and have a common culture, a common goal to work toward together over the long term.

Is gaming a platform where open source is going to be relevant?

When you take up a new technology, Linus states, you want to reuse as much existing infrastructure as possible to make it easy to reach your goals, and Linux has obviously been a huge part of that in almost every setting. The only places where Linux isn't completely taking over are those where there was already a very strong established market and code base. If you do something new, exciting, and interesting, you will almost inevitably use Linux as the base, and that includes new platforms for gaming.

What can we expect from Linux in its second thirty years? Will it continue just as today, or where do you think we're going?

Realistically, if you look at what Linux does today, it's not that different from what operating systems did 50-60 years ago. What has changed is the hardware and the use.
Linux sits right in between those two things: what an operating system fundamentally does is act as a resource manager and as the interface between software and hardware. Linus says, "I don't know what software and hardware will look like in 30 years but I do know we'll still have an operating system and that will probably be called Linux. I may not be around in 30 years but I will be around in 2021 for the 30 year Linux anniversary."

Go through the full conversation here.

Linus Torvalds is sorry for his ‘hurtful behavior’, is taking ‘a break (from the Linux community) to get help’
Linux 5.1 out with Io_uring IO interface, persistent memory, new patching improvements and more!
Microsoft is seeking membership to Linux-distros mailing list for early access to security vulnerabilities


GitHub's 'Hub' command-line tool makes using git easier

Bhagyashree R
08 Jul 2019
3 min read
GitHub introduced ‘Hub’, a tool that extends the git command line with extra functionality, enabling developers to complete their everyday GitHub tasks right from the terminal. Hub has no dependencies, but as it is designed to wrap git, it is recommended to have at least git 1.7.3 or newer.

Hub provides both new commands and extended versions of commands that already exist in git. Here are some of them:

hub-am: Replicate commits locally from a GitHub pull request.
hub-cherry-pick: Cherry-pick a commit from a fork on GitHub.
hub-alias: Show shell instructions for wrapping git.
hub-browse: Open a GitHub repository in a web browser.
hub-create: Create a new repository on GitHub and add a git remote for it.
hub-fork: Fork the current repository on GitHub and add a git remote for it.

You can see the entire list of commands on the Hub man page. Most of these commands are expected to be run in the context of an existing local git repository.

What are the advantages of using Hub?

Contributing to open source: The tool makes contributing to open source much easier by providing features for fetching repositories, navigating project pages, forking repos, and even submitting pull requests, all from the command line.
Scripting your workflows: You can easily script your workflows and set priorities by listing and creating issues, pull requests, and GitHub releases.
Easily maintaining projects: It allows you to easily fetch from other forks, review pull requests, and cherry-pick URLs.
Using GitHub for work: It saves time by allowing you to open pull requests for code reviews and push to multiple remotes at once. It also supports GitHub Enterprise, though the host needs to be whitelisted.

Hub is not the only tool of its kind; there are similar tools like Magit Forge and Lab. Though developers find Hub convenient, some feel that it increases GitHub lock-in.
"While it is pretty cool, using such tool increases general lock-in to GitHub, in terms of both habits and potential use of it for automation of processes," one user wrote on Hacker News.

Another Hacker News user suggested, “I wish there was an open standard for operations that hub allows to do and all major Git forges, including open source ones, such as Gogs/Gitea and GitLab, supported it. In that case having a command-line tool that, like Git itself, is not tied to a particular vendor, but allows to do what hub does, could have been indispensable.”

To know more, check out Hub's GitHub repository.

Pull Panda is now a part of GitHub; code review workflows now get better!
Github Sponsors: Could corporate strategy eat FOSS culture for dinner?


OpenWrt 18.06.4 released with updated Linux kernel, security fixes Curl and the Linux kernel and much more!

Amrata Joshi
05 Jul 2019
3 min read
This month, the OpenWrt community announced the release of OpenWrt 18.06.4, the fourth service release of the stable OpenWrt 18.06 series. The release comes with a number of bug fixes in the network and system and brings updates to the kernel and base packages. The official page reads, “Note that the OpenWrt 18.06.3 release was skipped in favor to 18.06.4 due to a last-minute 4.14 kernel update fixing TCP connectivity problems which were introduced with the first iteration of the Linux SACK (Selective Acknowledgement) vulnerability patches.”

What is the OpenWrt project?

The OpenWrt Project is a Linux operating system targeting embedded devices; it is a replacement for the vendor-supplied firmware of a wide range of wireless routers and non-network devices. OpenWrt is an easily modifiable operating system for routers, powered by a Linux kernel. Instead of shipping a single, static firmware, it offers a fully writable filesystem with optional package management. It is useful for developers because it provides a framework for building an application without having to create a complete firmware image and distribution around it. It also gives users the freedom of full customization, allowing them to use an embedded device in many ways.

What's new in OpenWrt 18.06.4?

In this release, the Linux kernel has been updated to versions 4.9.184/4.14.131, from 4.9.152/4.14.95 in v18.06.2. The release also comes with SACK (Selective Acknowledgement) security fixes for the Linux kernel and WPA3 security fixes in hostapd. It further offers security fixes for Curl and the Linux kernel, MT76 wireless driver updates, and many network and system service fixes.

Many users seem to be happy about this news; some even choose their routers based on whether they are supported by OpenWrt. A user commented on Hacker News, “I choose my routers based on if they are supported or not by OpenWrt. And for everybody that asks my opinion, too.
Because they might not need/want/know/have a desire to install OpenWrt now, but it's good to have the door open for the future.”

Users are also happy with OpenWrt's interface. One user, who is blind, commented, “For people asking about the user interface of OpenWrt. I think it is very well dun. I get a long with it just fine and I am blind and have to use a screen reader. A11y in Luci is grate. All the pages make sence to most people you do not have to be a networking expert.”

To know more about this news, check out OpenWrt's official page.

OpenWrt 18.06.2 released with major bug fixes, updated Linux kernel and more!
Mozilla re-launches Project Things as WebThings, an open platform for monitoring and controlling devices
Linux use-after-free vulnerability found in Linux 2.6 through 4.20.11

Rust 1.36.0 releases with a stabilized ‘Future’ trait, NLL for Rust 2015, and more

Bhagyashree R
05 Jul 2019
3 min read
Yesterday, the team behind Rust announced the release of Rust 1.36.0. This release brings a stabilized 'Future' trait, NLL for Rust 2015, a stabilized alloc crate as the core allocation and collections library, a new --offline flag for Cargo, and more. Following are some of the updates in Rust 1.36.0:

The stabilized 'Future' trait

A 'Future' in Rust represents an asynchronous value that allows a thread to continue doing useful work while it waits for the value to become available. This trait has been long awaited by Rust developers, and with this release it has finally been stabilized. "With this stabilization, we hope to give important crates, libraries, and the ecosystem time to prepare for async / .await, which we'll tell you more about in the future," the Rust release team added.

The alloc crate is stable

The 'std' crate of the standard library provides types like Box<T> and OS functionality, but it requires a global allocator and other OS capabilities. Beginning with Rust 1.36.0, the parts of std that depend on a global allocator are available in the 'alloc' crate, and std will re-export these parts later.

Use MaybeUninit<T> instead of mem::uninitialized

Previously, the 'mem::uninitialized' function allowed you to bypass Rust's memory-initialization checks by pretending to generate a value of type T without doing anything. Though the function has proven handy for lazily allocating arrays, it is dangerous in many other scenarios, as the Rust compiler just assumes that values are properly initialized. In Rust 1.36.0, the MaybeUninit<T> type has been stabilized to solve this problem. The Rust compiler understands that it should not assume that a MaybeUninit<T> is a properly initialized T, which enables you to do gradual initialization more safely and eventually call '.assume_init()'.

Non-lexical lifetimes (NLL) for Rust 2015

The Rust team introduced NLL in December last year when announcing Rust 1.31.0.
It is an improvement to Rust's static model of lifetimes that makes the borrow checker smarter and more user-friendly. When first announced, it was only stabilized for Rust 2018; the team has now backported it to Rust 2015 as well. In the future, we can expect all Rust editions to use NLL.

--offline support in Cargo

Previously, Cargo, the Rust package manager, would exit with an error if it needed to access the network and the network was not available. Rust 1.36.0 comes with a new flag, '--offline', which makes the dependency resolution algorithm use only locally cached dependencies, even if a newer version might exist.

These were some of the updates in Rust 1.36.0. Read the official announcement to know more in detail.

Introducing Vector, a high-performance data router, written in Rust
Brave ad-blocker gives 69x better performance with its new engine written in Rust
Introducing PyOxidizer, an open source utility for producing standalone Python applications, written in Rust
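The newly stabilized Future trait can be implemented by hand on stable Rust. The sketch below is illustrative and not from the announcement: the CountDown future and the hand-rolled no-op waker are invented names, used only to show what the poll() contract looks like.

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// A toy future: returns Pending `remaining` times, then resolves.
struct CountDown {
    remaining: u32,
}

impl Future for CountDown {
    type Output = &'static str;

    fn poll(mut self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<Self::Output> {
        if self.remaining == 0 {
            Poll::Ready("done")
        } else {
            // A real future would arrange for the waker to be invoked here.
            self.remaining -= 1;
            Poll::Pending
        }
    }
}

// A do-nothing waker, just enough to drive poll() by hand.
fn noop_raw_waker() -> RawWaker {
    fn no_op(_: *const ()) {}
    fn clone(_: *const ()) -> RawWaker {
        noop_raw_waker()
    }
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, no_op, no_op, no_op);
    RawWaker::new(std::ptr::null(), &VTABLE)
}

fn main() {
    // SAFETY: the no-op vtable trivially upholds the RawWaker contract.
    let waker = unsafe { Waker::from_raw(noop_raw_waker()) };
    let mut cx = Context::from_waker(&waker);

    let mut fut = CountDown { remaining: 1 };
    let mut fut = Pin::new(&mut fut);

    assert_eq!(fut.as_mut().poll(&mut cx), Poll::Pending);
    assert_eq!(fut.as_mut().poll(&mut cx), Poll::Ready("done"));
    println!("future resolved");
}
```

In practice an executor supplies the Context and a real waker; the manual polling loop here only demonstrates the trait's contract that async / .await will build on.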


Npm Inc, after a third try, settles former employee claims, who were fired for being pro-union, The Register reports

Fatema Patrawala
04 Jul 2019
5 min read
Yesterday, reports from The Register confirmed that the JavaScript package registry NPM Inc. and three former employees who were fired have agreed on a settlement. NPM, which stands for Node Package Manager, is the company behind the widely used npm JavaScript package repository. In March, the company laid off 5 employees in an unprofessional and unethical manner. In April, 3 of the 5 former staffers – Graham Carlson, Audrey Eschright, and Frédéric Harper – formally accused NPM Inc of union busting in a complaint to the US National Labor Relations Board.

https://twitter.com/bram_parsons/status/1146230097617178625

The deal was settled after the third round of negotiations between the two parties, as per The Register. In a filing posted on the NLRB website, administrative law judge Gerald Etchingham said he had received a letter from one of the attorneys involved in the dispute stating that both sides had agreed to settle. The terms of the deal were not disclosed, but NLRB settlements in such cases usually involve back pay, job restoration, or additional compensation. It is, however, highly unlikely that any of the former employees will accept job restoration and return to npm. NPM Inc is also required to share a letter with current employees acknowledging the ways in which it violated the law, but there are no reports of this action from NPM Inc yet.

https://twitter.com/techworkersco/status/1146255087968239616

Audrey Eschright, one of the claimants, had complained on Twitter about the company's behaviour and its earlier refusals to settle the claims. "I'm amazed that NPM has rejected their latest opportunity to settle the NLRB charges and wants to take it to court," she wrote. "Doing so continues the retaliation I and my fellow claimants experienced. We're giving up our own time, making rushed travel plans, and putting in a lot of effort because we believe our rights as workers are that important."
According to Eschright, npm Inc refused to settle because the CEO took the legal challenge personally. "Twice their lawyers have spent hours to negotiate an agreement with the NLRB, only to withdraw their offer," she elaborated on Twitter. "The only reason we've heard has been about Bryan Bogensberger's hurt feelings."

The Register also mentioned that last week npm Inc tried to push back a hearing scheduled for 8th July, on the grounds that management was traveling for extensive fundraising. The NLRB denied the request, saying the reason was not justified, and noted that npm Inc "ignores the seriousness of these cases, which involve three nip-in-the-bud terminations at the onset of an organizing drive."

npm Inc does indeed appear to ignore the seriousness of the case, and it also overlooks the fact that the npm registry coordinates the distribution of hundreds of thousands of modules used by some 11 million JavaScript developers around the world. While management makes questionable decisions, the code for the npm command-line interface (CLI) suffers from neglect, with unfixed bugs piling up and pull requests languishing.

https://twitter.com/npmstatus/status/1146055266057646080

On Monday, there were reports of a bug in npm 6.9.1 caused by a .git folder present in the published tarball. Kat Marchán, then npm's CLI and Community Architect, had to release npm 6.9.2 to fix the issue. Shortly after, Marchán quit the company, announcing on Twitter yesterday that she is no longer a maintainer of the npm CLI or its components.

https://twitter.com/maybekatz/status/1146208849206005760

Commenting on Marchán's resignation, another ex-npm employee noted that every modern web framework depends on npm, and that npm is inseparable from Kat's passionate brilliance.

https://twitter.com/cowperthwait/status/1146209348135161856

npm Inc now needs to fix not only its bugs but also its relationship with, and reputation in, the JavaScript community.

Update on 20th September: NPM Inc. CEO resigns

News sources report that npm CEO Bryan Bogensberger has resigned, effective immediately, to pursue new opportunities. npm's board of directors has commenced a search for a new CEO; in the interim, the company's leadership will be managed collaboratively by a team of senior npm executives. "I am proud of the complete transformation we have been able to make in such a short period of time," said Bogensberger. "I wish this completely revamped, passionate team monumental success in the years to come!" Before joining npm, Inc., Bogensberger spent three years as CEO and co-founder of Inktank, a leading provider of scale-out, open source storage systems that was acquired by Red Hat, Inc. for $175 million in 2014. He also served as vice president of business strategy at DreamHost, vice president of marketing at Joyent, and CEO and co-founder of Reasonablysmart, which Joyent acquired in 2009. To know more, check out the PR Newswire website.

Is the Npm 6.9.1 bug a symptom of the organization's cultural problems?
Surprise NPM layoffs raise questions about the company culture

GitLab faces backlash from users over performance degradation issues tied to redis latency

Vincy Davis
02 Jul 2019
4 min read
Yesterday, GitLab suffered major performance degradation, with a 5x increase in error rate and a general slowdown of the site. The degradation was identified and rectified within a few hours of its discovery.

https://twitter.com/gabrielchuan/status/1145711954457088001
https://twitter.com/lordapo_/status/1145737533093027840

GitLab engineers promptly began investigating the slowdown on GitLab.com and notified users that it originated in the redis and lru clusters, thus impacting all web requests serviced by the Rails front-end. What followed was a very comprehensive account of the issue, its causes, who was handling which part, and more. GitLab's step-by-step response looked like this:

- First, they investigated slow response times on GitLab.
- Next, they added more workers to alleviate the symptoms of the incident.
- Then, they investigated jobs on shared runners that were being picked up at a low rate or appeared to be stuck.
- Next, they tracked CI issues and treated the performance degradation as one incident.
- Over time, they continued to investigate the degraded performance and CI pipeline delays.

After a few hours, all services were restored to normal operation and the CI pipelines caught up from the earlier delays, returning to nearly normal levels.

David Smith, the Production Engineering Manager at GitLab, updated users that the performance degradation was due to a few issues tied to redis latency. Smith added, "We have been looking into the details of all of the network activity on redis and a few improvements are being worked on. GitLab.com has mostly recovered."

Many users on Hacker News wrote about their unpleasant experience with GitLab.com. One user states, "I recently started a new position at a company that is using Gitlab. In the last month I've seen a lot of degraded performance and service outages (especially in Gitlab CI). If anyone at Gitlab is reading this - please, please slow down on chasing new markets + features and just make the stuff you already have work properly, and fill in the missing pieces." Another user comments, "Slow down, simplify things, and improve your user experience. Gitlab already has enough features to be competitive for a while, with the Github + marketplace model."

Later, a GitLab employee with the username kennyGitLab commented that GitLab is not losing sight and is just following the company's new strategy of 'breadth over depth'. He added, "We believe that the company plowing ahead of other contributors is more valuable in the long run. It encourages others to contribute to the polish while we validate a future direction. As open-source software we want everyone to contribute to the ongoing improvement of GitLab."

Users were indignant at this response. One commented, "'We're Open Source!' isn't a valid defense when you have paying customers. That pitch sounds great for your VCs, but for someone who spends a portion of their budget on your cloud services - I'm appalled. Gitlab is a SaaS company who also provides an open source set of software. If you don't want to invest in supporting up time - then don't sell paid SaaS services." Another comment read, "I think I understand the perspective, but the messaging sounds a bit like, 'Pay us full price while serving as our beta tester; sacrifice the needs of your company so you can fulfill the needs of ours'."

A few users also praised GitLab for its prompt action and for giving everybody in-depth detail about the investigation. One user wrote, "This is EXACTLY what I want to see when there's a service disruption. A live, in-depth view of who is doing what, any new leads on the issue, multiple teams chiming in with various diagnostic stats, honestly it's really awesome. I know this can't be expected from most businesses, especially non-open sourced ones, but it's so refreshing to see this instead of the typical 'We're working on a potential service disruption' that we normally get."

GitLab goes multicloud using Crossplane with kubectl
Atlassian Bitbucket, GitHub, and GitLab take collective steps against the Git ransomware attack
Attackers wiped many GitHub, GitLab, and Bitbucket repos with 'compromised' valid credentials leaving behind a ransom note

Google proposes a libc in LLVM, Rich Felker of musl libc thinks it’s a very bad idea

Vincy Davis
28 Jun 2019
4 min read
Earlier this week, Siva Chandra, a Google LLVM contributor, asked LLVM developers for their opinion on starting a libc within LLVM, listing the high-level goals and guiding principles the project intends to pursue. Three days ago, Rich Felker, the creator of musl libc, made his thoughts very clear by saying that "this is a very bad idea."

In his post, Chandra said he believes that a libc in LLVM would be beneficial and usable for the broader LLVM community, and could serve as a starting point for others in the community to flesh out an increasingly complete set of libc functionality.

Read More: Introducing LLVM Intermediate Representation

One of the goals mentioned by Chandra states that the libc project would mesh with the "as a library" philosophy of LLVM and would help make "the C Standard Library" more flexible. Another goal states that it will support both static non-PIE and static-PIE linking, i.e. enabling the C runtime and the PIE loader for static non-PIE and static-PIE linked executables.

Rich Felker posted his thoughts on a libc in LLVM as follows:

Writing and maintaining a correct, compatible, high-quality libc is a monumental task. The amount of code needed is not that large, but the difficulty lies in "the subtleties of how it behaves and the difficulties of implementing various interfaces that have no capacity to fail or report failure, and the astronomical 'compatibility surface' of interfacing with all C and C++ software ever written as well as a large amount of software written in other languages whose runtimes 'pass through' the behavior of libc to the applications they host." Felker doubts such a libc would end up even of decent quality.

A corporate-led project is not answerable to the community, and would therefore leave whatever bugs it introduces in place for the sake of compatibility with its own software, rather than fixing them. This is the main reason Felker thinks that if a libc is created at all, it should not be a Google project.

Lastly, Felker states that avoiding a monoculture preserves the motivation for consensus-based standards processes rather than single-party control, which in turn motivates people to write software according to proper standards rather than to a particular implementation.

Many users agree with Felker's views. A user on Hacker News states, "This speaks volumes very clearly. This highlights an immense hazard. Enterprise scale companies contributing to open-source is a fantastic thing, but enterprise scale companies thrusting their own proprietary libraries onto the open-source world is not. I'm already actively avoiding becoming beholden to Google in my work as it is already, let alone in the world where important software uses a libc written by Google. If you're not concerned by this, refer to the immense power that Google already wields over the extremely ubiquitous web-standards through the market dominance that Chrome has."

Another user says, "In the beginning of Google's letter they let us understand they are going to create a simplified version for their own needs. It does mean they don't care about compatibility and bugs, if it doesn't affect their software. That's not how this kind of libraries should be implemented."

Another comment reads, "If Google wants their own libc that's their business. But LLVM should not be part of their 'manifest destiny'. The corporatization of OSS is a scary prospect, and should be called out loud and clear like this every time it's attempted."

There are a few others, though, who think Siva Chandra's idea of a libc in LLVM might be a good thing. A user on Hacker News comments, "That is a good point, but I'm in no way disputing that Google could do a great job of creating their own libc. I would never be foolish enough to challenge the merit of Google's engineers, the proof of this is clear in the tasting of the pudding that is Google's software. My concerns lie in the open-source community becoming further beholden to Google, or even worse with Google dictating the direction of development on what could become a cornerstone of the architecture of many critical pieces of software."

For more details, head over to Rich Felker's post in the mailing list archive (pipermail).

Introducing InNative, an AOT compiler that runs WebAssembly using LLVM outside the Sandbox at 95% native speed
LLVM 8.0.0 releases!
LLVM officially migrating to GitHub from Apache SVN

The Go team shares new proposals planned to be implemented in Go 1.13 and 1.14

Bhagyashree R
27 Jun 2019
5 min read
Yesterday, the Go team shared the details of what is coming in Go 1.13, the first release implemented using the new proposal evaluation process. Under this process, community feedback is gathered on a small number of proposals before a final decision is reached. The team also shared which proposals they have selected to implement in Go 1.14, and the next steps.

At GopherCon 2017, Russ Cox, the Go programming language tech lead at Google, first disclosed the plan behind the implementation of Go 2. The plan was simple: ship updates in increments, with minimal to no effect on everybody else.

Updates in Go 1.13

Go 1.13, which marks the first increment towards Go 2, is planned for release in early August this year. A number of language changes have landed in this release, shortlisted from the huge list of Go 2 proposals based on the new proposal evaluation process. Proposals were selected on the criteria that they should address a problem, cause minimal disruption, and come with a clear and well-understood solution. The team selected "relatively minor and mostly uncontroversial" proposals for this version. These changes are backward-compatible, as modules, Go's new dependency management system, is not the default build mode yet. Go 1.11 and Go 1.12 include preliminary support for modules, which makes dependency version information explicit and easier to manage.

Proposals planned to be implemented in Go 1.13

The proposals initially planned for Go 1.13 were:

- General Unicode identifiers based on Unicode TR31: This proposes adding support to enable programmers using non-Western alphabets to combine characters in identifiers and export uncased identifiers.
- Binary integer literals and support for _ in number literals: Go supports octal, hexadecimal, and standard decimal literals. However, unlike other mainstream languages such as Java 7, Python 3, and Ruby, it does not support binary integer literals. This proposes adding binary integer literals with a new 0b or 0B prefix. A related minor update is support for an underscore (_) as a separator in number literals, to improve the readability of long numbers.
- Permit signed integers as shift counts: This proposes changing the language spec so that a shift count can be a signed or unsigned integer, or any non-negative constant value that can be represented as an integer.

Of these shortlisted proposals, binary integer literals, separators for number literals, and signed integer shift counts have been implemented. The general Unicode identifiers proposal was not implemented, as there was no "concrete design document in place in time." The proposal to support binary integer literals was significantly expanded, leading to an overhauled and modernized number literal syntax for Go.

Updates in Go 1.14

After the relatively minor updates in Go 1.13, the team plans to take it up a notch with Go 1.14. With the new major version Go 2, their overarching goal is to give programmers improved scalability. To achieve this, the team has to tackle the three biggest hurdles: package and version management, better error handling support, and generics. The first hurdle, package and version management, will be addressed by the modules feature, which is growing stronger with each release. For the other two, the team presented draft designs at last year's GopherCon in Denver.

https://youtu.be/6wIP3rO6On8

Proposals planned to be implemented in Go 1.14

The following proposals are shortlisted for Go 1.14:

- A built-in Go error check function, 'try': This proposes a new built-in function named 'try' for error handling, designed to remove the boilerplate 'if' statements typically associated with error handling in Go.
- Allow embedding overlapping interfaces: This is a backward-compatible proposal to make interface embedding more tolerant.
- Diagnose 'string(int)' conversions in 'go vet': This proposes flagging the explicit type conversion string(i) where 'i' has an integer type other than 'rune'. The conversion was introduced in the early days of Go and has become quite confusing to comprehend, which is why the team is willing to make this backward-incompatible change.
- Adopt crypto principles: This proposes implementing the design principles for cryptographic libraries outlined in the Cryptography Principles document.

The team is now seeking community feedback on these proposals. "We are especially interested in fact-based evidence illustrating why a proposal might not work well in practice or problematic aspects we might have missed in the design. Convincing examples in support of a proposal are also very helpful," the blog post reads.

While developers are confident that Go 2 will bring a lot of exciting features and enhancements, not everyone is a fan of some of the proposed features, for instance the try function. "I dislike the try implementation, one of Go's strengths for me after working with Scala is the way it promotes error handling to a first class citizen in writing code, this feels like its heading towards pushing it back to an afterthought as tends to be the case with monadic operations," a developer commented on Hacker News.

Some Twitter users also expressed their dislike of the proposed try function:

https://twitter.com/nicolasparada_/status/1144005409755357186
https://twitter.com/dullboy/status/1143934750702362624

These were some of the updates proposed for Go 1.13 and Go 1.14. To know more about this news, check out the Go Blog.

Go 1.12 released with support for TLS 1.3, module support among other updates
Go 1.12 Release Candidate 1 is here with improved runtime, assembler, ports and more
State of Go February 2019 – Golang developments report for this month released
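The number-literal and shift-count changes that landed in Go 1.13 are easy to see in a short program; a minimal sketch (it needs a Go 1.13 or later toolchain, since earlier compilers reject all three features):

```go
package main

import "fmt"

func main() {
	// Binary integer literal with the new 0b/0B prefix.
	mask := 0b1010 // decimal 10

	// Underscore separators make long literals readable.
	budget := 1_000_000

	// Shift counts may now be plain signed integers; before Go 1.13
	// they had to be unsigned or untyped constants.
	shift := 3 // an ordinary (signed) int
	fmt.Println(mask, budget, mask<<shift)
	// prints: 10 1000000 80
}
```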

Fedora Workstation 31 to come with Wayland support, improved core features of PipeWire, and more

Bhagyashree R
26 Jun 2019
3 min read
On Monday, Christian F.K. Schaller, Senior Manager for Desktop at Red Hat, shared a blog post outlining the improvements and features coming in Fedora Workstation 31. These include Wayland improvements, more PipeWire functionality, continued improvements around Flatpak, Fleet Commander, and more. Here are some of the enhancements coming to Fedora Workstation 31:

Wayland transition to be complete soon

Wayland is a display server protocol that was introduced to replace the X Window System with a modern and simpler windowing system in Linux and other Unix-like operating systems. The team is focusing on removing the X Window System dependency so that GNOME Shell will be able to run without needing XWayland. Schaller shared that the work related to removing the X dependency is done for the shell itself; however, some work is left in regards to the GNOME Settings daemon. Once this work is complete, an X server (XWayland) will only start if an X application is run, and will shut down when the application is stopped.

Another aspect the team is working on is allowing X applications to run as root under XWayland. Running desktop applications as root is generally not considered safe; however, a few applications only work when run as root, which is why the team has decided to continue supporting running applications as root in XWayland. The team is also adding support for the NVIDIA binary driver, to allow running a native Wayland session on top of the binary driver.

PipeWire with an improved desktop sharing portal

PipeWire is a multimedia framework that aims to improve the handling of audio and video in Linux. This release will come with further improvements to PipeWire's core features. The existing desktop sharing portal has been enhanced and will soon gain Miracast support. The team's ultimate goal is to make the GNOME integration even more seamless than the standalone app.

Better infrastructure for building Flatpaks

Flatpak is a utility for software deployment and package management on Linux. The team is improving the infrastructure for building Flatpaks from RPMs. They will also be offering applications from flathub.io and quay.io out of the box, in accordance with Fedora's rules for third-party software. The team will additionally make a Red Hat UBI based runtime available; a third-party developer can use this runtime to build their applications and be sure it will be supported by Red Hat for the lifetime of a given RHEL release.

Fedora Toolbox with improved GNOME Terminal

Fedora Toolbox is a tool that gives developers a seamless experience when using an immutable OS like Silverblue. Improvements are currently being made to GNOME Terminal to ensure more natural behavior inside the terminal when interacting with pet containers. The team is looking for ways to make the selection of containers more discoverable, so that developers can easily get access to, for instance, a Red Hat UBI container or a Red Hat TensorFlow container.

Along with these, the team is improving the infrastructure for Linux fingerprint reader support, securing GameMode, adding support for the Dell Totem, improving media codec support, and more. To know more in detail, check out Schaller's blog post.

Fedora 30 releases with GCC 9.0, GNOME 3.32, performance improvements, and much more!
Fedora 30 Beta released with desktop environment options, GNOME 3.32, and much more
Fedora 31 will now come with Mono 5 to offer open-source .NET support

Introducing PyOxidizer, an open source utility for producing standalone Python applications, written in Rust

Bhagyashree R
26 Jun 2019
4 min read
On Monday, Gregory Szorc, a Developer Productivity Engineer at Airbnb, introduced PyOxidizer, a Python application packaging and distribution tool written in Rust. The tool is available for Windows, macOS, and Linux. Sharing his vision behind the tool, Szorc wrote in the announcement, "I want PyOxidizer to provide a Python application packaging and distribution experience that just works with a minimal cognitive effort from Python application maintainers."

https://twitter.com/indygreg/status/1143187250743668736

PyOxidizer aims to solve complex packaging and distribution problems so that developers can put their efforts into building applications instead of juggling build systems and packaging tools. According to the GitHub README, "PyOxidizer is a collection of Rust crates that facilitate building libraries and binaries containing Python interpreters." Its most visible component is the 'pyoxidizer' command line tool, with which you can create new projects, add PyOxidizer to existing projects, produce binaries containing a Python interpreter, and perform various related tasks.

How PyOxidizer is different from other Python application packaging/distribution tools

PyOxidizer provides the following benefits over other Python application packaging/distribution tools:

- It works across all popular platforms, unlike many other tools that only target Windows or macOS.
- It works even if the executing system does not have Python installed.
- It does not have special system requirements such as SquashFS or container runtimes.
- Its startup performance is comparable to traditional Python execution.
- It supports single-file executables with minimal or no system dependencies.

Here are some of the features PyOxidizer comes with:

Generates a standalone single executable file

One of the most important features of PyOxidizer is that it can produce a single executable file containing a fully-featured Python interpreter, its extensions, the standard library, and your application's modules and resources. PyOxidizer embeds self-contained Python interpreters as a tool and a software library by exposing its lower-level functionality.

Serves as a bridge between Rust and Python

The 'Oxidizer' part of the name comes from Rust: internally, PyOxidizer uses Rust to produce executables and to manage the embedded Python interpreter and its operations. Along with solving the packaging and distribution problem, PyOxidizer can also serve as a bridge between the two languages, making it possible to add a Python interpreter to any Rust project and vice versa. With PyOxidizer, you can bootstrap a new Rust project that contains an embedded version of Python and your application. "Initially, your project is a few lines of Rust that instantiates a Python interpreter and runs Python code. Over time, the functionality could be (re)written in Rust and your previously Python-only project could leverage Rust and its diverse ecosystem," explained Szorc.

The creator chose Rust for the run-time and build-time components because it is considered one of the superior systems programming languages and does not require considerable effort to solve difficult problems like cross-compiling. He believes that implementing the embedding component in Rust also opens more opportunities to embed Python in Rust programs. "This is largely an unexplored area in the Python ecosystem and the author hopes that PyOxidizer plays a part in more people embedding Python in Rust," he added.

PyOxidizer executables are faster to start and import

During execution, binaries built with PyOxidizer do not have to do anything special, such as creating a temporary directory, to run the Python interpreter. Everything is loaded directly from memory without any explicit I/O operations: when a Python module is imported, its bytecode is loaded from a memory address in the executable using zero-copy. This makes the executables produced by PyOxidizer faster to start and import.

PyOxidizer is still in its early stages. Yesterday's initial release is good at producing executables embedding Python, but not much has been implemented yet to solve the distribution part of the problem. Some of the missing features expected in the future are an official build environment, support for C extensions, more robust packaging support, easier distribution, and more. The creator encourages Python developers to try the tool and share feedback with him or file an issue on GitHub. You can also contribute to the project via Patreon or PayPal.

Many users are excited to try this tool:

https://twitter.com/kevindcon/status/1143750501592211456
https://twitter.com/acemarke/status/1143389113871040517

Read the announcement made by Szorc to know more in detail.

Python 3.8 beta 1 is now ready for you to test
PyPI announces 2FA for securing Python package downloads
Matplotlib 3.1 releases with Python 3.6+ support, secondary axis support, and more

A vulnerability discovered in Kubernetes kubectl cp command can allow malicious directory traversal attack on a targeted system

Amrata Joshi
25 Jun 2019
3 min read
Last week, the Kubernetes team announced that a security issue (CVE-2019-11246) had been discovered in the Kubernetes kubectl cp command. According to the team, the issue could lead to a directory traversal in which a malicious container could replace or create files on a user's workstation. The vulnerability impacts kubectl, the command-line interface used to run commands against Kubernetes clusters.

The vulnerability was discovered by Charles Holmes of Atredis Partners, as part of the ongoing Kubernetes security audit sponsored by the CNCF (Cloud Native Computing Foundation). The issue is a client-side defect and requires user interaction to exploit. According to the post, the issue is of high severity, and the Kubernetes team encourages users to upgrade kubectl to Kubernetes 1.12.9, 1.13.6, 1.14.2, or a later version to fix it. To upgrade, users should follow the installation instructions in the docs. The announcement reads, "Thanks to Maciej Szulik for the fix, to Tim Allclair for the test cases and fix review, and to the patch release managers for including the fix in their releases."

The kubectl cp command copies files between containers and the user's machine. To copy files from a container, Kubernetes runs tar inside the container to create a tar archive and copies it over the network, after which kubectl unpacks it on the user's machine. If the tar binary in the container is malicious, it could run arbitrary code and produce unexpected, malicious results. An attacker could use this to write files to any path on the user's machine when kubectl cp is called, limited only by the system permissions of the local user.

The vulnerability is quite similar to CVE-2019-1002101, an earlier issue in the kubectl binary, specifically in the kubectl cp command, which an attacker could likewise exploit to write files to any path on the user's machine.

Wei Lien Dang, co-founder and vice president of product at StackRox, said, "This vulnerability stems from incomplete fixes for a previously disclosed vulnerability (CVE-2019-1002101). This vulnerability is concerning because it would allow an attacker to overwrite sensitive file paths or add files that are malicious programs, which could then be leveraged to compromise significant portions of Kubernetes environments."

Users are advised to run kubectl version --client; if it does not report client version 1.12.9, 1.13.6, 1.14.2, or newer, they are running a vulnerable version that needs to be upgraded. To know more about this news, check out the announcement.

Kubernetes 1.15 releases with extensibility around core Kubernetes APIs, cluster lifecycle stability, and more!
HAProxy 2.0 released with Kubernetes Ingress controller, layer 7 retries, polyglot extensibility, gRPC support and more
Red Hat releases OpenShift 4 with adaptability, Enterprise Kubernetes and more!

Qt and LG Electronics partner to make webOS as the platform of choice for embedded smart devices

Amrata Joshi
25 Jun 2019
3 min read
Qt and LG Electronics have partnered to provide webOS as the platform for embedded smart devices in the automotive, robotics, and smart home sectors. webOS, also known as LG webOS, is a Linux kernel-based multitasking operating system for smart devices. The platform powers smart home devices, including LG Smart TVs and smart home appliances, and can also deliver greater consumer benefits in high-growth industries such as the automotive sector. The system UI of LG webOS is written mostly using Qt Quick 2 and Qt technology. In March last year, LG announced an open-source edition of webOS.

I.P. Park, president and CTO of LG Electronics, said in a statement, "Smart devices have the potential to deliver an unmatched customer experience wherever we may be – in our homes, cars, and anywhere in between." Park added, "Our partnership with Qt enables us to dramatically enhance webOS, providing our customers with the most advanced platform for the creation of highly immersive devices and services. We look forward to continuing our long-standing collaboration with Qt to deliver memorable experiences in the exciting areas of automotive, smart homes and robotics."

LG selected Qt as its business and technical partner for webOS to meet challenging requirements and to navigate the market dynamics of the automotive, smart home, and robotics industries. Through this partnership, Qt will provide LG with an end-to-end, integrated, hardware-agnostic development environment in which engineers, developers, and designers can create innovative and immersive apps and devices. webOS will also officially become a reference operating system of Qt. The partnership will let customers leverage webOS' set of middleware-enabled functionality, saving them time and effort in their embedded development projects. Qt's feature-rich development tools, such as Qt Creator, Qt Design Studio, and Qt 3D Studio, will also support webOS.

Juha Varelius, CEO of Qt, said, "LG has been a technology leader for generations, which is one of the many reasons they've become such a trusted partner of Qt." Varelius added, "With the company's initiative to expand the reach of webOS into rapidly growing markets, LG is underscoring the massive potential of Qt-enabled connected experiences. By collaborating with LG on this initiative, we're able to make it as easy as possible for our customers to build devices that bring a new definition to the word 'smart'."

Qt 5.13 releases with a fully-supported WebAssembly module, Chromium 73 support, and more
Qt Creator 4.9.0 released with language support, QML support, profiling and much more
Qt installation on different platforms [Tutorial]