
Tech News - Programming

573 Articles

Apache Maven Javadoc Plugin version 3.1.0 released

Sugandha Lahoti
08 Mar 2019
2 min read
On Monday, the Apache Maven team announced the release of the Apache Maven Javadoc Plugin, version 3.1.0. The Javadoc Plugin uses the Javadoc tool to generate javadocs for a specified project. It reads the parameter values it uses from the plugin configuration specified in the POM, and it can also package the generated javadocs into a jar file for distribution (see the example configuration after this article).

What's new in Maven Javadoc Plugin version 3.1.0?

New features include support for aggregated reports at each level in the multi-module hierarchy. The dependency has also been upgraded to parent POM 32.

Changes made to the repository:
- The aggregate goal doesn't respect managed dependencies
- detectLinks may pass invalid URLs to javadoc(1)
- Invalid 'expires' attribute
- <link> entries that do not redirect are ignored, and those that point to a resource requiring an Accept header may be ignored

Other improvements:
- The plugin adds an 'aggregated-no-fork' goal
- The command line dump reveals proxy user/password in case of errors
- The plugin ignores module-info.java on earlier Java versions
- The additionalparam documentation has been cleaned up
- Element-list links from Java 10 dependencies are now supported
- Reports can now be generated in the Spanish locale
- The default value for removeUnknownThrows is changed to true
- Proxy configuration now works properly for both HTTP and HTTPS
- Patterns are used for defaultJavadocApiLinks
- Typos are fixed in the User Guide
- The Groups parameter is not compatible with Surefire

Other fixes:
- Duplicate lines in the javadoc are fixed
- JavadocOptionsXpp3Reader doesn't deserialize the placement element
- <additionalOption> input isn't escaped for double backslashes
- The links option is ignored in offline mode, even for local links in an argument file in a tag
- Maven Javadoc Plugin can't get a dependency from a third-party Maven repository

These are just a select few updates. For more details, head over to the mailing list archives.

Read next:
- Apache NetBeans IDE 10.0 released with support for JDK 11, JUnit 5 and more!
- Twitter adopts Apache Kafka as their Pub/Sub System
- Apache Spark 2.4.0 released
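
As a point of reference, a typical POM snippet for wiring the plugin into a build and packaging the generated javadocs as a jar might look like the following (an illustrative sketch of standard plugin usage, not taken from the 3.1.0 release notes):

    <build>
      <plugins>
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-javadoc-plugin</artifactId>
          <version>3.1.0</version>
          <executions>
            <execution>
              <id>attach-javadocs</id>
              <goals>
                <!-- package the generated javadocs as <artifactId>-<version>-javadoc.jar -->
                <goal>jar</goal>
              </goals>
            </execution>
          </executions>
        </plugin>
      </plugins>
    </build>

With this configuration, running mvn package produces the javadoc jar alongside the main artifact.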


Microsoft open sources the Windows Calculator code on GitHub

Amrata Joshi
07 Mar 2019
3 min read
For the past couple of years, Microsoft has been supporting open source projects; it even joined the Open Invention Network. Last year, Microsoft announced the general availability of its Windows 3.0 File Manager code. Yesterday, the team at Microsoft announced that it is releasing its Windows Calculator program as an open source project on GitHub under the MIT License. Microsoft is making the source code, build system, unit tests, and product roadmap available to the community.

Developers can now explore how different parts of the Calculator app work and get to know the Calculator logic. Microsoft is also encouraging developers to participate in the project and bring new perspectives to the Calculator code. The company highlighted that developers can contribute by participating in discussions, fixing or reporting issues, prototyping new features, and addressing design flaws. By reviewing the Calculator code, developers can explore the latest Microsoft technologies like XAML, the Universal Windows Platform, and Azure Pipelines. They can also learn about Microsoft's full development lifecycle and even reuse the code to build their own projects. Microsoft will also be contributing custom controls and API extensions used in Calculator to projects like the Windows UI Library and the Windows Community Toolkit. The official announcement reads, “Our goal is to build an even better user experience in partnership with the community.”

With its recent updates, Microsoft seems to be becoming more and more developer friendly. Just two days ago, the company updated its App Developer Agreement; as per the new policy, developers will now get up to a 95% share.

According to a few users, Microsoft might collect user information via this new project, and the telemetry section of the GitHub post states as much. The post reads, "This project collects usage data and sends it to Microsoft to help improve our products and services. Read our privacy statement to learn more. Telemetry is disabled in development builds by default, and can be enabled with the SEND_TELEMETRY build flag."

One of the users commented on Hacker News, “Well it must include your IP address too, and they know the time and date it was received. And then it gets bundled with the rest of the data they collected. I don't even want them knowing when I'm using my computer. What gets measured gets managed.”

Other users see it differently. Another comment reads, “Separately, I question whether anyone looking at the telemetry on the backend. In my experience, developers add this stuff because they think it will be useful, then it never or rarely gets looked at. A telemetry event here, a telemetry event there, pretty soon you're talking real bandwidth.”

Check out Microsoft's blog post for more details on this news.

Read next:
- Microsoft @MWC (Mobile World Congress) Day 1: HoloLens 2, Azure-powered Kinect camera and more!
- Microsoft workers protest the lethal use of Hololens2 in the $480m deal with US military
- Microsoft adds new features to Microsoft Office 365: Microsoft threat experts, priority notifications, Desktop App Assure, and more


Microsoft Store updates its app developer agreement, to give developers up to 95% of app revenue

Amrata Joshi
07 Mar 2019
3 min read
Last year, Microsoft announced its new revenue split figures at Build 2018. The new policy was expected to roll out by the end of 2018; however, it was only two days ago that the Microsoft Store team updated its App Developer Agreement (ADA), the revenue-sharing agreement.

Consumer app developers will now earn up to a 95 percent cut of the revenue on app sales (excluding games), and an 85 percent cut on the low end. The 95 percent share is earned only when a customer uses a deep link (tracked by a CID, or Connection ID) to purchase the app. If customers are directed to the app by Microsoft through a collection or "any other owned Microsoft properties (tracked by an OCID)," developers get an 85 percent share. The new fee structure applies to purchases on Windows Mixed Reality, Windows phone, Windows 10 PCs, and Surface Hub; it excludes purchases made on Xbox consoles. If no CID or OCID is attributed to the purchase, as in the case of a web search, developers get the 95 percent share.

A few Hacker News users have appreciated the new revenue split policy, seeing it as a fair move. One user commented, “It seems like a reasonable shifting of costs. If you rely mostly on Microsoft for acquiring new customers, then Microsoft should get a little bit more of a cut, and if you rely mostly on your own marketing methods, then it should get less.” Another comment reads, “It’s an insanely good deal. MSFT has to be losing money on that.”

According to a few others, there is also the benefit of organic search: app stores don't usually see much organic search, and this move might give the company a better idea of how much organic search happens on its store. Also, the 5%-15% cut is an add-on for Microsoft. According to a few users, the deal is equally beneficial for Microsoft since the company earns a cut as well. A comment reads, “Like all digital goods, the marginal cost of MSFT doing this is zero. I don't think they are losing money on this, in terms of pure margins, it’s probably quite lucrative (though in absolute revenue, maybe not so much).” Another comment reads, “I actually think this is a brilliant insight on the side of Microsoft, by inverting this model they get a non-zero slice of a pie they previously did not have.”

This may affect how other tech companies and developers operate; other companies may feel pressure from Microsoft's move, considering how much developer confidence the company has gained. To know more about this news, check out Microsoft's blog post.

Read next:
- Microsoft @MWC (Mobile World Congress) Day 1: HoloLens 2, Azure-powered Kinect camera and more!
- Microsoft workers protest the lethal use of Hololens2 in the $480m deal with US military
- Microsoft adds new features to Microsoft Office 365: Microsoft threat experts, priority notifications, Desktop App Assure, and more


GitHub releases Vulcanizer, a new Golang Library for operating Elasticsearch

Natasha Mathur
06 Mar 2019
2 min read
Yesterday, the GitHub team released Vulcanizer, a new Go library that interacts with an Elasticsearch cluster. Vulcanizer is not a full-fledged Elasticsearch client; rather, it aims to provide a high-level API for common tasks associated with operating an Elasticsearch cluster, such as querying the health status of the cluster, migrating data from nodes, and updating cluster settings.

GitHub uses Elasticsearch as the core technology behind its search services. GitHub has already released the Elastomer library for Ruby, and it uses the Elastic library for Go by user olivere. However, the GitHub team wanted a high-level API that corresponded to common operations on a cluster, such as disabling allocation or draining the shards from a node: a library focused on administrative operations that could be easily used by their existing tooling. Since Go encourages the construction of composable software, the team decided it was a good fit; almost everything Elasticsearch can do is exposed through its HTTP interface, and a library means you don't have to write that JSON by hand. (A rough sketch of what this looks like in code follows this article.)

Vulcanizer is good at getting the nodes of a cluster, updating the max recovery cluster settings, and safely adding or removing nodes from the exclude settings, making sure that shards don't unexpectedly allocate onto a node. It also helps to quickly build ChatOps tooling around Elasticsearch for common tasks. The GitHub team states that having all of the Elasticsearch functionality in its own library helps keep its internal apps slim and isolated.

For more information, check out the official GitHub Vulcanizer post.

Read next:
- GitHub increases its reward payout model for its bug bounty program
- GitHub launches draft pull requests
- GitHub Octoverse: top machine learning packages, languages, and projects of 2018
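
To give a feel for the kind of high-level API described above, here is a rough Go sketch of listing a cluster's nodes with such a client. The constructor and method names are assumptions based on the announcement and may not match Vulcanizer's actual documented API:

    package main

    import (
        "fmt"
        "log"

        "github.com/github/vulcanizer"
    )

    func main() {
        // NOTE: NewClient and GetNodes are assumptions based on the post,
        // not a verified copy of Vulcanizer's documented API.
        client := vulcanizer.NewClient("localhost", 9200)

        nodes, err := client.GetNodes()
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("cluster has %d nodes\n", len(nodes))
    }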


It is supposedly possible to increase reproducibility from 54% to 90% in Debian Buster!

Melisha Dsouza
06 Mar 2019
2 min read
Yesterday, Holger Levsen, a member of the team maintaining reproducible.debian.net, started a discussion on reproducible builds, stating that “Debian Buster will only be 54% reproducible (while we could be at >90%)”.

He started off by stating that tests indicate that 26476 (92.8%) of Debian Buster's 28523 source packages can be built reproducibly on buster/amd64. Those 28523 source packages build 57448 binary packages.

Next, looking at the binary packages that Debian actually distributes, he says that Vagrant came up with the idea of checking buildinfo.debian.net for .deb files for which two or more .buildinfo files exist. Turning this into a Jenkins job, he checked this for all 57448 binary packages in amd64/buster/main (including downloading all those .deb files from ftp.d.o) and obtained the following results:

- reproducible packages in buster/amd64: 30885 (53.76%)
- unreproducible packages in buster/amd64: 26543 (46.20%)
- reproducible binNMUs in buster/amd64: 0 (0%)
- unreproducible binNMUs in buster/amd64: 7423 (12.92%)

He suggests that binNMUs are unreproducible by design, and his proposed solution is that “binNMUs should be replaced by easy 'no-change-except-debian/changelog' uploads”. That alone would mean a 12% increase in reproducibility on top of the 54%.

He also discovered that 6804 source packages need a rebuild dating back to December 2016, because these packages were built with an old dpkg that did not produce .buildinfo files. 6804 of 28523 accounts for 23.9%. Summing everything up, 54% + 12% + 24% comes to roughly 90% reproducibility (a quick arithmetic check follows this article).

Refer to the entire discussion thread for more details on this news.

Read next:
- Google Project Zero discovers a cache invalidation bug in Linux memory management, Ubuntu and Debian remain vulnerable
- User discovers bug in debian stable kernel upgrade; armmp package affected
- Debian 9.7 released with fix for RCE flaw
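
For readers who want to sanity-check the percentages quoted above, here is a quick arithmetic check using the numbers from the thread (Python used purely as a calculator):

    # Numbers quoted in Holger Levsen's post
    total_source  = 28523
    total_binary  = 57448

    reproducible_bin    = 30885
    unreproducible_nmus = 7423
    need_rebuild_src    = 6804   # built with an old dpkg, no .buildinfo files

    print(round(100 * reproducible_bin / total_binary, 2))     # ~53.76 -> the "54%"
    print(round(100 * unreproducible_nmus / total_binary, 2))  # ~12.92 -> the "+12%"
    print(round(100 * need_rebuild_src / total_source, 2))     # ~23.85 -> the "+24%"
    # 54 + 12 + 24 comes to roughly 90, hence the claimed ~90% potential reproducibility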


ReactOS 0.4.11 is now out with kernel improvements, manifests support, and more!

Bhagyashree R
05 Mar 2019
2 min read
Yesterday, the ReactOS team announced the release of ReactOS 0.4.11. This release comes with improvements in the kernel for better overall system stability, support for manifests, and more. Following are some of the updates ReactOS 0.4.11 comes with:

Kernel improvements

ReactOS 0.4.11 comes with substantial updates to the interface that lets the operating system talk to storage devices. Computers nowadays generally use SATA connections and the corresponding AHCI interface; to support this interface, ReactOS relies on the UniATA driver. However, this driver was incompatible with the 6th generation of Intel's Core processors (Skylake). The team has now resolved this incompatibility, enabling users to test ReactOS on more modern platforms.

Support for manifests

Applications often depend on other libraries in the form of dynamic link libraries (DLLs), which are loaded by the loader (LDR). One way these dependencies are specified is with manifests (a minimal example manifest follows this article). In previous versions of ReactOS, manifests were not properly supported. ReactOS 0.4.11 comes with sufficient support for manifests, which has widened the range of applications that can run on ReactOS. With this support added, ReactOS can now run applications like Blender 2.57b, Bumptop, Evernote 5.8.3, Quicktime Player 7.7.9, and many others.

USETUP improvements

ReactOS 0.4.11 comes with major improvements to the USETUP module. The goal behind these improvements was to enable users to upgrade an existing installation of ReactOS. This is also a step toward making ReactOS an OS that can update itself without losing any data or configuration.

Testing

In this release, the team has restructured the test results page to better encapsulate the relevant information. In addition to the overall conclusion of a test, users can now see details such as what drove a particular conclusion and the workarounds they might attempt themselves.

Support for network debugging and diagnosis programs

ReactOS 0.4.11 now supports various network debugging and diagnosis programs as a result of work done on TCP and UDP connection enumeration. With this update, the ReactOS team aims to make the platform useful not just for running applications, but also for debugging them.

To read the full list of updates in ReactOS 0.4.11, check out the official announcement.

Read next:
- Btrfs now boots ReactOS, a free and open source alternative for Windows NT
- ReactOS version 0.4.9 released with Self-hosting and FastFAT crash fixes
- You can now install Windows 10 on a Raspberry Pi 3
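
To make the manifest concept concrete: a Windows-style application manifest is a small XML file, embedded in or shipped next to an executable, that declares (among other things) which assemblies the application depends on. A minimal, generic example (not taken from ReactOS itself) looks like this:

    <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
    <assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
      <assemblyIdentity type="win32" name="Example.App" version="1.0.0.0"
                        processorArchitecture="*"/>
      <dependency>
        <dependentAssembly>
          <!-- Ask the loader for version 6 of the common controls DLL -->
          <assemblyIdentity type="win32" name="Microsoft.Windows.Common-Controls"
                            version="6.0.0.0" processorArchitecture="*"
                            publicKeyToken="6595b64144ccf1df" language="*"/>
        </dependentAssembly>
      </dependency>
    </assembly>

The loader reads this file at startup and resolves the declared dependency before the application runs, which is the behavior ReactOS 0.4.11 now supports.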

GNU Octave 5.1.0 releases with new changes and improvements

Natasha Mathur
04 Mar 2019
3 min read
The GNU Octave team released version 5.1.0 of the popular high-level programming language last week. GNU Octave 5.1.0 comes with general improvements, dependency changes, and other updates.

What's new in GNU Octave 5.1.0?

General improvements

- The Octave plotting system now supports high-resolution screens (those with greater than 96 DPI, such as HiDPI/Retina monitors).
- There is newly added Unicode character support for files and folders on Windows.
- The fsolve function has been modified to use larger step sizes when calculating the Jacobian of a function with finite differences, leading to faster convergence.
- The ranks function has been recoded for performance and is now 25X faster. It also supports a third argument that specifies how to resolve the ranking of tie values.
- The randi function has been recoded to produce an unbiased (all results equally likely) sample of integers.
- The isdefinite function now returns true or false instead of -1, 0, or 1.
- The intmax, intmin, and flintmax functions can now accept a variable as input.
- Path handling functions no longer perform variable or brace expansion on path elements, and Octave's load-path is no longer subject to these expansions.
- A new printing device, "-ddumb", is available that produces ASCII art for plots. This device is available only with the gnuplot toolkit.

(A few of these changes are illustrated in the short snippet after this article.)

Other changes

Dependencies: The GUI now requires Qt libraries; the minimum supported Qt4 version is Qt 4.8. The OSMesa library is no longer used; to print invisible figures while using OpenGL graphics, the Qt QOFFSCREENSURFACE feature must be available. The FFTW library is now needed to perform FFT calculations; the FFTPACK sources have been removed from Octave.

Matlab compatibility: Functions such as issymmetric and ishermitian now accept an option, "nonskew" or "skew", for calculating the symmetric or skew-symmetric property of a matrix. The issorted function can now use a direction option of "ascend" or "descend". You can now use clear with no arguments and it will remove only local variables from the current workspace; global variables will no longer be visible, but will still exist in the global workspace.

Graphic objects: Figure graphic objects in GNU Octave 5.1.0 have a new read-only property, "Number", which returns the handle (number) of the figure; if "IntegerHandle" is set to "off", the property returns an empty matrix []. Patch and surface graphic objects can now use the "FaceNormals" property for flat lighting. "FaceNormals" and "VertexNormals" are now calculated only when necessary, to improve graphics performance. The "Margin" property of text objects has a new default of 3 rather than 2.

For the complete list of changes, check out the official GNU Octave 5.1.0 release notes.

Read next:
- GNU Health Federation message and authentication server drops MongoDB and adopts PostgreSQL
- Bash 5.0 is here with new features and improvements
- GNU ed 1.15 released!
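
A few of the behavioral changes above, sketched in Octave syntax (illustrative only; requires Octave 5.1 or later to show the new behavior):

    % isdefinite now returns true/false instead of -1/0/1
    A = [2 0; 0 3];
    disp (isdefinite (A))     % prints 1 (true) for this positive definite matrix

    % randi has been recoded to give an unbiased sample of integers
    x = randi (6, 1, 10);     % ten "die rolls", each value 1..6 equally likely

    % intmax/intmin/flintmax now accept a variable as input
    t = "int32";
    disp (intmax (t))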


The npm engineering team shares why Rust was the best choice for addressing CPU-bound bottlenecks

Bhagyashree R
04 Mar 2019
3 min read
Last month, the npm engineering team shared in a white paper why they chose Rust to rewrite their authorization service. If you are not already aware, npm is the largest package manager and offers both an open source and an enterprise registry. The npm registry boasts about 1.3 billion package downloads per day.

Given that huge user base, it is no surprise that the npm engineering team has to regularly keep a check on any area that causes performance problems. Though most of npm's network-bound operations were pretty efficient, when looking at the authorization service the team saw a CPU-bound task that was causing a performance bottleneck. They decided to rewrite its “legacy JavaScript implementation” in Rust to make it modern and performant.

Why did the npm team choose Rust?

C and C++ were rejected because they require expertise in memory management, and Java because it requires deploying the JVM and associated libraries. That left two candidate languages: Go and Rust.

To narrow it down to the language best suited for the authorization service, the team rewrote the service in Node.js, Go, and Rust. The Node.js rewrite acted as a baseline for evaluating Go and Rust. Rewriting in Node.js took just an hour, given the team's expertise in JavaScript, and its performance was very similar to the legacy implementation.

The team finished the Go rewrite in two days but ruled Go out because it did not provide a good dependency management solution. “The prospect of installing dependencies globally and sharing versions across any Go project (the standard in Go at the time they performed this evaluation) was unappealing,” says the white paper.

Though the Rust rewrite took the team about a week, they were very impressed by the dependency management Rust offers. The team noted that Rust's strategy is very much inspired by npm's; for instance, its Cargo command-line tool is similar to the npm command-line tool (see the brief comparison after this article). All in all, the team chose Rust because it not only matched their JavaScript-inspired expectations but also gave a better developer experience. The deployment process for the new service was also pretty straightforward, and even after deployment the team rarely encountered operational issues.

The team also states that one of the main reasons for choosing Rust was its helpful community. “When the engineers encountered problems, the Rust community was helpful and friendly in answering questions. This enabled the team to reimplement the service and deploy the Rust version to production.”

What were the downsides of choosing Rust?

The team did find the language a little difficult to grasp at first. As the white paper puts it, “The design of the language front-loads decisions about memory usage to ensure memory safety in a different way than other common programming languages.”

Rewriting the service in Rust also came with the extra burden of maintaining two separate solutions for monitoring, logging, and alerting: one for the existing JavaScript stack and one for the new Rust stack. Given that it is quite a new language, Rust currently also lacks industry-standard libraries and best practices for these solutions.

Read the white paper shared by npm for more details.

Read next:
- Mozilla engineer shares the implications of rewriting browser internals in Rust
- Mozilla shares key takeaways from the Design Tools survey
- Mozilla partners with Scroll to understand consumer attitudes for an ad-free experience on the web
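
For readers who know one tool but not the other, the day-to-day command-line parallels between npm and Cargo look roughly like this (everyday commands only, not taken from the white paper):

    # npm (JavaScript)     |  # Cargo (Rust)
    npm init               |  cargo new my_service   # scaffold a new project
    npm install            |  cargo build            # fetch dependencies (and build)
    npm test               |  cargo test             # run the test suite
    npm publish            |  cargo publish          # publish to the registry

Both tools resolve dependencies per project from a manifest (package.json vs. Cargo.toml) rather than from a global install, which is the similarity the npm team highlighted.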


The Erlang Ecosystem Foundation launched at the Code BEAM SF conference

Bhagyashree R
01 Mar 2019
2 min read
Yesterday, at the ongoing Code BEAM SF event, the formation of the Erlang Ecosystem Foundation (EEF) was announced. Its founding members, Jose Valim, Peer Stritzinger, Fred Hebert, Miriam Pena, and Francesco Cesarini, spoke about its journey, importance, and goals. The proposal for creating the EEF was submitted last December to foster the Erlang and Elixir ecosystem.

https://twitter.com/CodeBEAMio/status/1101310225804476416

Code BEAM SF, formerly known as Erlang & Elixir Factory, is a two-day event that commenced on Feb 28. The conference brings together the best minds in the Erlang and Elixir communities to discuss the future of these technologies.

The purpose of the Erlang Ecosystem Foundation

The EEF is a non-profit organization for driving the further development and adoption of Erlang, Elixir, LFE, and other technologies based on BEAM, the Erlang virtual machine. Backed by companies like Cisco, Erlang Solutions, Ericsson, and others, the foundation aims to grow and support a diverse community around the Erlang and Elixir ecosystem. It will encourage the development of technologies and open source projects based on BEAM languages.

“Our goal is to increase the adoption of this sophisticated platform among forward-thinking organizations. With member-supported Working Groups actively contributing to libraries, tools, and documentation used regularly by individuals and companies relying on the stability and versatility of the ecosystem, we actively invest in critical pieces of technical infrastructure to support our users in their efforts to build the next generation of advanced, reliable, real-time applications,” says the official EEF website.

The EEF will also be responsible for sponsoring working groups to help solve the challenges that users of BEAM technology might be facing, particularly in areas such as documentation, interoperability, and performance.

To know more about the Erlang Ecosystem Foundation, visit its official website.

Read next:
- Erlang turns 20: Tracing the journey from Ericsson to Whatsapp
- Elixir 1.7, the programming language for Erlang virtual machine, releases
- Introducing Mint, a new HTTP client for Elixir


Mozilla engineer shares the implications of rewriting browser internals in Rust

Bhagyashree R
01 Mar 2019
2 min read
Yesterday, Diane Hosfelt, a Research Engineer at Mozilla, shared what she and her team experienced when rewriting Firefox internals in Rust. Taking Quantum CSS as a case study, she touched upon the potential security vulnerabilities that could have been prevented if it had been written in Rust from the very beginning.

Why did Mozilla decide to rewrite Firefox internals in Rust?

Quantum CSS is part of Mozilla's Project Quantum, under which it is rewriting Firefox internals to make the browser faster. One of the major parts of this project is Servo, an engine designed to provide better concurrency and parallelism. To achieve these goals, Mozilla decided to write Servo in Rust, replacing C++. Rust is similar to C++ in some ways while differing in the abstractions and data structures it uses. It was created by Mozilla with concurrency safety in mind: its type system and memory safety make programs written in Rust thread-safe.

What types of bugs does Rust prevent?

Overall, Rust prevents bugs related to memory, bounds, null/uninitialized variables, and integer overflow by default. Hosfelt mentioned in her blog post, “Due to the overlap between memory safety violations and security-related bugs, we can say that Rust code should result in fewer critical CVEs (Common Vulnerabilities and Exposures).” However, there are some types of bugs that Rust does not address, like correctness bugs. (A small example of the kind of bug the compiler catches follows this article.)

According to Hosfelt, Rust is a good option in the following cases:
- when your program involves processing untrusted input safely
- when you want to use parallelism for better performance
- when you are integrating isolated components into an existing codebase

You can go through the blog post by Diane Hosfelt on Mozilla's website.

Read next:
- Mozilla shares key takeaways from the Design Tools survey
- Mozilla partners with Scroll to understand consumer attitudes for an ad-free experience on the web
- Mozilla partners with Ubisoft to Clever-Commit its code, an artificial intelligence assisted assistant
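
As a small illustration of the memory-safety enforcement discussed above (a generic example, not taken from Hosfelt's post), the Rust compiler rejects code in which a buffer could be reallocated while a reference into it is still live, which is exactly the class of use-after-free bug that shows up in C++ browser engines:

    fn main() {
        // `mut` is only needed for the commented-out push below.
        let mut v = vec![1, 2, 3];
        let first = &v[0];      // immutable borrow into the vector's buffer

        // v.push(4);           // error[E0502]: cannot borrow `v` as mutable
                                // because it is also borrowed as immutable
                                // (push may reallocate and invalidate `first`)

        println!("first element is {}", first);
    }

The equivalent C++ pattern (keeping a pointer into a std::vector across a push_back) compiles silently and may crash or be exploitable at runtime; in Rust it simply does not compile.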

Rust 1.33.0 released with improvements to Const fn, pinning, and more!

Amrata Joshi
01 Mar 2019
2 min read
Yesterday, the Rust team announced the stable release of Rust 1.33.0, a programming language for building reliable and efficient software. This release comes with significant improvements to const fns and the stabilization of a new concept: "pinning."

https://twitter.com/rustlang/status/1101200862679056385

What's new in Rust 1.33.0?

Const fn

It is now possible to work with irrefutable destructuring patterns (e.g. const fn foo((x, y): (u8, u8)) { ... }). This release also allows let bindings (e.g. let x = 1;) and mutable let bindings (e.g. let mut x = 1;) inside const fns. (A small compilable example combining several of these features follows this article.)

Pinning

This release comes with a new concept for Rust programs called pinning. Pinning ensures that the pointee of a pointer type P has a stable location in memory: it cannot be moved elsewhere and its memory cannot be deallocated until it gets dropped. The pointee is then said to be "pinned."

Compiler

It is now possible to set a linker flavor for rustc with the -Clinker-flavor command line argument. The minimum required LLVM version is now 6.0. This release adds support for the PowerPC64 architecture on FreeBSD and for the x86_64-unknown-uefi target.

Libraries

In this release, the overflowing_{add, sub, mul, shl, shr} methods are const functions for all numeric types. The is_positive and is_negative methods are now const functions for all signed numeric types, and the get method for all NonZero types is now const.

Language

It is now possible to use the cfg(target_vendor) attribute, e.g. #[cfg(target_vendor="apple")] fn main() { println!("Hello Apple!"); }. It is also now possible to have irrefutable if let and while let patterns, and to specify multiple attributes in a cfg_attr attribute.

One of the users commented on Hacker News, “This release also enables Windows binaries to run in Windows nanoserver containers.” Another comment reads, “It is nice to see the const fn improvements!”

https://twitter.com/AndreaPessino/status/1101217753682206720

To know more about this news, check out Rust's official post.

Read next:
- Introducing RustPython, a Python 3 interpreter written in Rust
- How Deliveroo migrated from Ruby to Rust without breaking production
- Rust 1.32 released with a print debugger and other changes
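
Putting a few of the 1.33 features quoted above into one compilable sketch (illustrative; requires Rust 1.33 or newer):

    // Destructuring patterns, let bindings, and mutable let bindings in a const fn.
    const fn sum_pair((x, y): (u8, u8)) -> u8 {
        let mut total = x;   // mutable let binding inside a const fn
        total += y;          // assignment operators are allowed too
        total
    }

    // The cfg(target_vendor) attribute, stabilized in 1.33.
    #[cfg(target_vendor = "apple")]
    fn greet() { println!("Hello Apple!"); }

    #[cfg(not(target_vendor = "apple"))]
    fn greet() { println!("Hello, non-Apple vendor!"); }

    fn main() {
        const TOTAL: u8 = sum_pair((1, 2));   // evaluated at compile time
        println!("{}", TOTAL);
        greet();
    }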


CUDA 10.1 released with new tools, libraries, improved performance and more

Amrata Joshi
28 Feb 2019
2 min read
Yesterday, the team at NVIDIA released CUDA 10.1 with a new lightweight GEMM library, new functionality and performance updates to existing libraries, and improvements to the CUDA Graphs APIs.

What's new in CUDA 10.1?

There are new encoding and batched decoding functionalities in nvJPEG. This release also features faster performance for a broad set of random number generators in cuRAND (a minimal cuRAND usage sketch follows this article), as well as improved performance and support for fork/join kernels in the CUDA Graphs APIs.

Compiler

The CUDA-C and CUDA-C++ compiler, nvcc, is found in the bin/ directory. It is built on top of the NVVM optimizer, which is itself built on top of the LLVM compiler infrastructure. Developers who want to target NVVM directly can do so using the Compiler SDK, which is available in the nvvm/ directory.

Tools

New development tools are available in the bin/ directory, including IDEs such as nsight (Linux, Mac) and Nsight VSE (Windows), and debuggers such as cuda-memcheck, cuda-gdb (Linux), and Nsight VSE (Windows). The tools also include a few profilers and utilities.

Libraries

This release comes with cuBLASLt, a new lightweight GEMM library with a flexible API and tensor core support for INT8 inputs and FP16 CGEMM split-complex matrix multiplication. CUDA 10.1 also features the selective eigensolvers SYEVDX and SYGVDX in cuSOLVER. A few of the available utility libraries in the lib/ directory (DLLs on Windows are in bin/) are cublas (BLAS), cublas_device (BLAS kernel interface), and cuda_occupancy (kernel occupancy calculation, header-file implementation).

To know more about this news in detail, check out the post by NVIDIA.

Read next:
- Implementing color and shape-based object detection and tracking with OpenCV and CUDA [Tutorial]
- ClojureCUDA 0.6.0 now supports CUDA 10
- Stable release of CUDA 10.0 out, with Turing support, tools and library changes
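
For context on the cuRAND mention above, here is a minimal host-API sketch that fills a device buffer with uniform random numbers (generic cuRAND usage, not specific to the 10.1 changes; error checking omitted for brevity):

    #include <stdio.h>
    #include <cuda_runtime.h>
    #include <curand.h>

    int main(void) {
        const size_t n = 1024;
        float *dev_data;
        cudaMalloc((void **)&dev_data, n * sizeof(float));

        curandGenerator_t gen;
        curandCreateGenerator(&gen, CURAND_RNG_PSEUDO_DEFAULT);   // default pseudo-RNG (XORWOW)
        curandSetPseudoRandomGeneratorSeed(gen, 1234ULL);
        curandGenerateUniform(gen, dev_data, n);                  // fill device buffer with uniforms in (0, 1]

        float host_data[4];
        cudaMemcpy(host_data, dev_data, 4 * sizeof(float), cudaMemcpyDeviceToHost);
        printf("%f %f %f %f\n", host_data[0], host_data[1], host_data[2], host_data[3]);

        curandDestroyGenerator(gen);
        cudaFree(dev_data);
        return 0;
    }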


Dart 2.2 is out with support for set literals and more!

Savia Lobo
27 Feb 2019
2 min read
Michael Thomsen, the Project Manager for Dart, announced the stable release of the general-purpose programming language Dart 2.2. This version, an incremental update to v2, offers improved performance of ahead-of-time (AOT) compiled native code and a new set literal language feature.

Improvements in Dart 2.2

Improved AOT performance

Developers have improved AOT performance by 11–16% on microbenchmarks (at the cost of a ~1% increase in code size). Prior to this optimization, the compiled code had to make several lookups into an object pool to determine a call's destination address. The optimized AOT code is now able to call the destination directly using a PC-relative call.

Extended literals to support sets

Dart supported literal syntax only for lists and maps, which made initializing sets awkward, as a set had to be created from a list as follows:

    Set<String> currencies = Set.of(['EUR', 'USD', 'JPY']);

This code is inefficient due to the lack of literal support, and it also prevented currencies from being a compile-time constant. With Dart 2.2's extension of literals to support sets, users can initialize a set and make it const using a convenient new syntax:

    const Set<String> currencies = {'EUR', 'USD', 'JPY'};

Updated Dart language specification

Dart 2.2 includes an up-to-date Dart language specification, with the spec source moved to a new language repository. Developers have also added continuous integration to ensure a rolling draft specification is generated in PDF format as the specification for future versions of the Dart language evolves. Both the 2.2 version and the rolling Dart 2.x specification are available on the Dart specification page.

To know more about this announcement in detail, visit Michael Thomsen's blog on Medium.

Read next:
- Google Dart 2.1 released with improved performance and usability
- Google's Dart hits version 2.0 with major changes for developers
- Is Dart programming dead already?

Python 3.8 alpha 2 is now available for testing

Natasha Mathur
27 Feb 2019
2 min read
After releasing Python 3.8.0 alpha 1 earlier this month, the Python team released the second of the four planned alpha releases of Python 3.8, called Python 3.8.0a2, last week. Alpha releases make it easier for developers to test the current state of new features and bug fixes, as well as the release process. The Python team states that many new features for Python 3.8 are still being planned and written.

Here is a list of some of the major new features and changes so far; these features are currently raw and not meant for production use:

- PEP 572, assignment expressions, has been accepted. Users can now assign to variables within an expression using the notation NAME := expr (a short example follows this article). A new exception, TargetScopeError, has also been added, along with one change to the evaluation order.
- typed_ast, a fork of the ast module (in C) used by mypy, pytype, and (IIRC) others, has been merged back into CPython. typed_ast helps preserve certain comments.
- The multiprocessing module now allows the use of shared memory segments to avoid pickling costs and the need for serialization between processes.

The next pre-release of Python 3.8 will be Python 3.8.0a3, scheduled for 25th March 2019. For more information, check out the official Python 3.8.0a2 announcement.

Read next:
- PyPy 7.0 released for Python 2.7, 3.5, and 3.6 alpha
- 5 blog posts that could make you a better Python programmer
- Python Software foundation and JetBrains' Python Developers Survey 2018
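
A short sketch of what the PEP 572 assignment expressions mentioned above look like in practice (illustrative; requires a Python 3.8 interpreter):

    import re

    line = "error: disk almost full"

    # Bind and test in one expression instead of calling re.match() twice
    if (m := re.match(r"error: (.*)", line)):
        print("matched:", m.group(1))

    # Reuse an intermediate value inside a comprehension
    data = [1, 2, 3, 4]
    squares_over_four = [y for x in data if (y := x * x) > 4]
    print(squares_over_four)   # [9, 16]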


Facebook open sources Magma, a software platform for deploying mobile networks

Amrata Joshi
26 Feb 2019
2 min read
Yesterday, the team at Facebook open-sourced Magma, a software platform that helps operators deploy mobile networks easily. The platform comes with a software-centric distributed mobile packet core and tools for automating network management.

Rather than replacing existing EPC (Evolved Packet Core) deployments for large networks, Magma extends existing network topologies to the edge: rural deployments, private LTE (Long Term Evolution) networks, or wireless enterprise deployments. Magma enables new network archetypes where there is a need for continuous integration of software components and incremental upgrade cycles, and it allows authentication and integration with the LTE EPC. It also reduces the complexity of operating mobile networks by automating network operations like software updates, element configuration, and device provisioning.

Magma's centralized cloud-based controller can run on a public or private cloud environment, and its automated provisioning infrastructure makes deploying LTE as easy as deploying a WiFi access point. The platform currently works with existing LTE base stations and can associate with traditional mobile cores to extend services to new areas.

According to a few users, “Facebook internally considers the social network to be its major asset and not their technology,” so any investment in open technologies or internal technology that makes the network effect stronger is considered important. A few users discussed Facebook's revenue strategies in the Hacker News thread. One comment reads, “I noticed that FB and mobile phone companies offering 'free Facebook' are all in a borderline antagonistic relationship because messenger kills their revenue, and they want to bill FB an arm and a leg for that.”

To know more about this news in detail, check out Facebook's blog post.

Read next:
- Facebook open sources SPARTA to simplify abstract interpretation
- Facebook open sources the ELF OpenGo project and retrains the model using reinforcement learning
- Facebook's AI Chief at ISSCC talks about the future of deep learning hardware