
Tech News - Programming

573 Articles

.NET Core releases May 2019 updates

Amrata Joshi
15 May 2019
3 min read
This month at Microsoft Build 2019, the team behind .NET Core announced that .NET 5 will be coming in 2020. Yesterday, the .NET Core team released the May 2019 updates for .NET Core 1.0.16, 1.1.14, 2.1.11 and 2.2.5. The updates include security and reliability fixes as well as updated packages.

Expected updates in .NET Core

Security

.NET Core Tampering Vulnerability (CVE-2019-0820)
A denial of service vulnerability exists when .NET Core improperly processes RegEx strings. An attacker who successfully exploits this vulnerability can cause a denial of service against a .NET application, and a remote unauthenticated attacker can exploit it by issuing specially crafted requests to a .NET Core application. The update addresses the vulnerability by correcting how .NET Core applications handle RegEx string processing. The security advisory covers .NET Core 1.0, 1.1, 2.1 and 2.2.

Denial of Service vulnerability in .NET Core and ASP.NET Core (CVE-2019-0980 & CVE-2019-0981)
A denial of service vulnerability exists when .NET Core and ASP.NET Core improperly handle web requests. An attacker who successfully exploits this vulnerability can cause a denial of service against a .NET Core or ASP.NET Core application. The vulnerability can be exploited remotely and without authentication by issuing specially crafted requests. The update addresses it by correcting how .NET Core and ASP.NET Core web applications handle web requests. The security advisory covers these two vulnerabilities in .NET Core and ASP.NET Core 1.0, 1.1, 2.1, and 2.2.

ASP.NET Core Denial of Service vulnerability (CVE-2019-0982)
A denial of service vulnerability exists when ASP.NET Core improperly handles web requests. An attacker who successfully exploits this vulnerability can cause a denial of service against an ASP.NET Core web application. The vulnerability can be exploited remotely and without authentication by issuing specially crafted requests. The update addresses it by correcting how ASP.NET Core web applications handle web requests. The security advisory covers ASP.NET Core 2.1 and 2.2.

Docker images

The .NET Docker images have been updated, along with the microsoft/dotnet, microsoft/dotnet-samples, and microsoft/aspnetcore repos. Users can get the latest .NET Core updates on the .NET Core download page. To know more about this news, check out the official announcement.

Also read:
.NET 5 arriving in 2020!
Docker announces collaboration with Microsoft's .NET at DockerCon 2019
.NET for Apache Spark Preview is out now!
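The advisory describes the RegEx issue only at a high level. As a hedged, language-agnostic illustration (plain Python here, not the actual .NET Core internals), regex denial-of-service issues typically stem from catastrophic backtracking, where a small crafted input makes a single match attempt take exponential time:

```python
import re
import time

# Illustrative only: a pattern with nested quantifiers is a classic
# catastrophic-backtracking case on backtracking regex engines.
EVIL_PATTERN = re.compile(r"^(a+)+$")

def match_time(payload: str) -> float:
    """Time a single match attempt against the pathological pattern."""
    start = time.perf_counter()
    EVIL_PATTERN.match(payload)
    return time.perf_counter() - start

if __name__ == "__main__":
    for n in (10, 20, 24, 26):
        # A run of 'a's followed by a non-matching character forces the
        # engine to explore every way of splitting the run before failing.
        print(n, round(match_time("a" * n + "!"), 4))
```

Each additional character roughly doubles the work, which is why a single crafted request can tie up a worker thread; the actual fix in the May 2019 update is in how .NET Core processes RegEx strings, as described in the advisory.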


GraalVM 19.0 releases with Java 8 SE compliant Java Virtual Machine, and more!

Bhagyashree R
13 May 2019
2 min read
Last week, the team behind GraalVM announced the release of GraalVM 19.0. This is the first production release, and it comes with early adopter Windows support, a class initialization update in GraalVM Native Image, a Java 8 SE compliant Java Virtual Machine, and more.

https://twitter.com/graalvm/status/1126607204860289024

GraalVM is a polyglot virtual machine that allows users to run applications written in JavaScript, Python, Ruby, R, JVM-based languages like Java, Scala, Kotlin, and Clojure, and LLVM-based languages such as C and C++.

Updates in GraalVM 19.0

GraalVM Native Image
GraalVM Native Image compiles Java code ahead-of-time into a standalone executable called a native image. It is currently available as an early adopter plugin, which you can install by executing the 'gu install native-image' command. With this release, the way classes are initialized in a native image has changed: application classes are now initialized at runtime by default, while all JDK classes are initialized at build time. This change was made to improve user experience, as it eliminates the need to write substitutions and to deal with instances of unsupported classes ending up in the image heap.

Early adopter Windows support
With this release, early adopter builds are also available for Windows users. These builds include the JDK with the GraalVM compiler enabled, Native Image capabilities, GraalVM's JavaScript engine, and the developer tools.

Java 8 SE compliant Java VM
This release ships a Java 8 SE compliant Java Virtual Machine based on OpenJDK 1.8.0_212.

Read also: No more free Java SE 8 updates for commercial use after January 2019

Node.js with polyglot capabilities
This release includes Node.js with polyglot capabilities, based on Node.js 10.15.2. With these capabilities, you can leverage Java or Scala libraries from Node.js and also use Node.js modules in Java applications.

JavaScript engine compliant with ECMAScript 2019
GraalVM 19.0 comes with a JavaScript engine compliant with the latest ECMAScript 2019 standard. You can now migrate from the Rhino or Nashorn JavaScript engines, which are no longer maintained, to GraalVM's JavaScript engine, which is compatible with the latest standards.

Check out the GraalVM 19.0 release notes for more details.

Also read:
OpenJDK team's detailed message to NullPointerException and explanation in JEP draft
Introducing Node.js 12 with V8 JavaScript engine, improved worker threads, and much more
What's new in ECMAScript 2018 (ES9)?


Introducing SwiftWasm, a tool for compiling Swift to WebAssembly

Bhagyashree R
13 May 2019
2 min read
Attempts at porting Swift to WebAssembly have been going on for a long time, and a team of developers has finally come up with SwiftWasm, which was released last week. With this tool, you can run your Swift code on the web by compiling it to WebAssembly.

https://twitter.com/swiftwasm/status/1127324144121536512

The SwiftWasm tool is built on top of the WASI SDK, a WASI-enabled C/C++ toolchain. This makes the WebAssembly executables generated by SwiftWasm work both in browsers and in standalone WebAssembly runtimes such as Wasmtime, Fastly's Lucet, or any other WASI-compatible WebAssembly runtime.

How to work with SwiftWasm

While macOS does not need any dependencies to be installed, some dependencies are required on Ubuntu and Windows:

- On Ubuntu, install 'libatomic1': sudo apt-get install libatomic1
- On Windows, first install the Windows Subsystem for Linux, and then install the libatomic1 library.

The next step is to compile your Swift code with SwiftWasm by running the following command:

./swiftwasm example/hello.swift hello.wasm

To run the resulting 'hello.wasm' file, go to the SwiftWasm polyfill and upload the file; you will see the output in the textbox. The polyfill supports Firefox 66, Chrome 74, and Safari 12.1.

The news of having a tool for running Swift on the web has got many developers excited.

https://twitter.com/pvieito/status/1127620197668487169
https://twitter.com/johannesweiss/status/1126913408455053312
https://twitter.com/jedisct1/status/1126909145926569986

The project is still a work in progress and thus has some limitations. Currently, only the Swift 'stdlib' is compiled, and other libraries such as Foundation or SwiftPM are not included. A few functions, such as 'Optional.Map', do not work because of the calling-convention differences between throwing and non-throwing closures. If you want to contribute to this project, check out its pull request on Swift's GitHub repository to learn more about its current status. You can try SwiftWasm on its official website.

Also read:
Swift is improving the UI of its generics model with the "reverse generics" system
Swift 5 for Xcode 10.2 is here!
Implementing Dependency Injection in Swift [Tutorial]


GitHub announces beta version of GitHub Package Registry, its new package management service

Sugandha Lahoti
13 May 2019
3 min read
Update: At WWDC 2019, GitHub added support for Swift packages to GitHub Package Registry. Swift packages make it easy to share your libraries and source code across projects and with the Swift community.

Last Friday, GitHub announced a new package management service that allows developers and organizations to easily generate "packages" from their code. Called the GitHub Package Registry, this service lets developers publish public or private packages next to their source code.

https://twitter.com/github/status/1127261105963917312

"GitHub Package Registry is compatible with common package management clients, so you can publish packages with your choice of tools," Simina Pasat, Director of Product Management at GitHub, explains in the official announcement.

The GitHub Package Registry is available in limited beta for now; however, it will always be free to use for open source. The new service is currently compatible with JavaScript (npm), Java (Maven), Ruby (RubyGems), .NET (NuGet) and Docker images, with support for other languages and tools to come.

Packages hosted on GitHub include detailed insights such as download statistics and project/package history. Developers can publish multiple packages of different types for more complex repositories, and they can customize publishing and post-publishing workflows using webhooks and GitHub Actions.

GitHub Package Registry has unified identity and permissions, meaning packages on GitHub inherit the visibility and permissions associated with the repository. Organizations therefore no longer need to maintain a separate package registry and mirror permissions across systems. They can use a single set of credentials across different systems for code and packages, and manage access permissions with the same tools.

Developers are generally enthusiastic about the new GitHub venture. Here are some positive comments from a thread on Hacker News:

"This is really outstanding. GitHub Package Registry separates the registry from the artifact storage, which is the right way to do it. The registry should be quick to update because it's only a pointer. The artifact storage will be under my control. Credentials and security should be easier to deal with. I really hope this works out."

"This is pretty interesting. Github really is becoming the social network that MS never seemed to be able to create. We already use it as our portfolio of work for potential employers. We collaborate with fellow enthusiasts and maybe even make new friends. We host our websites from it. Abuse it to store binaries, too. And now, alongside source code, we can use it as a CDN of sorts to serve packages, for free, sounds pretty great."

"It's a really nice project overall, having a GitHub Package Registry that supports many different projects and run by a company that today is good, is always nice."

Also read:
GitHub deprecates and then restores Network Graph after GitHub users share their disapproval
Apache Software Foundation finally joins the GitHub open source community
Introducing Gitpod, a one-click IDE for GitHub


Introducing TensorFlow Graphics packed with TensorBoard 3D, object transformations, and much more

Amrata Joshi
10 May 2019
3 min read
Yesterday, the team at TensorFlow introduced TensorFlow Graphics.

A computer graphics pipeline requires 3D objects and their positioning in the scene, a description of the material they are made of, lights, and a camera. This scene description is then interpreted by a renderer to generate a synthetic rendering. In contrast, a computer vision system starts from an image and tries to infer the parameters of the scene, which allows it to predict which objects are in the scene, what materials they are made of, and their three-dimensional position and orientation.

Developers usually require large quantities of data to train machine learning systems capable of solving these complex 3D vision tasks. Since labelling data is an expensive and complex process, it is better to have mechanisms for designing machine learning models that can comprehend the three-dimensional world while being trained without much supervision. Combining computer vision and computer graphics techniques lets us leverage the vast amounts of unlabelled data available. For instance, this can be achieved through analysis by synthesis, where the vision system extracts the scene parameters and the graphics system renders an image back from them. If the rendering matches the original image, the vision system has accurately extracted the scene parameters. In this setup, computer vision and computer graphics go hand in hand, forming a single machine learning system, similar to an autoencoder, that can be trained in a self-supervised manner.

(Image source: TensorFlow)

Below are some of the functionalities of TensorFlow Graphics.

Object transformations
Object transformations control the position of objects in space. In the announcement's example, the axis-angle formalism is used to rotate a cube: the rotation axis points up and the angle is positive, so the cube rotates counterclockwise. This task is also at the core of many applications, including robots that need to interact with their environment. (A small sketch of the axis-angle formalism follows at the end of this piece.)

Modelling cameras
Camera models play a crucial role in computer vision as they influence the appearance of three-dimensional objects projected onto the image plane. For more details about camera models and a concrete example of how to use them in TensorFlow, check out the Colab example.

Material models
Material models define how light interacts with objects to give them their unique appearance. Some materials, like plaster, reflect light uniformly in all directions, while others, like mirrors, are purely specular. Users can play with the parameters of the material and the light to develop a good sense of how they interact.

TensorBoard 3D
TensorFlow Graphics features a TensorBoard plugin to interactively visualize 3D meshes and point clouds. This also makes visual debugging possible, which helps assess whether an experiment is going in the right direction.

To know more about this news, check out the post on Medium.

Also read:
TensorFlow 1.13.0-rc2 releases!
TensorFlow 1.13.0-rc0 releases!
TensorFlow.js: Architecture and applications
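The announcement only sketches the axis-angle formalism in prose. As a rough illustration of the underlying math (plain NumPy here; the article does not spell out the tensorflow_graphics function names, so none are assumed), Rodrigues' rotation formula rotates points about an arbitrary axis by a given angle:

```python
import numpy as np

def axis_angle_rotate(points: np.ndarray, axis: np.ndarray, angle: float) -> np.ndarray:
    """Rotate 3D points about a unit axis by `angle` radians (Rodrigues' formula)."""
    axis = axis / np.linalg.norm(axis)
    cos, sin = np.cos(angle), np.sin(angle)
    # p_rot = p*cos + (axis x p)*sin + axis*(axis . p)*(1 - cos)
    cross = np.cross(axis, points)
    dot = points @ axis
    return points * cos + cross * sin + np.outer(dot, axis) * (1.0 - cos)

# Rotate a unit cube's corners by +90 degrees about the "up" (z) axis;
# a positive angle about +z gives a counterclockwise rotation, as in the article's example.
cube = np.array([[x, y, z] for x in (0.0, 1.0) for y in (0.0, 1.0) for z in (0.0, 1.0)])
print(axis_angle_rotate(cube, np.array([0.0, 0.0, 1.0]), np.pi / 2).round(3))
```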


DifferentialEquations.jl v6.4.0 released with GPU support in ODE solvers, linsolve defaults, and much more!

Amrata Joshi
10 May 2019
3 min read
Yesterday, the team behind JuliaDiffEq released DifferentialEquations.jl v6.4.0, a suite for numerically solving differential equations in Julia. This release gives users the ability to use ODE solvers on the GPU, with automated tooling for faster broadcast, matrix-free Newton-Krylov, better Jacobian re-use algorithms, memory use reduction, and more.

What's new in DifferentialEquations.jl v6.4.0?

Full GPU support in ODE solvers
With this release, the stiff ODE solvers allow expensive calculations, like those in neural ODEs or PDE discretizations, to utilize GPU acceleration. The release also allows the initial condition to be a GPUArray, where the internal methods don't perform any indexing, so that all computations take place on the GPU without data transfers.

Fast DiffEq-specific broadcast
This release comes with a broadcast wrapper that allows all sorts of information to be passed to the compiler in the differential equation solver's internals. It makes a bunch of no-aliasing and sizing assumptions that are normally not possible, which lets the internals use a special @.. that also turns out to be faster than standard loops.

Smart linsolve defaults
This release comes with smarter linsolve defaults, which automatically detect the BLAS installation and utilize RecursiveFactorizations.jl to speed things up for ODEs. Users can have the linear solver automatically switch to a form that works for sparse Jacobians, and even banded matrices and Jacobians on the GPU are now handled automatically.

Automated J*v products via autodifferentiation
Users can now use GMRES easily, without constructing the full Jacobian matrix: directional derivatives in the direction of v are used to compute J*v.

Performance improvements
With this release, the performance of all implicit methods, like KenCarp4, has been improved. DiffEqBiological.jl can now handle large reaction networks, parse the networks much faster, and build Jacobians that utilize sparse matrices, though there is still plenty of room for improvement.

Partial neural ODEs
This release comes with a lot of improvements and gives a glimpse of working examples of partial neural differential equations, which are equations that have pre-specified portions. These equations allow for batched data and GPU acceleration.

Memory optimization
This release brings memory optimizations for low-memory Runge-Kutta methods for hyperbolic or advection-dominated PDEs. These methods now use the minimal number of registers required for the method, so large PDE discretizations can make use of DifferentialEquations.jl without loss of memory efficiency.

Robust callbacks
The team has introduced a ContinuousCallback implementation in this release with increased robustness in double event detection.

To know more about this news, check out the official announcement.

Also read:
The solvers – these great unknown
Moving Further with NumPy Modules
How to build an options trading web app using Q-learning

Linux 5.1 out with io_uring IO interface, persistent memory, new patching improvements and more!

Vincy Davis
08 May 2019
3 min read
Yesterday, Linus Torvalds, the principal developer of the Linux kernel, announced the release of Linux 5.1 in a mailing list announcement. The release provides users with an open source operating system with lots of great additions as well as improvements to existing features. The previous version, Linux 5.0, was released two months ago.

"On the whole, 5.1 looks very normal with just over 13k commits (plus another 1k+ if you count merges). Which is pretty much our normal size these days. No way to boil that down to a sane shortlog, with work all over," said Linus Torvalds in the official announcement.

What's new in Linux 5.1?

io_uring: a new Linux IO interface
Linux 5.1 introduces a new high-performance interface called io_uring, an easy-to-use and hard-to-misuse user/application interface. io_uring provides efficient buffered asynchronous I/O support, the ability to do I/O without even performing a system call via polled I/O, and other efficiency enhancements, helping deliver fast and efficient I/O for Linux. liburing serves as the accompanying user-space library, which makes usage simpler, and Axboe's FIO benchmark has already been adapted to support io_uring. Separately, the release permits safe signal delivery in the presence of PID reuse and includes power management improvements.

Security
Linux 5.1 adds the SafeSetID LSM module, which provides administrators with security and policy controls. It restricts UID/GID transitions from a given UID/GID to only those approved by a system-wide list of acceptable transitions, without granting the auxiliary privileges associated with CAP_SET{U/G}ID, such as the ability to set up user namespace UID mappings.

Storage
Along with physical RAM, users can now use persistent memory as RAM (system memory), which can also serve as a cost-effective RAM replacement. The release additionally allows booting the system to a device-mapper device without using initramfs, and adds support for cumulative patches in the live kernel patching feature.

Live patching improvements
Linux 5.1 adds a new live patching capability called Atomic Replace: a cumulative patch includes all wanted changes from all older live patches and can completely replace them in one transition. Live patching enables a running system to be patched without the need for a full system reboot. The release also brings new drivers for newer hardware.

Users are quite happy with this update. A user on Reddit commented, "Finally! I think this one fixes problems with Elantech's touchpads spamming the dmesg log. Can't wait to install it!" Another user added, "Thank you and congratulations for the developers!"

To download the Linux kernel 5.1 sources, head over to kernel.org. To know more about the release, check out the official mailing list announcement.

Also read:
Ubuntu 19.04 Disco Dingo Beta releases with support for Linux 5.0 and GNOME 3.32
Announcing Linux 5.0!
Bodhi Linux 5.0.0 released with updated Ubuntu core 18.04 and a modern look


Microsoft Build 2019: Introducing Windows Terminal, application packed with multiple tab opening, improved text and more

Amrata Joshi
07 May 2019
2 min read
Yesterday at Microsoft Build 2019, the team at Microsoft announced Windows Terminal, a new terminal application for users of command-line tools and shells like PowerShell, Command Prompt, and WSL. The terminal will be delivered via the Microsoft Store in Windows 10 and will be updated regularly.

Key features of Windows Terminal

Multiple tabs
Windows Terminal comes with multiple tab support, so users can open any number of tabs, each connected to a command-line shell or app of their choice, e.g. PowerShell, Ubuntu on WSL, Command Prompt, or a Raspberry Pi via SSH.

Text
Windows Terminal uses a GPU-accelerated DirectWrite/DirectX-based text rendering engine to display the text characters, glyphs, and symbols present within fonts on the PC, including emoji, powerline symbols, CJK ideograms, icons, and programming ligatures. It can also render text much faster than the previously used engines, and users have the option of using a new font of their own.

Settings and configurability
Windows Terminal comes with many settings and configuration options that control the Terminal's appearance and each of the shells/profiles that users open as new tabs. The settings are stored in a structured text file, which makes it easy for users and/or tools to configure. With the terminal's configuration mechanism, users can create multiple "profiles" for each shell/app/tool, and these profiles can have their own combination of color themes, font styles and sizes, background blur/transparency levels, and so on, so users can create their own custom-styled Terminal.

Windows Console
The team further announced that they are open sourcing Windows Console, which hosts the command-line infrastructure in Windows and provides the traditional console UX. The primary goal of the console is preserving backward compatibility with existing command-line tools, scripts, and so on.

To know more about this news, check out Microsoft's blog post.

Also read:
Microsoft introduces Remote Development extensions to make remote development easier on VS Code
Docker announces collaboration with Microsoft's .NET at DockerCon 2019
Microsoft and GitHub employees come together to stand with the 996.ICU repository


Eclipse Foundation releases updates on its Jakarta EE rights to Java trademarks

Vincy Davis
07 May 2019
3 min read
Last week, the Eclipse Foundation announced an update regarding Jakarta EE's rights to Java trademarks. The announcement also gives an update on the complex and confidential negotiations between the Eclipse Foundation and Oracle, including a summary of the progress to date and the implications of the agreement for the use of Java trademarks and the javax namespace.

In 2017, Oracle announced the migration of Java EE to the Eclipse Foundation. However, the process has been slow. The mutual intention of the Eclipse Foundation and Oracle was to allow the evolution of the javax package namespace in Jakarta EE specifications; unfortunately, they could not reach an agreement on this.

Read More: Jakarta EE: Past, Present, and Future

It has now been decided that the javax package namespace and the Java trademarks, such as the existing specification names, cannot be evolved or used by the Jakarta EE community. The Eclipse Foundation and Oracle believe this is the best possible outcome for the community. In its official post, the Eclipse Foundation states that Oracle's Java trademarks are the property of Oracle only; hence, the Eclipse Foundation has no rights to use them. They have further mentioned some implications, including:

- The javax package namespace may be used within Jakarta EE specifications, but only "as is". No modification to the javax package namespace is permitted within Jakarta EE component specifications.
- Jakarta EE specifications that continue to use the javax package namespace must remain TCK compatible with the corresponding Java EE specifications.
- Jakarta EE component specifications using the javax package namespace may be omitted entirely from future Jakarta EE Platform specifications.
- Specification names must be changed from a "Java EE" naming convention to a "Jakarta EE" naming convention. This includes acronyms such as EJB, JPA or JAX-RS.

Additionally, any specification that uses the javax namespace will continue to carry the certification and container requirements that Java EE has had in the past. The Jakarta EE Working Group, along with Oracle, will continue to work on the Jakarta EE 8 specification and is looking forward to future versions of the Jakarta EE specifications. The team is also confident that many application servers will be certified as Jakarta EE 8 compatible. After Jakarta EE 8, the main aim of Jakarta EE 9 will be to maximize compatibility with future versions without suppressing innovation.

There have been mixed reactions to this announcement. Some feel that this is a great change towards openness that avoids confusion, whereas others believe that the lawyers of tech companies are making it difficult for software to get developed. A Reddit user commented, "With these changes, it is more likely that developer would stop using it and switch to other frameworks."

To know more about this news in detail, visit the Eclipse Foundation's official blog post.

Also read:
Introducing Node.js 12 with V8 JavaScript engine, improved worker threads, and much more
Eclipse announces support for Java 12
Stack Overflow survey data further confirms Python's popularity as it moves above Java in the most used programming language list


.NET 5 arriving in 2020!

Amrata Joshi
07 May 2019
4 min read
Yesterday, on the first day of Microsoft Build 2019, the team behind .NET Core announced that the release after .NET Core 3.0 will be .NET 5, the next big release in the .NET family. There will be just one .NET going forward, and users will be able to use it to target Linux, macOS, Windows, iOS, Android, tvOS, watchOS, WebAssembly and much more. The team will also introduce new .NET APIs, runtime capabilities and language features as part of .NET 5, which is expected to ship in November 2020.

.NET 5 takes .NET Core and the best of Mono to create a single platform that you can use for all your modern .NET code. The release will be supported with future updates to Visual Studio 2019, Visual Studio Code and Visual Studio for Mac.

What is expected in .NET 5?

Switchable built-in runtimes
.NET has two main runtimes: Mono, the original cross-platform implementation of .NET, and CoreCLR, which is primarily targeted at supporting cloud applications, including the largest services at Microsoft. The two runtimes have a lot of similarities, so the team has decided to make CoreCLR and Mono drop-in replacements for one another and plans to make it easier for users to choose between the different runtime options.

.NET 5 applications
In this release, all .NET 5 applications will use the CoreFX framework, which will work smoothly with Xamarin and client-side Blazor workloads. .NET 5 applications will be buildable with the .NET CLI, ensuring that users have common command-line tooling across projects.

Naming
The team decided to simplify the naming: since there is only one .NET going forward, there is no need for a clarifying term like "Core". According to the team, .NET 5 is a shorter name that also communicates uniform capabilities and behaviors.

Other ways in which the .NET 5 project will improve .NET:

- It will produce a single .NET runtime and framework with uniform runtime behaviors and developer experiences that can be used everywhere.
- It will expand the capabilities of .NET by reflecting the best of .NET Core, .NET Framework, Xamarin and Mono.
- It will help in building projects out of a single code-base that developers can work on and expand together.
- Code and project files will look and feel the same no matter which type of app is being built, and users will continue to get access to the same runtime, API and language capabilities with each app.
- Users will have more choice of runtime experiences.
- The release will come with Java interoperability on all platforms, and Objective-C and Swift interoperability will be supported on multiple operating systems.

What won't change?

.NET Core will continue to be open source and community-oriented on GitHub, and it will remain a cross-platform implementation. The release will still support platform-specific capabilities, such as Windows Forms and WPF on Windows, support side-by-side installation, provide high performance, and keep supporting small project files (SDK-style) and the command-line interface (CLI).

A glimpse at the future roadmap

(Image source: Microsoft)

The blog reads, "The .NET 5 project is an important and exciting new direction for .NET. You will see .NET become simpler but also have a broader and more expansive capability and utility. All new development and feature capabilities will be part of .NET 5, including new C# versions. We see a bright future ahead in which you can use the same."

To know more about this news, check out Microsoft's blog post.

Also read:
Fedora 31 will now come with Mono 5 to offer open-source .NET support
.NET 4.5 Parallel Extensions – Async
.NET 4.5 Extension Methods on IQueryable

Microsoft Build 2019: Introducing WSL 2, the newest architecture for the Windows Subsystem for Linux

Amrata Joshi
07 May 2019
3 min read
Yesterday, on the first day of Microsoft Build 2019, the team at Microsoft introduced WSL 2, the newest architecture for the Windows Subsystem for Linux. With WSL 2, file system performance will increase and users will be able to run more Linux apps. The initial builds of WSL 2 will be available by the end of June this year.

https://twitter.com/windowsdev/status/1125484494616649728
https://twitter.com/poppastring/status/1125489352795201539

What's new in WSL 2?

Run Linux binaries
WSL 2 powers the Windows Subsystem for Linux to run ELF64 Linux binaries on Windows. The new architecture changes how these Linux binaries interact with Windows and the computer's hardware, but it still provides the same user experience as WSL 1.

Linux distros
With this release, individual Linux distros can be run either as a WSL 1 distro or as a WSL 2 distro, and can be upgraded or downgraded at any time; WSL 1 and WSL 2 distros can also run side by side. WSL 2 uses an entirely new architecture built on a real Linux kernel.

Increased speed
With this release, file-intensive operations like git clone, npm install, apt update, apt upgrade, and more get faster. The initial tests the team has run show WSL 2 running up to 20x faster than WSL 1 when unpacking a zipped tarball, and around 2-5x faster when using git clone, npm install and cmake on various projects.

Linux kernel with Windows
The team will ship an open source, real Linux kernel with Windows, which makes full system call compatibility possible. This is also the first time a Linux kernel will be shipped with Windows. The team is building the kernel in house, and the initial builds will ship version 4.19 of the kernel. The kernel has been designed in tune with WSL 2 and optimized for size and performance. The team will service this Linux kernel through Windows updates, so users will get the latest security fixes and kernel improvements without needing to manage it themselves. The configuration for this kernel will be available on GitHub once WSL 2 releases; the WSL kernel source will consist of links to a set of patches in addition to the long-term stable source.

Full system call compatibility
Linux binaries use system calls to perform functions such as accessing files, requesting memory, creating processes, and more. In WSL 1, the team created a translation layer that interprets most of these system calls and allows them to work on the Windows NT kernel. However, it is challenging to implement all of these system calls, which is why some apps don't run properly in WSL 1. WSL 2 includes its own Linux kernel, which has full system call compatibility.

To know more about this news, check out Microsoft's blog post.

Also read:
Microsoft introduces Remote Development extensions to make remote development easier on VS Code
Docker announces collaboration with Microsoft's .NET at DockerCon 2019
Microsoft and GitHub employees come together to stand with the 996.ICU repository


RStudio 1.2 releases with improved testing and support for Python chunks, R scripts, and much more!

Amrata Joshi
06 May 2019
3 min read
Last week, the team behind RStudio released RStudio 1.2, which includes dozens of new productivity enhancements and capabilities. RStudio 1.2 is compatible with projects in SQL, Stan, Python, and D3. With this release, testing R code integrations for shinytest and testthat is easier, and users can create, test, and publish APIs in R with Plumber as well as run R scripts as background jobs.

What's new in RStudio 1.2?

Python sessions
This release uses a shared Python session for executing Python chunks, and it comes with simple bindings to access R objects from Python chunks and vice versa (a minimal sketch follows at the end of this piece).

Keyring
In RStudio 1.2, passwords and secrets are stored securely with keyring by calling rstudioapi::askForSecret(). Users can install keyring directly from a dialog prompt.

Run R scripts
Users can now run any R script as a background job in a clean R session and watch the script's output in real time.

Testing with RStudio 1.2
Users can use the Run Tests command on testthat R scripts to run them directly, and the testthat output in the Build pane now comes with a navigable issue list.

PowerPoint
Users can now create PowerPoint presentations with R Markdown.

Package management
With RStudio 1.2, users can specify a primary CRAN URL and secondary CRAN repos from the package preferences pane, and link to a package's primary CRAN page from the Packages pane. The CRAN repos can be configured with a repos.conf configuration file and the r-cran-repos-file option.

Plumber
Users can now easily create Plumber APIs in RStudio 1.2 and execute them within RStudio to view Swagger documentation and make test calls to the APIs.

Bug fixes in RStudio 1.2
In this release, the "invalid byte sequence" issue has been fixed, incorrect Git status has been rectified, and issues with low/no-contrast colors in HTML widgets have been fixed.

Most users seem excited about this release and think it will make Python more accessible to R users. A user commented on Hacker News, "I'm personally an Emacs Speaks Statistics fan myself, but RStudio has been huge boon to the R community. I expect that this will go a long ways towards making Python more accessible to R users."

Some are less happy with the release, citing limited options for graphics. Another comment reads, "I wish rstudio would render markdown in-line. It also tends to forget graphics in output after many open and closes of rmd. I'm intrigued by .org mode but as far as I can tell, there are not options for graphical output while editing."

To know more about this news, check out the post by RStudio.

Also read:
How to create your own R package with RStudio [Tutorial]
The new RStudio Package Manager is now generally available
Getting Started with RStudio
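As a quick, hedged sketch of those bindings (assuming the reticulate-style accessors that RStudio's Python chunk support builds on, where `r.` exposes R objects to Python and `py$` exposes Python objects to R), an R Markdown document might mix chunks like this:

```{r}
df <- mtcars             # an ordinary R data frame
```

```{python}
# All Python chunks run in one shared session, so state persists between
# chunks, and R objects are reachable from Python via the `r` object.
col_means = r.df.mean()  # r.df arrives as a pandas DataFrame
```

```{r}
py$col_means             # Python objects are reachable from R via `py$`
```

The accessor names follow reticulate's documented conventions; treat the snippet as an illustrative sketch rather than RStudio's own example.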


Microsoft introduces Remote Development extensions to make remote development easier on VS Code

Bhagyashree R
03 May 2019
3 min read
Yesterday, Microsoft announced the preview of the Remote Development extension pack for VS Code, which enables developers to use a container, a remote machine, or the Windows Subsystem for Linux (WSL) as a full-featured development environment.

https://twitter.com/code/status/1124016109076799488

Currently, developers will need to use the Insiders build for remote development until the stable version is available. The Insiders builds are the versions shipped daily with the latest features and bug fixes.

Why these VS Code extensions are needed

Developers often choose containers or remote virtual machines configured with specific development and runtime stacks as their development environment. This is an optimal choice because configuring such development environments locally can be too difficult or sometimes even impossible. Data scientists also require remote environments to do their work efficiently: they build and train data models, and to do that they need to analyze large datasets, which demands massive storage and compute resources that a local machine can hardly provide.

One option to solve this problem is Remote Desktop, but it can sometimes be laggy. Developers often use Vim and SSH, or local tools with file synchronization, but these can also be slow and error-prone. There are browser-based tools that can be used in some scenarios, but they lack the richness and familiarity that desktop tools provide.

VS Code Remote Development extension pack

Looking at these challenges, the VS Code team came up with a solution in which VS Code runs in two places at once: one instance runs the developer tools locally, and the other connects to a set of development services running remotely in the context of a physical or virtual machine. The pack contains three extensions for working with remote workspaces:

Remote - WSL
Remote - WSL allows you to use WSL as a full development environment directly from VS Code. It runs commands and extensions directly in WSL, so developers don't have to think about pathing issues, binary compatibility, or other cross-OS challenges. With this extension, developers can edit files located in WSL or the mounted Windows filesystem and also run and debug Linux-based applications on Windows.

Remote - SSH
Remote - SSH allows you to open folders or workspaces hosted on any remote machine, VM, or container with a running SSH server. It runs commands and other extensions directly on the remote machine, so you don't need to have the source code on your local machine. It lets you use larger, faster, or more specialized hardware than your local machine, and you can quickly switch between different remote development environments and safely make updates.

Remote - Containers
Remote - Containers allows you to use a Docker container as your development container. It starts or attaches to a development container, which runs a well-defined tool and runtime stack. All your workspace files are copied or cloned into the container, or mounted from the local file system. The development container can be configured with a 'devcontainer.json' file.

To read more in detail, visit Microsoft's official website.

Also read:
Docker announces collaboration with Microsoft's .NET at DockerCon 2019
Microsoft and GitHub employees come together to stand with the 996.ICU repository
Microsoft employees raise their voice against the company's misogynist, sexist and racist acts

GNU Guix 1.0.0 released with an improved user interface, hassle-free installation and more

Savia Lobo
03 May 2019
3 min read
Yesterday, GNU Guix, a transactional package manager and an advanced distribution of the GNU system, announced the release of GNU Guix version 1.0.0, or "One-point-oh". The release includes ISO-9660 installation images, a virtual machine image, and tarballs to install the package manager on top of a GNU/Linux distro, either from source or from binaries. Existing Guix users can update by running guix pull.

According to the official post, "For Guix, 1.0 is the result of seven years of development, with code, packaging, and documentation contributions made by 260 people, translation work carried out by a dozen of people, and artwork and web site development by a couple of individuals, to name some of the activities that have been happening. During those years we published no less than 19 '0.x' releases."

This release, the team says, is a major milestone for those who have been on board for several years.

Highlights of GNU Guix 1.0.0

On December 6 last year, the GNU Guix team released version 0.16.0, to which 99 people had contributed over 5,700 commits at the time. The new One-point-oh release includes the following highlights since that version.

Hassle-free system installation: The ISO installation image now runs a text-mode graphical installer, which makes system installation less tedious than it was before. The installer is fully translated into French, German, and Spanish.

Improved user interface: This release includes aliases for common operations such as guix search and guix install. Diagnostics are now colorized, more operations show a progress bar, there is a new --verbosity option recognized by all commands, and most commands are now "quiet" by default.

New package transformation: There is a new --with-git-url package transformation option, which complements --with-branch and --with-commit. Guix now has a uniform mechanism to configure keyboard layout, a long overdue addition, and Xorg configuration has been streamlined with the new xorg-configuration record.

guix pack -R: This creates tarballs containing relocatable application bundles that rely on user namespaces. Starting from 1.0, guix pack -RR generates relocatable binaries that fall back to PRoot on systems where user namespaces are not supported.

Package additions and updates: More than 1,100 packages were added, bringing the total close to 10,000 packages; 2,104 packages were updated, and several system services were contributed.

Multiple language availability: The manual has been fully translated into French, and the German and Spanish translations are nearing completion. A Simplified Chinese translation is also planned, and anyone can help translate the manual into their language by joining the Translation Project.

The team also says that Guix 1.0 is a tool that is both serviceable for day-to-day computer usage and a great playground to explore. Whether users want to help with design, coding, maintenance, system administration, translation, testing, artwork, web services, funding, or organizing a Guix install party, contributions are welcome.

To know more about GNU Guix 1.0.0 in detail, read the official blog post.

Also read:
GNU Shepherd 0.6.0 releases with updated translations, faster services, and much more
GNU Nano 4.0 text editor releases!
GNU Octave 5.1.0 releases with new changes and improvements


GitHub deprecates and then restores Network Graph after GitHub users share their disapproval

Vincy Davis
02 May 2019
2 min read
Yesterday, GitHub announced in a blog post that it was deprecating the Network Graph from the repository's Insights panel and that visits to the page would be redirected to the forks page instead. Following this announcement, the network graph was removed. On the same day, however, GitHub deleted the blog post and added the network graph back.

The network graph is one of the more useful features for developers on GitHub. It displays the branch history of the entire repository network, including branches of the root repository and branches of forks that contain commits unique to the network.

Users of GitHub were alarmed to see the blog post about the removal of the network graph without any prior notification or a suitable replacement. For many users, this meant a significant burden of additional work.

https://twitter.com/misaelcalman/status/1123603429090373632
https://twitter.com/theterg/status/1123594154255187973
https://twitter.com/morphosis7/status/1123654028867588096
https://twitter.com/jomarnz/status/1123615123090935808

Following the backlash and requests to bring back the network graph, the Community Manager of GitHub posted on its community forum the same day that the change would be reverted based on users' feedback. Later, the blog post announcing the deprecation was removed and the network graph was back on the website. This has brought a huge sigh of relief amongst GitHub's users. The feature is valued for checking the state of a repository and the relationship between active branches.

https://twitter.com/dotemacs/status/1123851067849097217
https://twitter.com/AlpineLakes/status/1123765300862836737

GitHub has not yet officially commented on why it removed the network graph in the first place. A Reddit user put up an interesting shortlist of suspicions:

- The cost-benefit analysis from "The Top" determined that the compute time for generating the graph was too expensive, and so they "moved" the feature to a more premium account.
- "Moved" could also mean unceremoniously killing off the feature because some manager thought it wasn't shiny enough.
- Microsoft buying GitHub made (and will continue to make) GitHub worse, and this is just a harbinger of things to come.

Also read:
DockerHub database breach exposes 190K customer data including tokens for GitHub and Bitbucket repositories
Apache Software Foundation finally joins the GitHub open source community
Microsoft and GitHub employees come together to stand with the 996.ICU repository