
Tech News - Application Development

279 Articles

Neuron: An all-inclusive data science extension for Visual Studio

Prasad Ramesh
01 Nov 2018
3 min read
A team of students from Imperial College London has developed a new Visual Studio extension called neuron, which aims to be an all-inclusive add-on for data science tasks in Visual Studio.

Using neuron is pretty simple. You begin with a regular Python or R code file in a window; beside the code is neuron's window, which takes up half of the screen but starts out as a blank page. When you run your code snippets, the output shows up as interactive cards. Neuron can display outputs that are plain text, tables, images, graphs, or maps.

You can find neuron in the Visual Studio Marketplace. After installation, a button becomes visible whenever you have a supported file open. Neuron uses Jupyter Notebook in the background; given its popularity, Jupyter Notebook is probably already installed on your computer, and if not, you will be prompted to install it. Neuron supports more output types than Jupyter Notebook: you can also generate 3D graphs, maps, LaTeX formulas, Markdown, HTML, and static images. Each output is displayed in a card on the right-hand side, which can be resized, moved around, or expanded into a separate window. Neuron also keeps track of the code snippet associated with each card.

Why was neuron created?

Data scientists come from various backgrounds and use a set of standard tools like Python, its libraries, and the Jupyter Notebook. Microsoft approached the students from Imperial College London to integrate this varied set of tools into a single workspace: a Visual Studio extension that lets users run data analysis operations without breaking their current workflow. Neuron combines the advantages of an intelligent IDE, Visual Studio, with the rapid execution and visualization of Jupyter Notebook, all in a single window.

It is not a new idea

Neuron is not an entirely new idea, though.

https://twitter.com/jordi_aranda/status/1057712899542654976

Comments on Reddit also suggest that similar tools already exist in other IDEs. Reddit user kazi1 stated: "Seems more or less the same as Microsoft's current Jupyter extension (which is pretty meh). This seems like it's trying to reproduce the work already done by Atom's Hydrogen extension, why not contribute there instead." Another Redditor named procedural_ape said: "This looks like an awesome extension but shame on Microsoft for acting like this is their own fresh, new idea. Spyder has had this functionality for a while."

For more details, visit the Microsoft Blog; a demo is available on GitHub.

Visual Studio code July 2018 release, version 1.26 is out!

MIT plans to invest $1 billion in a new College of computing that will serve as an interdisciplinary hub for computer science, AI, data science

Microsoft releases the Python Language Server in Visual Studio
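To make the snippet-and-card workflow described above concrete, here is a minimal sketch of the kind of Python file you might run piece by piece in neuron: the DataFrame would come back as a table card and the plot as a graph card. The file name and data are purely illustrative, not taken from the neuron demo.

```python
# analysis.py - run these snippets one at a time from the editor;
# each result would appear as its own interactive output card.
import pandas as pd
import matplotlib.pyplot as plt

# Snippet 1: a small DataFrame, rendered as a table card
scores = pd.DataFrame({"model": ["A", "B", "C"],
                       "accuracy": [0.91, 0.87, 0.94]})
print(scores)

# Snippet 2: a bar chart of the same data, rendered as a graph card
scores.plot.bar(x="model", y="accuracy", legend=False)
plt.title("Model accuracy")
plt.show()
```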


Kotlin 1.3 released with stable coroutines, multiplatform projects and more

Prasad Ramesh
30 Oct 2018
3 min read
In the Kotlin 1.3 release, coroutines are now stable, scalability is better, and Kotlin/Native Beta is added.

Coroutines are stable in Kotlin 1.3

Coroutines provide a way to write non-blocking asynchronous code that's easy to understand. They are a useful tool for activities ranging from offloading work onto background workers to implementing complicated network protocols. The kotlinx.coroutines library hits 1.0 with this release. It provides a solid foundation for managing asynchronous jobs at various scales, including composition, cancellation, exception handling, and UI-specific use cases.

Kotlin/Native Beta

Kotlin/Native uses LLVM to compile Kotlin sources into standalone binaries that do not require a VM. Various operating systems and CPU architectures are supported, including iOS, Linux, Windows, and Mac; the support extends even to WebAssembly and embedded systems like STM32. Kotlin/Native has fully automatic memory management and can interoperate with C, Objective-C, and Swift. It exposes platform APIs like Core Foundation, POSIX, and any other native library of choice. The Kotlin/Native runtime promotes immutable data and blocks any attempt to share unprotected mutable state between threads. In Kotlin/Native, threads are abstracted away as a low-level implementation detail and replaced by workers, a safe and manageable way of achieving concurrency.

Multiplatform projects in Kotlin 1.3

Kotlin supports JVM, Android, JavaScript, and Native targets, so code can be reused across them, saving effort and time that can be spent on other tasks. The multiplatform libraries in Kotlin 1.3 cover everyday tasks such as HTTP, serialization, and managing coroutines. Using these libraries is the easiest way to write multiplatform code. You can also create custom multiplatform libraries which wrap platform-specific dependencies into a common API.

Tooling support for Kotlin/Native and Multiplatform

Kotlin 1.3 has tooling support for Kotlin/Native and multiplatform projects. This is available in IntelliJ IDEA Community Edition, IntelliJ IDEA Ultimate, and Android Studio. All of the code editing features such as error highlighting, code completion, navigation, and refactoring are available in all these IDEs.

Ktor 1.0 Beta

Ktor, a framework for connected applications that implements the entire HTTP stack asynchronously using coroutines, has reached Beta.

Other features

Some other features in the Kotlin 1.3 release include experimental support for inline classes, incremental compilation for Kotlin/JS, and unsigned integers. This release also features a sequence debugger for visualizing lazy computations, contracts to improve static analysis for library calls, and a no-arg entry point to provide a cleaner experience for new users.

To know more details about all the changes, visit the changelog.

KotlinConf 2018: Kotlin 1.3 RC out and Kotlin/Native hits beta

Kotlin/Native 0.8 recently released with safer concurrent programming

4 operator overloading techniques in Kotlin you need to know


What to expect in ASP.NET Core 3.0

Prasad Ramesh
30 Oct 2018
2 min read
ASP.NET Core 3.0 will come with some changes to the way projects work with frameworks. The .NET Core integration will be tighter, and third-party open source integration will improve.

Changes to shared frameworks in ASP.NET Core 3.0

In ASP.NET Core 1.0, packages were referenced as just packages. From ASP.NET Core 2.1 onwards, this was available as a .NET Core shared framework. ASP.NET Core 3.0 aims to reduce issues when working with a shared framework. This change removes some of the Json.NET (Newtonsoft.Json) and Entity Framework Core (Microsoft.EntityFrameworkCore.*) components from the ASP.NET Core 3.0 shared framework. For areas of ASP.NET Core that depend on Json.NET, there will be packages that support the integration, while the default areas will be updated to use the in-box JSON APIs. Also, Entity Framework Core will be shipped as "pure" NuGet packages.

Shift to .NET Core from .NET Framework

The .NET Framework will get fewer of the new features that come to .NET Core in future releases. This change is made so that existing applications don't break due to changes. To leverage the features coming to .NET Core, ASP.NET Core will only run on .NET Core starting from version 3.0. Developers currently using ASP.NET Core on the .NET Framework can continue to do so through the LTS support period, which lasts until August 21, 2021.

Third-party components will be filtered

Third-party components will be removed from the shared framework. But Microsoft will support the open source community with integration APIs, contributions to existing libraries by Microsoft engineers, and project templates to ensure smooth integration of these components. Work is also being done on streamlining the experience for building HTTP APIs and on a new API client generation system.

For more details, visit the Microsoft website.

.NET Core 3.0 and .NET Framework 4.8 more details announced

.NET Core 2.0 reaches end of life, no longer supported by Microsoft

Microsoft's .NET Core 2.1 now powers Bing.com


Qt Design Studio 1.0 released with Qt Photoshop Bridge, timeline-based animations and Qt Live Preview

Natasha Mathur
26 Oct 2018
2 min read
The Qt team released Qt Design Studio 1.0 yesterday. Qt Design Studio 1.0 introduces features such as the Qt Photoshop Bridge, timeline-based animations, and Qt Live Preview, among others. Qt Design Studio is a UI design and development environment which allows designers and developers around the world to rapidly prototype as well as develop complex and scalable UIs. Let's discuss the features of Qt Design Studio 1.0 in detail.

Qt Photoshop Bridge

Qt Design Studio 1.0 comes with the Qt Photoshop Bridge, which allows users to import their graphics designs from Photoshop. Users can also create reusable components directly in Photoshop, and exporting directly to specific QML types is also supported. In addition, the Qt Photoshop Bridge comes with an enhanced import dialog as well as basic merging capabilities.

Timeline-based animations

Timeline-based animations in Qt Design Studio 1.0 come with a timeline-/keyframe-based editor. This editor allows designers to easily create pixel-perfect animations without having to write a single line of code. You can also map and organize the relationship between timelines and states to create smooth transitions from state to state. Selecting multiple keyframes is also supported.

Qt Live Preview

Qt Live Preview lets you run and preview your application or UI directly on the desktop, on Android devices, as well as on Boot2Qt devices. You can see how your changes affect the UI live on your target device, and it also offers zoom in and out functionality.

Other features

You can insert a Qt 3D Studio element and preview it on the end target device with Qt Live Preview. There is a Qt Safe Renderer integration that lets you use Safe Renderer items and map them in your UI. You can use states and timelines to create screen flows and transitions.

Qt Design Studio is free; however, you will need a commercial Qt developer license to distribute the UIs created with it.

For more information, check out the official Qt Design Studio blog.

Qt 3D Studio 2.1 released with new sub-presentations, scene preview, and runtime improvements

Qt Creator 4.8 beta released, adds language server protocol

Qt Creator 4.7.0 releases!


The LLVM project is ditching SVN for GitHub. The migration to GitHub has begun.

Prasad Ramesh
25 Oct 2018
2 min read
The official LLVM monorepo was published on GitHub on Tuesday. Now is a good time to modify your workflows to use the monorepo as soon as possible; any current SVN-based workflows will be supported for at most one more year.

The move from SVN to GitHub for LLVM has been under consideration for a long time. After positive responses in the mailing list threads in favor of GitHub, the community has finally decided to set the migration plan in motion. Two round-table meetings were held this week with the developers to discuss the SVN to GitHub migration. Below are some highlights of these meetings.

The most important outcome from the meetings is an agreed-upon timeline for completing the transition. The latest monorepo prototype will be moved over to the LLVM organization GitHub project, and it has now begun mirroring the current SVN repository. Commits will still be made to the SVN repository just as they are currently.

All community members are advised to begin migrating workflows that rely on SVN or the current Git mirrors to the new monorepo. CI jobs or internal mirrors that pull from SVN or http://llvm.org/git/*.git should be modified to pull from the new monorepo instead, and adjusted to work with the new repository layout. Developers are advised to begin using the new monorepo for development. The provided scripts should help with committing code; they enable you to commit to SVN from the monorepo without having to use git-svn.

In a year, commit access to the SVN server will be turned off and commit access to the monorepo will be enabled. At that point, the monorepo will be the only source for the project.

Keep an eye on the LLVM monorepo GitHub repository. There is a getting started guide for working with a GitHub monorepo, and for more details you can take a look at the mailing list.

LLVM will be relicensing under Apache 2.0 start of next year

A libre GPU effort based on RISC-V, Rust, LLVM and Vulkan by the developer of an earth-friendly computer

LLVM 7.0.0 released with improved optimization and new tools for monitoring


GitLab 11.4 is here with merge request reviews and many more features

Prasad Ramesh
23 Oct 2018
3 min read
GitLab 11.4 was released yesterday with new features like merge request reviews, feature flags, and many more.

Merge request reviews in GitLab 11.4

This feature allows a reviewer to draft as many comments in a merge request as they like, ensure consistency, and then submit them all as a single action. A reviewer can spread their work over many sessions, as the drafts are saved to GitLab. The draft comments appear as normal individual comments once they are submitted. This gives individual team members flexibility: they can review code the way they want while staying compatible with the rest of the team.

Create and toggle feature flags for applications

This alpha feature gives users the ability to create and manage feature flags for software directly in the product. It is as simple as creating a new feature flag, validating it using simple API instructions, and then controlling the behavior of the software in the field via the feature flag within GitLab. Feature flags offer a feature toggle system for applications.

File tree for browsing merge request diffs

The file tree summarizes both the structure and the size of the change. It is similar to diff-stats, which provides an overview of the change, thereby improving navigation between diffs. Search allows reviewers to limit code review to a subset of files, which simplifies reviews by specialists.

Suggest code owners as merge request approvers

It is not always obvious which person is best placed to review changes. The code owners are now shown as suggested approvers when a merge request is created or edited, which makes assigning the right person easy.

New user profile page overview

GitLab 11.4 introduces a redesigned profile page overview. It shows your activity via the familiar but shortened contribution graph, and displays the latest activities and most relevant personal GitLab projects.

Set and show user status message within the user menu

Setting your status is even simpler with GitLab 11.4. There is a new "Set status" item in the user menu which provides a fresh modal allowing users to set and clear their status right within context. In addition, the status you set is also shown in your user menu, on top of your full name and username.

There are some more features like:

Move the ability to use includes in .gitlab-ci.yml from Starter to Core
Run all jobs only/except for modifications on a path/file
Add timed incremental rollouts to Auto DevOps
Support Kubernetes RBAC for GitLab managed apps
Auto DevOps support for RBAC
Support PostgreSQL DB operations for Auto DevOps
Other improvements for searching projects, UX improvements, and Geo improvements

For a complete list of features visit the GitLab website.

GitLab 11.3 released with support for Maven repositories, protected environments and more

GitLab raises $100 million, Alphabet backs it to surpass Microsoft's GitHub

GitLab is moving from Azure to Google Cloud in July

Microsoft brings an open-source model of Component Firmware Update (CFU) for peripheral developers

Prasad Ramesh
19 Oct 2018
4 min read
Microsoft has announced an open-source model for Component Firmware Update (CFU) for Windows developers. CFU enables delivering firmware updates for peripheral components through Windows Update by using CFU drivers. The protocol aims to let system and peripheral developers easily and automatically push firmware updates to Windows Update for their firmware components. CFU aims to deliver smooth updates via Windows Update and to verify the firmware version before download. CFU permits but does not specify authentication, encryption, rollback policies/methods, or recovery of bricked firmware.

Overview of CFU

The CFU driver is the host and is created by the device manufacturer. It is delivered via Windows Update, then installed once the device is detected by Windows.

Primary and sub-components

A CFU-compatible system follows a hierarchical model with a primary component and sub-components. A primary component implements CFU on the device side and can receive updates for itself and the connected sub-components. A device may have multiple primary components, with or without additional sub-components.

Offers and payloads

A CFU driver, which is the host, may contain multiple firmware images for a primary component and its sub-components. A package in the host consists of an offer, a payload, and other information. The offer contains information about the payload that allows the primary component to decide whether it is acceptable. The payload is the firmware image.

Offer sequence

The primary component can accept, reject, or skip an offered firmware update. On acceptance, the payload is delivered immediately. On rejection or skipping, the host cycles through all other offers in the list.

Host independence

The host's (CFU driver's) decisions are independent of the offers' contents or payloads. It does not apply any special logic; it simply sends the offers and the accepted payloads.

Payload delivery

Once an offer is accepted, the host proceeds to download the firmware image, referred to as the payload. Delivery is done in three phases: beginning, middle, and end. The payload is a set of addresses and fixed-size arrays of bytes.

Payload validation and authentication

Validating the incoming firmware update is an important aspect. The primary component should verify bytes after each write, ensuring that the data is stored properly before proceeding with the next set of data bytes. A CRC or hash should also be calculated on download and verified after the download is complete, ensuring the data wasn't modified in transit. In addition, a cryptographic signature mechanism is recommended to provide end-to-end protection, and an encryption mechanism can be employed for confidential downloads. After image authentication, the image properties should be validated against the offer and any other rules the device manufacturer may specify. CFU itself does not specify any rules to be applied.

Payload invocation

The CFU protocol runs at the application level in the primary component. The component can continue to do other tasks as long as it can receive and store the incoming payload without significant disruption. The only real disruption occurs when the new firmware must be invoked, and there are two recommended ways to minimize it. A very generic approach is to use a small bootloader image that selects one of multiple images to run when the device is reset, typically at boot time. The image selection algorithm is specific to the implementation, but is typically based on code version and an indication of successful image validation. Another invocation method is to physically swap the memory of the desired image with the active address space upon reset. A disadvantage of this method is that it requires specialized hardware; the advantage is that all images are statically linked to the same address space, eliminating the need for a bootloader.

CFU limitations

There are some limitations to the protocol. It cannot update a bricked component that can no longer run the protocol, and CFU does not provide any security by itself. The CFU protocol requires extra memory to store the incoming images, which is what makes non-disruptive updates possible. Updating sub-component images larger than the component's available storage requires dividing the sub-component image into smaller packages. The CFU protocol allows pausing the download, so care needs to be taken to validate properly. CFU assumes that the primary component has set validation rules; if they need to be changed, the component must first be successfully updated using the old rules, and only then can the new rules be applied.

For more details, visit the Microsoft website.

How the Titan M chip will improve Android security

Microsoft fixing and testing the Windows 10 October update after file deletion bug

Paul Allen, Microsoft co-founder, philanthropist, and developer dies of cancer at 65
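As a rough illustration of the offer and payload sequence described above, here is a short host-side sketch in Python. It only mirrors the flow outlined in the article; the component interface, chunk size, and CRC check are hypothetical stand-ins, not part of Microsoft's CFU specification or code.

```python
import zlib

CHUNK = 64  # illustrative block size in bytes, not mandated by CFU


def push_firmware(component, packages):
    """Cycle through offers; deliver and verify any payload the component accepts."""
    for offer, payload in packages:
        # The primary component decides: accept, reject, or skip the offer.
        if component.evaluate_offer(offer) != "accept":
            continue  # the host simply moves on to the next offer in the list

        # Payload delivery happens in three phases: beginning, middle, end.
        component.begin_transfer(size=len(payload))
        for addr in range(0, len(payload), CHUNK):
            component.write_block(addr, payload[addr:addr + CHUNK])
        component.end_transfer()

        # Verify the image was not modified in transit before it is invoked.
        if component.read_crc() != zlib.crc32(payload):
            raise RuntimeError("payload CRC mismatch; do not invoke the new image")
```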


The new RStudio Package Manager is now generally available

Natasha Mathur
19 Oct 2018
2 min read
The RStudio team announced the general availability of their latest RStudio professional product, RStudio Package Manager, two days ago. It introduces features such as CRAN access, approved subsets of CRAN packages, adding internal packages from GitHub, and an optimized experience for R users, among others.

RStudio Package Manager is an on-premises server product that helps teams and organizations centralize and organize R packages. In other words, it allows R users and the IT team to work together to build a central repository for R packages. Let's discuss the features of this new package manager.

CRAN access

RStudio Package Manager allows R users to access CRAN (The Comprehensive R Archive Network) without requiring a network exception on every production node. It also helps automate CRAN updates on your schedule. Moreover, you can optimize disk usage and only download the packages that you need. However, RStudio Package Manager does not provide binary packages from CRAN, only source packages; this limitation will be addressed in the future.

Approved subsets of CRAN packages

RStudio Package Manager enables admins to create approved subsets of CRAN packages, and it makes sure that those subsets remain stable even as packages are added or updated.

Adding internal packages using the CLI

Administrators can now add internal packages using the CLI. For instance, if your internal packages are in Git, RStudio Package Manager can automatically track your Git repositories and make new commits accessible to users.

Optimized experience for R users

RStudio Package Manager offers a seamless experience optimized for R users. For instance, all packages are versioned, which automatically makes older versions accessible to users. The package manager can also record usage statistics; these metrics help administrators conduct audits and make it easy for R users to discover the most popular and useful packages.

For more information, check out the official RStudio Package Manager blog.

Getting Started with RStudio

Introducing R, RStudio, and Shiny


LLVM will be relicensing under Apache 2.0 start of next year

Prasad Ramesh
18 Oct 2018
3 min read
After efforts going on since last year, LLVM, the set of compiler-building tools, is closer to an Apache 2.0 license. Currently, the project has its own open source license created by the LLVM team. Based on the mailing list discussions, the move to Apache 2.0 is now going forward.

Why the shift to Apache 2.0?

The current license is a bit vague, was not very welcoming to contributors, and had some patent issues. Hence the decision to shift to the industry-standard Apache 2.0. The new license was drafted by Heather Meeker, the same lawyer who worked on the Commons Clause. The goals of the relicensing, as listed on the LLVM website, are:

Encourage ongoing contributions to LLVM by preserving a low barrier to entry for contributors.
Protect users of LLVM code by providing explicit patent protection in the license.
Protect contributors to the LLVM project by explicitly scoping their patent contributions with this license.
Eliminate the schism between runtime libraries and the rest of the compiler that makes it difficult to move code between them.
Ensure that LLVM runtime libraries may be used by other open source and proprietary compilers.

The plan to shift LLVM to Apache 2.0

The new license is not plain Apache 2.0; the license header reads "Apache License v2.0 with LLVM Exceptions". The exceptions are related to compiling source code; to learn more about them, follow the mailing list. The team plans to install the new license and a developer policy that references both the new and old licenses. From that point, all subsequent contributions will be under both licenses.

They have a two-fold plan to ensure contributors are aware. They are going to ask many active contributors (both enterprises and individuals) to explicitly sign an agreement to relicense their contributions; signing makes the change clear and known while also covering historical contributions. For any other contributors, commit access will be revoked until the LLVM organization can confirm that they are covered by one of the agreements.

The agreements

For the plan to work, both individuals and companies need to sign an agreement to relicense, and there is a process for each.

Individuals

Individuals have to fill out a form with the necessary information, such as email addresses and potential employers, to effectively relicense their contributions. The form contains a link to a DocuSign agreement to relicense any of their individual contributions under the new license. Signing the document makes things easier, as it avoids confusion about whether a contribution is covered by some company. The form and agreement are available on Google Forms.

Companies

There is a DocuSign agreement for companies too. Some companies, like Argonne National Laboratory and Google, have already signed the agreement.

There will be no explicit copyright notice, as the team doesn't feel it is worthwhile. The current planned timeline is to install the new developer policy and the new license after the LLVM 8.0 release in January 2019. For more details, you can read the mail.

A libre GPU effort based on RISC-V, Rust, LLVM and Vulkan by the developer of an earth-friendly computer

LLVM 7.0.0 released with improved optimization and new tools for monitoring

OpenMP, libc++, and libc++abi, are now part of llvm-toolchain package


GNU Guile 2.9.1 beta released with JIT native code generation to speed up all Guile programs

Prasad Ramesh
15 Oct 2018
2 min read
GNU has released Guile 2.9.1 beta of the extension language for the GNU project. It is the first pre-release leading up to the 3.0 release series. In comparison to the current stable series, 2.2.x, Guile 2.9.1 brings support for just-in-time native code generation to speed up all Guile programs.

Just-in-time code generation in Guile 2.9

Relative to Guile 2.2, Guile programs now run up to 4 times faster, thanks to just-in-time (JIT) native code generation. JIT compilation is enabled automatically in this release. To disable it, configure Guile with either `--enable-jit=no' or `--disable-jit'. The default is `--enable-jit=auto', which enables the JIT. JIT support is currently limited to x86-64 platforms; eventually, it will expand to all architectures supported by GNU Lightning. Users on other platforms can try passing `--enable-jit=yes' to see if JIT is available on their platform.

Lower-level bytecode

Relative to the virtual machine in Guile 2.2, Guile's VM instruction set is now more low-level. This allows expressing advanced optimizations, like type check elision or integer devirtualization, and makes JIT code generation easier. This low-level change can mean that, for a given function, the corresponding number of instructions in Guile 3.0 may be higher than in Guile 2.2, which can lead to slowdowns when the function is interpreted.

GOOPS classes are not redefinable by default

All GOOPS classes were redefinable, in theory if not in practice. This was supported by an indirection (a dereference) in all "struct" instances. Even though only a subset of structs actually need redefinition, the indirection has been removed to speed up Guile records. This also allows immutable Guile records to eventually be described by classes, and enables some optimizations in core GOOPS classes that shouldn't be redefined. GOOPS now distinguishes between classes that are redefinable and those that are not: classes created with GOOPS are not redefinable by default, and to make a class redefinable, it should be an instance of `<redefinable-class>'.

Also, scm_t_uint8 and friends are deprecated in favor of the C99 stdint.h types. This release does not offer any API or ABI stability guarantees, so stick to the stable 2.2 release if you want a stable working version.

You can read more in the release notes on the GNU website.

GNU nano 3.0 released with faster file reads, new shortcuts and usability improvements

GIMP gets $100K of the $400K donation made to GNOME

Fedora 29 beta brings Modularity, GNOME 3.30 support and other changes

GNOME 3.32 says goodbye to application menus

Bhagyashree R
12 Oct 2018
3 min read
On Tuesday, GNOME announced that it is planning to retire app menus in its next release, GNOME 3.32. Application menus, or app menus, are the menus you see in the GNOME 3 top bar, showing the name and icon of the current app.

Why are application menus being removed in GNOME?

The following are the reasons GNOME is bidding adieu to application menus:

Poor user engagement: Since their introduction, application menus have been a source of usability issues. The app menus haven't been performing well over the years, despite efforts to improve them; users don't really engage with them.

Two different locations for menu items: Another reason for application menus not doing well could be the split between app menus and the menus in application windows. With two different locations for menu items, it becomes easy to look in the wrong place, particularly when one menu is visited more frequently than the other.

Limited adoption by third-party applications: Application menus have seen limited adoption by third-party applications. They are often kept empty, other than the default quit item, and people have learned to ignore them.

What guidelines must developers follow?

All GNOME applications will have to move the items from their app menu to a menu inside the application window. Here are the guidelines that developers need to follow:

Remove the app menu and move its menu items to the primary menu.
If required, split the primary menu into primary and secondary menus.
Rename the about menu item from "About" to "About application-name".

Guidelines for the primary menu

The primary menu is the menu you see in the header bar, under the icon with three stacked lines, also referred to as the hamburger menu.

1. In addition to app menu items, primary menus can also contain other menu items.
2. The quit menu item is not required, so it is recommended to remove it from all locations.
3. Move other app menu items to the bottom of the primary menu.
4. A typical arrangement of app menu items in a primary menu is a single group of items: Preferences, Keyboard Shortcuts, Help, About application-name (see the short sketch at the end of this article).
5. Applications that use a menu bar should remove their app menu and move any items to the menu bar menus.

If an application fails to remove its application menu by the release of GNOME 3.32, the menu will be shown in the app's header bar, using the fallback UI that is already provided by GTK.

Read the full announcement on GNOME's official website.

Fedora 29 beta brings Modularity, GNOME 3.30 support and other changes

GNOME 3.30 released with improved Desktop performance, Screen Sharing, and more

GIMP gets $100K of the $400K donation made to GNOME
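To make the primary-menu guidance above concrete, here is a minimal PyGObject sketch. It assumes GTK 3, and the window title and action names (app.preferences and so on) are placeholders rather than anything prescribed by GNOME; it simply shows the items that used to live in the app menu grouped in a hamburger-style menu in the header bar.

```python
import gi
gi.require_version("Gtk", "3.0")
from gi.repository import Gtk, Gio

window = Gtk.Window(title="Example")
header = Gtk.HeaderBar(title="Example", show_close_button=True)
window.set_titlebar(header)

# The primary ("hamburger") menu now holds the former app menu items.
menu = Gio.Menu()
menu.append("Preferences", "app.preferences")
menu.append("Keyboard Shortcuts", "win.show-help-overlay")
menu.append("Help", "app.help")
menu.append("About Example", "app.about")  # "About" becomes "About application-name"

button = Gtk.MenuButton()   # a real app would also set the open-menu-symbolic icon
button.set_menu_model(menu)
header.pack_end(button)

window.connect("destroy", Gtk.main_quit)
window.show_all()
Gtk.main()
```

Note that the quit item is simply dropped, in line with the guidelines, since closing the window already covers it.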


Qt Creator 4.8 beta released, adds language server protocol

Prasad Ramesh
12 Oct 2018
2 min read
The Qt team announced the release of Qt Creator 4.8 Beta yesterday. It adds generic programming language support and some more experimental C++ features since 4.7.

Generic programming languages in Qt Creator 4.8 Beta

Qt Creator 4.8 Beta introduces experimental support for the language server protocol (LSP). Many programming languages have a language server, and Go also has plans to add one. A language server provides features like code completion and reference finding to IDEs. By providing a client for the language server protocol, Qt Creator gains a degree of support for many programming languages. Currently, Qt Creator supports code completion, highlighting of the symbol under the cursor, and jumping to the symbol definition, and it integrates diagnostics from the language server. Highlighting and indentation are still provided by the generic highlighter. The client has mostly been tested with Python. Currently, there is no support for language servers that require special handling.

C++ support

There are some experimental C++ features added in this release.

Editing compilation databases: A compilation database is a list of files and the compiler flags used to compile them. You can now open a compilation database as a project solely for editing and navigating code. You can try it by enabling the CompilationDatabaseProjectManager plugin.

Clang-format-based indentation: Auto-indentation is done via LibFormat, the backend used by clang-format. To try this, enable the ClangFormat plugin.

Cppcheck diagnostics: The diagnostics generated by the Cppcheck tool are integrated into the editor. Enable the Cppcheck plugin to use it.

In addition to many fixes, the Clang code model can now jump to the symbol indicated by the auto keyword, and it can generate a compilation database from the information the code model has. This can be done via Build | Generate Compilation Database.

Debugging

There is now support for running multiple debuggers on one or more executables simultaneously. When multiple debuggers are running, you can switch between them with a new drop-down menu in Debug mode.

More about the various improvements and fixes can be found in the changelog. For further details, visit the Qt Blog. Qt Creator 4.8 can be downloaded from the Qt website.

Qt 3D Studio 2.1 released with new sub-presentations, scene preview, and runtime improvements

How to create multithreaded applications in Qt

How to Debug an application using Qt Creator


Vim-go creator, Fatih Arslan, takes an “indefinite sabbatical” from all his open source projects as he’s burnt out

Natasha Mathur
11 Oct 2018
6 min read
The creator of vim-go, Fatih Arslan, announced on his personal blog yesterday that he is taking an "indefinite sabbatical" from his vim-go projects. He had been working on the project for the past 4.5 years. Arslan says that he won't be maintaining vim-go anymore and is uncertain about when he'll come back to work on it. For now, he'll only be working on a select few small projects that don't need active maintenance.

"I'm working for DigitalOcean..this is my full-time job. I have a family to take care of and just like any other grown-up in the world, you do what you have to do. However, there is no place for Go tooling and editors here. It's a hobby and passion. But if a hobby feels like it becomes a second full-time job, something is very wrong. The time has come to end this craziness," says Arslan.

What's interesting is that Arslan is not the first from the open source community to go on a break. This seems to be an ongoing trend in the open source community lately, which started with Guido van Rossum, Python's founder, taking a "permanent vacation from being BDFL" in July. He does continue to work in his capacity as a core developer. Guido's decision to take a break stemmed from the physical, mental, and emotional toll that his role had taken on him over the past years. He had mentioned that he was "tired, and need a very long break".

Arslan's reason seems fairly similar, as he said, "For the last one year, I'm struggling to maintain my side projects. I feel like I'm burnt out. Working on a side project is fun until it becomes your second full-time job. One thing that I'm sure is, I'm not happy how my day to day life is evolving around me."

Another recent example is Linus Torvalds, who has been working on the Linux kernel for almost 30 years. Torvalds opened up about going on a break over his "hurtful" behavior that "contributed to an unprofessional environment". "I need to take a break to get help on how to behave differently and fix some issues in my tooling and workflow," said Torvalds. Even though Linus left to take time for self-reflection and was not burnt out, it is symptomatic of the same underlying issue: when one wants to accomplish a lot in a short period of time, one tends to find efficiencies where one can, and efficient communication is not always effective communication, as it may come across as terse, sarcastic, or uncaring.

Arslan mentioned that when he first started with vim-go, it was fun, rewarding, and solved a lot of his problems. Vim was his favorite editor, and vim-go enabled him to write Go inside Vim in a very efficient and productive way. As he worked on vim-go, he got the chance to create many other smaller Go packages and tools; some of these, such as the color and structs packages, even became popular. "Again, it solved many problems and back then I wanted to use Go packages that are easy to use and just works out of the box. I also really like to work on Go tooling and editors. But this is not the case for many of my projects, especially vim-go. With the popularity of all these projects, my day to day work also increased," says Arslan.

The problem of burnout seems epidemic in the open source community. Maintainers work long hours, neglect themselves and their personal lives, and don't always get to see the results they should for such hard work. Arslan mentioned that it used to take him an extra 10-20 hours per week, outside of his day job, to maintain these projects. He could "no longer maintain this tempo", as every day he would receive multiple GitHub emails regarding pull requests, issues, feedback, fixes, and so on, which was affecting his well-being. It also didn't make any sense to him "economically". "It's very hard for me to do this, but trust me I'm thinking about this for a long time. I cannot continue this anymore without sacrificing my own well being," mentions Arslan.

Who will look after vim-go now?

Arslan's sabbatical won't affect vim-go's development, as he has handed the duty of maintaining vim-go to two of its full-time contributors, Martin Tournoij and Billie Cleek. Billie Cleek, who worked with Arslan at DigitalOcean, will be the lead of the vim-go project. Cleek has already made hundreds of contributions to vim-go (he recently added unified async support for Vim and Neovim) and is well versed with vim-go's code base. "I don't know if I could find anyone else that would make a great fit than him. I'm very lucky to have someone like him. The vim-go community will be in very good hands," said Arslan.

As for the other popular Go projects and packages, Arslan will go over them one last time and archive the repos, such as color, structs, camelcase, images, vim-hclfmt, and many others. This means that you'll still be able to fetch these repos and use them within your projects. Arslan believes that most of these packages are in "a very good state" and don't require any more additions. That said, there are three projects that Arslan will still maintain: gomodifytags, structtag, and motion. The gomodifytags project was Arslan's most enjoyable project so far, as it had zero bugs and a simple design. These projects will be maintained in a "sleep mode", with Arslan only looking at "serious issues".

"I have now so much time that I'll be spending for myself...I have a side project that I'm working for a couple of months privately..(I can) play more with my son and just hang out all day, without doing a single thing. The weekends belong to me. I no longer have to worry about the last opened pull request's to vim-go or my other Go projects..it just feels so refreshing. I suggest everyone do the same thing, take a step back and see what's happening around you. It'll help you to become a better yourself," says Arslan.

Public reaction towards Arslan's decision is largely positive:

https://twitter.com/rakyll/status/1050053991088840704
https://twitter.com/idanyliuk/status/1050053303814541312
https://twitter.com/corylanou/status/1050132111745794052

For more coverage, read Arslan's official announcement.

Golang 1.11 is here with modules and experimental WebAssembly port among other updates

Why Golang is the fastest growing language on GitHub

Golang 1.11 rc1 is here with experimental port for WebAssembly!

GitHub comes to your code editor; GitHub security alerts now have machine intelligence

Savia Lobo
11 Oct 2018
3 min read
On Tuesday, the GitHub team announced that they will be making life easier for developers by bringing Git right into your editor. Insights into this extension will be shared on Day 2 (17 October) of the two-day GitHub Universe conference. GitHub, in collaboration with the Visual Studio Code team at Microsoft, will brief users on this update during the talk "Cross Company Collaboration: Extending GitHub to a New IDE". Sarah Guthals, an Engineering Manager at GitHub, mentions in her post, "We've been working since 2015 to provide a GitHub experience that meets you where you spend the majority of your time: in your editor."

What's in store for developers from different communities?

For .NET developers

In 2015, GitHub brought all Visual Studio developers an extension that supports GitHub.com and GitHub Enterprise engagements within the editor. Sarah says, "today you can complete an entire pull request review without ever leaving Visual Studio."

For the Atom community

GitHub also supports a first-class Git and GitHub experience for Atom developers. Users can now access basic Git operations like staging, committing, and syncing, alongside more complex collaboration with the recently released pull request experience.

For game developers

Unity game developers can now use Git within Unity for the first time to clone and sync with GitHub.com and lock files.

The conflux: GitHub and Visual Studio Code

In the talk to be presented in the coming week, the Visual Studio Code team at Microsoft and the editor tools team at GitHub will share how the two teams began exploring the possibility of an integration between their products. The team at Microsoft started to design a pull request experience within Visual Studio Code, while the GitHub team prototyped one modeled after the same experience in the Visual Studio IDE. This brought users an integrated GitHub experience in Visual Studio Code, supported by the Visual Studio Code API. The new extension gives developers the ability to:

Authenticate with GitHub within VS Code (for GitHub.com and GitHub Enterprise)
List pull requests associated with your current repository, view their descriptions, and browse the diffs of changed files
Validate pull requests by checking them out and testing them without having to leave VS Code

GitHub applies machine intelligence to its security alerts

GitHub also announced that it has built a machine learning model that can scan text associated with public commits (the commit message and linked issues or pull requests) to filter out those related to possible security upgrades. With this smaller batch of commits, the model uses the diff to understand how required version ranges have changed. It then aggregates across a specific timeframe to get a holistic view of all dependencies that a security release might affect. Finally, the model outputs a list of packages and version ranges it thinks require an alert and that aren't currently covered by any known CVE in their system.

To know more about these updates, visit the GitHub blog. You can also learn more about the GitHub and Visual Studio Code integration in Sarah Guthals' GitHub post.

GitHub's new integration for Jira Software Cloud aims to provide teams a seamless project management experience

4 myths about Git and GitHub you should know about

7 tips for using Git and GitHub the right way


NVTOP: An htop like monitoring tool for NVIDIA GPUs on Linux

Prasad Ramesh
09 Oct 2018
2 min read
People started using htop when top just didn't provide enough information. Now there is NVTOP, a tool that looks similar to htop but displays information about the processes running on your NVIDIA GPUs. It works on Linux systems and shows, per process, details such as memory used and which GPU it runs on, as well as the total GPU and memory usage. The first version of this tool was released in July last year; the latest change made the process list and command options scrollable.

Some of the features of NVTOP are:

Sorting by column
Selecting / ignoring a specific GPU by ID
Killing a selected process
A monochrome option

It has multi-GPU support and can display the running processes from all of your GPUs. The information printed out is similar to what htop would display (a screenshot is available on the project's GitHub page).

There is also a manual page to give some guidance on using NVTOP. It can be accessed with this command: man nvtop

There are OS-specific installation steps on GitHub for Ubuntu/Debian, Fedora/RedHat/CentOS, OpenSUSE, and Arch Linux.

Requirements

Two libraries are needed to build and run NVTOP:

The NVIDIA Management Library (NVML), for querying GPU information.
The ncurses library, for the user interface and its colors.

Supported GPUs

NVTOP works only with NVIDIA GPUs and runs on Linux systems. One of its dependencies is the NVML library, which does not support some queries from GPUs older than the Kepler microarchitecture; that is, anything before the GeForce 600 series, GeForce 700 series, or GeForce 800M likely won't work. For AMD users, there is a tool called radeontop.

The tool is provided under the GPLv3 license. For more details, head to the NVTOP GitHub repository.

NVIDIA leads the AI hardware race. But which of its GPUs should you use for deep learning?

NVIDIA announces pre-orders for the Jetson Xavier Developer Kit, an AI chip for autonomous machines, at $2,499

NVIDIA open sources its material definition language, MDL SDK