Tech News - Programming

573 Articles

LLVM will be relicensing under Apache 2.0 at the start of next year

Prasad Ramesh
18 Oct 2018
3 min read
After efforts that began last year, LLVM, the set of compiler-building tools, is moving closer to an Apache 2.0 license. The project currently ships under its own open source license, created by the LLVM team; based on the mailing list discussions, the move forward is to Apache 2.0.

Why the shift to Apache 2.0?

The current license is a bit vague, was not very welcoming to contributors, and had some patent issues, so the team decided to shift to the industry-standard Apache 2.0. The new license was drafted by Heather Meeker, the same lawyer who worked on the Commons Clause. The goals of the relicensing, as listed on the LLVM website, are:

- Encourage ongoing contributions to LLVM by preserving a low barrier to entry for contributors.
- Protect users of LLVM code by providing explicit patent protection in the license.
- Protect contributors to the LLVM project by explicitly scoping their patent contributions with this license.
- Eliminate the schism between runtime libraries and the rest of the compiler that makes it difficult to move code between them.
- Ensure that LLVM runtime libraries may be used by other open source and proprietary compilers.

The plan to shift LLVM to Apache 2.0

The new license is not plain Apache 2.0: the license header reads "Apache License v2.0 with LLVM Exceptions", where the exceptions relate to compiling source code. To know more about the exceptions, follow the mailing list. (A sketch of what such a header might look like appears at the end of this piece.)

The team plans to install the new license together with a developer policy that references both the new and old licenses. From that point, all subsequent contributions will be made under both licenses. There is a two-fold plan to ensure contributors are aware. First, many active contributors (both companies and individuals) will be asked to explicitly sign an agreement to relicense their contributions; signing makes the change clear and known while also covering historical contributions. For any other contributors, commit access will be revoked until the LLVM organization can confirm that they are covered by one of the agreements.

The agreements

For the plan to work, both individuals and companies need to sign an agreement to relicense, and there is a process for each.

Individuals: Individuals fill out a form with the necessary information, such as email addresses and potential employers, to effectively relicense their contributions. The form links to a DocuSign agreement to relicense any of their individual contributions under the new license. Signing the document avoids confusion over whether a contribution is covered by a company agreement. The form and agreement are available on Google Forms.

Companies: There is a DocuSign agreement for companies too. Some companies, such as Argonne National Laboratory and Google, have already signed it. There will be no explicit copyright notice, as the team does not feel it is worthwhile.

The current planned timeline is to install the new developer policy and the new license after the LLVM 8.0 release in January 2019. For more details, you can read the mail.

A libre GPU effort based on RISC-V, Rust, LLVM and Vulkan by the developer of an earth-friendly computer
LLVM 7.0.0 released with improved optimization and new tools for monitoring
OpenMP, libc++, and libc++abi, are now part of llvm-toolchain package
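For illustration, here is the kind of per-file license header the "Apache License v2.0 with LLVM Exceptions" scheme implies. The exact wording was still being settled on the mailing list at the time, so treat this as a sketch rather than the final text:

```cpp
//===-- Example.cpp - An illustrative LLVM source file ---------*- C++ -*-===//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
//
//===----------------------------------------------------------------------===//
```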


Satya Nadella reflects on Microsoft's progress in areas of data, AI, business applications, trust, privacy and more.

Sugandha Lahoti
17 Oct 2018
5 min read
Microsoft CEO Satya Nadella published his letter to shareholders from the company's 2018 annual report on LinkedIn yesterday. He talks about Microsoft's accomplishments in the past year and the results and progress of its work in the modern workplace, business applications, infrastructure, data, AI, and gaming. He also describes the data and privacy rules adopted by Microsoft and its aim to "instill trust in technology across everything they do."

Microsoft's results and progress

Data and AI: Azure Cosmos DB has already exceeded $100 million in annualized revenue. The company also saw rapid customer adoption of Azure Databricks for data preparation, advanced analytics, and machine learning scenarios. The Azure Bot Service has nearly 300,000 developers, and Microsoft is on the road to building the world's first AI supercomputer in Azure. Microsoft also acquired GitHub, recognizing the increasingly vital role developers will play in value creation and growth across every industry.

Business applications: Microsoft's investments in Power BI have made it the leader in business analytics in the cloud. Its Open Data Initiative with Adobe and SAP will help customers take control of their data and build new experiences that truly put people at the center. HoloLens and mixed reality will be used for designing for first-line workers, who account for 80 percent of the world's workforce. New solutions powered by LinkedIn and the Microsoft Graph help companies manage talent, training, and sales and marketing.

Applications and infrastructure: Azure revenue grew 91 percent year-over-year, and the company is investing aggressively to build Azure as the world's computer. It added nearly 500 new Azure capabilities in the past year, focused on both existing workloads and new workloads such as IoT and edge AI. Microsoft expanded its global data center footprint to 54 regions and introduced Azure IoT, Azure Stack, and Azure Sphere.

Modern workplace: More than 135 million people use Office 365 commercial every month. Outlook Mobile is used on 100 million iOS and Android devices worldwide. Microsoft Teams is used by more than 300,000 organizations of all sizes, including 87 of the Fortune 100. Windows 10 is active on nearly 700 million devices around the world.

Gaming: The company surpassed $10 billion in gaming revenue this year. Xbox Live now has 57 million monthly active users, and Microsoft is investing in new services like Mixer and Game Pass. It also added five new gaming studios this year, including PlayFab, to build a cloud platform for the gaming industry across mobile, PC, and console.

Microsoft's impact around the globe

Nadella highlighted that companies such as Coca-Cola, Chevron Corporation, and ZF Group, a car parts manufacturer in Germany, are using Microsoft's technology to build their own digital capabilities. Walmart is using Azure and Microsoft 365 to transform the shopping experience for customers. In Kenya, M-KOPA Solar, one of Microsoft's partners, connected homes across sub-Saharan Africa to solar power using the Microsoft Cloud. Dynamics 365 was used in Arizona to improve outcomes among the state's 15,000 children in foster care. MedApp is using HoloLens in Poland to help cardiologists visualize a patient's heart as it beats in real time. In Cambodia, underserved children in rural communities are learning to code with Minecraft.

How Microsoft is handling trust and responsibility

Microsoft's motto is "instilling trust in technology across everything they do." Nadella says, "We believe that privacy is a fundamental human right, which is why compliance is deeply embedded in all our processes and practices." Microsoft has extended the data subject rights of GDPR to all its customers around the world, not just those in the European Union, and advocated for the passage of the CLOUD Act in the U.S. It also led the Cybersecurity Tech Accord, which has been signed by 61 global organizations, and is calling on governments to do more to make the internet safe. The company announced the Defending Democracy Program to work with governments around the world to help safeguard voting, and introduced AccountGuard to offer advanced cybersecurity protections to political campaigns in the U.S. It is also investing in tools for detecting and addressing bias in AI systems and advocating for government regulation, and it is addressing society's most pressing challenges with new programs like AI for Earth, a five-year, $50M commitment to environmental sustainability, and AI for Accessibility to benefit people with disabilities.

Nadella further adds, "Over the past year, we have made progress in building a diverse and inclusive culture where everyone can do their best work." Microsoft has nearly doubled the number of women corporate vice presidents since FY16 and increased African American/Black and Hispanic/Latino representation by 33 percent. He concludes, "I'm proud of our progress, and I'm proud of the more than 100,000 Microsoft employees around the world who are focused on our customers' success in this new era."

Read the full letter on LinkedIn.

Paul Allen, Microsoft co-founder, philanthropist, and developer dies of cancer at 65
'Employees of Microsoft' ask Microsoft not to bid on US Military's Project JEDI in an open letter
Microsoft joins the Open Invention Network community, making 60,000 of its patents accessible to fellow members


Google Cloud announces new Go 1.11 runtime for App Engine

Bhagyashree R
17 Oct 2018
2 min read
Yesterday, Google Cloud announced a new Go 1.11 runtime for the App Engine standard environment. It provides all the benefits of App Engine, such as paying only for what you use, automatic scaling, and managed infrastructure. Starting with Go 1.11, which launched in August this year, Go on App Engine has no limits on application structure, supported packages, context.Context values, or HTTP clients.

What has changed in the Go 1.11 runtime compared to Go 1.9?

1. You can now specify the Go 1.11 runtime in your app.yaml file by adding the following line: runtime: go111
2. Each of your services must include a package main statement in at least one source file.
3. The appengine build tag is deprecated and will no longer be used when building an app for deployment.
4. The way you import dependencies has changed. You can specify dependencies in this runtime in one of two ways: by putting your application and related code in your GOPATH, or by creating a go.mod file to define your module.
5. Google App Engine no longer modifies the Go toolchain to include the appengine package. Using the Google Cloud client library or third-party libraries instead of the App Engine-specific APIs is recommended.
6. You deploy services that use the Go 1.11 runtime with the gcloud app deploy command. You can still use the appcfg.py commands with the Go 1.9 runtime, but the gcloud command-line tool is preferred. (A minimal service is sketched at the end of this piece.)

This release of the Go 1.11 runtime in App Engine uses the latest stable release of Go 1.11 and will automatically update to new minor versions upon deployment, but not to new major versions. It is currently in beta and might change in backward-incompatible ways in the future. You can read more about the Go 1.11 runtime on The Go Blog and in the documentation published by Google.

Golang plans to add a core implementation of an internal language server protocol
Why Golang is the fastest growing language on GitHub
Golang 1.11 is here with modules and experimental WebAssembly port among other updates
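To make the new layout concrete, here is a minimal sketch of a Go 1.11 service. Only the runtime: go111 line comes from the announcement; the handler and fallback port are illustrative. The service reads the port App Engine passes in via the PORT environment variable:

```go
// main.go — the service must be a package main.
package main

import (
	"fmt"
	"log"
	"net/http"
	"os"
)

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "Hello from the Go 1.11 runtime!")
	})

	// App Engine supplies the listen port via the PORT environment
	// variable; fall back to 8080 for local runs.
	port := os.Getenv("PORT")
	if port == "" {
		port = "8080"
	}
	log.Fatal(http.ListenAndServe(":"+port, nil))
}
```

With an app.yaml next to it containing just runtime: go111 (and, if you opt out of GOPATH, a go.mod defining the module), the service deploys with gcloud app deploy.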


GNU Guile 2.9.1 beta released with JIT native code generation to speed up all Guile programs

Prasad Ramesh
15 Oct 2018
2 min read
GNU released Guile 2.9.1, a beta of the extension language for the GNU project and the first pre-release leading up to the 3.0 release series. Compared to the current stable series, 2.2.x, Guile 2.9.1 brings support for just-in-time native code generation, speeding up all Guile programs.

Just-in-time code generation in Guile 2.9

Relative to Guile 2.2, Guile programs now run up to 4 times faster, thanks to just-in-time (JIT) native code generation. JIT compilation is enabled automatically in this release. To disable it, configure Guile with either `--enable-jit=no' or `--disable-jit'. The default is `--enable-jit=auto', which enables the JIT. JIT support is currently limited to x86-64 platforms; eventually it will expand to all architectures supported by GNU Lightning. Users on other platforms can try passing `--enable-jit=yes' to see if JIT is available on their platform.

Lower-level bytecode

Relative to the virtual machine in Guile 2.2, Guile's VM instruction set is now more low-level. This allows expressing advanced optimizations, like type-check elision or integer devirtualization, and makes JIT code generation easier. A consequence is that, for a given function, the corresponding number of instructions in Guile 3.0 may be higher than in Guile 2.2, which can mean slowdowns when the function is interpreted.

GOOPS classes are not redefinable by default

Previously, all GOOPS classes were redefinable, in theory if not in practice. This was supported by an indirection in all "struct" instances. Since only a subset of structs needs redefinition, the indirection has been removed to speed up Guile records. The change also allows immutable Guile records to eventually be described by classes, and enables some optimizations in core GOOPS classes that shouldn't be redefined. GOOPS now distinguishes redefinable from non-redefinable classes, and classes created with GOOPS are not redefinable by default; to make a class redefinable, it should be an instance of `<redefinable-class>' (see the sketch below). Also, scm_t_uint8 and friends are deprecated in favor of the C99 stdint.h types.

This release does not offer any API or ABI stability guarantees; stick to the stable 2.2 series if you want a stable working version. You can read more in the release notes on the GNU website.

GNU nano 3.0 released with faster file reads, new shortcuts and usability improvements
GIMP gets $100K of the $400K donation made to GNOME
Fedora 29 beta brings Modularity, GNOME 3.30 support and other changes
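Here is a brief sketch of the new GOOPS default, assuming only what the release notes state; the class names and slots are illustrative:

```scheme
(use-modules (oops goops))

;; Defined the usual way, this class is NOT redefinable in 2.9/3.0,
;; which lets Guile drop the extra indirection in its instances.
(define-class <point> ()
  (x #:init-value 0)
  (y #:init-value 0))

;; To opt back in to class redefinition, make the class an instance
;; of <redefinable-class> via the #:metaclass option.
(define-class <shape> ()
  (sides #:init-value 3)
  #:metaclass <redefinable-class>)
```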


Cimple: A DSL to utilize CPU time from tens to hundreds of nanoseconds

Prasad Ramesh
15 Oct 2018
3 min read
Three MIT students and an associate professor published a paper in July introducing Instruction and Memory Level Parallelism (IMLP), a task programming model for computing. They realize it via a domain-specific language (DSL) called Cimple (Coroutines for Instruction and Memory Parallel Language Extensions).

Why Cimple?

Before looking at what it is, let's understand the motivation behind this work. As cited in the paper, there is currently a critical gap between millisecond and nanosecond latencies for process loading and execution. Existing software and hardware latency-hiding techniques are inadequate to fully utilize the memory hierarchy, from CPU caches to RAM. The work rests on the belief that an efficient, flexible, and expressive programming model can scale across all of that hierarchy, from tens to hundreds of nanoseconds.

Modern processors with dynamic execution can exploit instruction-level parallelism (ILP) and memory-level parallelism (MLP) using wide superscalar pipelines, vector execution units, and deep buffers for in-flight memory requests. However, these resources "often exhibit poor utilization rates on workloads with large working sets". With IMLP, tasks execute as coroutines that yield execution at annotated long-latency operations, for example memory accesses, divisions, or unpredictable branches. IMLP tasks are interleaved on a single process thread, and they also integrate well with thread parallelism and vectorization. This led to the C++-embedded DSL called Cimple.

What is Cimple?

Cimple is a DSL embedded in C++ that allows exploring task scheduling and transformations such as buffering, vectorization, pipelining, and prefetching. It introduces a simple IMLP programming model based on concurrent tasks executing as coroutines, and it separates the program logic from programmer hints and scheduling optimizations. A compiler for Cimple automatically generates coroutines for the code. The Cimple compiler and runtime library are used via the embedded DSL, which separates the basic logic from scheduling hints and guide transformations, and builds an abstract syntax tree (AST) directly from succinct C++ code. The DSL treats expressions as opaque AST blocks and exposes conventional control-flow primitives to enable the transformations. (A hand-rolled illustration of the underlying idea appears after this piece.)

The results after using Cimple

Cimple is used as a template library generator, and the reported performance gains are substantial: peak system throughput increased from 1.3x on HashTable to 2.5x on SkipList iteration, and speedups of the time to complete a batch of queries on one thread range from 1.2x on HashTable to 6.4x on BinaryTree (source: Cimple: Instruction and Memory Level Parallelism). The abbreviations used in the paper are Binary Search (BS), Binary Tree (BT), Skip List (SL), Skip List iterator (SLi), and Hash Table (HT). Overall, Cimple reaches 2.5x throughput gains over hardware multithreading on a multi-core processor and 6.4x on a single thread.

The authors conclude that Cimple is fast, maintainable, and portable. The paper will appear at PACT'18, held November 1-4, 2018; you can read it on the arXiv website.
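This is not Cimple's actual API, but a hand-rolled C++ sketch of the general technique the DSL automates: batching independent lookups and issuing software prefetches so that many cache misses are in flight at once instead of being paid one at a time.

```cpp
#include <cstddef>
#include <vector>

struct Entry { long key; long value; };

// Batched probe of a table (power-of-two size, one slot per key for
// simplicity; keys assumed nonnegative).
std::vector<long> batched_lookup(const std::vector<Entry>& table,
                                 const std::vector<long>& keys) {
    const std::size_t mask = table.size() - 1;
    std::vector<std::size_t> slot(keys.size());

    // Stage 1: compute each bucket index and prefetch its cache line,
    // so the memory accesses overlap instead of serializing.
    for (std::size_t i = 0; i < keys.size(); ++i) {
        slot[i] = static_cast<std::size_t>(keys[i]) & mask;
        __builtin_prefetch(&table[slot[i]]);  // GCC/Clang builtin
    }

    // Stage 2: by now the loads are (hopefully) resolved; finish the probes.
    std::vector<long> out(keys.size(), -1);
    for (std::size_t i = 0; i < keys.size(); ++i) {
        if (table[slot[i]].key == keys[i]) out[i] = table[slot[i]].value;
    }
    return out;
}
```

Cimple expresses this interleaving with coroutines that yield at the annotated long-latency loads, generalizing the two-stage split above to traversals such as skip lists and binary trees.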
KotlinConf 2018: Kotlin 1.3 RC out and Kotlin/Native hits beta
Facebook releases Skiplang, a general purpose programming language
low.js, a Node.js port for embedded systems


NGINX Hybrid Application Delivery Controller Platform improves API management, manages microservices and much more!

Melisha Dsouza
15 Oct 2018
3 min read
“Technology is front and center in every business strategy, and enterprises of all sizes and in all industries must embrace digital to attract, retain, and enrich customers,” said Gus Robertson, CEO, NGINX.

At NGINX Conf 2018, the NGINX team announced enhancements to its Application Platform that will serve as a common framework across monolithic and microservices-based applications. The upgrade comes with three new releases: NGINX Plus, NGINX Controller, and NGINX Unit. They have been engineered to provide a built-in service mesh for managing microservices and an integrated application programming interface (API) management platform, while maintaining the traditional load balancing capabilities and a web application firewall (WAF).

An application delivery controller (ADC) improves the performance of web applications. It acts as a mediator between web and application servers and their clients, transferring requests and responses between them while enhancing performance through load balancing, caching, compression, and offloading of SSL processing. (A minimal configuration illustrating these duties appears at the end of this piece.) The main aim of re-architecting NGINX's platform and launching the new updates was to provide a more comprehensive approach to integrating load balancing, service mesh technologies, and API management, leveraging the modular architecture of the NGINX Controller.

Here is a gist of the three new NGINX product releases:

#1 NGINX Controller 2.0
This is an upgrade of NGINX Controller 1.0, launched in June 2018, which introduced centralized management, monitoring, and analytics for NGINX Plus load balancers. NGINX Controller 2.0 brings advanced NGINX Plus configuration, including version control, diffing, reverting, and more. It also includes an all-new API Management Module, which manages NGINX Plus as an API gateway, and the controller will gain a Service Mesh Module in the future.

#2 NGINX Plus R16
R16 comes with dynamic clustering, including clustered state sharing and key-value stores for global rate limiting and DDoS mitigation. It also brings load balancing algorithms for Kubernetes and microservices, enhanced UDP for VoIP and VDI, and AWS PrivateLink integration.

#3 NGINX Unit 1.4
This release improves security and language support, adding support for TLS and adding JavaScript with Node.js to extend the existing Go, Perl, PHP, Python, and Ruby language support.

Enterprises can now use the NGINX Application Platform as a Dynamic Application Gateway and a Dynamic Application Infrastructure. NGINX Plus and NGINX are used by popular, high-traffic sites such as Dropbox, Netflix, and Zynga, and more than 319 million websites worldwide rely on NGINX Plus and NGINX application delivery platforms. To know more about this announcement, head over to DevOps.com.

Kong 1.0 launches: the only open source API platform specifically built for microservices, cloud, and serverless
OpenFaaS releases full support for stateless microservices in OpenFaaS 0.9.0
Getting started with F# for .Net Core application development [Tutorial]
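For readers new to ADCs, here is a minimal, illustrative NGINX configuration showing two of the duties described above, TLS offloading and load balancing; the addresses and certificate paths are placeholders, not from the announcement:

```nginx
# Two app servers behind NGINX, picked by least active connections.
upstream app_servers {
    least_conn;
    server 10.0.0.11:8080;
    server 10.0.0.12:8080;
}

server {
    # Terminate ("offload") TLS here so the app servers speak plain HTTP.
    listen 443 ssl;
    ssl_certificate     /etc/nginx/certs/example.crt;
    ssl_certificate_key /etc/nginx/certs/example.key;

    location / {
        proxy_pass http://app_servers;
    }
}
```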

GNOME 3.32 says goodbye to application menus

Bhagyashree R
12 Oct 2018
3 min read
On Tuesday, GNOME announced that it is planning to retire app menus from its next release, GNOME 3.32. Application menus, or app menus, are the menus you see in the GNOME 3 top bar, with the name and icon of the current app.

Why application menus are being removed

The following are the reasons GNOME is bidding adieu to application menus:

Poor user engagement: Since their introduction, application menus have been a source of usability issues. They haven't performed well over the years, despite efforts to improve them, and users don't really engage with them.

Two different locations for menu items: Another reason could be the split between app menus and the menus in application windows. With two different locations for menu items, it becomes easy to look in the wrong place, particularly when one menu is visited more frequently than the other.

Limited adoption by third-party applications: Application menus have seen limited adoption by third-party applications. They are often left empty, other than the default quit item, and people have learned to ignore them.

What guidelines must developers follow?

All GNOME applications will have to move the items from the app menu to a menu inside the application window. Developers need to:

- Remove the app menu and move its menu items to the primary menu
- If required, split the primary menu into primary and secondary menus
- Rename the about menu item from "About" to "About application-name"

Guidelines for the primary menu

The primary menu is the menu you see in the header bar with the icon of three stacked lines, also referred to as the hamburger menu.

1. In addition to app menu items, primary menus can also contain other menu items.
2. The quit menu item is not required, so it is recommended to remove it from all locations.
3. Move other app menu items to the bottom of the primary menu.
4. A typical arrangement of app menu items in a primary menu is a single group of items: Preferences, Keyboard Shortcuts, Help, About application-name.
5. Applications that use a menu bar should remove their app menu and move any items to the menu bar menus.

(A sketch of a header-bar primary menu in GTK appears at the end of this piece.)

If an application fails to remove its application menu by the release of GNOME 3.32, the menu will be shown in the app's header bar, using the fallback UI already provided by GTK. Read the full announcement on GNOME's official website.

Fedora 29 beta brings Modularity, GNOME 3.30 support and other changes
GNOME 3.30 released with improved Desktop performance, Screen Sharing, and more
GIMP gets $100K of the $400K donation made to GNOME
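As a rough illustration of the guidelines (not code from the announcement), this GTK 3 sketch puts a hamburger GtkMenuButton in the header bar whose GMenu ends with the items moved out of the old app menu. The application ID and action names are hypothetical, and the actions themselves would still need to be defined for the items to be sensitive:

```c
/* Build with: gcc menu.c $(pkg-config --cflags --libs gtk+-3.0) */
#include <gtk/gtk.h>

static void activate(GtkApplication *app, gpointer user_data) {
    GtkWidget *window = gtk_application_window_new(app);
    GtkWidget *header = gtk_header_bar_new();
    gtk_header_bar_set_show_close_button(GTK_HEADER_BAR(header), TRUE);
    gtk_window_set_titlebar(GTK_WINDOW(window), header);

    /* The "primary menu": a hamburger button in the header bar. */
    GtkWidget *button = gtk_menu_button_new();
    gtk_button_set_image(GTK_BUTTON(button),
        gtk_image_new_from_icon_name("open-menu-symbolic", GTK_ICON_SIZE_BUTTON));

    /* Former app menu items go at the bottom of the primary menu. */
    GMenu *menu = g_menu_new();
    g_menu_append(menu, "Preferences", "app.preferences");
    g_menu_append(menu, "Keyboard Shortcuts", "win.show-help-overlay");
    g_menu_append(menu, "Help", "app.help");
    g_menu_append(menu, "About Example App", "app.about"); /* "About app-name" */
    gtk_menu_button_set_menu_model(GTK_MENU_BUTTON(button), G_MENU_MODEL(menu));

    gtk_header_bar_pack_end(GTK_HEADER_BAR(header), button);
    gtk_widget_show_all(window);
}

int main(int argc, char **argv) {
    GtkApplication *app =
        gtk_application_new("org.example.App", G_APPLICATION_FLAGS_NONE);
    g_signal_connect(app, "activate", G_CALLBACK(activate), NULL);
    int status = g_application_run(G_APPLICATION(app), argc, argv);
    g_object_unref(app);
    return status;
}
```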


Qt Creator 4.8 Beta released, adds Language Server Protocol support

Prasad Ramesh
12 Oct 2018
2 min read
The Qt team announced the release of Qt Creator 4.8 Beta yesterday. It adds generic programming language support and more experimental C++ features on top of 4.7.

Generic programming languages in Qt Creator 4.8 Beta

Qt Creator 4.8 Beta introduces experimental support for the Language Server Protocol (LSP). Many programming languages have a language server, and Go also plans to include one. A language server provides IDE features like code completion and reference finding. The addition of LSP means that, by providing a client for the protocol, Qt Creator gains some support for many programming languages. Currently, Qt Creator supports code completion, highlighting of the symbol under the cursor, and jumping to the symbol definition, and it integrates diagnostics from the language server; highlighting and indentation are still provided by the generic highlighter. The client has been tested mostly with Python, and there is currently no support for language servers that require special handling.

C++ support

Several experimental C++ features are added in this release:

Editing compilation databases: A compilation database is a list of files and the compiler flags used to compile them (see the sketch at the end of this piece). You can now open a compilation database as a project solely for editing and navigating code. Try it by enabling the CompilationDatabaseProjectManager plugin.

Clang-format-based indentation: Auto-indentation is done via LibFormat, the backend used by clang-format. To try it, enable the ClangFormat plugin.

Cppcheck diagnostics: Diagnostics generated by the Cppcheck tool are integrated into the editor. Enable the Cppcheck plugin to use it.

In addition to the many fixes, the Clang code model can now jump to the symbol indicated by the auto keyword, and it is now possible to generate a compilation database from the information the code model has, via Build | Generate Compilation Database.

Debugging

There is now support for running multiple debuggers on one or more executables simultaneously. When multiple debuggers are running, you can switch between them with a new drop-down menu in Debug mode.

More about the various improvements and fixes can be found in the changelog. For further details, visit the Qt Blog. Qt Creator 4.8 can be downloaded from the Qt website.

Qt 3D Studio 2.1 released with new sub-presentations, scene preview, and runtime improvements
How to create multithreaded applications in Qt
How to Debug an application using Qt Creator
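For reference, a compilation database is conventionally a compile_commands.json file in the JSON Compilation Database format; this illustrative entry (paths and flags are made up) is the kind of file the new plugin can open as a project:

```json
[
  {
    "directory": "/home/user/project/build",
    "command": "g++ -std=c++14 -Iinclude -c ../src/main.cpp -o main.o",
    "file": "../src/main.cpp"
  }
]
```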


Vim-go creator, Fatih Arslan, takes an “indefinite sabbatical” from all his open source projects as he’s burnt out

Natasha Mathur
11 Oct 2018
6 min read
The creator of vim-go, Fatih Arslan, announced on his personal blog yesterday that he is taking an “indefinite sabbatical” from his vim-go projects. He had been working on the project for the past 4.5 years. Arslan says that he won’t be maintaining vim-go anymore and is uncertain about when he’ll come back to it. For now, he’ll only be working on a select few small projects that don’t need active maintenance.

“I’m working for DigitalOcean..this is my full-time job. I have a family to take care of and just like any other grown-up in the world, you do what you have to do. However, there is no place for Go tooling and editors here. It’s a hobby and passion. But if a hobby feels like it becomes a second full-time job, something is very wrong. The time has come to end this craziness,” says Arslan.

What’s interesting is that Arslan is not the first in the open source community to go on a break; this seems to be an ongoing trend lately. It started with Guido van Rossum, Python’s founder, taking a ‘permanent vacation from being BDFL’ in July (he continues to work in his capacity as a core developer). Guido’s decision stemmed from the physical, mental, and emotional toll that his role had taken on him over the past years; he mentioned that he was “tired, and need a very long break”. Arslan’s reason seems fairly similar, as he said, “For the last one year, I’m struggling to maintain my side projects. I feel like I’m burnt out. Working on a side project is fun until it becomes your second full-time job. One thing that I’m sure is, I’m not happy how my day to day life is evolving around me.”

Another recent example is Linus Torvalds, who had been working on the Linux kernel for almost 30 years. Torvalds opened up about going on a break over his ‘hurtful’ behavior that ‘contributed to an unprofessional environment’. “I need to take a break to get help on how to behave differently and fix some issues in my tooling and workflow,” said Torvalds. Even though Linus left to take time for self-reflection and was not burnt out, it is symptomatic of the same underlying issue: when one wants to accomplish a lot in a short period of time, one tends to find efficiencies where one can, and efficient communication may not be effective, coming across as terse, sarcastic, or uncaring.

Arslan mentioned that when he first started vim-go, it was fun, rewarding, and solved a lot of his problems. Vim was his favorite editor, and vim-go enabled him to write Go inside it in a very efficient and productive way. As he worked on vim-go, he got the chance to create many other smaller Go packages and tools; some of these, such as the color and struct packages, even became popular. “Again, it solved many problems and back then I wanted to use Go packages that are easy to use and just works out of the box. I also really like to work on Go tooling and editors. But this is not the case for many of my projects, especially vim-go. With the popularity of all these projects, my day to day work also increased,” says Arslan.

The problem of burnout seems epidemic in the open source community. Maintainers work long hours, neglect themselves and their personal lives, and don’t always see the results such hard work deserves. Arslan mentioned that maintaining these projects took him an extra 10-20 hours per week, outside of his day job. He could “no longer maintain this tempo”, as every day brought multiple GitHub emails about pull requests, issues, feedback, and fixes, which was affecting his well-being. It also didn’t make any sense to him “economically”. “It’s very hard for me to do this, but trust me I’m thinking about this for a long time. I cannot continue this anymore without sacrificing my own well being,” mentions Arslan.

Who will look after vim-go now?

Arslan’s sabbatical won’t derail vim-go, as he has handed maintenance to two full-time contributors, Martin Tournoij and Billie Cleek. Billie Cleek, who worked with Arslan at DigitalOcean, will lead the vim-go project. Cleek has already made hundreds of contributions to vim-go (recently adding unified async support for Vim and Neovim) and is well-versed in vim-go’s code base. “I don’t know if I could find anyone else that would make a great fit than him. I’m very lucky to have someone like him. The vim-go community will be in very good hands,” said Arslan.

As for his other popular Go projects and packages, Arslan will go over them one last time and archive the repos, such as color, structs, camelcase, images, vim-hclfmt, and many others. This means you’ll still be able to fetch these repos and use them within your projects. Arslan believes most of these packages are in “a very good state” and don’t require any more additions. That said, there are three projects Arslan will still maintain: gomodifytags, structtag, and motion. The gomodifytags project was Arslan’s most enjoyed project so far, as it had zero bugs and a simple design. These projects will be maintained in a “sleep mode”, with Arslan only going over “serious issues”.

“I have now so much time that I’ll be spending for myself...I have a side project that I’m working for a couple of months privately..(I can) play more with my son and just hang out all day, without doing a single thing. The weekends belong to me. I no longer have to worry about the last opened pull request’s to vim-go or my other Go projects..it just feels so refreshing. I suggest everyone do the same thing, take a step back and see what’s happening around you. It’ll help you to become a better yourself,” says Arslan.

Public reaction to Arslan’s decision has been largely positive:
https://twitter.com/rakyll/status/1050053991088840704
https://twitter.com/idanyliuk/status/1050053303814541312
https://twitter.com/corylanou/status/1050132111745794052

For more coverage, read Arslan’s official announcement.

Golang 1.11 is here with modules and experimental WebAssembly port among other updates
Why Golang is the fastest growing language on GitHub
Golang 1.11 rc1 is here with experimental port for WebAssembly!


GitHub comes to your code editor; GitHub security alerts now have machine intelligence

Savia Lobo
11 Oct 2018
3 min read
On Tuesday, the GitHub team announced that they will be making life easier for developers by bringing GitHub right into the editor. Details of the extension will be shared on Day 2 (17th October, 2018) of the two-day GitHub Universe conference, where GitHub, in collaboration with the Visual Studio Code team at Microsoft, will brief users during the talk Cross Company Collaboration: Extending GitHub to a New IDE. Sarah Guthals, Engineering Manager at GitHub, mentions in her post, “We’ve been working since 2015 to provide a GitHub experience that meets you where you spend the majority of your time: in your editor.”

What’s in store for developers from different communities?

For .NET developers: In 2015, GitHub brought all Visual Studio developers an extension that supports GitHub.com and GitHub Enterprise engagement within the editor. Sarah says, “today you can complete an entire pull request review without ever leaving Visual Studio.”

For the Atom community: GitHub also supports a first-class Git and GitHub experience for Atom developers. Users can access basic Git operations like staging, committing, and syncing, alongside more complex collaboration with the recently released pull request experience.

For game developers: Unity game developers can now use Git within Unity for the first time to clone and sync with GitHub.com and lock files.

The Conflux: GitHub and Visual Studio Code

In the talk to be presented in the coming week, the Visual Studio Code team at Microsoft and the editor tools team at GitHub will share how the two teams began exploring an integration between their products. The team at Microsoft started designing a pull request experience within Visual Studio Code, while the GitHub team prototyped one modeled after the same experience in the Visual Studio IDE. The result is an integrated GitHub experience in Visual Studio Code, supported by the Visual Studio Code API. The new extension gives developers the ability to:

- Authenticate with GitHub within VS Code (for GitHub.com and GitHub Enterprise)
- List pull requests associated with the current repository, view their descriptions, and browse the diffs of changed files
- Validate pull requests by checking them out and testing them without having to leave VS Code

GitHub applies machine intelligence to its security alerts

GitHub also announced that it has built a machine learning model that scans text associated with public commits (the commit message and linked issues or pull requests) to filter out those related to possible security upgrades. With this smaller batch of commits, the model uses the diff to understand how required version ranges have changed, then aggregates across a specific timeframe to get a holistic view of all dependencies that a security release might affect. Finally, the model outputs a list of packages and version ranges it thinks require an alert and aren’t currently covered by any known CVE in the system.

To know more about these updates, visit the GitHub blog, and read more about the GitHub and Visual Studio Code integration in Sarah Guthals’ GitHub post.

GitHub’s new integration for Jira Software Cloud aims to provide teams a seamless project management experience
4 myths about Git and GitHub you should know about
7 tips for using Git and GitHub the right way

Swift is now available on Fedora 28

Melisha Dsouza
10 Oct 2018
2 min read
Last week, the Fedora team announced that Swift is available in Fedora 28. Swift, Apple’s programming language, is built with a modern approach to safety, and its addition complements the distribution’s focus on security.

Why did the team opt for Swift?

Swift’s applications are wide-ranging: from systems programming to desktop applications and up to cloud services. The language has always focused on being fast and safe. It has automatic memory management, arrays and integers are checked for overflow, and it supports a built-in mechanism for error handling. It is also an efficient server-side programming language that performs fast iteration over collections. Additional features include:

- Closures with function pointers
- Tuples and multiple return values
- Generics
- Structs supporting methods, extensions, and protocols
- Functional programming patterns, like map and filter
- do, guard, defer, and repeat keywords for advanced control flow

Swift is available in Fedora under the package name swift-lang. The flexible capabilities of Fedora, coupled with the advantages offered by Swift, make it an excellent choice for developers. (A small taste of the language appears at the end of this piece.) To know more about this news, head over to Fedora Magazine.

ABI stability may finally come in Swift 5.0
Swift 4.2 releases with language, library and package manager updates!
Fedora 29 beta brings Modularity, GNOME 3.30 support and other changes
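Assuming the swift-lang package named in the announcement, installing Swift and trying a few of the features listed above might look like this; the file name and values are illustrative:

```swift
// Install:  sudo dnf install swift-lang
// Compile:  swiftc hello.swift -o hello && ./hello

let numbers = [3, 1, 4, 1, 5, 9, 2, 6]

// Functional patterns: filter and map (with overflow-checked arithmetic).
let doubledEvens = numbers.filter { $0 % 2 == 0 }.map { $0 * 2 }
print("doubled evens:", doubledEvens)        // [8, 4, 12]

// Tuples, multiple return values, and guard-based control flow.
func minMax(_ xs: [Int]) -> (min: Int, max: Int)? {
    guard let first = xs.first else { return nil }
    var lo = first, hi = first
    for x in xs.dropFirst() {
        if x < lo { lo = x }
        if x > hi { hi = x }
    }
    return (min: lo, max: hi)
}

if let bounds = minMax(numbers) {
    print("min \(bounds.min), max \(bounds.max)")   // min 1, max 9
}
```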


OpenJDK Project Valhalla is now in Phase III

Prasad Ramesh
10 Oct 2018
3 min read
Project Valhalla is an OpenJDK project started in 2014 in an experimental stage. It is headed by Oracle Java language architect Brian Goetz and supported by the HotSpot group. The project was created to introduce value-based optimizations to JDK 10 and above, and its goal is to explore and support the development of advanced Java VM and language features like value types, generic specialization, and variable handles.

The Project Valhalla members met last week in Burlington, MA to discuss the current project status and future plans in detail. Goetz notes that it was a very productive meeting, with members attending in person or connecting via calls. After over four years of the project, the members decided it seemed like a good time to assess where things stand. Goetz states: “And, like all worthwhile projects, we hadn't realized quite how much we had bitten off.  (This is good, because if we had, we'd probably have given up.)” The meeting marks the initiation of Phase III of Project Valhalla.

Phase I focused on language and libraries: figuring out what exactly a clean migration to value types and specialized generics would look like. This included steps to migrate core APIs like Collections and Streams, and understanding the limitations of the current VM, which shaped a vision for the VM that was needed. Phase I produced three prototypes, Models 1-3, whose exploration areas were specialization mechanics (M1), handling of wildcards (M2), and classfile representations for specialization and erasure (M3). At that point, the list of VM requirements had become too long, and a different approach was needed.

Phase II took on the problem from the VM up, with two additional rounds of prototypes, MVT and LW1. LW1 was a risky experiment: sharing the L-carrier and a* bytecodes between references and values while not losing performance. If this could be achieved, many of the problems from Phase I would go away. It was successful, and there is now a richer base for further work.

The next target is L2, which will capture the choices made so far, provide a useful testbed for library experiments, and set the stage for tackling the remaining open questions between now and L10. L10 is the target for a first preview, which should eventually support value types and erased generics over values.

For more information, you can read the mail on the Project Valhalla mailing list.

JDK 12 is all set for public release in March 2019
State of OpenJDK: Past, Present and Future with Oracle
No more free Java SE 8 updates for commercial use after January 2019


.NET team announces ML.NET 0.6

Savia Lobo
10 Oct 2018
3 min read
On Monday, the .NET engineering team announced ML.NET 0.6, the latest monthly release of their cross-platform, open source machine learning framework for .NET developers. Exciting features in this release include a new API for building and using machine learning models and significant performance improvements.

Improvements in ML.NET 0.6

A new API for building ML models: The new API is more flexible and enables new tasks and code workflows that weren’t possible with the previous LearningPipeline API, which the team plans to deprecate. The new API is designed to support a wider set of scenarios, and it closely follows ML principles and naming from other popular ML frameworks like Apache Spark and Scikit-learn. To know more about the new ML.NET API, visit the Microsoft blog.

Ability to get predictions from pre-trained ONNX models: ONNX, an open and interoperable model format, enables using models trained in one framework (such as scikit-learn, TensorFlow, or xgboost) in another (such as ML.NET). ML.NET 0.6 includes support for getting predictions from ONNX models via a new transformer and runtime for scoring. There is a large variety of ONNX models created and trained in multiple frameworks that can export to the ONNX format, covering tasks like image classification, emotion recognition, and object detection. The ONNX transformer in ML.NET feeds data to an existing ONNX model and gets the score (prediction) from it.

Performance improvements: In this release, the team made several performance improvements to making single predictions from a trained model, on two fronts: moving from the legacy LearningPipeline API to the new Estimators API, and optimizing the performance of PredictionFunction in the new API. Some comparisons of the LearningPipeline with the improved PredictionFunction in the new Estimators API:

- Predictions on Iris data: 3,272x speedup (29x from the Estimators API, with a further 112x from the PredictionFunction improvements).
- Predictions on Sentiment data: 198x speedup (22.8x from the Estimators API, with a further 8.68x from the PredictionFunction improvements). This model contains a text featurizer, so a smaller gain is not surprising.
- Predictions on Breast Cancer data: 6,541x speedup (59.7x from the Estimators API, with a further 109x from the PredictionFunction improvements).

Improvements in the type system: In this version, the Dv type system has been replaced with .NET’s standard type system, making ML.NET easier to use. ML.NET previously had its own type system to help deal with missing values (a common case in ML), which required users to work with types like DvText, DvBool, and DvInt4. One effect of the change is that only floats and doubles have missing values, represented by NaN. Thanks to an improved approach to dependency injection, users can also deploy ML.NET in additional scenarios, using .NET app models such as Azure Functions, without convoluted workarounds.

To know more about the other improvements in ML.NET 0.6, visit the Microsoft blog.

Microsoft open sources Infer.NET, its popular model-based machine learning framework
Neural Network Intelligence: Microsoft’s open source automated machine learning toolkit
.NET Core 3.0 and .NET Framework 4.8 more details announced

NVTOP: An htop like monitoring tool for NVIDIA GPUs on Linux

Prasad Ramesh
09 Oct 2018
2 min read
People started using htop when top just didn’t provide enough information. Now there is NVTOP, a tool that looks similar to htop but displays information about the processes loaded on your NVIDIA GPU. It works on Linux systems and shows detailed per-process information, including memory used and which GPU, alongside total GPU and memory usage. The first version of the tool was released in July last year; the latest change made the process list and command options scrollable.

Some of the features of NVTOP are:

- Sorting by column
- Selecting or ignoring a specific GPU by ID
- Killing a selected process
- A monochrome option

It has multi-GPU support and can display the running processes from all of your GPUs. The output looks similar to what htop would display (see the screenshot in the project’s GitHub README). There is also a manual page to give some guidance in using NVTOP, accessible with this command: man nvtop

There are OS-specific installation steps on GitHub for Ubuntu/Debian, Fedora/RedHat/CentOS, OpenSUSE, and Arch Linux. (A generic from-source build is sketched at the end of this piece.)

Requirements

Two libraries are needed to build and run NVTOP:

- The NVIDIA Management Library (NVML), for querying GPU information.
- The ncurses library, for the colorful user interface.

Supported GPUs

NVTOP works only with NVIDIA GPUs on Linux. One of its dependencies, the NVML library, does not support some queries on GPUs older than the Kepler microarchitecture; anything before the GeForce 600 series, GeForce 700 series, or GeForce 800M likely won’t work. For AMD users, there is a tool called radeontop.

The tool is provided under the GPLv3 license. For more details, head on to the NVTOP GitHub repository.

NVIDIA leads the AI hardware race. But which of its GPUs should you use for deep learning?
NVIDIA announces pre-orders for the Jetson Xavier Developer Kit, an AI chip for autonomous machines, at $2,499
NVIDIA open sources its material definition language, MDL SDK
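For a generic from-source build, the usual CMake workflow should apply. This is a sketch; check the repository README for the exact, distribution-specific steps and dependency packages:

```sh
git clone https://github.com/Syllo/nvtop.git
mkdir -p nvtop/build && cd nvtop/build
cmake ..
make
sudo make install

nvtop   # then launch it
```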


Clojure 1.10.0-beta1 is out!

Bhagyashree R
08 Oct 2018
3 min read
On October 6, the release of Clojure 1.10.0-beta1 was announced. With this release, Clojure 1.10 is considered feature complete; only critical bug fixes will be addressed from here on.

Changes introduced in Clojure 1.10

Detecting error phase: Clojure errors can occur in five distinct phases: read, macroexpand, compile, eval, and print. Clojure and the REPL can now identify these phases in the exception and/or the message. The read/macroexpand/compile phases produce a CompilerException, which indicates the location in the caller's source code where the problem occurred. CompilerException now implements IExceptionInfo, and ex-data reports exception data including the optional keys:

- :clojure.error/source: name of the source file
- :clojure.error/line: line in the source file
- :clojure.error/column: column of that line
- :clojure.error/phase: the phase (:read, :macroexpand, :compile)
- :clojure.error/symbol: the symbol being macroexpanded or compiled

Also, clojure.main now contains a new function, ex-str, that external tools can use to get a REPL message for a CompilerException matching the clojure.main REPL behavior.

Introducing tap: tap is a shared, globally accessible system for distributing a series of informational or diagnostic values to a set of handler functions. It acts as a better debug prn and can also be used for facilities like logging.

Read string capture mode: A new function, read+string, not only mimics read but also captures the string that is read, returning both the read value and the (whitespace-trimmed) read string. (A small sketch of tap and read+string appears at the end of this piece.)

prepl (alpha): This is a new stream-based REPL with structured output. The new functions added in clojure.core.server are:

- prepl: a REPL with structured output (for programs)
- io-prepl: a prepl bound to *in* and *out*, suitable for use with the Clojure socket server
- remote-prepl: a prepl that can be connected to a remote prepl over a socket

prepl is alpha and subject to change.

Java 8 or above required: Clojure 1.10 now requires Java 8 or above. A few of the updates related to this change, and Java compatibility fixes for Java 8, 9, 10, and 11:

- Java 8 is now the minimum requirement for Clojure 1.10
- Embedded ASM is updated to 6.2
- Reliance on the jdk166 jar is removed
- An ASM regression is fixed
- Invalid bytecode generation for static interface method calls in Java 9+ is fixed
- Reflection fallback for --illegal-access warnings in Java 9+ is added
- A brittle test that failed on Java 10 builds due to serialization drift is fixed
- A type hint is added to address reflection ambiguity in JDK 11

Other new functions in core: To increase the portability of error-handling code, the following functions have been added:

- ex-cause: to extract the cause exception
- ex-message: to extract the cause message

To know more about the changes in Clojure 1.10, check out its GitHub repository.

Vue.js 3.0 is ditching JavaScript for TypeScript. What else is new?
Java 11 is here with TLS 1.3, Unicode 11, and more updates
Rust 2018 RC1 now released with Raw identifiers, better path clarity, and other changes
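A small REPL sketch of tap and read+string, assuming only what the release notes describe; the handler and the input string are illustrative:

```clojure
;; tap: register a handler, then send it diagnostic values from anywhere.
(add-tap (fn [v] (println "tapped:" v)))
(tap> {:event :cache-miss, :key 42})
;; handler (asynchronously) prints: tapped: {:event :cache-miss, :key 42}

;; read+string: read one form and capture the exact string that was read.
(def rdr (clojure.lang.LineNumberingPushbackReader.
           (java.io.StringReader. "(+ 1 2)   ")))
(read+string rdr)
;; => [(+ 1 2) "(+ 1 2)"]
```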