
Tech News - Programming

573 Articles

Soundation releases its first music studio built on WebAssembly

Savia Lobo
16 Nov 2018
2 min read
Soundation, an online music production platform, has released its new music studio built on WebAssembly Threads after working closely with Google. It is the first music production software to run on WebAssembly Threads, which brings considerably improved speed, performance, and stability when producing music in a browser. Soundation's online studio is used by over 80,000 creatives who produce music directly in their web browsers.

For Soundation's users, the WebAssembly technology improves performance on multicore machines by between 100 and 300 percent, according to measurements. Soundation has been collaborating with Google's WASM and Chrome Audio teams for over a year, working to optimize the WebAssembly-based implementation of Soundation Studio with support for multithreading and shared memory.

Adam Hasslert, CEO of Soundation, said, "Implementing WebAssembly Threads is a key part of our mission to build the next-generation music production service online. This technology will have a significant impact on how web apps are made in the future, and it's essential for us to lead this development and offer our users the most powerful alternative."

Thomas Nattestad, Product Manager for WebAssembly, said at CDS, "Soundation is one of the first adopters of WebAssembly Threads. They use these Threads to achieve fast, parallelized processing to seamlessly mix songs. Adding just a single Thread doubled their performance, and by the time they added five threads, they more than tripled their performance."

How did Soundation conduct the tests?

Soundation tested a complex Soundation Studio project (10 audio tracks, 12 synthesizers, and 270 audio regions with audio samples and notes, with 84 filter effects applied) to generate an audio file. The test was run on Ubuntu 16.04 and Chrome 72.0.3584.0 (64-bit) on a machine with an Intel Core i7-6700HQ. They then compared systems based on WebAssembly, PNaCl, and a native application, using different processing buffer sizes in a ring buffer. The WebAssembly version was tested with different numbers of threads.

Here's a video by Thomas Nattestad, the Product Manager for WebAssembly, introducing Soundation:

https://www.youtube.com/watch?v=zgOGZgAPUjQ&feature=youtu.be&t=474

Read next:

- Cloudflare's Workers enable containerless cloud computing powered by V8 Isolates and WebAssembly
- Google Chrome 70 now supports WebAssembly threads to build multi-threaded web applications
- Mozilla shares plans to bring desktop applications, games to WebAssembly and make deeper inroads for the future web


KDevelop 5.3 released with new analyzer plugin and improved language support

Prasad Ramesh
15 Nov 2018
3 min read
Twenty years after KDevelop's first release, KDevelop 5.3 is out with features such as a new analyzer plugin and improved support for several languages.

A new type of analyzer plugin in KDevelop 5.3

In version 5.1, KDevelop gained an Analyzer menu entry that provides a set of actions for working with analyzer-like plugins. With version 5.2, a runtime analyzer called Heaptrack and a static analyzer called cppcheck were added. During the development of KDevelop 5.3, another analyzer plugin was added, and it ships with the current release. The new analyzer, Clazy, is a clang analyzer plugin specialized for code that uses Qt. It can now be run from within KDevelop, displaying the issues inline by default. The KDevelop plugin for Clang-Tidy support will be released as part of KDevelop starting with version 5.4; for now, it is released independently.

Internal changes in KDevelop 5.3

KDevelop's own codebase has itself been run through analyzers. A lot of code has been optimized, and stabilized in places the analyzers pointed out. With their aid, the codebase has also been modernized to newer C++ and Qt5 standards.

Improved support for C++

A lot of work was done in KDevelop 5.3 on stabilizing and improving KDevelop's clang-based language support for C++. The notable fixes include:

- Tooltips in clang were improved and a range check was fixed.
- The path to the built-in clang compiler headers can now be overridden.
- The clang built-in headers are now always used for the libclang version in use.
- Completion requests are grouped, and only the last one is handled.
- The template for class/function signatures in clang code completion was fixed.
- A workaround was added for constructor argument hints to find declarations.
- Argument hint code completion in clang was improved.

Improved support for PHP

With the help of Heinz Wiesinger, PHP support sees several improvements in KDevelop 5.3:

- Much-improved support for PHP namespaces.
- Added support for generators and generator delegation.
- The integrated documentation of PHP internals has been updated and expanded.
- Support for the context-sensitive lexer of PHP 7.
- The parser is installed as a library so that other projects can use it.
- Improved type detection of object properties.
- Added support for the object typehint.
- Better support for ClassNameReferences.
- Improvements to expression syntax support, particularly around 'print'.
- Optional function parameters are allowed before non-optional ones.
- Added support for the magic constants __DIR__ and __TRAIT__.

Improved Python language support

The focus was on fixing bugs, and the fixes have also been added to the 5.2 series. A couple of improved features in 5.3:

- Environment profile variables are injected into the debug process environment.
- Support for 'with' statements is improved.

There is also experimental support for macOS, still looking for a maintainer, and a port for Haiku.

For more details, visit the KDevelop website.

Read next:

- Neuron: An all-inclusive data science extension for Visual Studio
- The LLVM project is ditching SVN for GitHub. The migration to GitHub has begun.
- Microsoft announces .NET standard 2.1


C# 8.0 to have async streams, recursive patterns and more

Prasad Ramesh
14 Nov 2018
4 min read
C# 8.0 will introduce some new features and will likely ship at the same time as .NET Core 3.0. Developers will be able to use the new features with Visual Studio 2019.

Nullable reference types in C# 8.0

This feature aims to help prevent the null reference exceptions that have riddled object-oriented programming for half a century. Nullable reference types make ordinary reference types like string non-nullable, so developers are stopped from assigning null to them. The diagnostics are warnings, not errors, and existing code will produce new warnings, so developers will have to opt into the feature at the project, file, or source line level. C# 8.0 will let you express your "nullable intent" and warns you when you don't follow it:

string s = null; // Warning: Assignment of null to non-nullable reference type
string? s = null; // Ok

Asynchronous streams with IAsyncEnumerable<T>

The async feature introduced in C# 5.0 lets developers consume and produce asynchronous results in straightforward code, without callbacks. It doesn't help, however, when you want to consume or produce continuous streams of results, for example data from an IoT device or a cloud service. Async streams exist for this use case. C# 8.0 will come with IAsyncEnumerable<T>, an asynchronous version of the existing IEnumerable<T>. You can now await foreach over such streams to consume their elements, and use yield return in them to produce elements.

async IAsyncEnumerable<int> GetBigResultsAsync()
{
    await foreach (var result in GetResultsAsync())
    {
        if (result > 20) yield return result;
    }
}

Ranges and indices

A new type, Index, has been added for indexing. An Index can be created from an int that counts from the beginning, or with the prefix ^ operator, which counts from the end.

Index i1 = 3;  // number 3 from beginning
Index i2 = ^4; // number 4 from end
int[] a = { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 };
Console.WriteLine($"{a[i1]}, {a[i2]}"); // "3, 6"

C# 8.0 will also have a Range type consisting of two Indexes, one for the start and one for the end. A Range can be written with an x..y range expression.

Default implementations of interface members

Currently, once an interface is published, members can't be added to it without breaking all its existing implementers. With the new release, a body can be provided for an interface member. If an implementer doesn't implement that member, the default implementation will be available instead.

Allowing recursive patterns

C# 8.0 will allow patterns to contain other patterns.

IEnumerable<string> GetEnrollees()
{
    foreach (var p in People)
    {
        if (p is Student { Graduated: false, Name: string name }) yield return name;
    }
}

The pattern in the above code checks that the Person is a Student, then applies the constant pattern false to their Graduated property to see whether they are still enrolled, and applies the pattern string name to their Name property to get their name. Hence, if p is a Student who has not graduated and has a non-null name, that name is yield returned.

Switch expressions

Switch statements with patterns were a powerful feature in C# 7.0, but they can be cumbersome to write. The next C# version will have switch expressions, a lightweight version of switch statements where all the cases are expressions.

Target-typed new-expressions

In many cases, when creating a new object, the type is already given by the context. C# 8.0 will let you omit the type in those cases.
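The post stops short of showing code for the last three features. Below is a rough, minimal sketch of what the proposed syntax could look like; the Shape, Circle, and Rectangle types are hypothetical, invented for illustration, and target-typed new ultimately shipped in a later C# version.

using System;

abstract class Shape { }
class Circle : Shape { public double Radius; public Circle(double r) => Radius = r; }
class Rectangle : Shape { public double Width, Height; }

class Demo
{
    // A switch expression: a lightweight switch in which every arm is an expression.
    static double Area(Shape shape) => shape switch
    {
        Circle c    => Math.PI * c.Radius * c.Radius,
        Rectangle r => r.Width * r.Height,
        _           => 0.0
    };

    static void Main()
    {
        // A range expression: x..y slices between two Indexes.
        int[] a = { 0, 1, 2, 3, 4, 5 };
        int[] middle = a[1..^1]; // { 1, 2, 3, 4 }

        // Target-typed new: the declared type already says Circle, so 'new Circle' is redundant.
        Circle unit = new(1.0);
        Console.WriteLine($"{middle.Length}, {Area(unit):F2}"); // "4, 3.14"
    }
}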
For more details, visit the Microsoft Blog.

Read next:

- ReSharper 18.2 brings performance improvements, C# 7.3, Blazor support and spellcheck
- Qml.Net: A new C# library for cross-platform .NET GUI development
- Microsoft announces .NET standard 2.1


Version 1.29 of Visual Studio Code is now available

Amrata Joshi
13 Nov 2018
3 min read
Visual Studio Code 1.29 was released yesterday as the October update in Microsoft's planned monthly update cycle. This update to the code editor includes multiline search, improved support for macOS, and more.

Features of Visual Studio Code 1.29

Multiline search

Visual Studio Code now supports multiline search. A regex search executes in multiline mode only if it contains a \n literal, and the Search view pops up a hint next to each multiline match. Multiline search is implemented with the ripgrep tool.

macOS full-screen support

By default, Visual Studio Code uses macOS native full screen. Setting window.nativeFullScreen to false makes Visual Studio Code 1.29 enter full-screen mode without creating a new macOS space on the desktop.

Highlight modified tabs

Visual Studio Code 1.29 comes with a new setting, workbench.editor.highlightModifiedTabs. Whenever an editor has unsaved changes, this setting displays a thick border at the top of the editor tab, making it easier to find files that need to be saved. The color of the border can also be customized.

File and folder icons in IntelliSense

The IntelliSense widget has been updated to show file and folder icons for file completions, based on the File Icon theme. This gives completions a distinctive look and helps in quickly identifying different file types.

Format Selection

Visual Studio Code 1.29 speeds up small formatting operations: without an editor selection, the Format Selection command now formats the current line.

Show error codes

The editor now shows the error code of a problem if an error code is defined. The error code appears at the end of the line in square brackets.

Normalized extension samples

The Visual Studio Code extension samples at vscode-extension-samples have been updated in this release for consistency. Each extension sample now includes a uniform coding style and structure, a README that explains the sample's functionality with a short animation, and a listing of the vscode API or Contribution Points used in the sample.

Start debugging with a stop on entry

The team at Visual Studio Code has introduced a new command for Node.js debugging. The command Debug: Start Debugging and Stop On Entry (extension.node-debug.startWithStopOnEntry) starts debugging and immediately stops on the entry of your program.

Clear terminal before executing the task

A new property called clear has been added to a task's presentation configuration in this release. If the clear property is set to true, the terminal is cleared before the task is run.

Major Bug Fixes

Previously, the startDebugging method in Visual Studio Code returned true even when the build failed; this has been fixed in this release. In previous releases, the Settings UI never remembered its search on reload; this has now been resolved. Earlier, it wasn't possible to cancel a debug session while it was initializing, but now it is possible with Visual Studio Code 1.29.

Read more on this news on the Visual Studio Code website.

Read next:

- Visual Studio code July 2018 release, version 1.26 is out!
- Unit Testing in .NET Core with Visual Studio 2017 for better code quality
- Neuron: An all-inclusive data science extension for Visual Studio


GitHub has passed an incredible 100 million repositories

Richard Gall
12 Nov 2018
2 min read
It has been a big year for GitHub. The code sharing platform has this year celebrated its 10th birthday, been bought by Microsoft for an impressive $7.5 billion, and has now reached an astonishing 100 million repositories.

While there were rumblings of discontent following the huge Microsoft acquisition, it doesn't look like threats to leave GitHub have come to fruition. True, it has only been a matter of weeks since Microsoft finally took over, but there are no signs that GitHub is losing favor with developers.

1 in 3 of all GitHub repositories were created in 2018

According to GitHub, 1 in 3 of the 100 million repositories were created in 2018. That demonstrates the astonishing growth of the platform, and just how embedded it is within the day-to-day life of software engineers. This is further underlined by more data in GitHub's Octoverse report, published in October. "We've seen more new accounts in 2018 so far than in the first six years of GitHub combined," the report states.

Perhaps the new relationship with Microsoft has actually helped push GitHub from strength to strength: MicrosoftDocs/azure-docs is the fastest growing repository of 2018. Of course, some credit should probably go to Microsoft as well, as the organization has done a lot to change its image and ethos, becoming much more friendly towards open source software.

Meanwhile, at Packt, we've been delighted to play a small part in helping GitHub get to its 100 million milestone. Earlier this year we hit 2,000 project repos.


Golang just celebrated its ninth anniversary

Prasad Ramesh
12 Nov 2018
2 min read
Saturday marked the ninth anniversary of the day the Go team open-sourced the initial sketch of Golang. On each anniversary, the team lists what has happened over the past year for Go.

Golang adoption indicated in surveys

In multiple 2018 surveys, Go users expressed their happiness with using Go, and many developers who do not currently use Golang indicated their intent to learn Go before any other language. In the Stack Overflow 2018 Developer Survey, Golang was in the top 5 most loved and top 3 most wanted languages, indicating that developers using Go like it, and developers not using Go want to. ActiveState's 2018 Developer Survey had Go topping the charts, with 36% of users responding "Extremely Satisfied" with Go and 61% responding "Very Satisfied" or better. JetBrains's 2018 Developer Survey awarded Go the "Most promising language", with 12% of respondents using Go today and 16% intending to use Go in the future. And in the HackerRank 2018 Developer Survey, 38% of developer responses indicated an intention to learn Go next.

The evolution of the Golang community

The first Go conferences and Go meetups were held five years ago. Since then, there has been major growth in community leadership: there are now more than 20 Go conferences and over 300 Go-related meetups across the world, and there have been hundreds of great talks in 2018. The Go code of conduct has been revised to better support inclusivity in the Go community.

Go 2

Five years after Go 1, the Go core team is looking into changes to support the language at scale. Draft designs were published in August, with ideas to better support error values, error handling, and generic programming. And the most recent release, Golang 1.11, included preliminary support for modules.

Golang contributors

The number of contributors to Go has been increasing through the years. In Q2 2018, a milestone was hit when, for the first time, contributions from the community outnumbered those from the Go team.

For more details, visit the Go Blog.

Read next:

- Go 2 design drafts include plans for better error handling and generics
- Why Golang is the fastest growing language on GitHub
- Golang 1.11 is here with modules and experimental WebAssembly port among other updates

UK researchers build the world’s first quantum compass to overthrow GPS

Sugandha Lahoti
12 Nov 2018
2 min read
British researchers have successfully built the world's first standalone quantum compass, a potential replacement for GPS that allows highly accurate navigation without the need for satellites. The quantum compass was built by researchers from Imperial College London and the Glasgow-based laser firm M Squared. The project received funding from the UK Ministry of Defence (MoD) under the UK National Quantum Technologies Programme.

The device is completely self-contained and transportable. It measures how an object's velocity changes over time; starting from the object's known initial position, those measurements can be used to work out its current position. It thereby overcomes issues of traditional GPS systems, such as blockages from tall buildings or signal jamming. High precision and accuracy are achieved by measuring properties of super-cool atoms, which means any loss in accuracy is "immeasurably small".

Dr. Joseph Cotter, from the Centre for Cold Matter at Imperial, said: "When the atoms are ultra-cold we have to use quantum mechanics to describe how they move, and this allows us to make what we call an atom interferometer. As the atoms fall, their wave properties are affected by the acceleration of the vehicle. Using an 'optical ruler', the accelerometer is able to measure these minute changes very accurately."

The first real-world application for the device could be in the shipping industry, as its current size suits large ships or aircraft. However, the researchers are already working on a miniature version that could eventually fit in a smartphone. The team is also working on using the principle behind the quantum compass for research into dark energy and gravitational waves.

Dr. Graeme Malcolm, founder and CEO of M Squared, said: "This commercially viable quantum device, the accelerometer, will put the UK at the heart of the coming quantum age. The collaborative efforts to realize the potential of quantum navigation illustrate Britain's unique strength in bringing together industry and academia – building on advancements at the frontier of science, out of the laboratory to create real-world applications for the betterment of society."

Read the press release on the Imperial College blog.

Read next:

- Quantum computing – Trick or treat?
- D-Wave launches Leap, a free and real-time Quantum Cloud Service
- Did quantum computing just take a quantum leap? A two-qubit chip by UK researchers makes controlled quantum entanglements possible


Github now allows repository owners to delete an issue: curse or a boon?

Amrata Joshi
09 Nov 2018
3 min read
On Saturday, GitHub released a public beta of a new feature to delete issues. The feature lets repository admins permanently delete an issue from any repository, which may give repository owners considerably more power. Since GitHub tweeted about the news, controversy around the feature has been on fire.

According to many, the feature might lead to the removal of issues that disclose severe security problems. Many users also rely on closed issues to resolve their own problems, as a repository's conversation history often holds a lot of information.

https://twitter.com/thegreenhouseio/status/1060257920158498817

https://twitter.com/aureliari/status/1060279790706589710

If someone posts a security vulnerability publicly as an issue, it can turn into a big problem for the project owner, as there is a high possibility of people avoiding future updates of the same project. The feature could therefore be helpful to many organizations as a form of damage control. A few of the issues posted on GitHub aren't really issues, so the feature might help in that direction too. There are also a lot of duplicate issues, posted on purpose or by mistake, so the feature could work as a rescue tool.

In contrast, a lot of users oppose the feature. It might not help much, because no matter how fast one erases a vulnerability report, the information leaks via email inboxes. A poll posted by one user on Twitter, with 71 votes at the time of writing, shows that 69% of participants dislike the feature, while only 14% give it a thumbs up; the remaining 17% have no view on it. The poll is still open, and it will be interesting to see its final result.

https://twitter.com/d4nyll/status/1060422721589325824

Users are requesting a better option that would highlight a way to report security issues in a non-public manner. Others would prefer an archive option instead of deleting the issue permanently, and some simply favor removing the feature altogether.

https://twitter.com/kirilldanshin/status/1060265945598492677

With many users now blaming Microsoft for this GitHub feature, it will be interesting to see the next update to it. Could it possibly be an UNDO option?

Read more about this news on GitHub's official Twitter page.

Read next:

- GitHub now supports the GNU General Public License (GPL) Cooperation Commitment as a way of promoting effective software regulation
- GitHub now allows issue transfer between repositories; a public beta version
- GitHub October 21st outage RCA: How prioritizing 'data integrity' launched a series of unfortunate events that led to a day-long outage


Microsoft announces .NET standard 2.1

Prasad Ramesh
06 Nov 2018
3 min read
A year after shipping .NET Standard 2.0, Microsoft announced .NET Standard 2.1 yesterday. In all, 3,000 APIs are planned for inclusion in .NET Standard 2.1, and progress on GitHub had reached 85% completion at the time of writing. The new features in .NET Standard 2.1 are as follows.

Span<T> in .NET standard 2.1

Span<T> was added in .NET Core 2.1. It is an array-like type that allows representing managed and unmanaged memory in a uniform way. Span<T> is an important performance improvement since it allows buffers to be managed more efficiently: it supports slicing without copying and can help in reducing allocations and copying.

Foundational APIs working with spans

Span<T> is available as a .NET Standard-compatible NuGet package, but that package cannot extend the members of .NET Standard types that deal with spans. For example, .NET Core 2.1 added many APIs that allow working with spans. To add spans to .NET Standard, these companion APIs were added as well.

Reflection emit added in .NET standard 2.1

.NET Standard 2.1 adds Lightweight Code Generation (LCG) and Reflection Emit. Two new capability APIs are exposed to allow checking whether code can be generated at all (RuntimeFeature.IsDynamicCodeSupported) and whether the generated code is interpreted or compiled (RuntimeFeature.IsDynamicCodeCompiled).

SIMD

There has been support for SIMD for a while now, and it has been used to speed up basic operations like string comparisons in the BCL. There have been requests to expose these APIs in .NET Standard, since the functionality requires runtime support and cannot be provided meaningfully as a NuGet package.

ValueTask and ValueTask<T>

The biggest feature of .NET Core 2.1 was improvements to support high-performance scenarios, which also included making async/await more efficient. ValueTask<T> allows returning results without allocating a new Task<T> if the operation completed synchronously. In .NET Core 2.1 this was improved further, which made it useful to have a corresponding non-generic ValueTask that reduces allocations even for cases where the operation has to complete asynchronously, a feature that types like Socket and NetworkStream now utilize. By exposing these APIs in .NET Standard 2.1, library authors benefit from these improvements as consumers as well as producers.

DbProviderFactories

DbProviderFactories wasn't available in .NET Standard 2.0; it will be in 2.1. DbProviderFactories allows libraries and applications to make use of a specific ADO.NET provider without knowing any of its specific types at compile time.

Other changes

Many small features have been added across the base class libraries, including System.HashCode for combining hash codes and new overloads on System.String. There are roughly 800 new members in .NET Core, and all of them are added in .NET Standard 2.1. .NET Framework 4.8 will remain on .NET Standard 2.0, while .NET Core 3.0 and the upcoming versions of Xamarin, Mono, and Unity will be updated to implement .NET Standard 2.1. To ensure correct implementation of the APIs, a review board signs off on API additions to the .NET Standard. The board, chaired by Miguel de Icaza, comprises representatives from the .NET platform, Xamarin and Mono, Unity, and the .NET Foundation. There will also be a formal approval process for new APIs.

To know more, visit the Microsoft Blog.
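To make the Span<T> and ValueTask<T> behavior described above concrete, here is a minimal C# sketch assuming a .NET Core 2.1 or later runtime; ReadValueAsync and SlowReadAsync are invented names for illustration.

using System;
using System.Threading.Tasks;

class SpanAndValueTaskDemo
{
    static async Task Main()
    {
        int[] numbers = { 1, 2, 3, 4, 5, 6, 7, 8 };

        // Slicing without copying: the span is a view over the array's memory,
        // so writing through it mutates the original array.
        Span<int> middle = numbers.AsSpan(2, 4); // { 3, 4, 5, 6 }
        middle[0] = 30;
        Console.WriteLine(numbers[2]); // 30

        Console.WriteLine(await ReadValueAsync(cached: true));  // no Task allocation
        Console.WriteLine(await ReadValueAsync(cached: false)); // falls back to a real Task
    }

    // ValueTask<T> avoids allocating a Task<T> when the result is already available.
    static ValueTask<int> ReadValueAsync(bool cached) =>
        cached ? new ValueTask<int>(42)               // synchronous path: no allocation
               : new ValueTask<int>(SlowReadAsync()); // asynchronous path: wraps a Task<int>

    static async Task<int> SlowReadAsync()
    {
        await Task.Delay(10); // simulate I/O
        return 42;
    }
}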
Read next:

- .NET Core 3.0 and .NET Framework 4.8 more details announced
- .NET announcements: Preview 2 of .NET Core 2.2 and Entity Framework Core 2.2, C# 7.3, and ML.NET 0.5
- What to expect in ASP.NET Core 3.0


Crystal 0.27.0 released

Prasad Ramesh
05 Nov 2018
4 min read
Crystal is a general-purpose, object-oriented programming language with support from over 300 contributors. Last Friday, Crystal 0.27.0 was released.

Language changes in Crystal 0.27.0

From Crystal 0.27.0, if the arguments of a method call need to be split across multiple lines, the comma must be placed at the end of the line, just before the line break. This is more in line with other conventional languages.

Better handling of stack overflows

A program entering infinite recursion or running out of space in stack memory causes a stack overflow. Crystal 0.27.0 ships with a boundary check that allows a better error message on stack overflow.

Concurrency and parallelism changes

The next releases should start showing parallelism, and there are some steps in preparation for that. The Boehm GC gained an API that enables support for multithreaded environments in v7.6.x, so from this version of Crystal, GC 7.6.8 or greater is used. As Crystal 0.26.1 shipped with v7.4.10, the dependency needed to be updated first so that CI could compile the compiler with the new GC API. Refactoring was also done to separate the responsibilities of Fiber, Event, Scheduler, and EventLoop.

Arithmetic operators added

Crystal 0.27.0 adds the wrapping arithmetic operators &+, &-, and &* for addition, subtraction, and multiplication with wrapping. In one of the next versions, the regular operators will raise on overflow, which will allow users to trust the result of operations when reaching the limits of the representable range.

Collection names changed

There are some breaking changes in the Indexable module and Hash. Indexable#at was dropped in favor of Indexable#fetch. The APIs of Indexable and Hash are now more aligned, including the ways of dealing with default values in case of a missing key. If no default value is needed, the #[] method must be used; this is true even for Hash, since Hash#fetch(key) was dropped.

Time changes

There are breaking changes to support cleaner and more portable names. All references to "epoch" have been replaced with "unix": Time#epoch was renamed to Time#to_unix, #epoch_ms to #to_unix_ms, and #epoch_f to #to_unix_f. ISO calendar week numbers are now supported, and changing the time zone while maintaining the wall clock time is also easy.

File changes

Working with temporary files and directories used to require the Tempfile class. The creation of such files is now handled by File.tempfile or File.tempname. This change also tidies up the usage of prefix, suffix, and the default temp path.

Platform support

An issue was detected in the Boehm GC while running in Google Cloud. The fix will ship in the next version of the GC; meanwhile, a patch is included in Crystal 0.27.0. There is also some preparation for Windows support related to processes, forking, file handlers, and arguments. Other fixes include signals between forked processes and how IO on a TTY behaves in different environments.

Networking changes

HTTP::Server#bind_ssl was deprecated since #bind_tls was introduced; it wasn't removed outright, to avoid a breaking change. The bindings for OpenSSL were updated to support v1.1.1.

Compiler changes

Support for annotations inside enums was added. Calling super now by default forwards all the method arguments, even if the call was expanded by macros in this version. When using splat arguments, the types of the values can be restricted, and so can the whole Tuple or NamedTuple expected as splatted arguments; a bug that appeared when these restrictions were used has been fixed.

For a complete list of changes, visit the Crystal changelog.

Read next:

- WebAssembly – Trick or Treat?
- Mozilla shares plans to bring desktop applications, games to WebAssembly and make deeper inroads for the future web
- The D language front-end support finally merged into GCC 9

GoCity: Turn your Golang program into a 3D city

Prasad Ramesh
05 Nov 2018
2 min read
A team from the Federal University of Minas Gerais (UFMG) created GoCity, a Code City metaphor for visualizing Golang source code. You simply paste the URL of a GitHub repository and GoCity plots it out as a city with districts and buildings, letting you visualize your code as a neat three-dimensional city.

GoCity represents a program written in Go as a city:

- Folders are represented as districts.
- Files in the program are shown as buildings of varying heights, shapes, and sizes.
- Structs are represented as buildings stacked on top of their files.

Characteristics of the structures

The number of lines of source code (LOC) determines the building color; higher values make the building darker. The number of variables (NOV) in the program affects the building's base size. The number of methods (NOM) in the program affects the height of the building.

The UI/front-end

The UI for GoCity is built with React and uses babylon.js to plot the 3D structures. The source code for the front-end is available in the front-end branch on GitHub.

What the users are saying

A comment on Hacker News by user napsterbr reads: "Cool! Interestingly I always use a similar metaphor on my own projects. For instance, the event system may be seen as the roads linking different blocks (domains), each with their own building (module)."

The Kubernetes repository does seem to take a toll, as it forms a lot of buildings spaced out: "The granddaddy of them all, Kubernetes, takes quite a toll performance-wise. https://go-city.github.io/#/github.com/kubernetes/kubernetes." But as another user, jackwilsdon, pointed out on Reddit: "Try github.com/golang/go if you want some real browser-hanging action!"

For more details, visit the GitHub repository. For an interactive live demonstration, visit the Go City website.

Read next:

- Golang plans to add a core implementation of an internal language server protocol
- Why Golang is the fastest growing language on GitHub
- GoMobile: GoLang's Foray into the Mobile World


Salesforce’s open sourcing Centrifuge: A library for accelerating JVM restarts

Amrata Joshi
02 Nov 2018
3 min read
Yesterday, Paymon Teyer, a principal member of technical staff at Salesforce, introduced Centrifuge, a library and framework for scheduling and running startup and warmup tasks. It focuses mainly on accelerating JVM restarts, and it provides an interface for implementing warmup tasks such as calling an HTTP endpoint, populating caches, and handling pre-compilation tasks for generated code.

When the JVM restarts in a production environment, server performance takes a hit: the JVM has to reload classes, trigger reflection inflation, rerun its JIT compiler on any hot code paths, reinitialize objects and dependency injections, and repopulate component caches. The performance impact of JVM restarts can be minimized by allowing individual components to execute arbitrary warmup logic themselves after a cold start. Centrifuge was created to make this possible, with the goal of executing warmup tasks while managing resource usage and handling failures.

Centrifuge allows users to register and configure warmup tasks either declaratively or programmatically. It also schedules tasks, manages and monitors threads, handles exceptions and retries, and provides status reports. Centrifuge supports the following two categories of warmup tasks.

Blocking tasks

Blocking tasks prevent the application from returning to the available server pool until they complete. These tasks must be executed for the application to function properly, for example, executing source code generators or populating a cache from storage to meet SLA requirements.

Non-blocking tasks

Non-blocking tasks execute asynchronously and don't interfere with the application's readiness. These tasks do work that is needed after an application restarts but is not immediately required for the application to be in a consistent state. Examples include warmup logic that triggers JIT compilation on code paths or eagerly triggers dependency injection and object creation.

How to Use Centrifuge?

First, include a Maven dependency for Centrifuge in the POM. Then implement the Warmer interface for each of the warmup tasks; a warmer class should have an accessible default constructor and should not swallow InterruptedException. Warmers can be registered either programmatically in code or declaratively in a configuration file. To add and remove warmers without recompiling, register them declaratively in a configuration file, then load the configuration file into Centrifuge.

How is the HTTP Warmer useful?

Centrifuge provides a simple HTTP warmer that calls HTTP endpoints to trigger the code paths exercised by the resources implementing those endpoints. If an application provides a homepage URL that, when called, connects to a database, populates caches, and so on, the HTTP warmer can warm those code paths.

Read more about Centrifuge on Salesforce's official website.

Read next:

- About Java Virtual Machine – JVM Languages
- Tuning Solr JVM and Container
- Concurrency programming 101: Why do programmers hang by a thread?


A kernel vulnerability in Apple devices gives access to remote code execution

Prasad Ramesh
01 Nov 2018
2 min read
A heap buffer overflow vulnerability was found in Apple's XNU OS kernel by Kevin Backhouse. An exploit can potentially cause any iOS or macOS device on the same network to reboot, without any user interaction. Apple has classified this kernel vulnerability as a remote code execution (RCE) vulnerability, since it may be possible to exploit the buffer overflow to execute arbitrary code in the kernel. The vulnerability is fixed in iOS 12 and macOS Mojave.

The vulnerability is caused by a heap buffer overflow in the networking code within the XNU kernel. XNU is a kernel developed by Apple and used in both iOS and macOS, so most iPhones, iPads, and MacBooks are affected. To trigger it, an attacker merely needs to send a malicious IP packet to the target device's IP address. The vulnerability is triggered only if the attacker is on the same network as the target, which becomes easy if you're using a free WiFi network in a coffee shop. Because the vulnerability is in the kernel, anti-virus software cannot protect your device. The attacker can control the size and content of the heap buffer, giving them the potential to gain remote code execution on a device.

There are two known mitigations against this kernel vulnerability:

- Enabling stealth mode in the macOS firewall prevents the attack from taking place.
- Don't use public WiFi networks, as there is a high risk of being attacked.

These OS versions and devices are vulnerable:

- All devices with Apple iOS 11 and earlier
- All Apple macOS High Sierra devices up to 10.13.6 (patched in security update 2018-001)
- Devices using Apple macOS Sierra up to 10.12.6 (patched in security update 2018-005)
- Apple OS X El Capitan and earlier devices

The kernel vulnerability was reported by Kevin Backhouse to Apple in time for the fix to roll out with iOS 12 and macOS Mojave. The vulnerabilities were announced on October 30. For more details, visit the LGTM website.

Read next:

- Final release for macOS Mojave is here with new features, security changes and a privacy flaw
- The kernel community attempting to make Linux more secure
- Apple has introduced Shortcuts for iOS 12 to automate your everyday tasks

Neuron: An all-inclusive data science extension for Visual Studio

Prasad Ramesh
01 Nov 2018
3 min read
A team of students from Imperial College London developed a new Visual Studio extension called neuron, which aims to be an all-inclusive add-on for data science tasks in Visual Studio.

Using neuron is pretty simple. You begin with a regular Python or R code file in a window; beside the code is neuron's window, as shown in the following screenshot. It takes up half of the screen but starts as a blank page. When you run your code snippets, the output shows up as interactive cards. Neuron can display outputs that are plain text, tables, images, graphs, or maps.

Source: Microsoft Blog

You can find neuron in the Visual Studio Marketplace. On installation, a button is visible whenever you have a supported file open. Neuron uses the Jupyter Notebook in the background; given its popularity, Jupyter Notebook may already be installed on your computer, and if not, you will be prompted to install it. Neuron supports more output types than the Jupyter Notebook: you can also generate 3D graphs, maps, LaTeX formulas, markdown, HTML, and static images with neuron. The output is displayed in a card on the right-hand side, which can be resized, moved around, or expanded into a separate window. Neuron also keeps track of the code snippets associated with each card.

Why was neuron created?

Data scientists come from various backgrounds and use a set of standard tools like Python, its libraries, and the Jupyter Notebook. Microsoft approached the students from Imperial College London to integrate this set of tools into one single workspace: a Visual Studio extension that lets users run data analysis operations without breaking their current workflow. Neuron combines the advantages of an intelligent IDE, Visual Studio, with the rapid execution and visualization of the Jupyter Notebook, all in a single window.

It is not a new idea

Neuron is not an entirely new idea, though.

https://twitter.com/jordi_aranda/status/1057712899542654976

Comments on Reddit also suggest similar tools already exist in other IDEs. Reddit user kazi1 stated: "Seems more or less the same as Microsoft's current Jupyter extension (which is pretty meh). This seems like it's trying to reproduce the work already done by Atom's Hydrogen extension, why not contribute there instead." Another Redditor, procedural_ape, said: "This looks like an awesome extension but shame on Microsoft for acting like this is their own fresh, new idea. Spyder has had this functionality for a while."

For more details, visit the Microsoft Blog; a demo is available on GitHub.

Read next:

- Visual Studio code July 2018 release, version 1.26 is out!
- MIT plans to invest $1 billion in a new College of computing that will serve as an interdisciplinary hub for computer science, AI, data science
- Microsoft releases the Python Language Server in Visual Studio


GitHub October 21st outage RCA: How prioritizing ‘data integrity’ launched a series of unfortunate events that led to a day-long outage

Natasha Mathur
31 Oct 2018
7 min read
Yesterday, GitHub posted the root-cause analysis of its outage that took place on October 21st. The outage started at 23:00 UTC on 21st October and left the site degraded until 23:00 UTC on 22nd October. Although the backend Git services were up and running during the outage, multiple internal systems were affected: users were unable to log in or submit Gists or bug reports, outdated files were being served, branches went missing, and so forth. Moreover, GitHub couldn't serve webhook events or build and publish GitHub Pages sites.

"At 22:52 UTC on October 21, routine maintenance work to replace failing 100G optical equipment resulted in the loss of connectivity between our US East Coast network hub and our primary US East Coast data center. Connectivity between these locations was restored in 43 seconds, but this brief outage triggered a chain of events that led to 24 hours and 11 minutes of service degradation," mentioned the GitHub team.

GitHub uses MySQL to store GitHub metadata and operates multiple MySQL clusters of different sizes. Each cluster consists of up to dozens of read replicas that help GitHub store non-Git metadata. These clusters are how GitHub's applications are able to provide pull requests and issues, manage authentication, coordinate background processing, and serve additional functionality beyond raw Git object storage. For improved performance, GitHub applications direct writes to the relevant primary for each cluster but delegate read requests to a subset of replica servers.

Orchestrator is used to manage GitHub's MySQL cluster topologies and to handle automated failover. Orchestrator considers a number of factors during this process and is built on top of Raft for consensus. In some cases, Orchestrator can implement topologies that the applications are unable to support, which is why it is crucial to keep Orchestrator's configuration aligned with application-level expectations.

Here's a timeline of the events that took place on October 21st and led to the outage.

22:52 UTC, 21st Oct

Orchestrator began a process of leadership deselection as per the Raft consensus. After Orchestrator reorganized the US West Coast database cluster topologies and connectivity was restored, write traffic was directed to the new primaries in the West Coast site. The database servers in the US East Coast data center contained writes that had not been replicated to the US West Coast facility, so the database clusters in both data centers now included writes that were not present in the other. Because of this, the GitHub team was unable to safely fail the primaries back over to the US East Coast data center.

22:54 UTC, 21st Oct

GitHub's internal monitoring systems began to generate alerts indicating that the systems were experiencing numerous faults. By 23:02 UTC, GitHub engineers had found that the topologies of numerous database clusters were in an unexpected state, and the Orchestrator API displayed a database replication topology containing servers only from the US West Coast data center.

23:07 UTC, 21st Oct

The responding team manually locked the deployment tooling to prevent any additional changes from being introduced. At 23:09 UTC, the site was placed into yellow status; at 23:11 UTC, the incident coordinator changed the site status to red.

23:13 UTC, 21st Oct

As the issue had affected multiple clusters, additional engineers from GitHub's database engineering team started investigating the current state, to determine the actions needed to manually configure a US East Coast database as the primary for each cluster and rebuild the replication topology. This was quite tough, as the West Coast database clusters had by then ingested writes from GitHub's application tier for nearly 40 minutes. Engineers decided that preserving the 30+ minutes of data written to the US West Coast data center ruled out options other than failing forward, in order to keep user data safe, so they further extended the outage to ensure the consistency of that data.

23:19 UTC, 21st Oct

After querying the state of the database clusters, GitHub stopped running jobs that write metadata about things such as pushes. This led to partially degraded site usability, as webhook delivery and GitHub Pages builds were paused. "Our strategy was to prioritize data integrity over site usability and time to recovery," said the GitHub team.

00:05 UTC, 22nd Oct

Engineers started resolving data inconsistencies and implementing failover procedures for MySQL. The recovery plan included failing forward, synchronization, falling back, and then churning through backlogs before returning to green. The time needed to restore multiple terabytes of backup data caused the process to take hours, as decompressing, checksumming, preparing, and loading large backup files onto newly provisioned MySQL servers took a lot of time.

00:41 UTC, 22nd Oct

A backup process was started for all affected MySQL clusters, and multiple teams of engineers investigated ways to speed up the transfer and recovery time.

06:51 UTC, 22nd Oct

Several clusters completed restoration from backups in the US East Coast data center and started replicating new data from the West Coast. This resulted in slow site load times for pages executing a write operation over a cross-country link. The GitHub team identified ways to restore directly from the West Coast to overcome the throughput restrictions caused by downloading from off-site storage. The status page was updated to set an expectation of two hours as the estimated recovery time.

07:46 UTC, 22nd Oct

GitHub published a blog post with more information. "We apologize for the delay. We intended to send this communication out much sooner and will be ensuring we can publish updates in the future under these constraints," said the GitHub team.

11:12 UTC, 22nd Oct

All database primaries were established in the US East Coast again. This made the site far more responsive, as writes were now directed to a database server located in the same physical data center as GitHub's application tier. While this improved performance substantially, there were dozens of database read replicas lagging behind the primary, and these delayed replicas caused users to see inconsistent data on GitHub.

13:15 UTC, 22nd Oct

GitHub.com started to experience peak traffic load, and the engineers began provisioning additional MySQL read replicas in the US East Coast public cloud.

16:24 UTC, 22nd Oct

Once the replicas were in sync, a failover to the original topology was conducted, addressing the immediate latency and availability concerns. The service status was kept red while GitHub began processing the accumulated backlog of data, in order to prioritize data integrity.

16:45 UTC, 22nd Oct

At this point, engineers had to balance processing the backlog against the risk of overloading GitHub's ecosystem partners with notifications. There were over five million hook events along with 80 thousand Pages builds queued. "As we re-enabled processing of this data, we processed ~200,000 webhook payloads that had outlived an internal TTL and were dropped. Upon discovering this, we paused that processing and pushed a change to increase that TTL for the time being," mentioned the GitHub team. To avoid degrading the reliability of its status updates, GitHub remained in degraded status until the entire backlog of data had been processed.

23:03 UTC, 22nd Oct

All pending webhooks and Pages builds had been processed, and the integrity and proper operation of all systems had been confirmed. The site status was updated to green.

Apart from this, GitHub has identified a number of technical initiatives and continues to work through an extensive post-incident analysis process internally. "All of us at GitHub would like to sincerely apologize for the impact this caused to each and every one of you. We're aware of the trust you place in GitHub and take pride in building resilient systems that enable our platform to remain highly available. With this incident, we failed you, and we are deeply sorry," said the GitHub team.

For more information, check out the official GitHub blog post.

Read next:

- Developers rejoice! Github announces Github Actions, Github connect and much more to improve development workflows
- GitHub is bringing back Game Off, its sixth annual game building competition, in November
- GitHub comes to your code Editor; GitHub security alerts now have machine intelligence