
Tech News - Programming

573 Articles

The D language front-end support finally merged into GCC 9

Amrata Joshi
30 Oct 2018
2 min read
The D language front-end was finally merged into GNU Compiler Collection (GCC) 9 yesterday, as reported by Phoronix. The front-end is written in C++ and supports the D 2.0 runtime and shared libraries. Iain Buclaw, in his e-mail thread titled 'Submission of D Front End', says, "The front-end is split into two parts. First being a standalone D language implementation that does the source file lexing, parsing and semantic analysis. Second being the binding layer that sits between GCC and the DMD front-end, doing the actual code generation."

Approval of the plan for merging the D front-end into GCC 9

According to Phoronix, the GCC Steering Committee approved the plan to add the D front-end in June last year. However, it took the project more than a year to land: the change arrived as a set of 13 patches, amounting to nearly 800k lines of code, which went through several revisions to get into adequate shape for merging. Iain Buclaw from the GDC project posted the patches after carefully cleaning them up and addressing the feedback he had previously received. The patch series is available on gcc-patches.

Updates on the future plan

As per a report by Phoronix, Richard Biener of SUSE announced on 17th October that GCC's "stage 1" development will shift to "stage 3" on 11 November, which means open feature development is over and the focus moves to bug fixing. 6 January 2019 is the tentative date to begin the fixes. GCC 9.1, the initial stable GCC 9 compiler release with GDC support, is expected to be out around the end of the first quarter of 2019.

Read more about this news on the official site of Phoronix.

GCC 8.1 Standards released!
What is a micro frontend?
Frontend development with Bootstrap 4


Kotlin 1.3 released with stable coroutines, multiplatform projects and more

Prasad Ramesh
30 Oct 2018
3 min read
In the Kotlin 1.3 release, coroutines are now stable, scalability is better, and Kotlin/Native Beta is added.

Coroutines are stable in Kotlin 1.3

Coroutines provide a way to write non-blocking asynchronous code that's easy to understand. They are a useful tool for activities ranging from offloading work onto background workers to implementing complicated network protocols. The kotlinx.coroutines library hits 1.0. It provides a solid foundation for managing asynchronous jobs at various scales, including composition, cancellation, exception handling, and UI-specific use cases.

Kotlin/Native Beta

Kotlin/Native makes use of LLVM to compile Kotlin sources into standalone binaries without any VM required. Various operating systems and CPU architectures are supported, including iOS, Linux, Windows, and Mac, and the support extends even to WebAssembly and embedded systems like STM32. Kotlin/Native has fully automatic memory management and can interoperate with C, Objective-C, and Swift. It exposes platform APIs like Core Foundation, POSIX, and any other native library of choice. The Kotlin/Native runtime promotes immutable data and blocks any attempt to share unprotected mutable state between threads. Threads as such don't exist for Kotlin/Native; they are abstracted away as a low-level implementation detail and replaced by workers, which are a safe and manageable way of achieving concurrency.

Multiplatform projects in Kotlin 1.3

Kotlin supports the JVM, Android, JavaScript, and Native, so code can be reused, saving effort and time that can be spent on other tasks. The multiplatform libraries in Kotlin 1.3 cover everyday tasks such as HTTP, serialization, and managing coroutines. Using these libraries is the easiest way to write multiplatform code. You can also create custom multiplatform libraries which wrap platform-specific dependencies into a common API.

Tooling support for Kotlin/Native and Multiplatform

Kotlin 1.3 has tooling support for Kotlin/Native and multiplatform projects. This is available in IntelliJ IDEA Community Edition, IntelliJ IDEA Ultimate, and Android Studio. All of the code editing features such as error highlighting, code completion, navigation, and refactoring are available in all these IDEs.

Ktor 1.0 Beta

Ktor, a framework for connected applications that implements the entire HTTP stack asynchronously using coroutines, has reached Beta.

Other features

Some other features in the Kotlin 1.3 release include experimental support for inline classes, incremental compilation for Kotlin/JS, and unsigned integers. This release also features a sequence debugger for visualizing lazy computations, contracts to improve static analysis for library calls, and a no-arg entry point to provide a cleaner experience for new users.

To know more about all the changes, visit the changelog.

KotlinConf 2018: Kotlin 1.3 RC out and Kotlin/Native hits beta
Kotlin/Native 0.8 recently released with safer concurrent programming
4 operator overloading techniques in Kotlin you need to know


What to expect in ASP.NET Core 3.0

Prasad Ramesh
30 Oct 2018
2 min read
ASP.NET Core 3.0 will come with some changes in the way projects work with frameworks. The .NET Core integration will be tighter, and third-party open source integration will change.

Changes to shared frameworks in ASP.NET Core 3.0

In ASP.NET Core 1.0, packages were referenced as just packages. From ASP.NET Core 2.1, this was available as a .NET Core shared framework. ASP.NET Core 3.0 aims to reduce issues when working with a shared framework. This change removes some of the Json.NET (Newtonsoft.Json) and Entity Framework Core (Microsoft.EntityFrameworkCore.*) components from the ASP.NET Core 3.0 shared framework. For areas in ASP.NET Core that depend on Json.NET, there will be packages that support the integration, and the default areas will be updated to use in-box JSON APIs. Also, Entity Framework Core will be shipped as "pure" NuGet packages.

Shift to .NET Core from .NET Framework

In future releases, the .NET Framework will get fewer of the new features that come to .NET Core. This change is made so that existing applications don't break due to changes. To leverage the features coming to .NET Core, ASP.NET Core will only run on .NET Core starting from version 3.0. Developers currently using ASP.NET Core on .NET Framework can continue to do so through the LTS support period of August 21, 2021.

Third-party components will be filtered

Third-party components will be removed, but Microsoft will support the open source community with integration APIs, contributions to existing libraries by Microsoft engineers, and project templates to ensure smooth integration of these components. Work is also being done on streamlining the experience for building HTTP APIs, and on a new API client generation system.

For more details, visit the Microsoft website.

.NET Core 3.0 and .NET Framework 4.8 more details announced
.NET Core 2.0 reaches end of life, no longer supported by Microsoft
Microsoft's .NET Core 2.1 now powers Bing.com


Qt Design Studio 1.0 released with Qt Photoshop Bridge, timeline-based animations, and Qt Live Preview

Natasha Mathur
26 Oct 2018
2 min read
The Qt team released Qt Design Studio 1.0 yesterday. Qt Design Studio 1.0 brings features such as Qt Photoshop Bridge, timeline-based animations, and Qt Live Preview, among others. Qt Design Studio is a UI design and development environment which allows designers and developers around the world to rapidly prototype as well as develop complex and scalable UIs. Let's discuss the features of Qt Design Studio 1.0 in detail.

Qt Photoshop Bridge

Qt Design Studio 1.0 comes with Qt Photoshop Bridge, which allows users to import their graphics designs from Photoshop. Users can also create re-usable components directly via Photoshop, and exporting directly to specific QML types is also allowed. Other than that, Qt Photoshop Bridge comes with an enhanced import dialog as well as basic merging capabilities.

Timeline-based animations

Timeline-based animations in Qt Design Studio 1.0 come with a timeline-/keyframe-based editor. This editor allows designers to easily create pixel-perfect animations without having to write a single line of code. You can also map and organize the relationship between timelines and states to create smooth transitions from state to state. Moreover, selecting multiple keyframes is also enabled.

Qt Live Preview

Qt Live Preview lets you run and preview your application or UI directly on the desktop, Android devices, as well as Boot2Qt devices. You can see how your changes affect the UI live on your target device, and it also comes with zoom in and out functionality.

Other features

You can insert a Qt 3D Studio element and preview it on the end target device with Qt Live Preview. There is a Qt Safe Renderer integration that lets you use Safe Renderer items and map them in your UI. You can use states and timelines to create screen flows and transitions.

Qt Design Studio is free; however, you will need a commercial Qt developer license to distribute UIs created with Qt Design Studio. For more information, check out the official Qt Design Studio blog.

Qt 3D Studio 2.1 released with new sub-presentations, scene preview, and runtime improvements
Qt creator 4.8 beta released, adds language server protocol
Qt Creator 4.7.0 releases!


Rust 1.30 releases with procedural macros and improvements to the module system

Sugandha Lahoti
26 Oct 2018
3 min read
Yesterday, the Rust team released a new version of the Rust systems programming language, known for its safety, speed, and concurrency. Rust 1.30 comes with procedural macros, module system improvements, and more.

It has been an incredibly successful year for the Rust programming language in terms of popularity. It jumped from being the 46th most popular language on GitHub last year to the 18th position this year. The 2018 RedMonk Programming Language Rankings marked Rust's entry into their Top 25 list, and it topped the list of most loved programming languages in the 2018 Stack Overflow developer survey for the third year in a row. Still not satisfied? Here are 9 reasons why Rust programmers love Rust.

Key improvements in Rust 1.30

Procedural macros are now available

Procedural macros allow for more powerful code generation. Rust 1.30 introduces two kinds of advanced macros, "attribute-like procedural macros" and "function-like procedural macros." Attribute-like macros are similar to custom derive macros, but instead of generating code only for the #[derive] attribute, they allow you to create new, custom attributes of your own. They are also more flexible: derive only works for structs and enums, but attributes can go in other places, like functions. Function-like macros define macros that look like function calls. Developers can now also bring macros into scope with the use keyword.

Updates to the module system

The module system has received significant improvements to make it more straightforward and easier to use. In addition to bringing macros into scope, the use keyword has two other changes. First, external crates are now in the prelude. Previously, moving a function into a submodule could break some of its code. Now, on moving a function, the compiler will check the first part of the path, see if it's an extern crate, and if it is, use it regardless of where you are in the module hierarchy. Second, use supports bringing items into scope with paths starting with crate. Previously, paths specified after use would always start at the crate root, but paths referring to items directly would start at the local path, meaning the behavior of paths was inconsistent. Now, the crate keyword at the start of a path indicates that the path should start at the crate root. Combined, these changes lead to a more straightforward understanding of how paths resolve.

Other changes

Developers can now use keywords as identifiers using the raw identifiers syntax (r#), e.g. let r#for = true;
Using anonymous parameters in traits is now deprecated with a warning and will be a hard error in the 2018 edition.
Developers can now match visibility keywords (e.g. pub, pub(crate)) in macros using the vis specifier.
Non-macro attributes now allow all forms of literals, not just strings. Previously, you would write #[attr("true")]; now you can write #[attr(true)].
Developers can now specify a function to handle a panic in the Rust runtime with the #[panic_handler] attribute.

These are just a select few updates. For more information and code examples, go through the Rust Blog.

3 ways to break your Rust code into modules
Rust as a Game Programming Language: Is it any good?
Rust 2018 RC1 now released with Raw identifiers, better path clarity, and other changes


Bitbucket goes down for over an hour

Natasha Mathur
25 Oct 2018
2 min read
Bitbucket, a web-based version control repository hosting service that allows users to manage and share their Git repositories as a team, suffered an outage today. As per Bitbucket's incident page, the outage started at 8 AM UTC today and lasted for over an hour, until 9:02 AM UTC, before the service finally got back to its normal state. The Bitbucket team tweeted regarding the outage, saying:

https://twitter.com/BitbucketStatus/status/1055372361036312576

It was only earlier this week that GitHub went down for a complete day due to a failure in its data storage system. In GitHub's case, there was no obvious way to tell if the site was down, as the website's backend git services were working; however, users were not able to log in, outdated files were being served, branches went missing, and users were unable to submit Gists, bug reports, posts, and more. Bitbucket, however, was completely broken during the entirety of the outage, as all the services, from pipelines to actually getting at the code, were down. It was clear that the site was not working, as it showed an "Internal Server" error.

Bitbucket hasn't spoken out regarding the real cause of the outage. However, as per the Bitbucket status page, the site had been experiencing elevated error rates and degraded functionality for the past two days, which could be a possible reason for the outage. After the outage was over, Bitbucket tweeted about the recovery, saying:

https://twitter.com/BitbucketStatus/status/1055384158392922112

As the services were down, developers and coders around the world took to Twitter to vent their frustration.

https://twitter.com/HeinrichCoetzee/status/1055370890127519744
https://twitter.com/montakurt/status/1055372412651495424
https://twitter.com/CapAmericanec/status/1055370560606294016

Developers rejoice! Github announces Github Actions, Github connect and much more to improve development workflows
GitHub is bringing back Game Off, its sixth annual game building competition, in November
GitHub comes to your code Editor; GitHub security alerts now have machine intelligence

The LLVM project is ditching SVN for GitHub. The migration to GitHub has begun.

Prasad Ramesh
25 Oct 2018
2 min read
The official LLVM monorepo was published on GitHub on Tuesday, so now is a good time to modify your workflows to use the monorepo as soon as possible. Any current SVN-based workflows will be supported for at most one more year.

The move from SVN to GitHub for LLVM has been under consideration for a long time. After positive responses in the mailing threads in favor of a move to GitHub, LLVM has finally decided to set the migration plan in motion. Two round-table meetings were held this week with the developers to discuss the SVN to GitHub migration. Below are some highlights of these meetings.

The most important outcome from the meetings is an agreed-upon timeline for completing the transition. The latest monorepo prototype will be moved over to the LLVM organization GitHub project and has now begun mirroring the current SVN repository. Commits will still be made to the SVN repository just as they are currently.

All community members are advised to begin migrating workflows that rely on SVN or the current git mirrors to the new monorepo. CI jobs or internal mirrors that pull from SVN or http://llvm.org/git/*.git should be modified to pull from the new monorepo instead, and changed to work with the new repository layout. Developers are advised to begin using the new monorepo for development. The provided scripts should help with committing code; they enable committing to SVN from the monorepo without having to use git-svn.

In a year, commit access to the SVN server will be turned off and commit access to the monorepo will be enabled. At that point, the monorepo will be the only source for the project.

Keep an eye on the LLVM monorepo GitHub repository. There is a getting started guide for working with a GitHub monorepo, and for more details you can take a look at the mailing list.

LLVM will be relicensing under Apache 2.0 start of next year
A libre GPU effort based on RISC-V, Rust, LLVM and Vulkan by the developer of an earth-friendly computer
LLVM 7.0.0 released with improved optimization and new tools for monitoring


GitLab 11.4 is here with merge request reviews and many more features

Prasad Ramesh
23 Oct 2018
3 min read
GitLab 11.4 was released yesterday with new features like merge request reviews, feature flags, and many more.

Merge request reviews in GitLab 11.4

This feature allows a reviewer to draft as many comments in a merge request as they prefer, ensuring consistency, and then submit them all as a single action. A reviewer can spread their work over many sessions since the drafts are saved to GitLab. Once submitted, the draft comments appear as normal individual comments. This gives individual team members flexibility: they can review code the way they want while remaining compatible with the rest of the team.

Create and toggle feature flags for applications

This alpha feature gives users the ability to create and manage feature flags for software directly in the product. It is as simple as creating a new feature flag, validating it using simple API instructions, and then controlling the behavior of the software in the field via the feature flag within GitLab. Feature flags offer a feature toggle system for applications.

File tree for browsing merge request diffs

The file tree summarizes both the structure and size of a change. It is similar to diff-stats, which provides an overview of the change, thereby improving navigation between diffs. Search allows reviewers to limit a code review to a subset of files, which simplifies reviews by specialists.

Suggest code owners as merge request approvers

It is not always obvious which person is best suited to review changes. Code owners are now shown as suggested approvers when a merge request is created or edited, making it easy to assign the right person.

New user profile page overview

GitLab 11.4 introduces a redesigned profile page overview. It shows your activity via the familiar but shortened contribution graph, and displays your latest activities and most relevant personal GitLab projects.

Set and show user status message within the user menu

Setting your status is even simpler with GitLab 11.4. A new "Set status" item in the user menu provides a fresh modal allowing users to set and clear their status right within context. In addition, the status you set is also shown in your user menu, on top of your full name and username.

There are some more features like:

Move the ability to use includes in .gitlab-ci.yml from Starter to Core
Run all jobs only/except for modifications on a path/file
Add timed incremental rollouts to Auto DevOps
Support Kubernetes RBAC for GitLab managed apps
Auto DevOps support for RBAC
Support PostgreSQL DB operations for Auto DevOps
Other improvements for searching projects, UX improvements, and Geo improvements

For a complete list of features, visit the GitLab website.

GitLab 11.3 released with support for Maven repositories, protected environments and more
GitLab raises $100 million, Alphabet backs it to surpass Microsoft's GitHub
GitLab is moving from Azure to Google Cloud in July


Mio, a header-only C++11 memory mapping library, released!

Amrata Joshi
22 Oct 2018
3 min read
Mio, a cross-platform, header-only C++11 memory mapping library with an MIT license, was released yesterday. Mio has been created with the objective of being easy to integrate into any C++ project: it offers memory-mapped file IO without the need to pull in the Boost libraries. Users have faced issues with the Boost.Iostreams library because it does not handle memory mapping efficiently, and Mio has a number of advantages over it.

Advantages of Mio over Boost.Iostreams

With Mio, it is possible to establish a memory mapping with an already open file handle/descriptor, which is not possible with Boost.Iostreams.
Mio makes the memory mapping process easier by accepting any offset and finding the nearest page boundary, whereas Boost.Iostreams requires the user to pick offsets exactly at page boundaries, which can lead to errors.
Boost.Iostreams implements memory-mapped file IO with a std::shared_ptr to provide shared semantics, even when they are not needed, which incurs the overhead of a heap allocation that may not be required. Mio solves this problem with two use cases: one that is move-only, a zero-cost abstraction over the system-specific mapping functions, and one similar to its Boost.Iostreams counterpart, with shared semantics.

How does the memory mapping in Mio work?

The three ways to map a file into memory are:

Use the constructor, which throws on failure:
mio::mmap_source mmap(path, offset, size_to_map);

Use the factory function:
std::error_code error;
mio::mmap_source mmap = mio::make_mmap_source(path, offset, size_to_map, error);

Use the map member function:
std::error_code error;
mio::mmap_source mmap;
mmap.map(path, offset, size_to_map, error);

In each case, you can either provide some string type for the file's path or simply use an existing, valid file handle. Mio does not check whether the provided file descriptor has the same access permissions as the desired mapping, so the mapping process might fail. Such errors are reported via the std::error_code out parameter which is passed to the mapping function.

CMake: A build system to help Mio

As Mio is a header-only library, it has no compiled components. The CMake build system assists Mio by providing easy testing, installation, and subproject composition on many platforms and operating systems.

In testing
When Mio is configured as the highest-level CMake project, its suite of test executables is built by default. Mio's test executables are integrated with CTest, the CMake test driver program.

In installation
CMake's find_package intrinsic function helps Mio's build system provide an installation target and support for downstream consumption in an arbitrary location, which can be specified by defining CMAKE_INSTALL_PREFIX at configuration time. In the absence of a user specification, CMake will install Mio to a conventional location based on the platform's operating system.

Read more about Mio in detail on the official GitHub page.

Google releases Oboe, a C++ library to build high-performance Android audio apps
Graph Nets – DeepMind's library for graph networks in Tensorflow and Sonnet
Ebiten 1.8, a 2D game library in Go, is here with experimental WebAssembly support and newly added APIs
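As a concrete illustration of the factory-function mapping style described above, here is a minimal, self-contained sketch. It assumes the single-header mio is reachable as mio/mmap.hpp and that mio::map_entire_file can be passed as the length to map the whole file, as the project's README suggests; the file name data.txt is a placeholder.

#include <mio/mmap.hpp>   // single-header mio; include path assumed from the project repository
#include <system_error>
#include <algorithm>
#include <cstdio>

int main()
{
    std::error_code error;
    // Factory-function variant: failures are reported through error instead of an exception.
    mio::mmap_source mmap = mio::make_mmap_source("data.txt", 0, mio::map_entire_file, error);
    if (error)
    {
        std::fprintf(stderr, "mapping failed: %s\n", error.message().c_str());
        return 1;
    }
    // The mapping behaves like a read-only byte range, so standard algorithms work on it.
    const long newlines = static_cast<long>(std::count(mmap.begin(), mmap.end(), '\n'));
    std::printf("mapped %zu bytes containing %ld newlines\n", mmap.size(), newlines);
    return 0;
}

The same program could equally use the throwing constructor or the map member function shown above; the error_code variant simply keeps an exception-free code path available.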


Python 3.7.1 and Python 3.6.7 released

Prasad Ramesh
22 Oct 2018
4 min read
Python 3.7.1 and 3.6.7 are maintenance releases for Python 3.7 and Python 3.6 respectively. They contain a variety of fixes.

New features in Python 3.7

Python 3.7 was released in June and is the next branch after Python 3.6. Some of the new features in Python 3.7 are:

Postponed evaluation of annotations
By postponing the evaluation of annotations, two issues were addressed: annotations were restricted to using names already available in the current scope (that is, they did not support forward references), and annotating source code had a negative effect on Python start-up time.

Legacy C locale coercion
Determining a sensible default strategy for handling the 7-bit ASCII text encoding implied by the default C or POSIX locale on non-Windows platforms is a long-standing problem in Python 3. With PEP 538, the default interpreter command line interface is updated to automatically coerce that locale to an available UTF-8 based locale. Automatically setting LC_CTYPE this way means that both the core interpreter and locale-aware C extensions will assume UTF-8 as the default text encoding.

Forced UTF-8 runtime mode
The new -X utf8 command line option and PYTHONUTF8 environment variable can now be used to enable the CPython UTF-8 mode. In UTF-8 mode, CPython ignores the locale settings and uses the UTF-8 encoding by default.

Built-in breakpoint()
The new built-in breakpoint() function is included as an easy and consistent way to enter the Python debugger. It calls sys.breakpointhook().

A new C API for thread-local storage
A new Thread Specific Storage (TSS) API is added to CPython, which supersedes the use of the existing TLS API within the CPython interpreter and deprecates that API.

Customization of access to module attributes
Python 3.7 allows defining __getattr__() on modules; it is called whenever a module attribute is not found otherwise. Defining __dir__() on modules is now also allowed.

New time functions with nanosecond resolution
The range of clocks in modern systems can exceed the precision of the floating point number returned by the time.time() function. For greater precision, six new nanosecond variants are added.

Show DeprecationWarning in __main__
The default handling of DeprecationWarning has changed in Python 3.7. These warnings are shown by default again, but only when the code triggering them is running directly in the __main__ module.

Core support for the typing module and generic types
Earlier, PEP 484 ruled out any changes to the core CPython interpreter. Since type hints and the typing module are now extensively used by developers, this restriction has been removed.

Hash-based .pyc files
The pyc format is extended to allow the hash of the source file, rather than the source timestamp, to be used for invalidation. Such .pyc files are considered "hash-based" and can be either checked or unchecked.

New documentation translations
Three new translations are added: Japanese, French, and Korean.

Some fixes in Python 3.7.1

Fix for a possible null pointer dereference in bytesobject.c.
Fix for a bug where the iteration order of OrderedDict was not copied.
Fix for async generators not being finalized, which happened even when the event loop was in debug mode and the garbage collector ran in another thread.
Fix for self-cancellation in the C implementation of asyncio.Task.
Fix for a reference issue inside multiprocessing.Pool that caused the pool to remain alive if it was deleted without being closed or terminated explicitly.
Ensure that PyObject_Print() always returns -1 on error.

Also, Python 3.6.7 is released as the seventh maintenance release for Python 3.6. Visit the Python documentation for a complete list of bug fixes in Python 3.7.1 and to learn more about the features in Python 3.7.

Meet Pypeline, a simple python library for building concurrent data pipelines
Python comes third in TIOBE popularity index for the first time
Home Assistant: an open source Python home automation hub to rule all things smart

OpenSSH 7.9 released

Prasad Ramesh
22 Oct 2018
3 min read
OpenSSH 7.9 has been released with new features and bug fixes. New features include support for signalling sessions and new client and server configuration options; the bug fixes address spurious "invalid format" errors and bugs in closing connections.

New features in OpenSSH 7.9

Most port numbers are now allowed to be specified using service names from getservbyname(3), typically /etc/services.
The IdentityAgent configuration directive now accepts environment variable names. This adds support for using multiple agent sockets without having to use fixed paths.
Support is added for signalling sessions via the SSH protocol. Only a limited subset of signals is supported, and only for login or command sessions (not subsystems) that were not subject to a forced command via authorized_keys or sshd_config.
Support for "ssh -Q sig" to list supported signature options is added. There is also "ssh -Q help", which shows the full set of supported queries.
A CASignatureAlgorithms option is added for the client and server configs. It allows control over which signature formats are allowed for CAs to sign certificates. As an example, this makes it possible to ban CAs that sign certificates using the RSA-SHA1 signature algorithm.
Key revocation lists (KRLs) can now revoke keys specified by SHA256 hash, allowing creation of key revocation lists straight from base64-encoded SHA256 fingerprints. This supports removing keys using only the information contained in sshd(8) authentication log messages.

Bug fixes in OpenSSH 7.9

ssh(1), ssh-keygen(1): Avoid spurious "invalid format" errors when attempting to load PEM private keys with an incorrect passphrase.
sshd(8): On receiving a channel closed message from a client, the stdout and stderr file descriptors are closed at the same time. Processes no longer hang if they were waiting for stderr to close and were indifferent to the closing of stdin/stdout.
ssh(1): You can now set ForwardX11Timeout=0 to disable the untrusted X11 forwarding timeout and support X11 forwarding indefinitely. In previous versions, ForwardX11Timeout=0 was undefined.
sshd(8): When compiled with GSSAPI support, cache supported method OIDs regardless of whether GSSAPI authentication is enabled in the main section of sshd_config. This avoids sandbox violations when GSSAPI authentication is enabled later in a Match block.
sshd(8): Closing a connection no longer fails when the configuration uses a text key revocation list that contains a very short key.
ssh(1): Connections with ProxyJump specified are treated the same as those with a ProxyCommand set with regards to hostname canonicalisation. This means that unless CanonicalizeHostname is set to 'always', the hostname should not be canonicalised.
ssh(1): Fixed a regression in OpenSSH 7.8 that could prevent public-key authentication using certificates hosted in an ssh-agent(1) or against sshd(8) from OpenSSH 7.8 or newer.

For more details, visit the OpenSSH website.

How the Titan M chip will improve Android security
IBM launches Industry's first 'Cybersecurity Operations Center on Wheels' for on-demand cybersecurity support
low.js, a Node.js port for embedded systems


GitHub down for a complete day due to failure in its data storage system

Natasha Mathur
22 Oct 2018
3 min read
Update, 23rd October 2018: As of Monday at 23:00 UTC, all the GitHub services returned to normal. The GitHub team posted an update on their blog mentioning, "we take reliability very seriously and sincerely apologize for this disruption. Millions of people and businesses depend on GitHub, and we know that our community feels the effects of our availability issues acutely. We are conducting a thorough and transparent root cause analysis and mitigation plan, which will be published in the coming days".

GitHub faced issues due to a failure in its data storage system which left the site broken for a complete day. The outage started at about 23:00 UTC on Sunday. GitHub engineers worked on fixing the issue, and the GitHub team tweeted about it saying:

https://twitter.com/githubstatus/status/1054224055673462786

What is confusing about this outage is that there was no obvious way to tell the site was down, as the website's backend git services were still up and running. However, users faced a range of issues such as not being able to log in, outdated files being served, branches going missing, and being unable to submit Gists, bug reports, posts, and more. The team updated their status to "We continue working to repair a data storage system for GitHub.com. You may see inconsistent results during this process".

The GitHub team further updated users: "During this time, information displayed on GitHub.com is likely to appear out of date; however no data was lost. Once service is fully restored, everything should appear as expected. Further, this incident only impacted website metadata stored in our MySQL databases, such as issues and pull requests. Git repository data remains unaffected and has been available throughout the incident". The team also mentioned that it will continue to update users and will provide an estimated time to resolution via their status page.

GitHub is a very popular web-based hosting service for software development projects that use the Git revision control system. It is used extensively by software engineers, developers, and open source projects all around the world. Since a major chunk of people's daily work depends on GitHub, developers vented their frustration over social media.

https://twitter.com/AmeliasBrain/status/1054149648108085248
https://twitter.com/michaelansaldi/status/1054175097609732096
https://twitter.com/sajaraki/status/1054189413616373761

GitHub is also used by major corporations such as Twitter, Yelp, Adobe, and others to host their open source projects. There have been no further updates from the GitHub team, and we can only wait to learn the real cause of the outage.

Developers rejoice! Github announces Github Actions, Github connect and much more to improve development workflows
GitHub is bringing back Game Off, its sixth annual game building competition, in November
GitHub comes to your code Editor; GitHub security alerts now have machine intelligence


Microsoft brings an open-source model of Component Firmware Update (CFU) for peripheral developers

Prasad Ramesh
19 Oct 2018
4 min read
Microsoft announced an open-source model for Component Firmware Update (CFU) for Windows developers. CFU enables delivering firmware updates for peripheral components through Windows Update by using CFU drivers. The protocol aims to let system and peripheral developers leverage CFU to easily and automatically push firmware updates to Windows Update for their firmware components. CFU aims to bring smooth updates via Windows Update and to verify the firmware version before download. CFU permits, but does not specify, authentication, encryption, rollback policies/methods, or recovery of bricked firmware.

Overview of CFU

The CFU driver is the host and is created by the device manufacturer. It is delivered via Windows Update, and the driver is installed once the device is detected by Windows.

Primary and sub-components

A CFU-compatible system follows a hierarchy of a primary component and sub-components. A primary component implements CFU on the device side and can receive updates for itself and the connected sub-components. A device may have multiple primary components, with or without additional sub-components.

Offers and payloads

The CFU driver, which is the host, may contain multiple firmware images for a primary component and its sub-components. A package in the host consists of an offer, a payload, and other information. The offer contains information about the payload that allows the primary component to decide whether it is acceptable. The payload is the firmware image.

Offer sequence

The primary component can accept, reject, or skip an offered firmware update. On accepting, the payload is delivered immediately. On rejecting or skipping, the host cycles through all other offers in the list.

Host independence

The host's (CFU driver's) decisions are independent of the offers' contents or payloads. It does not necessarily apply any logic; it simply sends the offers and the accepted payloads.

Payload delivery

Once an offer is accepted, the host proceeds to download the firmware image, referred to as the payload. Delivery is done in three phases: beginning, middle, and end. The payload is a set of addresses and fixed-size arrays of bytes.

Payload validation and authentication

Validation of the incoming firmware update is an important aspect. The primary component should verify bytes after each write, ensuring that the data is stored properly before proceeding with the next set of data bytes. A CRC or hash should also be calculated during download and verified after the download is complete, ensuring the data wasn't modified in transit. In addition, a cryptographic signature mechanism is recommended to provide end-to-end protection, and an encryption mechanism can be employed for confidential downloads. On image authentication, the properties should be validated against the offer and any other rules the device manufacturer may specify; CFU does not specify which rules are to be applied.

Payload invocation

The CFU protocol runs at the application level in the primary component. The component can continue to do other tasks as long as it can receive and store the incoming payload without significant disruption. The only real disruption occurs when the new firmware must be invoked, and there are two recommended ways to avoid it. A very generic approach is to use a small bootloader image that selects one of multiple images to run when the device is reset, typically at boot time. The image selection algorithm is specific to the implementation and is typically based on the code version and an indication of successful image validation. The other invocation method is to physically swap the memory of the desired image with the active address space upon reset. A disadvantage of this method is that it requires specialized hardware; the advantage is that all images are statically linked to the same address space, eliminating the need for a bootloader.

CFU limitations

There are some limitations to the protocol:

It cannot update a bricked component that can no longer run the protocol.
CFU does not provide any security.
The CFU protocol requires extra memory to store the incoming images, which is what enables non-disruptive updates.
Updating sub-component images larger than the component's available storage requires dividing the sub-component image into smaller packages.
The CFU protocol allows pausing the download, so care needs to be taken to validate properly.
CFU assumes that the primary component has set validation rules. If they need to be changed, the component must first be successfully updated using the old rules; only then can new rules be applied.

For more details, visit the Microsoft website.

How the Titan M chip will improve Android security
Microsoft fixing and testing the Windows 10 October update after file deletion bug
Paul Allen, Microsoft co-founder, philanthropist, and developer dies of cancer at 65
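To make the validation flow described above concrete, here is a minimal, illustrative C++ sketch of a primary component accumulating a CRC-32 over payload chunks and checking it before the new image would be invoked. This is not Microsoft's CFU driver API; the names (PayloadChunk, PayloadReceiver, crc32_update) are invented for the example, and a real component would additionally verify each flash write and check a cryptographic signature, as noted above.

#include <cstdint>
#include <cstddef>
#include <vector>

// Incremental, reflected CRC-32 (zlib-style): pass 0 as the initial value and
// feed successive chunks through the same running crc.
static std::uint32_t crc32_update(std::uint32_t crc, const std::uint8_t* data, std::size_t len)
{
    crc = ~crc;
    for (std::size_t i = 0; i < len; ++i)
    {
        crc ^= data[i];
        for (int bit = 0; bit < 8; ++bit)
            crc = (crc >> 1) ^ (0xEDB88320u & (0u - (crc & 1u)));
    }
    return ~crc;
}

// One block of the payload: an address plus a run of bytes, mirroring the
// "set of addresses and fixed-size arrays of bytes" description above.
struct PayloadChunk
{
    std::uint32_t address;
    std::vector<std::uint8_t> bytes;
};

class PayloadReceiver
{
public:
    // Called for every chunk between the "begin" and "end" delivery phases.
    // A real component would also write the bytes to flash at chunk.address
    // and read them back to confirm the write succeeded before continuing.
    void on_chunk(const PayloadChunk& chunk)
    {
        crc_ = crc32_update(crc_, chunk.bytes.data(), chunk.bytes.size());
    }

    // Called at the "end" phase with the checksum the host advertised for the
    // image; the new firmware is only invoked (for example by the bootloader
    // on the next reset) if this check passes.
    bool finalize(std::uint32_t expected_crc) const
    {
        return crc_ == expected_crc;
    }

private:
    std::uint32_t crc_ = 0;
};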

The new RStudio Package Manager is now generally available

Natasha Mathur
19 Oct 2018
2 min read
The RStudio team announced the general availability of their latest RStudio professional product, RStudio Package Manager, two days ago. It brings features such as CRAN access, approved subsets of CRAN packages, adding internal packages from GitHub, and an optimized experience for R users, among others.

RStudio Package Manager is an on-premises server product that helps teams and organizations centralize and organize R packages. In other words, it allows R users and the IT team to work together to build a central repository for R packages. Let's discuss the features of this new package manager.

CRAN access

RStudio Package Manager allows R users to access CRAN (The Comprehensive R Archive Network) without requiring a network exception on every production node. It also helps automate CRAN updates on your schedule. Moreover, you can optimize disk usage and only download the packages that you need. However, RStudio Package Manager does not provide binary packages from CRAN, only source packages; this limitation will be addressed in the future.

Approved subsets of CRAN packages

RStudio Package Manager enables admins to create approved subsets of CRAN packages. It also makes sure that the subsets remain stable despite packages being added or updated.

Adding internal packages using the CLI

Administrators can now add internal packages using the CLI. For instance, if your internal packages are in Git, RStudio Package Manager can automatically track your Git repositories and make the commits accessible to users.

Optimized experience for R users

RStudio Package Manager offers a seamless experience optimized for R users. For instance, all packages are versioned, which automatically makes the older versions accessible to users. The package manager is also capable of recording usage statistics. These metrics help administrators conduct audits and make it easy for R users to discover the most popular and useful packages.

For more information, check out the official RStudio Package Manager blog.

Getting Started with RStudio
Introducing R, RStudio, and Shiny


Express Gateway v1.13.0 releases; drops support for Node 6

Sugandha Lahoti
18 Oct 2018
2 min read
Express Gateway v1.13.0 was released yesterday. Express Gateway is a simple, agnostic, organic, and portable microservices API Gateway built on Express.js. Release 1.13.0 drops support for Node 6.

What's new in this version?

Changes

The development Dockerfile is updated to better leverage caching: the COPY statements are included at the very bottom to leverage caching for all the layers above, and developers no longer need to manually create the work directory because WORKDIR does that automatically.
In Express Gateway v1.13.0, the automated deployment process has been updated to provide an updated README to the official Helm chart.
The policy file is updated to be exposed as a set of functions instead of a class, since it does not really hold any state and is not extended anywhere. This transforms the current policy from a singleton class into an object which exports three functions, which might help people get started hacking on Express Gateway.
All dependencies have been updated ahead of the minor release.

Fixes

A number of changes have been made in Winston after the 3.0.0 migration. These include a better default log level, info, which avoids using console.log in production code; all references in the code now use verbose to hide statements that do not matter; color has been added to the log context to differentiate between timestamp, context, level, and message; and functions that aren't used anywhere but were harming the general test coverage have been deprecated.
It is also now possible to provide a raw regular expression to Express Gateway's CORS policy. This allows the cors origin configuration to have regular expressions as values.

Read more about the release on GitHub.

Welcome Express Gateway 1.11.0, a microservices API Gateway on Express.js
API Gateway and its need
Deploying Node.js apps on Google App Engine is now easy