
Tech News - Programming

573 Articles

Python 3.7.2rc1 and 3.6.8rc1 released

Natasha Mathur
12 Dec 2018
2 min read
The Python team released Python 3.7.2rc1 and 3.6.8rc1 yesterday. Python 3.7.2rc1 is the release preview of the second maintenance release of Python 3.7, and Python 3.6.8rc1 is the release preview of the eighth and last maintenance release of Python 3.6. Both releases bring a number of new features.

Key updates in Python 3.7.2rc1

A new C API for thread-local storage has been added: a Thread Specific Storage (TSS) API supersedes the existing TLS API within the CPython interpreter.
Deterministic .pyc files, called "hash-based" .pyc files, have been added. Python still uses timestamp-based invalidation by default and does not generate hash-based .pyc files at runtime, but they can be generated with py_compile or compileall.
Core support for the typing module and generic types has been added.
Customized access to module attributes is now allowed: you can define __getattr__() on a module, and it will be called whenever a module attribute is not found. Defining __dir__() on modules is also allowed.
DeprecationWarning handling has been improved.
The insertion-order preservation of dict objects is now an official part of the Python language spec.

Key updates in Python 3.6.8rc1

Keyword argument order is preserved: **kwargs in a function signature is now guaranteed to be an insertion-order-preserving mapping.
Subclass creation can be customized without using a metaclass: the new __init_subclass__ classmethod is called on the base class whenever a new subclass is created.
A new secrets module has been added to the standard library. It reliably generates cryptographically strong pseudo-random values suited for managing secrets such as account authentication and tokens.
A frame evaluation API has been added to CPython that makes frame evaluation pluggable at the C level. This allows debuggers and JITs to intercept frame evaluation before Python code execution begins.
Formatted string literals, or f-strings, work similarly to the format strings accepted by str.format(). They contain replacement fields surrounded by curly braces; the replacement fields are expressions that are evaluated at run time and formatted using the format() protocol.

For more information, check out the official release notes for Python 3.7.2rc1 and 3.6.8rc1.

Python 3.7.1 and Python 3.6.7 released
IPython 7.2.0 is out!
SatPy 0.10.0, Python library for manipulating meteorological remote sensing data, released
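To make a few of these features concrete, here is a minimal, illustrative Python 3.6+ sketch (not code from the release notes) that combines the secrets module, the __init_subclass__ hook, and an f-string; the class and variable names are hypothetical.

import secrets

class PluginBase:
    # Registry filled in automatically as subclasses are defined.
    registry = []

    def __init_subclass__(cls, **kwargs):
        # Called on the base class each time a subclass is created,
        # so plugins register themselves without needing a metaclass.
        super().__init_subclass__(**kwargs)
        PluginBase.registry.append(cls)

class CsvExporter(PluginBase):
    pass

# secrets generates cryptographically strong tokens suitable for auth secrets.
token = secrets.token_urlsafe(16)

# f-string: the expressions in braces are evaluated at run time
# and formatted via the format() protocol.
print(f"plugins={PluginBase.registry}, token={token!r} (length {len(token)})")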


Git v2.20.0 released with improved Git clone process, packfiles consolidation, and more

Amrata Joshi
11 Dec 2018
3 min read
Last week, the Git team released Git v2.20.0, a new version of the free and open source distributed version control system that tracks changes in files and coordinates work on those files among multiple people.

Features

Git clone process gets better with Git v2.20.0
The clone process now warns users when they clone a project onto a case-insensitive file system and the repository contains files whose paths differ only in case and would therefore collide.

Git v2.20.0 requires Vista
With this release, Git on Windows requires at least Windows Vista to operate. The Windows port has also seen improvements such as better DLL handling and support for nanosecond-resolution file timestamps, and the logic for selecting the default username and e-mail on Windows has been improved.

Shows a progress bar
git status now shows a progress bar when refreshing the index takes a long time.

Git multi-pack-index has been updated
git multi-pack-index has been updated to detect corruption in the .midx file, and this check has been integrated into "git fsck".

Consolidation
When there are too many packfiles in a repository, looking up an object requires consulting multiple pack .idx files. Git v2.20.0 comes with a new mechanism that consolidates all of these .idx files into a single file.

Major improvements

The generation of (experimental) commit-graph files now shows progress in the output. On platforms with a recent cURL library, the http.sslBackend configuration variable can be used to choose a different SSL backend at runtime; with the Windows port, it is possible to switch between OpenSSL and Secure Channel when talking over HTTPS. A pattern with '**' that does not have a slash on either side was considered invalid in previous versions; with this update, it is treated the same way as two adjacent asterisks. "git rev-list --stdin </dev/null" used to be an error in the previous version but now produces no output and no error. Developer builds of Git now use the -Wunused-function compilation option. It is now possible to create an alias that expands to another alias. The test scripts have been updated in Git v2.20.0 for style and for correct handling of the exit status of various commands.

Major bug fixes

The issue with registering the same path under multiple worktree entries has been fixed. "git interpret-trailers" and the codepaths it triggers had buggy code around ignoring patch text after the commit log message; this has been fixed. The bug that could leave the index file corrupt during a partial commit has also been fixed.

This release has received a positive response from users. As one user noted on Twitter, "In Git for Windows, if we build cURL on one machine, it will run on an estimated 3 million different machines." The release has already created some buzz, and it will be interesting to see what the Git project plans next.

Read more about this news on the official mailing list.

Upgrade to Git 2.19.1 to avoid a Git submodule vulnerability that causes arbitrary code execution
Git 2.19.0 released with better git grep, Python 3 compatibility for git p4
4 myths about Git and GitHub you should know about


Erlang turns 20: Tracing the journey from Ericsson to Whatsapp

Amrata Joshi
10 Dec 2018
3 min read
Just two days back, Erlang, a functional programming language, turned twenty as an open source project. Erlang has been one of the most popular open source languages, with compelling features such as concurrent processes, memory management, scheduling, distribution, and networking. The server of WhatsApp, the most popular messaging platform, is almost completely implemented in Erlang.

Twenty years back, on 8th December 1998, Ericsson released its development environment, Erlang/OTP (Open Telecom Platform), as open source. It was used to make building telecommunications products easier, with functionality for speed, distribution, and concurrency. It supports a number of processors and operating systems and can easily be integrated with different development languages. Erlang powers Ericsson's GPRS, 3G, and 4G/LTE products and underpins internet and mobile data networks.

How did Erlang become open source?

When Håkan Millroth, head of the Ericsson Software Architecture Lab, suggested that his team try "open source", Jane Walerud, an entrepreneur, agreed and convinced the Ericsson management team to release the source code for the Erlang VM. Erlang was released without any publicity, marketing buzz, or media coverage: Ericsson simply sent an email to Erlang's mailing list, and an announcement was posted on Slashdot.

During the dot-com bust era, when extreme growth in the usage and adoption of the Internet was observed, Erlang/OTP was used to create ejabberd, an XMPP-based instant messaging server developed by Alexey Shchepin. He chose Erlang over all other languages as the most suitable one for implementing a Jabber server. ejabberd 1.0 was released in December 2005 and formed the base for many platforms, including WhatsApp. ejabberd showed a 280% increase in throughput when it was compiled with the latest version of Erlang.

In May 2005, a version of the BEAM VM (also known as the Erlang VM) was released that showed Erlang's concurrency and programming models are ideal for multi-core architectures. In May 2006, Erlang was also used to build RabbitMQ, an implementation of the Advanced Message Queuing Protocol (AMQP). Since then, Erlang has become the language of choice for many messaging solutions and is now the backbone of thousands of systems.

In 2007, "Programming Erlang" by Joe Armstrong was published by the Pragmatic Programmers, and in June 2008 the first paper copy of "Erlang Programming" became publicly available. In 2011, Elixir, a functional and concurrent programming language that runs on the Erlang VM, was released. In August 2015, Phoenix 1.0, a framework for web applications, was released. Phoenix 1.0 uses the Erlang VM's capabilities to do for Elixir what Rails did for Ruby, helping to make Elixir popular.

Read more about this news in Erlang's blog post.

Elixir 1.7, the programming language for the Erlang virtual machine, releases
Phoenix 1.4.0 is out with 'Presence javascript API', HTTP2 support, and more!
Amazon re:Invent Day 3: Lambda Layers, Lambda Runtime API and other exciting announcements!


Code completion suggestions via IntelliCode come to C++ in Visual Studio 2019

Prasad Ramesh
10 Dec 2018
2 min read
Last week, Microsoft announced in a blog post that IntelliCode code completion suggestions will come to Visual C++ in Visual Studio 2019.

Some usage patterns recur again and again once you have been coding for a while. For example, an open stream will eventually be closed, and when a string is used in the context of an if-statement, it is usually to check whether the string is empty or has a certain size. Developers identify and apply these coding patterns over time. IntelliCode already knows these common patterns and can suggest them to developers as code.

With the help of machine learning, IntelliCode is trained on thousands of real-world projects, including open-source projects on GitHub. As a result, IntelliCode is most helpful when you are using common libraries like the STL. It saves time by putting the most used items at the top of the IntelliSense completion list. After using the IntelliCode extension for a while, starred items begin to appear at the top of the Member List; these are IntelliCode recommendations.

In a future release of the extension, Microsoft will give C++ developers the ability to let IntelliCode learn from their own code. They are also considering adding C++ IntelliCode support to Visual Studio Code.

This is a welcome feature for developers, as it saves them time. A comment on Hacker News reads: "That's very nice and I'll probably be using it a lot, after VS2019 will stabilize (it's just a preview now). However, the fact that this thing works so well, says a lot about the design of C++ standard library. They should have encapsulated the pair of iterators into a single structure, and implement implicit casts from vectors/arrays to that object. Requiring to type begin/end every single time is counterproductive."

For more details, visit the Microsoft Blog.

Visual Studio 2019: New features you should expect to see
Neuron: An all-inclusive data science extension for Visual Studio
Microsoft releases the Python Language Server in Visual Studio


Qt team releases Qt Creator 4.8.0 and Qt 5.12 LTS

Amrata Joshi
07 Dec 2018
5 min read
Yesterday, the Qt team came up with two major releases: Qt Creator 4.8.0 and the long-term support release Qt 5.12 LTS. In October, the team had released the beta versions of Qt Creator 4.8.0 and Qt 5.12 LTS. Qt, a cross-platform SDK, helps in quickly and cost-effectively designing, developing, deploying, and maintaining software.

What's new in Qt Creator 4.8.0?

Programming language support
Qt Creator 4.8.0 adds support for the Language Server Protocol (LSP), a standardized bridge between an editor/IDE and a language server, which brings support for additional programming languages.

Generic highlighter
Through the language server, Qt Creator offers code completion, highlighting of the symbol under the cursor, jumping to the symbol definition, and integration of diagnostics; the code highlighting is provided by the generic highlighter.

C++ support
This version of Qt Creator updates the Clang code model to LLVM 7.0. The project information known to the code model can be exported as a compilation database using the new Build > Generate Compilation Database action.

New plugins
Compilation Database Projects: Qt Creator 4.8.0 ships the CompilationDatabaseProjectManager plugin, which lets users work with compilation databases as projects. A compilation database is a list of files and the compiler flags used to compile them.
Clang Format: The ClangFormat plugin performs auto-indentation with the help of LibFormat, the library that implements automatic source code formatting based on Clang-Format.
Cppcheck: The Cppcheck plugin integrates diagnostics generated by the Cppcheck tool into the editor.
LanguageClient: This version ships an experimental LanguageClient plugin that implements support for the Language Server Protocol.

Editing
Qt Creator 4.8.0 adds support for the pastecode.xyz code pasting service, and it is now possible to change the default editors in the MIME type settings.

Debugging
With Qt Creator 4.8.0, it is possible to run multiple debuggers simultaneously. The debugger toolbar has an additional pop-up menu where users can switch between running debugger instances and the preset view for starting new debuggers. Each running debugger instance maintains its own set of views and layout.

Git
This version adds support for GitHub and GitLab, a navigation pane that shows branches, and an option for copy/move detection in git blame.

Android
This release adds support for command line arguments, environment variables, and API level 28.

Improvements
There is now an option for disabling the automatic creation of run configurations, and an option to open a terminal with the build or run environment. The handling of relative file paths for custom error parsers has been improved. It is now possible to add libraries for other target platforms in the Add Library wizard. Qbs projects gain a qmlDesignerImportPaths property for specifying QML import paths for Qt Quick Designer. The remote Linux support has been updated to Botan 2.8.

Major bug fixes
Issues with local references for operator arguments have been fixed, and Qt Creator 4.8.0 now supports UI headers. A crash when removing a diagnostics configuration has been fixed, as have issues with detecting the language version. It is now possible to perform function extraction from nested classes. The startup issue with localized debugger output has been fixed, and the invalid access to network paths seen in the previous version is resolved in Qt Creator 4.8.0.

Get more information about Qt Creator 4.8.0 in Qt's official blog post.

Qt 5.12 LTS releases with support for Python, WebAssembly, and more

Qt for Python
Qt 5.12 LTS supports Python by making all of the Qt APIs available to Python developers. A tech preview is currently available for testing, and the official release will follow shortly after Qt 5.12 LTS. Qt for Python makes Qt's C++ APIs accessible to Python programmers, so Python developers can now create complex graphical applications and user interfaces.

Qt for WebAssembly
Qt 5.12 contains the technology preview of Qt for WebAssembly, which compiles a Qt application so it can run in any modern web browser.

Qt Remote Objects
Qt Remote Objects makes IPC between Qt-based processes seamless by exposing the properties, signals, and slots of a QObject to other processes.

Major improvements in Qt 5.12 LTS

Improvements to the JavaScript engine
The new release brings improvements to the JavaScript engine that powers QML. Qt 5.12 now fully supports ECMAScript 7, which enables modern JavaScript and simplifies the integration of JavaScript libraries. It also supports ECMAScript modules, which can be loaded from C++ as well as from QML/JS.

TableView
Qt 5.12 LTS adds TableView as another type of item view in Qt Quick, a free software application framework developed and maintained by the Qt Project. Its performance is better than the previous QQC1 implementation.

Pointer Handlers
The Pointer Handlers of Qt 5.11 have been renamed Input Handlers and are now a fully supported feature of Qt Quick in Qt 5.12. Input Handlers simplify the creation of complex interactions, and this release adds two new Input Handlers for hovering and dragging items.

Windows UI Automation
This version comes with Windows UI Automation support, which allows Qt-based UWP applications to work with accessibility and programmatic UI control tools. The tablet/touchscreen/touchpad/mouse input handling has been replaced with a unified implementation based on Windows Pointer Input Messages on Windows 8 and above.

To know more about Qt 5.12 LTS, check out Qt's official blog post.

Qt Creator 4.8 beta released, adds Language Server Protocol
Qt Creator 4.7.0 releases!
How to Debug an application using Qt Creator
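To give a flavor of what Qt for Python looks like in practice, here is a minimal, hedged sketch of a one-widget application using the PySide2 module that the Qt for Python tech preview ships; it is an illustrative example based on the standard PySide2 API rather than code from the Qt announcement.

import sys
from PySide2.QtWidgets import QApplication, QLabel

# QApplication owns the Qt event loop and application-wide settings.
app = QApplication(sys.argv)

# QLabel is the simplest visible widget: a top-level window showing one line of text.
label = QLabel("Hello from Qt for Python (Qt 5.12 tech preview)")
label.show()

# Hand control to the event loop; exec_() returns when the window is closed.
sys.exit(app.exec_())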


How has Rust and WebAssembly evolved in 2018

Prasad Ramesh
07 Dec 2018
3 min read
In a blog post, the Rust-Wasm team discussed the state of Rust and WebAssembly in 2018. The Rust and WebAssembly domain working group worked to make a shared vision into a reality: "Compiling Rust to WebAssembly should be the best choice for fast, reliable code for the Web." As the ideas evolved, another core value was formed: "Rust and WebAssembly is here to augment your JavaScript, not replace it." Goals were set for the joint ecosystem.

#1 Zero-cost JavaScript interoperation
By leveraging zero-cost abstractions, Rust enables fast and expressive code, and the Rust team wanted to apply the same principle to the whole JS interop infrastructure. Developers could write their own boilerplate to pass DOM nodes to Rust-generated wasm, but they shouldn't have to. Hence the team created wasm-bindgen as the foundation for zero-cost JavaScript interoperation. wasm-bindgen facilitates communication between JavaScript and WebAssembly and generates the glue code that developers would otherwise have had to write themselves. The wasm-bindgen ecosystem helps developers to:
Export rich APIs from Rust-generated wasm libraries, making them callable from JavaScript.
Import JavaScript and Web APIs into Rust-generated wasm.

#2 Rust-generated wasm as an NPM library
Good integration means fitting Rust-generated WebAssembly into JavaScript's distribution mechanisms, and a big part of that is NPM. The Rust team built wasm-pack for creating and publishing NPM packages from Rust and WebAssembly code. Sharing a Rust-generated wasm module is now as simple as:
wasm-pack publish

#3 Getting developers productive fast
The Rust team wrote a Rust and WebAssembly book to teach the ins and outs of WebAssembly development with Rust. It features a tutorial that builds an implementation of Conway's Game of Life and teaches you how to write tests, debug, and diagnose slow code paths. Getting a Rust-WebAssembly project set up initially involves boilerplate and configuration that new users may find difficult and experienced ones may find a waste of time. Hence the Rust team has created a variety of project templates for different use cases:
wasm-pack-template to create NPM libraries with Rust and Wasm.
create-wasm-app to create Web applications built on top of Rust-generated wasm NPM libraries.
rust-webpack-template to create whole Web applications with Rust, WebAssembly, and the Webpack bundler.
rust-parcel-template to create whole Web applications with Rust, WebAssembly, and the Parcel bundler.

#4 Rust-generated wasm needs to be testable and debuggable
By default, wasm can't log panics or errors because it doesn't have any "syscall" or I/O functionality; imports have to be added manually and the module instantiated with the appropriate functions. To remedy this, and to ensure that panics are always debuggable, the Rust team created the console_error_panic_hook crate, which redirects panic messages into the browser's devtools console.

For more details on the state of the joint ecosystem in 2018, visit the Rust and WebAssembly Blog.

Mozilla shares plans to bring desktop applications, games to WebAssembly and make deeper inroads for the future web
Red Hat announces full support for Clang/LLVM, Go, and Rust
WebAssembly – Trick or Treat?

Rust 1.31 is out with stable Rust 2018

Prasad Ramesh
07 Dec 2018
3 min read
Yesterday, Rust 1.31.0 and Rust 2018 were announced on the programming language's official blog. Rust 1.31.0 is the first stable iteration of Rust 2018, and many features of this edition are now stable.

Rust 2018
Rust 2018 brings together all of the work the Rust team has been doing since 2015 into a cohesive package. This goes beyond language features and includes tooling, documentation, the domain working groups' work, and a new website. Each Rust package can be in either Rust 2015 or Rust 2018, and they work seamlessly together: projects written in Rust 2018 can use dependencies from 2015, and a 2015 project can use 2018 dependencies. This is done so that the ecosystem doesn't split, and the new features are opt-in to preserve compatibility in existing code.

Non-lexical lifetimes
Non-lexical lifetimes (NLL) simply mean that the borrow checker is now smarter and accepts some valid code that it previously rejected.

Module system changes
People new to Rust often struggle with its module system. Even though the module system is defined by simple and consistent rules, their consequences can come across as inconsistent, counterintuitive, and mysterious. Rust 2018 therefore introduces a few changes to how paths work. These changes ended up simplifying the module system, and it is now clearer what is going on.

More lifetime elision rules
Some additional elision rules for impl blocks and function definitions have been added. For example:

impl<'a> Reader for BufReader<'a> {
    // methods go here
}

can now be written like this:

impl Reader for BufReader<'_> {
    // methods go here
}

Lifetimes still need to be defined in structs, but they no longer require as much boilerplate as before.

const fn
There are several ways to define a function in Rust: a regular function with fn, an unsafe function with unsafe fn, and an external function with extern fn. Rust 1.31 adds a new way to qualify a function: const fn.

New tools in Rust 1.31
Tools like Cargo, Rustdoc, and Rustup have been crucial to Rust since version 1.0. In Rust 2018, a new generation of tools is ready for all users:
Clippy: Rust's linter.
Rustfmt: a tool for formatting Rust code.
IDE support: Rust is now supported in popular IDEs such as Visual Studio Code, IntelliJ, Atom, Sublime Text 3, and Eclipse.

Tool lints
"Tool attributes", like #[rustfmt::skip], were stabilized in Rust 1.30. In Rust 1.31, "tool lints", like #[allow(clippy::bool_comparison)], are being stabilized. These give lints a namespace, making their tool of origin clearer.

Other additions
Apart from changes to the language itself, there are changes in other areas too.
Documentation: "The Rust Programming Language" book has been rewritten.
Domain working groups: four new domain working groups are introduced: network services, command-line applications, WebAssembly, and embedded devices.
New website: there's a new iteration of the website for Rust 2018.
Library stabilizations: some From implementations have been added and stabilized in the standard library.
Cargo changes: in Rust 1.31, Cargo downloads packages in parallel using HTTP/2.

Rust Survey 2018 key findings: 80% developers prefer Linux, WebAssembly growth doubles, and more
Rust Beta 2018 is here
GitHub Octoverse: The top programming languages of 2018


Microsoft Connect(); 2018: .NET foundation open membership, .NET Core 2.2, .NET Core 3 Preview 1 released, WPF, WinUI, Windows forms open sourced

Prasad Ramesh
05 Dec 2018
4 min read
Yesterday Microsoft made a handful of announcements at Connect(); 2018: membership of the .NET Foundation is now open, .NET Core 2.2 and .NET Core 3 Preview 1 have been released, and WPF, Windows Forms, and WinUI are now open source.

Membership is now open to the .NET Foundation
Founded in 2014, the .NET Foundation was formed to foster .NET open source development and collaboration. Microsoft has opened membership to the community. It is also expanding the board from three to seven members; only one of the seats will be occupied by a Microsoft employee, with the rest elected from the open source community. Board elections will commence in January 2019, and any individual who has contributed to a .NET Foundation open source project is eligible. The same criterion applies to becoming a member, and elections will be held every year. You can apply for membership on the foundation's website. To know more about membership and eligibility, head to the Microsoft Blog.

New features in .NET Core 2.2
.NET Core 2.2 comes with diagnostic improvements to the runtime, ARM32 support for Windows, and Azure Active Directory support for SqlClient.

Tiered compilation
Tiered compilation enables the runtime to use the Just-In-Time (JIT) compiler more adaptively, giving better performance at startup while maximizing throughput. It is opt-in in .NET Core 2.2 and enabled by default in .NET Core 3.0.

Runtime events
With .NET Core 2.2, CoreCLR events can be consumed using the EventListener class. These CoreCLR events describe the behavior of the GC, JIT, ThreadPool, and interop, and are the same events exposed as part of the CoreCLR ETW provider on Windows. This allows applications to consume these events or use a transport mechanism to send them to a telemetry aggregation service.

Support for AccessToken in SqlConnection
Setting the AccessToken property to authenticate SQL Server connections is now supported in the ADO.NET provider for SQL Server, SqlClient, using Azure Active Directory. To use the feature, the access token value can be obtained using the Active Directory Authentication Library for .NET, which is available in the Microsoft.IdentityModel.Clients.ActiveDirectory NuGet package.

Injecting code prior to Main
.NET Core 2.2 enables injecting code that runs before an application's Main method via a startup hook. Startup hooks allow a host to customize application behavior after the application has been deployed.

Windows ARM32
Windows ARM32 is now supported in .NET Core 2.2, just like the Linux ARM32 support added in .NET Core 2.1. A bug prevented publishing .NET Core builds for Windows ARM32, so these builds will be available with .NET Core 2.2.1 in January 2019.

.NET Core 3 Preview 1
.NET Core 3 Preview 1 is the first public release of .NET Core 3, and Visual Studio 2019 Preview 1 supports development with it. .NET Core 3 is a major update that adds support for building Windows desktop applications using Windows Presentation Foundation (WPF), Windows Forms, and Entity Framework 6 (EF6). Read more about the preview on the .NET Blog.

WPF, Windows Forms, and WinUI are now open source
After .NET Core went open source in 2014, it saw many contributions from the community. Microsoft is now open sourcing WPF, Windows Forms, and WinUI. Some code is available on GitHub now, and more will be added over the next few months; repositories for WPF and WinUI are ready, and the WPF and Windows Forms projects are under the .NET Foundation. This happened at the Connect(); conference yesterday when Microsoft employees merged the first two community pull requests on stage. This is another step from Microsoft towards open source, strongly signaling the seriousness of its open source commitment.

Microsoft reportedly ditching EdgeHTML for Chromium in the Windows 10 default browser
Microsoft becomes the world's most valuable public company, moves ahead of Apple
Microsoft announces official support for Windows 10 to build 64-bit ARM apps


GitHub acquires Spectrum, a community-centric conversational platform

Savia Lobo
03 Dec 2018
2 min read
Last week, Bryn Jackson, CEO of Spectrum, a real-time, community-centered conversational platform, announced that the project has been acquired by GitHub. Bryn Jackson founded the Spectrum community platform in February 2017 along with Brian Lovin and Max Stoiber. The community is a place to ask questions, request features, report bugs, and chat with the Spectrum team.

In a blog post Bryn wrote, "After releasing an early prototype, people told us they also wanted to use it for their communities, so we decided to go all-in and build an open, inclusive home for developer and designer communities. Since officially launching the platform late last year, Spectrum has become home to almost 5,000 communities!"

What will Spectrum bring to GitHub communities?

By joining GitHub, Spectrum aims to align with GitHub's goals of making developers' lives easier and fostering a strong community across the globe. For communities across GitHub, Spectrum will provide:
A space for different communities across the internet.
Free access to its full suite of features, including unlimited moderators, private communities and channels, and community analytics.
A deeper integration with GitHub.

Spectrum has also opened a pull request to add some of GitHub's policies to Spectrum's Privacy Policy, which will be merged this week. Though many users had not heard of Spectrum before, they are reacting positively to its acquisition by GitHub, and many have compared it with other platforms such as Slack, Discord, and Gitter.

To know more about this news, read Bryn Jackson's blog post.

GitHub Octoverse: The top programming languages of 2018
GitHub has passed an incredible 100 million repositories
Github now allows repository owners to delete an issue: curse or a boon?


Haskell is moving to GitLab due to issues with Phabricator

Prasad Ramesh
03 Dec 2018
3 min read
The development infrastructure of the Haskell functional programming language's compiler is moving from Phabricator to GitLab. Last Saturday, Haskell consultant Ben Gamari laid out some details about the move in an email.

It started with a proposal to move to GitLab
A few weeks back, Gamari wrote to the Haskell mailing list about moving the Glasgow Haskell Compiler (GHC) development infrastructure to GitLab. The original proposal wasn't complete enough to be used, but it did provide a small test instance to experiment on. The staging instance at https://gitlab.staging.haskell.org is now ready to use. While this is not the final version of the migration, it has most of the features a user would expect:
Trac tickets are fully imported, including attachments
Continuous integration (CI) is available via CircleCI
Mirrors of all boot libraries are present
Users can also log in using their GitHub credentials if they choose to

Issues in the migration
There are also a few issues listed by Gamari that need to be worked on:
Timestamps associated with ticket open and close events aren't accurate
Some of the milestone changes have problems being imported
Currently, CircleCI fails on forks
Trac wiki pages aren't imported as of now
Gamari said that the listed issues have either been resolved in the import tool or are in the process of being resolved. The goal of the staging instance is to let contributors gain experience using GitLab and identify any obstacles to the eventual migration. Developers should note that any comments, merge requests, or issues created on the temporary instance may not be preserved. The focus is on identifying workflows that will become harder under GitLab and ways to improve them, pending issues in importing Trac, and areas that lack documentation.

Why the move to GitLab?
They did not choose GitHub; as Gamari stated in another mail, "Its feature set is simply insufficient enough to handle the needs of a larger project like GHC". The move to GitLab is due to a number of reasons:
Phacility, the company that owns Phabricator, has closed support to non-paying customers
As Phacility now focuses on paying customers, the open-source parts used by GHC seem half finished
The Phabricator tool Harbormaster keeps breaking CI
Their surveys indicated developers leaning towards Git rather than the PHP tool Arcanist used by Phabricator

The final migration will happen in about two weeks; the date mentioned is December 18. For more details, you can follow the Haskell mailing list.

What makes functional programming a viable choice for artificial intelligence projects?
GitLab open sources its Web IDE in GitLab 10.7
GitLab raises $100 million, Alphabet backs it to surpass Microsoft's GitHub

TypeScript 3.2 released with configuration inheritance and more

Prasad Ramesh
30 Nov 2018
7 min read
TypeScript 3.2 was released yesterday. TypeScript is a language that brings static type-checking to JavaScript, which enables developers to catch issues even before the code is run. TypeScript 3.2 includes the latest JavaScript features from the ECMAScript standard. In addition to type-checking, it provides tooling in editors to jump to variable definitions, find the uses of a function, and automate refactorings.

You can install TypeScript 3.2 via NuGet or via npm as follows:

npm install -g typescript

Now let's look at the new features in TypeScript 3.2.

strictBindCallApply
TypeScript 3.2 comes with stricter checking for bind, call, and apply. In JavaScript, bind, call, and apply are methods on functions that allow actions like binding this and partially applying arguments; they also allow you to call functions with a different value for this and to call functions with an array for their arguments. Earlier, TypeScript didn't have the power to model these functions. Demand to model these patterns in a type-safe way led the TypeScript developers to revisit the problem. Two features opened up the right abstractions to accurately type bind, call, and apply without any hard-coding:
this parameter types from TypeScript 2.0
modeling parameter lists with tuple types from TypeScript 3.0
The combination of the two ensures that uses of bind, call, and apply are more strictly checked when the new strictBindCallApply flag is enabled. With the flag on, the methods on callable objects are described by a new global type, CallableFunction, which declares stricter versions of the signatures for bind, call, and apply. Similarly, methods on constructable (but not callable) objects are described by a new global type called NewableFunction. A caveat of this new functionality is that bind, call, and apply can't yet fully model generic functions or functions with overloads.

Object spread on generic types
JavaScript has a handy way of copying properties from an existing object into a new object, called "spreads": an existing object is spread into a new object with an element written with three consecutive periods (...). TypeScript does well in this area when it has enough information about the type, but until now it didn't work with generics at all. A new concept in the type system, an "object spread type", could have been used: a new type operator that looks like { ...T, ...U } to reflect the syntax of an object spread. If T and U are known, that type would flatten to some new object type. However, this approach was complex and required adding new rules to type relationships and inference. After exploring several different avenues, the team arrived at two conclusions:
Users were fine modeling the behavior with intersection types for most uses of spreads in JavaScript, for example Foo & Bar.
Object.assign, a function that exhibits most of the behavior of spreading objects, is already modeled using intersection types, and there has been very little negative feedback around that.
Intersections model the common cases, and they're relatively easy to reason about for both users and the type system. So TypeScript 3.2 now allows object spreads on generics and models them using intersections.

Object rest on generic types
Object rest patterns are a kind of dual to object spreads: instead of creating a new object with some extra or overridden properties, rest creates a new object that lacks some specified properties.

Configuration inheritance via node_modules packages
TypeScript has long supported extending tsconfig.json files using the extends field. This feature is useful for avoiding duplicated configuration, which can easily fall out of sync, and it works best when multiple projects are co-located in the same repository so each project can reference a common "base" tsconfig.json. But some projects are written and published as fully independent packages and don't have a common file they can reference; as a workaround, users could create a separate package and reference that. TypeScript 3.2 resolves tsconfig.json files from node_modules: when a bare path is used for the "extends" field in tsconfig.json, TypeScript will dive into node_modules packages to resolve it.

Diagnosing tsconfig.json with --showConfig
The TypeScript compiler, tsc, now supports a new flag called --showConfig. On running tsc --showConfig, TypeScript calculates the effective tsconfig.json and prints it out.

BigInt
BigInts are part of an upcoming ECMAScript proposal that allows modeling theoretically arbitrarily large integers. TypeScript 3.2 comes with type-checking for BigInts along with support for emitting BigInt literals when targeting esnext. BigInt support introduces a new primitive type called bigint and is only available for the esnext target.

Object.defineProperty declarations in JavaScript
When writing JavaScript files with allowJs, TypeScript 3.2 recognizes declarations that use Object.defineProperty. This means better completions and stronger type-checking when type-checking is enabled in JavaScript files.

Improvements in error messages
A few things have been added in TypeScript 3.2 that make the language easier to use:
Better missing-property errors
Better error spans in arrays and arrow functions
An error on most-overlapping types in unions ("pick most overlappy type")
Related spans on a typed this being shadowed
A new "Did you forget a semicolon?" warning on parenthesized expressions on the next line
More specific messages when assigning to const/readonly bindings
More accurate messages when extending complex types
Relative module names used in error messages

Improved narrowing for tagged unions
TypeScript now makes narrowing easier by relaxing the rules for a discriminant property. Common properties of unions are now considered discriminants as long as they contain some singleton type and no generics, for example a string literal, null, or undefined.

Editing improvements
The TypeScript project is not just a compiler/type-checker: the core components of the compiler provide a cross-platform, open-source language service that can power smart editor features. These features include go-to-definition, find-all-references, and a number of quick fixes and refactorings.

Implicit any suggestions and "infer from usage" fixes
noImplicitAny is a strict checking mode that helps ensure code is as fully typed as possible, which also leads to a better editing experience. TypeScript 3.2 produces suggestions for most variables and parameters that would have been reported as having implicit any types, and provides a quick fix to automatically infer the types when an editor reports these suggestions.

Other fixes
There are two smaller quick fixes:
A missing new is added when a constructor is accidentally called without one.
An intermediate assertion to unknown is added when types are sufficiently unrelated.

Improved formatting
TypeScript 3.2 is smarter about formatting several different constructs.

Breaking changes and deprecations
TypeScript has moved further towards generating DOM declarations in lib.d.ts by leveraging IDL files. Certain parameters no longer accept null or now accept more specific types. Certain WebKit-specific properties have been deprecated, and wheelDelta and friends have been removed, as they are deprecated properties on WheelEvents.

JSX resolution changes
The logic for resolving JSX invocations has been unified with the logic for resolving function calls. This has simplified the compiler codebase and improved certain use cases. Future TypeScript releases will require Visual Studio 2017 or higher.

For more details, visit the Microsoft Blog.

Introducing ReX.js v1.0.0 a companion library for RegEx written in TypeScript
Vue.js 3.0 is ditching JavaScript for TypeScript. What else is new?
Babel 7 released with Typescript and JSX fragment support


Cirq 0.4.0 released for writing quantum circuits

Prasad Ramesh
30 Nov 2018
3 min read
Cirq is a Python library created by Google for writing quantum circuits and running them against quantum computers. Cirq 0.4.0 is now released and is available on GitHub.

Themes of the changes in Cirq 0.4.0
The API is now more Pythonic and more consistent (at the cost of breaking changes and refactoring), and simulation is faster.

New functionality in Cirq 0.4.0
The following functions and parameters have been added:
cirq.Rx, cirq.Ry, and cirq.Rz
cirq.XX, cirq.YY, cirq.ZZ, and cirq.MS (related to the Mølmer–Sørensen gate)
cirq.Simulator
the cirq.SupportsApplyUnitary protocol, for specifying fast simulation methods
cirq.Circuit.reachable_frontier_from and cirq.Circuit.findall_operations_between
cirq.decompose
sorted(qubits) and cirq.QubitOrder.DEFAULT.order_for(qubits) are now equivalent
cirq.experiments.generate_supremacy_circuit_[...]
dtype parameters to control the precision-versus-speed trade-off of simulations
cirq.TrialResult helper methods (dirac_notation / bloch_vector / density_matrix)
cirq.TOFFOLI and cirq.CCZ can be raised to powers

Breaking changes in Cirq 0.4.0
Most of the gate classes have been standardized: they can now take an exponent argument and have names of the form NamePowGate. For example, RotXGate is now XPowGate, and it no longer takes rads, degs, or half_turns. The xmon gate set has been merged into the common gate set. The capability marker classes have been replaced by magic method protocols; for example, gates now just implement a _unitary_ method instead of inheriting from KnownMatrix. cirq.Extensions and cirq.PotentialImplementation are removed. Many decomposition classes and methods have been moved from cirq.google.* to cirq.*; for example, cirq.google.EjectFullW is now cirq.EjectPhasedPaulis. The classes and methods related to line placement have moved into cirq.google.

Notable bug fixes
A two-qubit gate decomposition no longer produces a glut of single-qubit gates. Circuit diagrams stay aligned when multi-line entries are given, and they now include "same moment" indicators. False positives and false negatives in cirq.testing.assert_circuits_with_terminal_measurements_are_equivalent have been fixed. Many repr methods that returned code assuming from cirq import * instead of import cirq have been fixed. Example code now runs in both Python 2 and Python 3 without the need for transpilation.

Notable dev changes
The test files now import cirq instead of just specific modules, and there is better testing and packaging of scripts. The package versions for Python 2 and Python 3 are no longer different. A cirq.value_equality decorator has been added, along with new cirq.testing methods and classes.

Additions to contrib
cirq.contrib.acquaintance: new utilities for defining permutation gates
cirq.contrib.paulistring: utilities for optimizing non-Clifford operations separated by Clifford operations
cirq.contrib.tpu: utilities for converting circuits into an executable form to be used on cloud TPUs (requires TensorFlow)

Google AdaNet, a TensorFlow-based AutoML framework
Graph Nets – DeepMind's library for graph networks in Tensorflow and Sonnet
A new Model optimization Toolkit for TensorFlow can make models 3x faster
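As a quick, hedged illustration of the 0.4.0-era API mentioned above (cirq.Rx and cirq.Simulator were new in this release, and Circuit.from_ops was the construction idiom of the time, later replaced by the Circuit constructor), here is a minimal two-qubit circuit sketch; it is an illustrative example, not code from the release notes.

import cirq

# Two qubits on a line.
q0, q1 = cirq.LineQubit.range(2)

# A small circuit: an X rotation (cirq.Rx, new in 0.4.0), an entangling CZ,
# and a joint measurement of both qubits under the key 'm'.
circuit = cirq.Circuit.from_ops(
    cirq.Rx(0.5).on(q0),
    cirq.CZ(q0, q1),
    cirq.measure(q0, q1, key='m'),
)
print(circuit)

# cirq.Simulator (also new in 0.4.0) samples the circuit.
result = cirq.Simulator().run(circuit, repetitions=10)
print(result.measurements['m'])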


The Golang team has started working on Go 2 proposals

Prasad Ramesh
30 Nov 2018
4 min read
Yesterday, Google engineer Robert Griesemer published a blog post outlining the next steps for Go towards the Go 2 release. Google developer Russ Cox started the thought process behind Go 2 in his talk at GopherCon 2017. The talk was about the future of Go, and the changes it discussed came to be informally called Go 2. A major change between the two versions is in the way design and changes are influenced: the first version involved only a small team, but the second version will have much more participation from the community. The proposal process started in 2015, and the Go core team will now work on the proposals for the second version of the programming language.

The current status of Go 2 proposals
As of November 2018, there are about 120 open issues on GitHub labeled "Go 2 proposal". Most of them revolve around significant language or library changes, often not compatible with Go 1. The ideas from these proposals will probably influence the language and libraries of the second version. There are now millions of Go programmers and a large body of Go code that needs to be brought along without splitting the ecosystem, so the changes need to be few and carefully selected. To do this, the Go core team is implementing a proposal evaluation process for significant potential changes.

The proposal evaluation process
The purpose of the evaluation process is to collect feedback on a small number of select proposals in order to make a final decision. The process runs in parallel with a release cycle and has five steps.
Proposal selection: the Go core team selects a few Go 2 proposals that seem good candidates for acceptance.
Proposal feedback: the Go team announces the selected proposals and collects feedback from the community, giving the larger community an opportunity to make suggestions or express concerns.
Implementation: the proposals are implemented based on the feedback received. The goal is to have significant changes ready to submit on the first day of an upcoming release cycle.
Implementation feedback: the Go team and community have a chance to experiment with the new features during the development cycle, which yields further feedback.
Final launch decision: the Go team makes the final decision on shipping each change at the end of the three-month development cycle. At this point there is an opportunity to consider whether the change delivers the expected benefits or has created unexpected costs. Once shipped, the changes become part of the Go language.

Proposal selection process and the selected proposals
For a proposal to be selected, the minimum criteria are that it should:
address an important issue for a large number of users
have minimal impact on other users
come with a clear and well-understood solution
For the trial, a select few proposals that are backward compatible, and hence less likely to break existing programs, will be implemented. The proposals are:
General Unicode identifiers based on Unicode TR31, which will allow the use of non-Western alphabets.
Adding binary integer literals and support for _ (underscore) in number literals. Not a change that solves a major problem, but it brings Go up to par with other languages in this respect.
Permitting signed integers as shift counts. This will clean up code and bring shift expressions better in sync with index expressions and built-in functions like cap and len.
The Go team has now started the proposal evaluation process, and the community can provide feedback. Proposals with clear, positive feedback will be taken forward, with the aim of implementing the changes by February 1, 2019. The development cycle runs from February to May 2019, and the chosen features will be implemented as per the outlined process. For more details, you can visit the Go Blog.

Golang just celebrated its ninth anniversary
GoCity: Turn your Golang program into a 3D city
Golang plans to add a core implementation of an internal language server protocol

Introducing WaveMaker 10: An aPaaS software to rapidly build applications with Angular 7 and Kubernetes support

Bhagyashree R
29 Nov 2018
2 min read
Last week, the WaveMaker team released an enhanced version of its platform, WaveMaker 10. This version comes with an advanced technology stack leveraging Angular 7, an integrated artifact repository, IDE synchronization features, and more. WaveMaker is application platform-as-a-service (aPaaS) software that allows developers to rapidly build and run custom apps. It enables developers to build extensible and customizable apps with standard enterprise-grade technologies, and the platform comes with built-in templates, layouts, themes, and widgets to help you build responsive apps without having to write any code.

Key enhancements in WaveMaker 10

Improved application stack with Angular 7 and Kubernetes support
Developers can now leverage Angular 7 to build responsive web and mobile apps; Angular 7 support provides greater performance and efficiency, type safety, and a modern user experience. Scaling applications with Kubernetes is supported via a 1-click deployment feature: you can now natively package your apps as containers and deploy them to a running Kubernetes cluster.

Enhanced developer productivity and collaboration
To give developers more control over their code and help them build apps faster, WaveMaker 10 comes with enhanced IDE support. With the newly introduced workspace sync plugin, developers can pull code changes seamlessly between WaveMaker and any IDE without having to manually export and import them. An integrated artifact repository has been introduced to let developers share reusable application elements like service prefabs, templates, themes, and data models. The platform can now also be localized in a regional language, enabling better collaboration between global development teams.

Increased enterprise security and accessibility
WaveMaker 10 introduces support for configuring and implementing role-based access at both the platform and project levels, so you can create multiple developer personas with unique permission sets. OpenID authentication for Single Sign-On (SSO) is supported by both the platform and the applications built with it. Additionally, all WaveMaker 10 applications are protected against the OWASP Top 10 vulnerabilities to ensure greater security against threats and malicious injections. Applications built with WaveMaker 10 also support the Web Content Accessibility Guidelines (WCAG) 2.1, making them more accessible to users with disabilities.

Head over to WaveMaker's official website to know more in detail.

Angular 7 is now stable
Introducing Alpha Support for Volume Snapshotting in Kubernetes 1.12
Kubernetes 1.12 released with general availability of Kubelet TLS Bootstrap, support for Azure VMSS


Amazon announces the public preview of AWS App Mesh, a service mesh for microservices on AWS

Amrata Joshi
29 Nov 2018
3 min read
Yesterday, at AWS re:Invent, Amazon introduced AWS App Mesh, a service mesh for easily controlling and monitoring communication across microservices on AWS. App Mesh standardizes how microservices communicate and gives users end-to-end visibility. It can be used with Amazon ECS and Amazon EKS to run containerized microservices.

Previously, it was difficult to pinpoint the exact location of errors as the number of microservices within an application grew; to solve this, one had to build monitoring and control logic directly into the code and redeploy the microservices. AWS App Mesh addresses the problem by providing visibility and network traffic controls for every microservice in an application, which makes microservices easier to run and removes the need to update application code. With App Mesh, the logic for monitoring and controlling communications between microservices is implemented as a proxy that runs alongside each microservice, instead of being built into the microservice code. App Mesh automatically sends configuration information to each microservice proxy. The major advantage of placing a proxy in front of every microservice is that metrics, logs, and traces between the services can be captured automatically.

Key features of AWS App Mesh

Identifies issues with microservices
App Mesh captures metrics, logs, and traces from every microservice and exports this data to multiple AWS and third-party tools, including AWS X-Ray and Amazon CloudWatch, for monitoring and control. This helps in identifying and isolating issues with any microservice in order to optimize the application.

Configures the traffic flow
With App Mesh, one can easily implement custom traffic routing rules to ensure that every microservice is highly available during deployments and after failures. AWS App Mesh deploys and configures a proxy that manages all communication traffic to and from the containers, removing the need to configure each microservice's communication protocols, write custom code, or implement libraries to operate applications.

Works with existing microservices
App Mesh can be used with existing or new microservices running on Amazon ECS, AWS Fargate, Amazon EKS, and self-managed Kubernetes on AWS. App Mesh monitors and controls communications for microservices running across orchestration systems and clusters.

Uses the Envoy proxy for monitoring
App Mesh uses the open source Envoy proxy, which works with a wide range of AWS partner and open source tools for monitoring microservices. Envoy is a self-contained process designed to run alongside every application server.

To know more about this news, check out Amazon's official blog post.

Amazon re:Invent announces Amazon DynamoDB Transactions, CloudWatch Logs Insights and cloud security conference, Amazon re:Inforce 2019
Amazon launches AWS Ground Station, a fully managed service for controlling satellite communications, downlink and processing satellite data
3 announcements about Amazon S3 from re:Invent 2018: Intelligent-Tiering, Object Lock, and Batch Operations