
Tech News - Programming

573 Articles

Stack Exchange migrates to .NET Entity Framework Core (EF Core), Stack Overflow to follow soon

Savia Lobo
08 Oct 2018
2 min read
Last week, Nick Craver, Architecture Lead for Stack Overflow, announced that Stack Exchange is migrating to .NET Entity Framework Core (EF Core) and asked users to help test it. The Stack Exchange network has deployed a major migration from its previous Linq-2-SQL data layer to EF Core, and Stack Overflow may also get a partial tier deployed later today. In his post, Nick said, “Along the way we have to swap out parts that existed in the old .NET world but don't in the new.”

Some changes in Stack Exchange and Stack Overflow after migrating to .NET EF Core

The team said that they have safely diverged their Enterprise Q3 release, meaning they work on one codebase for easier maintenance, and the latest features will also be reflected in EF Core. Stack Overflow was written on top of a data layer called Linq-2-SQL. This worked well but had scaling issues, after which the team replaced the performance-critical paths with a library named Dapper. Until now, however, some old paths, mainly where entries are inserted, remained on Linq-2-SQL. As part of the migration, a few code paths went to Dapper instead of EF Core, so Dapper wasn't removed and still exists post-migration. The migration may affect posts, comments, users, and other ‘primary’ object types in Q&A.

Nick also added, “We're not asking for a lot of test data to be created on meta here, but if you see something, please say something!” He further added, “The biggest fear with a change like this is any chance of bad data entering the database, so while we've tested this extensively and have done a few tests deploys already, we're still being extra cautious with such a central & critical change.”

To know more, head over to Nick Craver’s discussion thread on Stack Exchange.
Also read:
  • .NET Core 3.0 and .NET Framework 4.8 more details announced
  • .NET Core 2.0 reaches end of life, no longer supported by Microsoft
  • Stack Overflow celebrates its 10th birthday as the most trusted developer community
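The split the article describes, an ORM for most code paths with hand-written SQL (Dapper's role) on the hot paths, can be sketched in miniature. The following is an illustrative Python/sqlite3 sketch with made-up table and class names, not Stack Overflow's actual data layer:

```python
import sqlite3

class PostMapper:
    """Tiny ORM-style layer: builds SQL for you and maps rows to dicts."""
    def __init__(self, conn):
        self.conn = conn

    def insert(self, title, score):
        self.conn.execute(
            "INSERT INTO posts (title, score) VALUES (?, ?)", (title, score)
        )

    def all(self):
        rows = self.conn.execute("SELECT id, title, score FROM posts")
        return [dict(zip(("id", "title", "score"), r)) for r in rows]

def top_posts_raw(conn, limit):
    """Hot path: hand-written SQL with no mapping overhead beyond tuples,
    which is roughly the role Dapper plays in the .NET stack."""
    return conn.execute(
        "SELECT title FROM posts ORDER BY score DESC LIMIT ?", (limit,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (id INTEGER PRIMARY KEY, title TEXT, score INT)")
mapper = PostMapper(conn)
mapper.insert("Hello", 10)
mapper.insert("World", 42)
print(top_posts_raw(conn, 1))  # [('World',)]
```

The design point is that both paths share one schema and one connection; only the performance-critical reads bypass the mapping layer.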

.NET Core 3.0 and .NET Framework 4.8 more details announced

Prasad Ramesh
05 Oct 2018
4 min read
.NET Core 3.0 was announced in May this year. It adds support for building desktop applications using WinForms, WPF, and Entity Framework 6. Updates to .NET Framework were also announced, enabling the use of new modern controls from UWP in existing WinForms and WPF applications. Now, more details are out on both.

.NET Core 3.0

.NET Core 3.0 addresses three scenarios raised by the .NET Framework developer community.

Multiple versions of .NET on the same machine
As of now, only one version of .NET Framework can be installed on a machine, so an update to it risks a security fix, bug fix, or new API breaking applications on that machine. Microsoft aims to solve this problem by allowing multiple versions of .NET Core to reside on one machine. Applications that need to be stable can be locked to one of the stable versions, then moved to a newer version when it is ready.

Embedding .NET directly into an application
Since there can be only one version of .NET Framework on a machine, taking advantage of the latest framework or language features requires installing the newer version. With .NET Core, you can now ship the framework as part of an application. This lets developers take advantage of the newest features without waiting for the framework to be installed.

Taking advantage of .NET Core features
The side-by-side nature of .NET Core enables the introduction of new innovative APIs and Base Class Library (BCL) improvements without the risk of breaking compatibility. WinForms and WPF applications on Windows can now take advantage of the latest .NET Core features, including more fundamental fixes for better high-DPI support.

.NET Framework 4.8

.NET Framework 4.8 also addresses three scenarios asked for by the .NET Framework developer community.
Modern browser and media controls
.NET desktop applications use Internet Explorer and Windows Media Player for displaying HTML and playing media files. These legacy controls don't display the latest HTML or play the latest media files, so Microsoft is adding new controls that take advantage of Microsoft Edge and newer media players, supporting the latest standards.

Access to touch and UWP controls
The Universal Windows Platform (UWP) contains new controls that take advantage of the latest Windows features and devices with touch displays. Application code does not have to be rewritten to use these new features and controls; Microsoft is making them available to WinForms and WPF so that developers can take advantage of them in existing application code.

Improvements for high DPI
The standard resolution of computer displays is steadily moving to 4K, and even 8K displays are now available. With the newer versions, WinForms and WPF applications will look great on these high-resolution displays.

The future of .NET

.NET Framework is installed on over one billion machines, so even a security fix that introduces a bug will affect a lot of devices. .NET Core is a fast-moving version of .NET; because of its side-by-side nature, it can take changes that would be too risky in .NET Framework. This means .NET Core is bound to get new APIs and language features over time that .NET Framework cannot. If your existing applications are on .NET Framework, there is no immediate need to move to .NET Core. For more details, visit the Microsoft Blog.

Also read:
  • .NET Core 2.0 reaches end of life, no longer supported by Microsoft
  • .NET announcements: Preview 2 of .NET Core 2.2 and Entity Framework Core 2.2, C# 7.3, and ML.NET 0.5
  • Microsoft’s .NET Core 2.1 now powers Bing.com
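The side-by-side idea described above can be illustrated with a toy resolver: several runtime versions coexist on one machine, each application is pinned to a major.minor version, and the newest installed patch of that version is selected. The version list and selection logic below are hypothetical, meant only to show the concept, not the actual .NET host behavior:

```python
# Hypothetical installed runtimes on one machine (illustration only)
INSTALLED = ["2.0.9", "2.1.4", "2.1.5", "3.0.0"]

def parse(v):
    """'2.1.5' -> (2, 1, 5) so versions compare numerically."""
    return tuple(int(p) for p in v.split("."))

def resolve(pinned_major_minor, installed=INSTALLED):
    """Pick the newest installed patch of the pinned major.minor version."""
    major, minor = (int(p) for p in pinned_major_minor.split("."))
    candidates = [parse(v) for v in installed if parse(v)[:2] == (major, minor)]
    if not candidates:
        raise LookupError(f"no runtime installed for {pinned_major_minor}")
    return ".".join(map(str, max(candidates)))

print(resolve("2.1"))  # 2.1.5: newest patch of the pinned minor version
print(resolve("3.0"))  # 3.0.0
```

An app pinned to "2.1" keeps working even after "3.0.0" is installed beside it, which is the stability property the article describes.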

KotlinConf 2018: Kotlin 1.3 RC out and Kotlin/Native hits beta

Prasad Ramesh
05 Oct 2018
2 min read
Day 2 of KotlinConf 2018 ended yesterday, with announcements made about the programming language. There is one more day of the conference, which will be streamed live on the Kotlin website. Here are some of the announcements made so far.

Kotlin 1.3 is now RC

Kotlin 1.3 RC is here and brings a lot of new features, including the following.

Contracts
The Kotlin compiler already does extensive static analysis to show warnings and reduce boilerplate. Contracts in Kotlin 1.3 allow functions to explicitly describe their own behavior in a way the compiler understands.

Coroutines
Kotlin coroutines are no longer experimental and will be supported like other features starting from Kotlin 1.3. Coroutines delegate most of their functionality to libraries and help provide a fluid experience that scales when needed.

Multiplatform projects
The multiplatform projects model has been reworked to improve expressiveness and flexibility, in line with the language's goal of functioning on all platforms. Kotlin currently supports JVM, Android, JavaScript, iOS, Linux, Windows, Mac, and embedded systems like STM32, which is beneficial for reusing code.

Kotlin/Native is now in beta

Kotlin/Native is designed to enable compilation on platforms where virtual machines do not work, such as embedded devices or iOS. It is a solution for situations where developers need to produce a self-contained program that does not require an additional runtime or virtual machine. After several years of development, Kotlin/Native is now in beta.

The Kotlin Foundation

The Kotlin Foundation is a nonprofit, nonstock corporation created in 2018, backed by JetBrains and Google. It aims to protect, promote, and advance the development of Kotlin.
New revamped Playground

The online environment for trying and learning Kotlin has a new look, new functionality, and a new section called Learn Kotlin by Example. All of this is available directly in your web browser via the Kotlin Playground website. The first day's talks can be watched on YouTube, and you can watch the conference live on the Kotlin website.

Also read:
  • Kotlin 1.3 RC1 is here with compiler and IDE improvements
  • How to implement immutability functions in Kotlin [Tutorial]
  • Forget C and Java. Learn Kotlin: the next universal programming language
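Kotlin coroutines themselves live in the kotlinx.coroutines library, but the underlying suspend-and-resume model is the same async/await shape found in other languages. As a rough cross-language analogy only (this is Python asyncio, not Kotlin code), two suspendable computations can run concurrently like this:

```python
import asyncio

async def fetch(name, delay):
    await asyncio.sleep(delay)  # suspension point: the thread is freed here
    return f"{name} done"

async def main():
    # Run two coroutines concurrently and wait for both results
    a, b = await asyncio.gather(fetch("a", 0.01), fetch("b", 0.01))
    return [a, b]

print(asyncio.run(main()))  # ['a done', 'b done']
```

In Kotlin the same shape would use suspend functions with async/await builders; the point is that suspension lets many concurrent tasks share few threads.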

Qt 3D Studio 2.1 released with new sub-presentations, scene preview, and runtime improvements

Natasha Mathur
05 Oct 2018
3 min read
The Qt team released Qt 3D Studio 2.1 earlier this week. Qt 3D Studio 2.1 introduces features such as sub-presentations, scene preview, and runtime improvements. Qt 3D Studio is a design tool used for creating 3D user interfaces and for adding 3D content to Qt-based applications. It helps with designing the look and feel, animations, and user interface states of 3D content.

New Editor features

Two new features have been added to the Editor in Qt 3D Studio 2.1: sub-presentations and scene preview.

Sub-presentations
This feature provides an option to embed another Studio presentation or a QML file within a Studio presentation. For example, you can divide the design work into smaller projects and make reusable components. Managing sub-presentations, as well as adding them into views, is easy in Qt 3D Studio 2.1. A project browser shows all the Qt Quick files (.qml) and Qt 3D Studio presentations (.uip) that have been imported into the main project. These files can then be added to a scene layer, or as a texture to an object, by dragging them from the project browser onto the scene. Sub-presentations can now be viewed directly in the scene view, letting you see the whole user interface while creating the design.

Scene preview
Qt 3D Studio 2.1 comes with a new scene preview option for when you are working with different camera views (perspective, top, and so on). This is handy when aligning objects in the scene.

Runtime

The runtime side of Qt 3D Studio 2.1 mainly focuses on performance and stability improvements. The Qt team is working on a new API that will replace the old runtime in the Qt 3D Studio Editor. In future releases, the new API will also be capable of dynamic content creation from the application side. Support for compressed textures, already a feature in Qt 5.11, has also been added to the Qt 3D Studio runtime.
By compressing textures, you can improve loading times and save memory on devices supporting ETC2 or ASTC compressed textures. An asset compression management feature will also be added on the Editor side in future releases of Qt 3D Studio. For more information, check out the official documentation.

Also read:
  • Qt Creator 4.7.0 releases!
  • Qt for Python 5.11 released!
  • WebAssembly comes to Qt. Now you can deploy your next Qt app in the browser

.NET Core 2.0 reaches end of life, no longer supported by Microsoft

Prasad Ramesh
04 Oct 2018
2 min read
.NET Core 2.0 was released in mid-August 2017. It has now reached end of life (EOL) and will no longer be supported by Microsoft.

.NET Core 2.0 EOL

.NET Core 2.1 was released towards the end of May 2018, and .NET Core 2.0 reached EOL on October 1. This was supposed to happen on September 1 but was pushed back by a month because users experienced issues upgrading to the newer version. .NET Core 2.1 is a long-term support (LTS) release and should be supported until at least August 2021. It is recommended to upgrade to and use .NET Core 2.1 for your projects; there are no major changes in the newer version.

.NET Core 2.0 is no longer supported and will not receive updates. The installers, zips, and Docker images of .NET Core 2.0 will remain available, but they won't be supported. Downloads for 2.0 will still be accessible via the Download Archives. However, .NET Core 2.0 has been removed from the microsoft/dotnet repository README file; all existing images remain available in that repository.

Microsoft's support policy

LTS releases contain stabilized features and components and require fewer updates over their longer support lifetime. They are a good choice for applications that developers do not intend to update very often. 'Current' releases include features that are new and may change in the future based on feedback and issues. They give access to the latest features and improvements and hence are a good choice for applications in active development, though upgrades to newer .NET Core releases are required more frequently to stay in support.

Some of the new features in .NET Core 2.1 include performance improvements, long-term support, Brotli compression, and new cryptography APIs. To migrate from .NET Core 2.0 to .NET Core 2.1, visit the Microsoft website. You can read the official announcement on GitHub.

Note: article amended 08.10.2018 - .NET Core 2.0 reached EOL on October 1, not .NET Core 2.1.
The installers, zips, and Docker images will still remain available but won't be supported.

Also read:
  • .NET announcements: Preview 2 of .NET Core 2.2 and Entity Framework Core 2.2, C# 7.3, and ML.NET 0.5
  • Microsoft’s .NET Core 2.1 now powers Bing.com
  • Use App Metrics to analyze HTTP traffic, errors & network performance of a .NET Core app [Tutorial]

Sourcegraph, a code search and navigation engine, is now open source!

Natasha Mathur
03 Oct 2018
2 min read
The Sourcegraph team announced earlier this week that they are making Sourcegraph, a self-hosted code search and navigation engine, available as open source. “We opened up Sourcegraph to bring code search and intelligence to more developers and developer ecosystems—and to help us realize the Sourcegraph master plan,” writes Quinn Slack on the announcement page. The Sourcegraph master plan involves making basic code intelligence ubiquitous (for every language, and in every editor, code host, etc.) and focusing on making code review continuous and intelligent. Additionally, the team hopes to increase the amount and quality of open-source code.

Sourcegraph comprises the following features:

Instant code search: Fast global code search with a hybrid backend that combines a trigram index with in-memory streaming. You can search files and diffs in your code using simple terms, regular expressions, and other filters.

Code intelligence: Offers code intelligence for many languages using the Language Server Protocol, making it easier to browse code and find references.

Data Center: Once you grow to hundreds or thousands of users and repositories, you can graduate from the single-server deployment to a highly scalable cluster using Sourcegraph Data Center.

Integrations: Offers integration with third-party developer tools via the Sourcegraph Extension API.

Organizations already using the Sourcegraph navigation engine can upgrade to Sourcegraph Enterprise (previously called Data Center) for features that large organizations need, such as single sign-on, backups and recovery, and cluster deployment. However, these additional Enterprise features are paid and not open source. “We're also excited about what this means for Sourcegraph as a company.
All of our customers, many with hundreds or thousands of developers using Sourcegraph internally every day, started out with a single developer spinning up a Sourcegraph instance and sharing it with their team. Being open-source makes it even easier to start using Sourcegraph in that way,” the announcement page explains. For more information, check out the official announcement.

Also read:
  • Facebook open sources LogDevice, a distributed data store for logs
  • Kong 1.0 launches: the only open source API platform specifically built for microservices, cloud, and serverless
  • Uber’s Marmaray, an Open Source Data Ingestion and Dispersal Framework for Apache Hadoop
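The trigram indexing behind the hybrid search backend can be sketched in a few lines. This toy Python version (simplified far beyond what Sourcegraph actually does) shows the core trick: intersect the posting lists of the query's trigrams to narrow the candidate documents, then verify candidates with an exact substring scan:

```python
from collections import defaultdict

def trigrams(s):
    """Every 3-character substring of s."""
    return {s[i:i + 3] for i in range(len(s) - 2)}

def build_index(docs):
    """Map each trigram to the set of document ids containing it."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for t in trigrams(text):
            index[t].add(doc_id)
    return index

def search(query, docs, index):
    candidates = None
    for t in trigrams(query):
        hits = index[t]
        candidates = hits if candidates is None else candidates & hits
    # The intersection can have false positives, so confirm with an exact scan
    return sorted(d for d in (candidates or set()) if query in docs[d])

docs = {1: "func openFile(path string)", 2: "def open_file(path):"}
idx = build_index(docs)
print(search("openFile", docs, idx))  # [1]
```

The index keeps the exact scan cheap by only ever scanning documents that contain every trigram of the query.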

Is Atlassian’s decision to forbid benchmarking potentially masking its degrading performance?

Savia Lobo
02 Oct 2018
3 min read
Last week, Atlassian released its updated Atlassian Software License Agreement and Cloud Terms of Service, effective from the 1st of November, 2018. Like any general agreement, this one covers scope, authorized users, use of the software, and so on. However, it also sets up certain restrictions concerning the performance of its software: as per the new agreement, benchmarking of Atlassian software is forbidden.

Restrictions on benchmarking

In a discussion on the Atlassian Developer Community, Andy Broker, a Marketplace vendor, highlighted two clauses from the restrictions section:

(i) publicly disseminate information regarding the performance of the Software

Andy Broker explains this clause as, “This sounds very much like the nonsense clause that Intel were derided for, regarding the performance of their CPU’s. Intel backtracked after being lambasted by the world, I can’t really understand how these points got into new Atlassian terms, surely the terms have had a technical review? Just… why, given all the DC testing being done ongoing, this is an area where data we gathered may be interesting to prospective customers.”

(j) encourage or assist any third party to do any of the foregoing.

He further adds, “So, we can’t guide/help a customer understand how to even measure performance to determine if they have a performance issue in the “Software”, e.g. generating a performance baseline before an ‘app’ is involved? The result of this would appear sub-optimal for Customer and Vendors alike, the “Software” performance just becomes 3rd party App performance that we cannot ‘explain’ or ‘show’ to customers.”

Why did Atlassian decide to forbid benchmarking?

In a discussion thread on Hacker News, many users have stated their views on why Atlassian may have decided to forbid benchmarking of its software.
According to one comment, “If the company bans benchmarking, then the product is slow.” A user also stated that Atlassian software is slow, has annoying UX, and is very inconsistent. This may be because most of its software is built around Jira, which has a Java backend and is not Node-based; Jira cannot be rebuilt from scratch, only slowly abstracted and broken up into smaller pieces. Also, around three years ago, Atlassian forked its behind-the-firewall product into two distinct products, a multi-tenant cloud offering and the traditional one, to get into the cloud sector with a view to attracting more potential customers and increasing growth.

A user also stated, “Cloud was full buy into AWS, taking a behind the firewall product and making it multi-tenanted cloud-based is a huge job. A monolith service now becomes highly distributed so latency's obviously mounted up due to the many services to service interactions.” The user further added, “some things which are heavily multi-threaded and built in statically compiled languages had to be built in single threaded Node.js because everyone is using Node.js and your language is now banned. It's not surprising there are noticeable performance differences.”

Another user suggested that “the better way for a company to handle this concern is to proactively run and release benchmarks including commentary on the results, together with everything necessary for anyone to reproduce their results,” adding that the company could even fund a trustworthy, neutral third party to perform benchmarking with proper disclosure of the funding. To read the entire discussion in detail, head over to Hacker News.

Also read:
  • Atlassian acquires OpsGenie, launches Jira Ops to make incident response more powerful
  • Atlassian sells Hipchat IP to Slack
  • Atlassian open sources Escalator, a Kubernetes autoscaler project
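For context on the "performance baseline" Andy Broker mentions, this is roughly how such a baseline is typically taken: time an operation repeatedly and report a statistic robust to outliers. The sketch below is plain Python timeit with a stand-in workload; nothing here is Atlassian-specific:

```python
import timeit
from statistics import median

def render_page(n=1000):
    # Stand-in workload for whatever operation you want a baseline for
    return sum(i * i for i in range(n))

def baseline(func, repeats=5, number=200):
    """Seconds per call: each sample runs func `number` times, and we keep
    the median sample, which is less skewed by GC pauses or scheduler noise
    than the mean."""
    samples = timeit.repeat(func, repeat=repeats, number=number)
    return median(samples) / number

per_call = baseline(render_page)
print(f"median: {per_call * 1e6:.1f} microseconds per call")
```

Taking such a baseline before and after installing an add-on is exactly the kind of measurement the quoted clauses would restrict vendors from publishing.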

“Facebook is the new Cigarettes”, says Marc Benioff, Salesforce Co-CEO

Kunal Chaudhari
02 Oct 2018
6 min read
It was that time of the year when Salesforce enthusiasts, thought leaders, and pioneers gathered in downtown San Francisco to attend the annual Dreamforce conference last week. This year marked the 15th anniversary of the Salesforce annual conference, with over 100,000 trailblazers flocking to the Bay Area. Throughout the years, technological development in the platform has been the focal point of these conferences, but it was different this time around. A lot has happened between the 2017 conference and now, especially after Facebook’s Cambridge Analytica scandal. First WhatsApp’s co-founder Jan Koum parted ways with Facebook, and now the Instagram co-founders have called it quits. Interestingly, Marc Benioff gave an interview to Bloomberg Technology in which he condemned Facebook as the ‘new cigarettes’.

To regulate or not to regulate, that is the question

Marc Benioff has been a vocal critic of the social media platform. Earlier this year, when innovators and tech leaders gathered at the annual World Economic Forum in the Swiss Alps of Davos, Benioff was one of the panelists discussing trust in technology, where he made some interesting points. He took the example of the financial industry a decade ago, where bankers were confident that new products like credit default swaps (CDS) and collateralized debt obligations (CDO) would lead to better economic growth, but they instead led to the biggest financial crisis the world had ever seen. Similarly, he argued that cigarettes were introduced as a great pastime product, without any knowledge of their adverse effects on health. To cut the story short, the point Benioff was making is that these industries were able to take advantage of the addictive behavior of humans because of a clear lack of regulation from governmental bodies.
It was only when regulators became strict with these sectors and public reforms came into the picture that these products were brought under control. Similarly, Benioff has called for regulation of companies in light of the recent news linking Russian interference to the US presidential elections. He urged company CEOs to take better responsibility for their consumers and their products, without explicitly mentioning any name. Let’s take a guess: Mark Zuckerberg, anyone? While Benioff made a strong case for regulation, the solution seemed to be more politically driven. Rachel Botsman, Visiting Academic and Lecturer at the Saïd Business School, University of Oxford, argued that regulators are not aware of the new decentralized nature of today’s technological platforms. And ultimately, who do we want as the arbiters of truth: Facebook, regulators, or the users? Where does the hierarchy of accountability lie in this new structure of platforms? The big question remains.

The ethical and humane side of technology

Fast forward to Dreamforce 2018, with star-studded guest speakers ranging from former American Vice President Al Gore to Andre Iguodala of the NBA’s Golden State Warriors. Benioff started with his usual opening keynote, but this time with a lot of enthusiasm, or as one might say, in full evangelical mode. The message from the Salesforce CEO was very clear: “We are in the fourth industrial revolution.” Salesforce announced plenty of new products and some key strategic business partnerships, with the likes of Apple and AWS now joining Salesforce. While these announcements summarized the technological advancements in the platform, his interview with Bloomberg Technology’s Emily Chang was quite timely. The interview started casually, with Benioff talking about sharing his job with the new Co-CEO Keith Block.
But soon they discussed the news about Instagram founders Kevin Systrom and Mike Krieger leaving the services of parent company Facebook. While Benioff still maintained his position on regulation, he also discussed the ethics and humane side of technology. The ethics of technology has come under the spotlight in recent months with the advancements in artificial intelligence. To address these questions, Benioff said that Salesforce has taken its first step by setting up the Office of Ethical and Humane Use of Technology at the Salesforce Tower in San Francisco. At first glance, this initiative looks like a solid first step towards solving the problem of technology being used for unethical work. But going back to the argument posed by Rachel Botsman: who actually leverages technology to do unethical work? Is it the company or the consumer? While Salesforce boasts about its stand on the ethics of building technological systems, Marc Benioff is still silent on the question of Salesforce’s ties with the US Customs and Border Protection (CBP) agency, which follows Donald Trump’s strong anti-immigration agenda. Protesters took a stand against this issue during the Salesforce conference, and hundreds of Salesforce employees wrote an open letter to Benioff asking him to cut ties with the CBP. In response, Benioff said that the contract with CBP does not deal directly with the separation of children at the Mexican border.

One decision at a time

Ethics is largely driven by human behavior. While innovators believe that technological advancement should happen regardless of the outcome, it is the responsibility of every stakeholder in a company, be it a developer, an executive, or a customer, to take action against unethical work. And with each mistake, companies and CEOs are given opportunities to set things right. Take McKinsey & Company, for example. The top management consultancy was under fire due to a scandal involving the South African government.
But when the firm again came under scrutiny for its ties with the CBP, McKinsey’s new managing partner, Kevin Sneader, came out saying that the firm “will not, under any circumstances, engage in any work, anywhere in the world, that advances or assists policies that are at odds with our values.” It is now time for companies like Facebook and Salesforce to set the benchmark for the future of technology.

Also read:
  • How far will Facebook go to fix what it broke: Democracy, Trust, Reality
  • SAP creates AI ethics guidelines and forms an advisory panel
  • The Cambridge Analytica scandal and ethics in data science
  • Introducing Deon, a tool for data scientists to add an ethics checklist
  • The ethical dilemmas developers working on Artificial Intelligence products must consider
  • Sex robots, artificial intelligence, and ethics: How desire shapes and is shaped by algorithms

Facebook releases Skiplang, a general purpose programming language

Prasad Ramesh
01 Oct 2018
2 min read
Facebook released Skip, or Skiplang, last week, a language it has been developing since 2015. It is a general-purpose programming language that provides caching with features like reactive invalidation, safe parallelism, and efficient garbage collection.

Skiplang features

Skiplang's primary goal is to explore language and runtime support for correct, efficient memoization-based caching and cache invalidation. It achieves this via a static type system that carefully tracks mutability. The language is statically typed and compiled ahead of time using LLVM to produce highly optimized executables.

Caching with reactive invalidation
The main new language feature in Skiplang is its precise tracking of side effects. This includes both the mutability of values and the distinction between non-deterministic data sources and data sources that can provide reactive invalidations, which tell Skiplang when data has changed.

Safe parallelism
Skiplang supports two complementary forms of concurrent programming, both of which avoid the usual thread-safety issues thanks to the language's tracking of side effects. The language supports ergonomic asynchronous computation with async/await syntax; asynchronous computations cannot refer to mutable state and are therefore safe to execute in parallel, allowing independent async continuations to proceed concurrently. Skiplang also has APIs for direct parallel computation, again using its tracking of side effects to prevent thread-safety issues like shared access to mutable state.

An efficient and predictable garbage collector
Skiplang’s approach to memory management combines aspects of typical garbage collectors with more straightforward linear allocation schemes. The garbage collector only has to scan the memory reachable from the root of a computation, which allows developers to write code with predictable garbage collector overhead.
A hybrid functional object-oriented language
Skiplang is a mix of ideas from functional and object-oriented styles, carefully integrated to form a cohesive language. Like other functional languages, it is expression-oriented and supports features like abstract data types, pattern matching, easy lambdas, higher-order functions, and (optionally) enforcing pure, referentially transparent API boundaries. Like OOP languages, it supports classes with inheritance, mutable objects, loops, and early returns. In addition, Skiplang incorporates ideas from "systems" languages, supporting low-overhead abstractions and compact memory layout of objects. To learn more about the language, see the Skiplang website and its GitHub repository.

Also read:
  • JDK 12 is all set for public release in March 2019
  • Python comes third in TIOBE popularity index for the first time
  • Michael Barr releases embedded C coding standards
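Skiplang's headline idea, memoization-based caching with reactive invalidation, can be approximated as an ordinary library pattern. The Python sketch below uses hypothetical names; Skip's contribution is making this safe and automatic by tracking side effects in the type system, rather than leaving it to programmer discipline:

```python
class Memo:
    """Cache a pure function's results, with explicit invalidation."""
    def __init__(self, func):
        self.func = func
        self.cache = {}
        self.calls = 0  # count real evaluations, to observe caching

    def __call__(self, *args):
        if args not in self.cache:
            self.calls += 1
            self.cache[args] = self.func(*args)
        return self.cache[args]

    def invalidate(self, *args):
        # "Reactive invalidation": a data source signals that inputs changed,
        # so the cached result for those inputs must be recomputed on demand
        self.cache.pop(args, None)

@Memo
def expensive(n):
    return n * n

assert expensive(4) == 16 and expensive(4) == 16
assert expensive.calls == 1   # second call was served from the cache
expensive.invalidate(4)
assert expensive(4) == 16
assert expensive.calls == 2   # recomputed after invalidation
```

In Skip the compiler can apply this transparently because the type system guarantees the memoized function has no untracked side effects, which is exactly what makes manual versions of this pattern fragile elsewhere.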

The Haiku operating system has released R1/beta1

Melisha Dsouza
01 Oct 2018
6 min read
As promised by the Haiku team earlier this month, Haiku R1 has now been released in beta! After the long gap since Haiku's last release in November 2012, users can expect plenty of upgrades in R1/beta1. The Haiku OS is known for its ease of use, responsiveness, and overall coherence. With improvements to its package manager, WebPositive, the media subsystem, and much more, Haiku has made the wait worthwhile. Let's dive into some of the major upgrades of this release.

#1 Package management

The biggest upgrade in the R1 beta is the addition of a complete package management system. Finalized and merged during 2013, Haiku packages are a special type of compressed filesystem image. These are 'mounted' upon installation, and thereafter on each boot, by the packagefs. It is worth noting that since packages are merely "activated", not installed, the bootloader has been given some capacity to affect them. Users can boot into a previous package state (in case they took a bad update) or even blacklist individual files. Installations and uninstallations of packages are practically instant. Users can manage the installed package set on a non-running Haiku system by mounting its boot disk and then manipulating the /system/packages directory and associated configuration files.

The Haiku team has also introduced pkgman, the command-line interface to the package management system. Unlike most other package managers, where packages can be installed only by name, Haiku packages can also be searched for and installed by what they provide, e.g. pkgman install cmd:rsync or pkgman install devel:libsdl2, which will locate the most relevant package providing that and install it. Accompanying the package manager is a massively revamped HaikuPorts, containing a wide array of both native and ported software for Haiku.

#2 WebPositive upgrades

The team has made the system web browser much more stable than before. Glitches with YouTube are now fixed.
While working on WebKit, the team also managed to fix a large number of bugs in Haiku itself, such as broken stack alignment, various kernel panics in the network stack, and bad edge-case handling in app_server's rendering core, alongside GCC upgrades and more. HaikuWebKit, which now uses Haiku's own network protocol layer, also supports Gopher.

#3 Completely rewritten network preflet

The newly rewritten network preflet is designed for ease of use and longevity. In addition to the interface configuration screens, the preflet can now also manage the network services on the machine, such as OpenSSH and ftpd. It uses a plugin-based API, which helps third-party network services like VPNs and web servers integrate with it.

#4 User interface cleanup & live color updates

Mail and Tracker now sport Haiku-style toolbars and font-size awareness, among other applications. This will enable users to add proper DPI scaling and right-to-left layouts. Instead of requesting a specific system color and then manipulating it, most applications now instruct their controls to adopt certain colors based on the system color set directly.

#5 Media subsystem improvements

The Haiku team has cleaned up the Media Kit to improve fault tolerance, latency correction, and performance, which helps with the Kit's overall resilience. HTTP and RTSP streaming support is integrated into the I/O layer of the Media Kit, so live streams can now be played in WebPositive via HTML5 audio/video support, or in the native MediaPlayer. Significant improvements were also made to the FFmpeg decoder plugin: rather than the ancient FFmpeg 0.10, the last version that GCC2 can compile, FFmpeg 4.0 is now used all-around, for better support of both audio and video formats as well as significant performance improvements. The driver for HDA saw a good number of cleanups and wider audio support since the previous release.
The DVB tuner subsystem saw a substantial amount of rework, and the APE reader was cleaned up and added to the default builds.

#6 RemoteDesktop

Haiku's native RemoteDesktop application was improved and added to the builds. RemoteDesktop forwards drawing commands from the host system to the client system, which for most applications consumes significantly less bandwidth. RemoteDesktop can connect to and run applications on any Haiku system that users have SSH access to; there is no need for a remote server.

#7 New thread scheduler

Haiku's kernel thread scheduler is now O(1) (constant time) with respect to threads, and O(log N) (logarithmic time) with respect to processor cores. The new limit is 64 cores, an arbitrary constant that can be increased at any time. There are also new implementations of the memcpy and memset primitives for x86, which significantly increase their performance.

#8 Updated Ethernet & WiFi drivers

The ethernet and WiFi drivers have been upgraded to those from FreeBSD 11.1. This brings in support for Intel's newer "Dual Band" family, some of Realtek's PCI chipsets, and newer-model chipsets in all other existing drivers. Additionally, the FreeBSD compatibility layer now interfaces with Haiku's support for MSI-X interrupts, meaning that WiFi and ethernet drivers will take advantage of it wherever possible, leading to significant improvements in latency and throughput.

#9 Updated file system drivers

The NFSv4 client was finally merged into Haiku itself and is included by default. Additionally, Haiku's userlandfs, which supports running filesystem drivers in userland, now ships with Haiku itself. It supports running BeOS filesystem drivers and Haiku filesystem drivers, and provides FUSE compatibility. As a result, various FUSE-based filesystem drivers are now available in the ports tree, including FuseSMB, among others.
Apart from the above-mentioned features, users can look forward to an EFI bootloader and GPT support, a built-in debugger, general system stabilization, and much more! Reddit also saw comments from users eagerly awaiting this release.

After a long span of 17 years from its day of launch, it will be interesting to see how this upgrade is received by the masses. To know more about Haiku R1, head over to the official site.

Sugar operating system: A new OS to enhance GPU acceleration security in web apps

cstar: Spotify's Cassandra orchestration tool is now open source!

OpenSSL 1.1.1 released with support for TLS 1.3, improved side channel security
TypeScript 3.1 releases with typesVersions redirects, mapped tuple types

Bhagyashree R
28 Sep 2018
3 min read
After announcing the TypeScript 3.1 RC last week, Microsoft released TypeScript 3.1 as a stable version yesterday. This release comes with support for mapped array and tuple types, easier properties on function declarations, typesVersions for version redirects, and more.

Support for mapped array and tuple types

TypeScript has a concept called 'mapped object types', which can generate new types out of existing ones. Instead of introducing a new concept for mapping over a tuple, mapped object types now just "do the right thing" when iterating over tuples and arrays. This means that if you are using existing mapped types like Partial or Required from lib.d.ts, they will now also automatically work on tuples and arrays, eliminating the need to write a ton of overrides.

Properties on function declarations

For any function or const declaration that's initialized with a function, the type checker will analyze the containing scope to track any added properties. This enables users to write canonical JavaScript code without resorting to namespace hacks. Additionally, this approach to property declarations allows users to express common patterns like defaultProps and propTypes on React stateless function components (SFCs).

Introducing typesVersions for version redirects

Users are always excited to use new type system features in their programs or definition files. For library maintainers, however, this creates a difficult situation: they are forced to choose between supporting new TypeScript features and not breaking older versions. To solve this, TypeScript 3.1 introduces a new feature called typesVersions. When TypeScript opens a package.json file to figure out which files it needs to read, it first looks for the typesVersions field, which tells TypeScript which version of TypeScript is running.
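A minimal package.json sketch of the field (the package name is hypothetical; the redirect pattern follows the TypeScript 3.1 announcement):

```json
{
  "name": "example-pkg",
  "version": "1.0.0",
  "types": "./index.d.ts",
  "typesVersions": {
    ">=3.1": { "*": ["ts3.1/*"] }
  }
}
```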
If the version in use is 3.1 or later, TypeScript figures out the path you've imported relative to the package and reads from the package's ts3.1 folder.

Refactor from .then() to await

With this new refactoring, you can now easily convert functions that return promises constructed with chains of .then() and .catch() calls into async functions that use await.

Breaking changes

Vendor-specific declarations removed: TypeScript's built-in .d.ts library and other built-in declaration file libraries are partially generated using Web IDL files provided by the WHATWG DOM specification. While this makes maintaining lib.d.ts easier, many vendor-specific types have been removed.

Differences in narrowing functions: Using the typeof foo === "function" type guard may provide different results when intersecting with relatively questionable union types composed of {}, Object, or unconstrained generics.

How to install this latest version?

You can get the latest version through NuGet, or via npm by running:

npm install -g typescript

According to the roadmap, TypeScript 3.2 is scheduled for release in November with strictly-typed call/bind/apply on function types. To read the full list of updates, check the official announcement on MSDN.

TypeScript 3.1 RC released

TypeScript 3.0 is finally released with 'improved errors', editor productivity and more

How to work with classes in Typescript
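Two of the language changes above can be sketched in a few lines (illustrative names; requires TypeScript 3.1 or later):

```typescript
// Mapped types now distribute over tuples: Partial<[number, number]>
// yields a tuple with optional elements rather than an object type.
type Point = [number, number];
type PartialPoint = Partial<Point>;
const p: PartialPoint = [1]; // second element may be omitted

// Properties on function declarations: the checker tracks this
// assignment, so no namespace hack is needed.
function greet(name: string): string {
    return `Hello, ${name}!`;
}
greet.defaultName = "world";

console.log(greet(greet.defaultName)); // prints "Hello, world!"
console.log(p);
```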

IPython 7.0 releases with AsyncIO Integration and new Async libraries

Natasha Mathur
28 Sep 2018
2 min read
The IPython team released version 7.0 of IPython yesterday. IPython is a powerful Python interactive shell with features such as advanced tab completion, syntax coloring, and more. IPython 7.0 brings new features such as asyncio integration, new async libraries, and async support in notebooks.

IPython (Interactive Python) provides a rich toolkit for interactive computing in multiple programming languages. It's the Jupyter kernel for Python, used by millions of users. Let's discuss the key features of the IPython 7.0 release.

AsyncIO integration

IPython 7.0 integrates IPython with asyncio, which means you no longer have to import or set up asyncio yourself to await code at the prompt. asyncio is a library that lets you write concurrent code using the async/await syntax, and it serves as a foundation for multiple Python asynchronous frameworks providing high-performance network and web servers, database connection libraries, distributed task queues, and more. Just remember that async code won't magically make your programs faster, but it will make them easier to write.

New async libraries (Curio and Trio integration)

Python's async and await keywords help simplify asynchronous programming and standardize it around asyncio. They also allow experimentation with new paradigms for asynchronous libraries. Two such libraries, Curio and Trio, are now supported in IPython 7.0. Both explore ways to write asynchronous programs, and how to use async, await, and coroutines when starting from a blank slate.

Curio is a library for performing concurrent I/O and common system programming tasks. It makes use of Python coroutines and the explicit async/await syntax. Trio is an async/await-native I/O library for Python that lets you write programs that do multiple things at the same time with parallelized I/O.
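In an IPython 7 session you can now simply type `await main()` at the prompt; in a plain script the same coroutine still needs an explicit event loop. A minimal asyncio sketch (the coroutine names are illustrative):

```python
import asyncio

async def fetch(tag: str, delay: float) -> str:
    # Simulate non-blocking I/O with an asynchronous sleep.
    await asyncio.sleep(delay)
    return f"{tag} done"

async def main() -> list:
    # Run both "requests" concurrently; total time is roughly max(delay).
    return await asyncio.gather(fetch("a", 0.01), fetch("b", 0.02))

results = asyncio.run(main())  # in IPython 7 you could write: await main()
print(results)  # ['a done', 'b done']
```

asyncio.gather preserves the order of its arguments, so the results list matches the call order regardless of which coroutine finishes first.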
Async support in notebooks

Async code will now work in a notebook when using ipykernel, for Jupyter users. With IPython 7.0, async will work with all the frontends that support the Jupyter protocol, including the classic Notebook, JupyterLab, Hydrogen, nteract desktop, and nteract web. The code runs by default in the existing asyncio/tornado loop that runs the kernel.

For more information, check out the official release notes.

Make Your Presentation with IPython

How to connect your Vim editor to IPython

Increase your productivity with IPython

Nim 0.19, a statically typed and compiled language, is out with Nimble 0.9.0 support

Bhagyashree R
28 Sep 2018
3 min read
Earlier this week, the Nim team announced the release of Nim 0.19 with many language changes, async improvements, and support for the latest Nimble, 0.9.0. Nim is a systems and applications programming language that aims for performance, portability, and expressiveness. It is a statically typed and compiled language that comes with unparalleled performance in an elegant package. Its key features include:

High-performance garbage collection
Compiles to C, C++, or JavaScript
Runs on Windows, macOS, and Linux

What's new in Nim 0.19?

Language changes and additions

The nil state for strings/seqs is no longer supported, and their default value is changed to "" / @[]. In the transition period you can use --nilseqs:on.
It is now invalid to access the binary zero terminator in Nim's native strings, but internally a string can still have a trailing zero to support zero-copy interoperability with cstring. In the transition period you can compile your code with the new --laxStrings:on switch.
Instead of being an all-or-nothing switch, experimental is now a pragma and a command-line switch that can enable specific language extensions.
You can make dot calls combined with explicit generic instantiations using the syntax x.y[:z], which the parser converts to y[z](x).
You can use func as an alias for proc {.noSideEffect.}.
Nim now supports for-loop macros, making for loops and iterators more flexible to use. This feature enables a Python-like generic enumerate implementation.
To implement pattern matching for certain types, case statements can be rewritten via macros.
Keyword arguments after the comma are supported in the command syntax.
Declaration of thread-local variables inside procs is now supported, implying all the effects of the global pragma.
Nim supports the except clause in the export statement.

Async improvements

Nim's async macro now works completely with exception handling. The use of await in a try statement is also supported.
Supports Nimble 0.9.0

This release ships with Nimble 0.9.0, which was released in August. That version contains a large number of fixes spread across 57 commits. One breaking change to keep in mind is that any package that specifies a bin value in its .nimble file will no longer install any Nim source code files.

Breaking changes

Deprecated symbols in the standard library, such as system.expr and the old type aliases starting with a T or P prefix, have been removed.
SystemError is renamed to CatchableError and is the new base class for any exception that is guaranteed to be catchable.

Read the full announcement on Nim's official website.

Rust as a Game Programming Language: Is it any good?

Java 11 is here with TLS 1.3, Unicode 11, and more updates

The 5 most popular programming languages in 2018
OpenMP, libc++, and libc++abi, are now part of llvm-toolchain package

Bhagyashree R
27 Sep 2018
2 min read
On Tuesday, LLVM announced that starting from LLVM 7, the libc++, libc++abi, and OpenMP packages are integrated into the llvm-toolchain package. Integration of these libraries was a project proposed in Google Summer of Code 2018.

Warnings and usage of the libc++* and OpenMP packages

libc++* packages

The libc++ and libc++abi packages currently present in the Debian and Ubuntu repositories are not affected, but they will be removed in later versions. The newly integrated libcxx* packages are not co-installable with them. To keep library usage the same as before, symlinks are provided from the original locations, for example from /usr/lib/x86_64-linux-gnu/libc++.so.1.0 to /usr/lib/llvm-7/lib/libc++.so.1.0. libc++ is used as follows:

$ clang++-7 -std=c++11 -stdlib=libc++ foo.cpp
$ ldd ./a.out|grep libc++
  libc++.so.1 => /usr/lib/x86_64-linux-gnu/libc++.so.1 (0x00007f62a1a90000)
  libc++abi.so.1 => /usr/lib/x86_64-linux-gnu/libc++abi.so.1 (0x00007f62a1a59000)

OpenMP packages

Though OpenMP has been part of the Debian and Ubuntu archives, only one version was supported on a system at a time. To address this, OpenMP is now integrated into the llvm-toolchain. Similar to libc++, to keep current usage the same, the newly integrated package creates a symlink from /usr/lib/libomp.so.5 to /usr/lib/llvm-7/lib/libomp.so.5. It can be used with clang through the -fopenmp flag:

$ clang -fopenmp foo.c

The dependency packages that provide the default libc++* and OpenMP packages are also integrated into llvm-defaults. Using the following command, you can install the current versions of all these packages:

$ apt-get install libc++-dev libc++abi-dev libomp-dev

For more clarity on the integration of libc++* and OpenMP in llvm-toolchain, check out the announcement on LLVM's site.

LLVM 7.0.0 released with improved optimization and new tools for monitoring

Boost 1.68.0, a set of C++ source libraries, is released, debuting YAP!

Will Rust Replace C++?

Java 11 is here with TLS 1.3, Unicode 11, and more updates

Prasad Ramesh
26 Sep 2018
3 min read
After the first release candidate last month, Java 11 is now generally available. The GA version is the first release with long-term support (LTS). Some of the new features include nest-based access control, a new garbage collector, support for Unicode 11, and TLS 1.3.

New features in Java 11

Some of the new features in Java 11 include nest-based access control, dynamic class-file constants, a no-op garbage collector called Epsilon, and more. Let's look at these features in detail.

Nest-based access control

'Nests' are introduced as an access control context that aligns with the existing notion of nested types in Java. Classes that are logically part of the same code entity but are compiled to distinct class files can access each other's private members via nests, eliminating the need for compilers to insert bridge methods. Two members of a nest are described as 'nestmates'. Nests do not apply to larger scales of access control like modules.

Dynamic class-file constants

The existing Java class-file format is extended to support a new constant-pool form called CONSTANT_Dynamic. Loading this new form delegates its creation to a bootstrap method, in the same way that linking an invokedynamic call site delegates linkage to a bootstrap method. The aim is to reduce the cost and disruption of creating new forms of materializable class-file constants, giving broader options to language designers and compiler implementors.

Epsilon, a no-op garbage collector

Epsilon is a new experimental garbage collector in Java 11 that handles memory allocation but does not actually reclaim any memory. It works by implementing linear allocation in a single contiguous chunk of memory. The JVM will shut down when the available Java heap is exhausted.

Added support for Unicode 11

Java 11 brings Unicode 11 support to existing platform APIs.
The following Java classes are mainly supported with Unicode 10:

In the java.lang package: Character and String
In the java.awt.font package: NumericShaper
In the java.text package: Bidi, BreakIterator, and Normalizer

This upgrade includes the Unicode 9 changes, and adds a total of 16,018 characters and ten new scripts.

Flight recorder

The flight recorder in Java 11 is a low-overhead data collection framework for troubleshooting Java applications and the HotSpot JVM.

TLS 1.3

TLS 1.3 was recently standardized and is the latest version of the Transport Layer Security protocol. TLS 1.3 is not directly compatible with the previous versions. The goal here is not to support every feature of TLS 1.3.

Features deprecated

Some features have also been removed from Java 11. Applications depending on the Java EE and CORBA modules now need to include these modules explicitly. The Nashorn JavaScript engine and the Pack200 tools and API have all been deprecated.

For a complete list of features and deprecations, visit the JDK website.

Oracle releases open source and commercial licenses for Java 11 and later

JEP 325: Revamped switch statements that can also be expressions proposed for Java 12

No more free Java SE 8 updates for commercial use after January 2019
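Nest-based access control can be seen in a small sketch (illustrative names). The code below compiles on any recent Java, but on JDK 11 the private access from the nested class is resolved directly through the NestHost/NestMembers attributes, whereas older compilers had to generate a synthetic bridge method:

```java
public class Main {
    private static final String SECRET = "nestmate";

    static class Reader {
        // Accessing the outer class's private member: Main and Reader
        // are nestmates, so on JDK 11 this needs no compiler-inserted
        // bridge method.
        String read() {
            return SECRET;
        }
    }

    public static void main(String[] args) {
        System.out.println(new Reader().read()); // prints "nestmate"
    }
}
```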