
Tech News - Programming

573 Articles

Ruby ends support for its 2.3 series

Amrata Joshi
16 Apr 2019
2 min read
Last month, the team at Ruby announced that support for the Ruby 2.3 series has ended. Security and bug fixes from more recent Ruby versions will no longer be backported to 2.3. As there won't be any further 2.3 patches, the Ruby team recommends that users upgrade to Ruby 2.6 or 2.5 as soon as possible.

Currently supported Ruby versions

Ruby 2.6 series
The Ruby 2.6 series is currently in the normal maintenance phase. The team will backport bug fixes and will release an urgent fix in case of an urgent security issue or bug.

Ruby 2.5 series
The Ruby 2.5 series is likewise in the normal maintenance phase, with the same backporting and urgent-fix policy.

Ruby 2.4 series
The Ruby 2.4 series is in the security maintenance phase. The team won't backport any bug fixes to 2.4 except for security fixes, though it will still release an urgent fix in case of an urgent security issue or bug. The team also plans to end support for the Ruby 2.4 series by March 31, 2020.

To know more about this news, check out the post by Ruby.

How Deliveroo migrated from Ruby to Rust without breaking production
Ruby on Rails 6.0 Beta 1 brings new frameworks, multiple DBs, and parallel testing
Ruby 2.6.0 released with a new JIT compiler


Qt Creator 4.9.0 released with language support, QML support, profiling and much more

Amrata Joshi
16 Apr 2019
2 min read
Yesterday, the Qt team released Qt Creator 4.9.0, the latest version of its cross-platform IDE for embedded and desktop application development. This release brings programming language support, changes to the UI, QML support, and much more.

What's new in Qt Creator 4.9.0?

Language support
Qt Creator 4.9 adds support for document outline, find usages, and code actions, which allow the language server to suggest fixes at a specified place in the code. The team has also changed the highlighter: it is now based on the KSyntaxHighlighting library, which KDE uses for the same purpose.

Changes to the UI
The UI for diagnostics from the Clang analyzer tools has been improved: diagnostics are now grouped by file, and diagnostics from the project's header files are now included as well.

QML support
The team updated the QML parser to Qt 5.12, which adds support for ECMAScript 7.

Profiling
This release integrates perf, a performance profiling tool for software that runs on Linux. The integration in Qt Creator is available for applications that run on a local Linux system, and for applications that run on a remote Linux system from a Linux or Windows host.

Generic projects
Users can now add a QtCreatorDeployment.txt file to a generic project to specify where to deploy and which files to deploy; a sketch of the file appears below.

Operating system support
For Windows, the team has added support for MSVC (Microsoft Visual C++) 2019. For macOS, Touch Bar support has been added for users running Qt Creator on a MacBook. And for Linux, the team has added support for the OpenSSH tools.

To know more about this news, check out the Qt blog post.

Qt Creator 4.9 Beta released with QML support, programming language support and more!
Qt team releases Qt Creator 4.8.0 and Qt 5.12 LTS
Qt creator 4.8 beta released, adds language server protocol
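The Qt Creator manual documents the exact syntax of QtCreatorDeployment.txt; the following is only a plausible sketch with hypothetical paths. The first line names the deployment prefix on the target, and each subsequent line maps a local file to a target directory relative to that prefix.

```
/usr/local/myapp
bin/myapp:bin
assets/config.ini:etc
```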


Swift is improving the UI of its generics model with the “reverse generics” system

Sugandha Lahoti
16 Apr 2019
4 min read
Last week, Joe Groff of the Swift Core Team published a post on the Swift forums discussing refinements to the Swift generics model, which was established by the Generics Manifesto almost three years ago. The post introduces changes to improve the UI of how generics work in the Swift language. The first part of this group of changes is the SE-0244 proposal, which introduces some features around function return values.

SE-0244 proposal

The SE-0244 proposal addresses the problem of type-level abstraction for returns. Swift 5 currently has three existing generics features.

Type-level abstraction
The syntax of type-level abstraction is quite similar to generics in other languages, like Java or C#: users write out function definitions, use angle brackets, and conventionally use T for a generic type, all at the function (or type) level. Each such function has a placeholder type T, and each call site gets to pick what concrete type is bound to T, making these functions very flexible and powerful in a variety of situations.

Value-level abstraction
Value-level abstraction deals with individual variables. It is not concerned with making general statements about the types that can be passed into or out of a function; instead, developers need to worry only about the specific type of exactly one variable in one place.

Existential types
Many Swift libraries consist of composable generic components that provide primitive types along with composable transformations to combine and modify primitive shapes into more complex ones. These transformations may be composed by using existential types instead of generic arguments. Existential types are like wrappers or boxes for other types, but they bring more dynamism and runtime overhead than desired. If a user wants to abstract the return type of a declaration from its signature, existentials or manual type erasure are the two choices, and each comes with its own tradeoffs.

Tradeoffs of the existing generics features

The biggest open problem from the original Generics Manifesto is generalized existentials. Present existentials have a variety of use cases that could never be addressed. Although existentials would allow functions to hide their concrete return types behind protocols as implementation details, they are not always the most desirable tool for this job, because they don't allow a function to abstract its concrete return type while still maintaining the underlying type's identity in the client code. Also, Swift follows the tradition of similar languages like C++, Java, and C# in its generics notation, using explicit type variable declarations in angle brackets; this notation can be verbose and awkward. So improvements need to be made to the existing notations for generics and existentials.

Reverse generics

Currently, Swift has no way for an implementation to achieve type-level abstraction of its return values independent of the caller's control. If an API wants to abstract its concrete return type from callers, it must accept the tradeoffs of value-level abstraction. If those tradeoffs are unacceptable, the only alternative in Swift today is to fully expose the concrete return type. These tradeoffs led to the introduction of a new type system feature to achieve type-level abstraction of a return type.

Coined "reverse generics" by Manolo van Ee, this system behaves like a generic parameter type, but its underlying type is bound by the function's implementation rather than by the caller. This is analogous to the roles of argument and return values in functions: a function takes its arguments as inputs and uses them to compute the return values it gives back to the caller. This process has already begun with a formal review in progress on SE-0244: Opaque Result Types. This proposal covers the "reverse generics" idea and the `some` keyword in return types (a sketch appears below). "If adopted," says Tim Ekl, a Seattle-area software developer, "it would give us the ability to return a concrete type hidden from the caller, indicating only that the returned value conforms to some protocol(s)". He has also written an interesting blog post summarizing the discussion by Joe Groff on the Swift forums.

Note: The content of this article is taken from Joe Groff's discussion. For extensive details, you may read the full discussion on the Swift forums.

Swift 5 for Xcode 10.2 is here!
Implementing Dependency Injection in Swift [Tutorial]
Apple is patenting Swift features like optional chaining
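As an illustration of the opaque-result-type syntax that SE-0244 proposes, here is a minimal Swift sketch (the type names are made up for the example): the function, not the caller, chooses the concrete return type, and callers learn only that the result conforms to Shape.

```swift
protocol Shape {
    func draw() -> String
}

struct Square: Shape {
    func draw() -> String { return "square" }
}

// The implementation binds the underlying type; `some Shape` hides it
// from callers while preserving its identity for the compiler.
func makeShape() -> some Shape {
    return Square()
}

let s = makeShape()
print(s.draw()) // prints "square"
```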


Red Hat team announces updates to the Red Hat Certified Engineer (RHCE) program

Amrata Joshi
12 Apr 2019
3 min read
The Red Hat Certified Engineer (RHCE) certification program has certified skilled IT professionals for around 20 years now and has been one of the leading certification programs for Linux skills. As new technologies come up and industries evolve with them, the focus has shifted to hybrid cloud implementations, and with this shift, automation has become an important skill for Linux system administrators to learn. So the team behind RHCE decided the program needed to evolve for Red Hat Certified Professionals.

What changes are expected?

In the updated RHCE program, the team is shifting the focus to automation of Linux system administration tasks with the help of Red Hat Ansible Automation, and will also be changing the requirements for achieving an RHCE credential. With the upcoming release of Red Hat Enterprise Linux 8, the team will be offering a new course and a new certification exam.

Red Hat System Administration III: Linux Automation (RH294)
This course is designed for Linux system administrators and developers who automate provisioning, configuration, application deployment, and orchestration. Those taking the course will learn how to install and configure Ansible on a management workstation and how to prepare managed hosts for automation.

Red Hat Certified Engineer exam (EX294)
The new RHCE exam will focus on the automation of Linux system administration tasks using Red Hat Ansible Automation and shell scripting. Those who pass this new exam will become RHCEs.

What will remain the same?

Ken Goetz, vice president of Training and Certification at Red Hat, writes in a blog post, "One thing that we want to assure you is that this is not a complete redesign of the program." Candidates can still earn an RHCE by first passing the Red Hat Certified System Administrator exam (EX200) and then passing an RHCE exam while still an RHCSA. The Red Hat Enterprise Linux 7 based RHCE exam (EX300) will remain available for a year after the new exam is released.

How does it impact candidates?

Current RHCEs
The RHCE certification is valid for three years from the date the candidate becomes an RHCE. This period can be extended by earning additional certifications that apply towards becoming a Red Hat Certified Architect in infrastructure. Candidates can renew the RHCE before it becomes non-current by passing the new RHCE exam (EX294).

Aspiring RHCEs
An RHCSA progressing towards RHCE can continue preparing for the Red Hat Enterprise Linux 7 version of the course and take the current RHCE exam (EX300) until June 2020. Alternatively, they can prepare for the new exam (EX294), based on the upcoming release of Red Hat Enterprise Linux 8.

Red Hat Certified Specialist in Ansible Automation
Those who are currently Red Hat Certified Specialists in Ansible Automation can continue to demonstrate their Ansible automation skills and knowledge by earning the RHCE via the new process.

Ken Goetz also writes, "We are aligning the RHCE program, and the learning services associated with that program, to assist individuals and organizations in keeping up with these changes in the industry."

To know more about this news, check out Red Hat's blog post.
Red Hat Satellite to drop MongoDB and will support only PostgreSQL backend
Red Hat announces CodeReady Workspaces, the first Kubernetes-Native IDE for easy collaboration among developers
Red Hat drops MongoDB over concerns related to its Server Side Public License (SSPL)


Eclipse announces support for Java 12

Amrata Joshi
12 Apr 2019
2 min read
Last month, the team at Eclipse announced that Eclipse now supports Java 12.

What's new in Eclipse's Java 12 support?

Updated project compliance and JRE
Eclipse can update a project's compliance and JRE to 12, which makes the current project compatible with Java 12.

Preview features
Users can enable Java 12 preview features by selecting Preferences > Java > Compiler > Enable preview features. Users can further configure the problem severity of these preview features.

Set enable preview features
The issue with the Enable preview features option in preferences has been resolved.

Configure problem severity of preview features
A Configure problem severity option is now provided to update the problem severity of preview features in Java 12.

Default case
An Add 'default' option is now available to add a default case to the enhanced switch statement in Java 12 (an example of the enhanced switch appears below).

Missing case statements
An option to add missing case statements has been provided for the enhanced switch statement in Java 12.

Java Editor
In the Java > Editor > Code Mining preference, users can now enable the Show parameter names option, which shows parameter names in method or constructor calls.

Java views and dialogs
An option to control comment generation while creating module-info.java or package-info.java is now available.

To know more about this news, check out the post by Eclipse.

Eclipse 4.10.0 released with major improvements to colors, fonts preference page and more
Eclipse IDE's Photon release will support Rust
What can Blockchain developers learn from Eclipse Attacks in a Bitcoin network – Koshik Raj
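For context, this is roughly what the Java 12 enhanced switch looks like; it is a preview feature, so it must be compiled with --enable-preview, and the class and enum names here are purely illustrative.

```java
public class SwitchDemo {
    enum Day { MON, TUE, WED, THU, FRI, SAT, SUN }

    static String kind(Day d) {
        // Arrow labels need no break, and several labels can share one arm.
        // Eclipse's quick fixes can insert the default case or any missing
        // case statements into a switch like this one.
        return switch (d) {
            case SAT, SUN -> "weekend";
            default -> "weekday";
        };
    }

    public static void main(String[] args) {
        System.out.println(kind(Day.SUN)); // weekend
    }
}
```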


Rust 1.34 releases with alternative cargo registries, stabilized TryFrom and TryInto, and more

Bhagyashree R
12 Apr 2019
2 min read
Yesterday, the Rust team announced the release of Rust 1.34. This release introduces alternative cargo registries, support for the '?' operator in documentation tests, stabilized TryFrom and TryInto, and more.

Support for alternative cargo registries
Rust provides a public crate registry called crates.io, where developers can publish crates with the cargo publish command. Because this public registry is unsuitable for people maintaining proprietary code, they were forced to use git or path dependencies instead. This release brings support for alternative cargo registries, which coexist with crates.io, so users can now write software that depends on crates from both crates.io and a custom registry.

Support for the '?' operator in documentation tests
RFC 1937 proposed adding support for the '?' operator in the main() function, #[test] functions, and doctests, allowing them to return Option or Result with error values; this ensures a non-zero exit code in the case of main() and a test failure in the case of the tests. Support in main() and #[test] functions was already implemented in previous versions, but in documentation tests, '?' was limited to doctests with an explicit main() function. In this release, the team has implemented full support for the '?' operator in doctests.

Stabilized TryFrom and TryInto
The TryFrom and TryInto traits, proposed in an RFC back in 2016, are finally stabilized in this release to allow fallible type conversions (see the sketch below). An 'Infallible' type is added for conversions that cannot fail, such as u8 to u32. In future versions, the team plans to convert Infallible into an alias for the never type (!).

Library stabilizations
This release comes with an expanded set of stable atomic integer types, with signed and unsigned variants from 8 to 64 bits. Previous versions stabilized the non-zero unsigned integer types, for example NonZeroU8; with this release, the signed versions are stabilized as well. The 'iter::from_fn' and 'iter::successors' functions are also stabilized.

To know more about the updates in Rust 1.34, check out the official announcement.

Chris Dickinson on how to implement Git in Rust
The npm engineering team shares why Rust was the best choice for addressing CPU-bound bottlenecks
Rust 1.33.0 released with improvements to Const fn, pinning, and more!
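A minimal sketch of the newly stabilized fallible conversions (the values are chosen only for illustration):

```rust
use std::convert::{TryFrom, TryInto};

fn main() {
    // TryFrom: 300 does not fit in a u8, so the conversion returns an
    // Err instead of silently truncating.
    assert!(u8::try_from(300i32).is_err());

    // TryInto is the mirror-image trait, driven by the annotated type.
    let byte: u8 = 42i32.try_into().expect("42 fits in a u8");
    assert_eq!(byte, 42);

    // Signed NonZero integer types were also stabilized in 1.34.
    let n = std::num::NonZeroI32::new(5).unwrap();
    println!("byte = {}, n = {}", byte, n);
}
```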

Introducing Netlify Dev for Local Testing and Live Stream Preview Capabilities

Amrata Joshi
10 Apr 2019
2 min read
Yesterday, the team at Netlify announced Netlify Dev, bringing local testing and live stream preview capabilities. Web developers can now locally test serverless functions, API integrations, and CDN logic, which makes for instant progress sharing. They now have access to the capabilities of the Netlify platform on their laptops, which means they no longer have to wait for staging or production to test and get feedback on their websites and applications. Developers can live-stream their development server to a cloud URL and share updates as the code and content change.

In a statement to Business Wire, Kent C. Dodds, software engineer and educator, said, "Netlify has a knack for simplifying things that are hard so I can focus on building my web application, and Netlify Dev is another example of that. I'm excited about being able to simply develop, test, and debug my Netlify web applications with one simple command."

Netlify has compiled its entire edge redirect engine into WebAssembly so developers can test locally before deploying to production. They can now write and validate AWS Lambda functions in the Netlify CLI using modern JavaScript and deploy them as full API endpoints.

Mathias Biilmann, Netlify's CEO, said, "Netlify is obsessed with developer productivity for building modern sites on the JAMstack. The new local test and share capabilities of Netlify Dev provide a single, simplified workflow that brings everything together—from the earliest code to production global deployment."

Netlify Dev can automatically detect common tools like Gatsby, Hugo, Jekyll, React Static, Eleventy and more, and it provides a single development server and workflow. New and existing users can adopt Netlify Dev by installing or updating the Netlify CLI for creating new sites, setting up continuous deployment, and pushing new deployments; a brief session sketch appears below.

The new features of Netlify Dev are tightly coupled with Netlify's git-based workflow for team collaboration. Netlify brings an instant CI/CD pipeline for developers who work in Git, so that every commit and pull request builds the site into a deploy preview. Developers can easily build and collaborate in the full production environment.

To know more about this news, check out Netlify's official page.

Netlify raises $30 million for a new 'Application Delivery Network', aiming to replace servers and infrastructure management
Introducing Gitpod, a one-click IDE for GitHub
IPv6 support to be automatically rolled out for most Netlify Application Delivery Network users
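A hypothetical terminal session, with command and flag names as presented in Netlify's announcement (check the current CLI docs before relying on them):

```sh
# Install or update the Netlify CLI.
npm install -g netlify-cli

# Run a local dev server that emulates functions, redirects, and CDN logic.
netlify dev

# Live-stream the running dev server to a shareable cloud URL.
netlify dev --live
```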


Google’s Cloud Healthcare API is now available in beta

Amrata Joshi
09 Apr 2019
3 min read
Last week, Google announced that its Cloud Healthcare API is now available in beta. The API acts as a bridge between on-site healthcare systems and applications hosted on Google Cloud, and it is HIPAA compliant, ecosystem-ready, and developer-friendly. Google's aim is to give hospitals and other healthcare facilities more analytical power with the help of the Cloud Healthcare API.

The official post reads, "From the beginning, our primary goal with Cloud Healthcare API has been to advance data interoperability by breaking down the data silos that exist within care systems. The API enables healthcare organizations to ingest and manage key data and better understand that data through the application of analytics and machine learning in real time, at scale."

This API offers a managed solution for storing and accessing healthcare data in Google Cloud Platform (GCP). With its help, users can explore new capabilities for data analysis, machine learning, and application development for healthcare solutions. The Cloud Healthcare API also simplifies app development and device integration to speed up the process, and it supports the standards-based data formats and protocols of existing healthcare tech. For instance, it allows healthcare organizations to stream data processing with Cloud Dataflow, analyze data at scale with BigQuery, and tap into machine learning with the Cloud Machine Learning Engine.

Features of the Cloud Healthcare API

Compliant and certified
The API is HIPAA compliant and HITRUST CSF certified. Google is also planning ISO 27001, ISO 27017, and ISO 27018 certifications for the Cloud Healthcare API.

Explore your data
The API allows users to explore their healthcare data by incorporating advanced analytics and machine learning solutions such as BigQuery, Cloud AutoML, and Cloud ML Engine.

Managed scalability
Google's Cloud Healthcare API provides web-native, serverless scaling optimized by Google's infrastructure. Users can simply activate the API to send requests, as no initial capacity configuration is required.

Apigee integration
The API integrates with Apigee, which is recognized by Gartner as a leader in full lifecycle API management, for delivering app and service ecosystems around user data.

Developer-friendly
The API organizes users' healthcare information into datasets, with one or more modality-specific stores per dataset, where each store exposes both a REST and an RPC interface.

Enhanced data liquidity
The API also supports bulk import and export of FHIR data and DICOM data, which accelerates delivery for applications with dependencies on existing datasets. It further provides a convenient API for moving data between projects.

The official post reads, "While our product and engineering teams are focused on building products to solve challenges across the healthcare and life sciences industries, our core mission embraces close collaboration with our partners and customers." Google will highlight what its partners, including the American Cancer Society, CareCloud, Kaiser Permanente, and iDigital, are doing with the API at the ongoing Google Cloud Next.

To know more about this news, check out Google's official announcement.
Ian Goodfellow quits Google and joins Apple as a director of machine learning
Google dissolves its Advanced Technology External Advisory Council a week after repeat criticism on selection of members
Google employees filed petition to remove anti-trans, anti-LGBTQ and anti-immigrant Kay Coles James from the AI council


Facebook AI introduces Aroma, a new code recommendation tool for developers

Natasha Mathur
09 Apr 2019
3 min read
The Facebook AI team announced a new tool called Aroma last week. Aroma is a code-to-code search and recommendation tool that uses machine learning (ML) to simplify the process of gaining insights from big codebases. Aroma lets engineers find common coding patterns easily by making a search query, without any need to manually browse through code snippets; this, in turn, saves time in their development workflow. So, in case developers have written code but want to see how others have implemented the same thing, they can run a search query to find similar code in related projects. After the search query is run, results are returned as code 'recommendations'. Each code recommendation is built from a cluster of similar code snippets found in the repository.

Aroma is more advanced than traditional code search tools. For instance, Aroma performs the search on syntax trees: instead of looking for string-level or token-level matches, it can find instances that are syntactically similar to the query code, and it can then highlight the matching code by cutting away the unrelated syntax structures. Aroma is very fast and creates recommendations within seconds even for large codebases. Moreover, Aroma's core algorithm is language-agnostic and can be deployed across codebases in Hack, JavaScript, Python, and Java.

How does Aroma work?

Aroma follows a three-step process to make code recommendations: feature-based search, re-ranking and clustering, and intersecting.

For feature-based search, Aroma indexes the code corpus as a sparse matrix. It parses each method in the corpus and creates its parse tree, then extracts a set of structural features from the parse tree of each method. These features capture information about variable usage, method calls, and control structures. Finally, a sparse vector is created for each method according to its features, and the top 1,000 method bodies whose dot products with the query vector are highest are retrieved as the candidate set for the recommendation. (A toy sketch of this stage appears below.)

For re-ranking and clustering, Aroma first re-ranks the candidate methods by their similarity to the query code snippet. Since the sparse vectors contain only abstract information about what features are present, the dot product score underestimates the actual similarity of a code snippet to the query. To correct for this, Aroma applies 'pruning' to the method syntax trees, which discards the irrelevant parts of a method body while retaining the parts that best match the query snippet; this is how it re-ranks the candidate code snippets by their actual similarity to the query. Aroma then runs an iterative clustering algorithm to find clusters of code snippets that are similar to each other and contain extra statements useful for making code recommendations.

For intersecting, a code snippet is first taken as the "base" code, and 'pruning' is applied to it iteratively with respect to every other method in the cluster. The code remaining after the pruning process is the code common among all the methods, and that becomes a code recommendation.

"We believe that programming should become a semiautomated task in which humans express higher-level ideas and detailed implementation is done by the computers themselves," states the Facebook AI team.

For more information, check out the official Facebook AI blog.
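The real Aroma extracts structural features from parse trees; the toy Python sketch below substitutes token counts for those features, purely to illustrate the dot-product ranking stage (all names are hypothetical).

```python
from collections import Counter

def features(code):
    # Stand-in for the structural features Aroma extracts from parse trees.
    return Counter(code.split())

def rank(query, corpus):
    q = features(query)
    scores = {}
    for name, body in corpus.items():
        f = features(body)
        # Dot product of the two sparse feature vectors.
        scores[name] = sum(q[t] * f[t] for t in q)
    # Highest-scoring method bodies become recommendation candidates.
    return sorted(scores, key=scores.get, reverse=True)

corpus = {
    "read_file": "with open(path) as f: data = f.read()",
    "parse_json": "data = json.loads(payload)",
}
print(rank("open(path) f.read()", corpus))  # 'read_file' ranks first
```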
How to make machine learning based recommendations using Julia [Tutorial]
Facebook AI open-sources PyTorch-BigGraph for faster embeddings in large graphs
Facebook AI research and NYU school of medicine announces new open-source AI models and MRI dataset


ML.NET 1.0 RC releases with support for TensorFlow models and much more!

Amrata Joshi
08 Apr 2019
2 min read
Last week, the team behind ML.NET announced the release of ML.NET 1.0 RC (Release Candidate), an open-source and cross-platform machine learning framework for .NET developers. ML.NET 1.0 RC is the last preview release before the final ML.NET 1.0 RTM (Release to Manufacturing) ships this year. Developers can use ML.NET for sentiment analysis, product recommendation, spam detection, image classification, and much more.

What's new in ML.NET 1.0 RC?

Preview packages
According to the Microsoft blog, heading into ML.NET 1.0, "most of the functionality in ML.NET (around 95%) is going to be released as stable (version 1.0)." The packages remaining in a preview state are the TensorFlow, Onnx, TimeSeries, and recommendation components.

IDataView moved to Microsoft.ML namespace
In this release, IDataView has been moved back into the Microsoft.ML namespace based on feedback the team received.

Support for TensorFlow models
This release adds support for models built with TensorFlow, the open-source machine learning framework used for deep learning projects. The TensorFlow-related issues in ML.NET version 0.11 have been fixed in this release.

Major changes in ML.NET 1.0 RC
The 'Data' namespace has been removed in this release in favor of Microsoft.Data.DataView. A NuGet package has been added for Microsoft.ML.FastTree, and PoissonRegression has been renamed to LbfgsPoissonRegression.

To know more about this release, check out the official announcement.

.NET team announces ML.NET 0.6
Qml.Net: A new C# library for cross-platform .NET GUI development
ML.NET 0.4 is here with support for SymSGD, F#, and word embeddings transform!

Introducing Gitpod, a one-click IDE for GitHub

Bhagyashree R
05 Apr 2019
3 min read
Today, Sven Efftinge, the technical co-founder of Gitpod.io, announced the launch of Gitpod, a cloud IDE that tightly integrates with GitHub. Along with the launch, starting from today, the Gitpod app is also available on the GitHub marketplace.

What is Gitpod?

While working on a project, a lot of time goes into switching contexts between projects and branches, setting up a development environment, or simply waiting for a build to complete. To cut this time and effort, Gitpod provides developers with disposable, ready-to-code development environments for their GitHub projects.

What are its advantages?

Automatically pre-builds every commit
Gitpod, like continuous integration tools, automatically pre-builds every commit. So, when you open a Gitpod workspace, you will not only find the code and tools ready but also that the build has already finished.

Easily go back to previous releases
A Gitpod workspace is configured through a .gitpod.yml file written in YAML (a sketch appears below). This file is versioned with your code, so if at some point you need to go back to old releases, you can easily do that.

Pre-installed VS Code extensions
Several VS Code extensions come pre-installed in Gitpod, such as Go support from Microsoft's own extension. The team plans to add more VS Code extensions in the near future, and later developers will be allowed to define any extensions they want.

Supports full-featured terminals
In addition to shipping one of the best code editors, Gitpod comes with full-featured terminals backed by a Linux container running in the cloud, so you get the same command-line tools you would use locally.

Better collaboration
Gitpod supports two major collaboration features:
Sharing running workspaces: This allows you to share a workspace with a remote colleague, which comes in handy when you want to hunt down a bug together or do some pair programming.
Snapshots: With this feature, you can take an immutable copy of your dev environment at any point in time and share the link wherever you want. Users receive an exact clone of the environment, including all state and even the UI layout.

How can you use Gitpod?

For creating a workspace, you have two options: you can prefix any GitHub URL with gitpod.io/#, or you can use the Gitpod browser extension, available for Chrome and Firefox, which adds a button to GitHub that does the prefixing for you.

You can watch the following video to see exactly how Gitpod works: https://www.youtube.com/watch?v=D41zSHJthZI

Read more in detail on Gitpod's official website.

Introducing git/fs: A native git client for Plan 9
'Developers' lives matter': Chinese developers protest over the "996 work schedule" on GitHub
Sublime Text 3.2 released with Git integration, improved themes, editor control and much more!
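A minimal, hypothetical .gitpod.yml, assuming a Node project; the task commands are placeholders for whatever a given project needs (see Gitpod's docs for the full schema):

```yaml
tasks:
  - init: npm install      # runs while the commit is pre-built
    command: npm run dev   # runs when the workspace opens
ports:
  - port: 3000             # expose the dev server
```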


Introducing git/fs: A native git client for Plan 9

Bhagyashree R
05 Apr 2019
2 min read
On Wednesday, Ori Bernstein, a software engineer at Google, shared details about a Git client he has implemented for Plan 9, a non-POSIX system. The client, named git/fs, is implemented in Plan 9's flavor of C and comes with tools for writing repository contents.

Why is git/fs being introduced?

This is the first time someone has implemented a Git client for Plan 9; upstream Git uses a large number of system calls that Plan 9 does not support. Bernstein came up with this client to enable working with Git repositories without having to port the upstream Git interface directly.

Git/fs structure

Git/fs provides read-only access to repository contents via a file system mounted on /mnt/git. You will find the following entries in /mnt/git:

/mnt/git/object: the objects in the repo.
/mnt/git/branch: the branches in the repo.
/mnt/git/ctl: a file showing the status of the repo.
/mnt/git/HEAD: an alias for the currently checked-out commit directory.

You can access the repository directly from the shell using standard tools (see the session sketch below). Scripts and binaries manipulate the repository contents directly, and their changes are immediately mirrored in the filesystem. To improve the user experience, the author has focused on a consistent and minimalist interface that supports the necessary functionality.

Git/fs has no concept of a staging area. Files can be in only three states: 'untracked', 'dirty', and 'committed'. For tracking, it uses empty files under .git/index9/{removed,tracked}/path/to/file.

The client is currently hosted in Mercurial, a distributed revision-control tool, as Mercurial is the current native Plan 9 version control system. To know more about git/fs, head over to its Bitbucket repository.

Chris Dickinson on how to implement Git in Rust
'Developers' lives matter': Chinese developers protest over the "996 work schedule" on GitHub
Sublime Text 3.2 released with Git integration, improved themes, editor control and much more!
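A hypothetical shell session against the mount point described above, using only the paths the announcement lists and standard file tools (output omitted):

```
% cat /mnt/git/ctl       # status of the repo
% ls /mnt/git/branch     # one entry per branch
% ls /mnt/git/object     # repository objects, browsable as files
```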


Pivotal and Heroku team up to create Cloud Native Buildpacks for Kubernetes

Natasha Mathur
04 Apr 2019
3 min read
Pivotal Inc., a software and services firm, announced yesterday that it has teamed up with Heroku to create Cloud Native Buildpacks for Kubernetes and beyond. Cloud Native Buildpacks turn source code into production-ready Docker images that are OCI image compatible, building on the popular buildpack model. The new project is aimed at making developers more productive with Kubernetes.

The Cloud Foundry buildpacks team also released a selection of next-gen Cloud Foundry buildpacks that are compatible with Cloud Native Buildpacks. This lets users try buildpacks out on Pivotal Container Service (PKS) and Pivotal Application Service (PAS). https://twitter.com/pivotalcf/status/1113426937685446657

"The project aims to deliver a consistent platform-to-buildpack contract for use in more places. The interface defined by this contract is informed by learnings from maintaining production-grade buildpacks for years at both Pivotal and Heroku," states the Pivotal team.

With the new Cloud Native Buildpacks, you can create containers by just pushing code, without authoring the runtime dependencies yourself. On cf push-ing custom code, buildpacks automatically add the framework dependencies and create an application "droplet" that can run on the platform. This droplet model lets Cloud Foundry handle all dependency updates; application runtimes can be updated by pulling in the latest buildpacks and rebuilding a droplet. Cloud Native Buildpacks expand on this idea and build an OCI (Open Container Initiative) image capable of running on any platform (a usage sketch appears below). "We believe developers will love the simplicity of this single command to get a production quality container when they prefer not to author and maintain their own Dockerfile," states the Pivotal team.

Other reasons why Cloud Native Buildpacks are a step ahead of traditional buildpacks:

Portability through the OCI standard. Cloud Native Buildpacks directly produce OCI images from source code, which makes them much more portable and easy to use with Kubernetes and Knative.
Better modularity. Cloud Native Buildpacks are modular, offering platform operators more control over how developers build their code at runtime.
Speed. Cloud Native Buildpacks build faster because of advanced build caching, layer reuse, and data deduplication.
Fast troubleshooting. Cloud Native Buildpacks help troubleshoot production issues much faster, as they can be used in a developer's local environment.
Reproducible builds. Cloud Native Buildpacks allow reproducible container image builds.

What next?

The Pivotal team states that Cloud Native Buildpacks need some more work before they are ready for enterprise scenarios. Pivotal is currently exploring three new features: image promotion, operator control, and automated image patching. For image promotion, Pivotal is exploring a build service for image updating, which would allow developers to promote images through environments and across PCF foundations. Pivotal is also exploring a declarative configuration model that delivers new images to your registry whenever your configuration falls out of sync.

"The best developers strive to eliminate toil from their lives. These engineers figure that if a task doesn't add value, it should be automated... with Cloud Native Buildpacks, developers can happily remove... toil from their jobs," states the Pivotal team.

For more information, check out the official Pivotal blog.
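As an illustration of the single-command flow the team describes, here is a hypothetical session with the project's pack CLI (the image and builder names are placeholders; consult the project's docs for real ones):

```sh
# Turn the source code in the current directory into an OCI image,
# with buildpacks supplying the framework and runtime layers.
pack build myorg/myapp --builder example/builder

# The result is a plain OCI/Docker image, runnable anywhere.
docker run myorg/myapp
```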
CNCF accepts Cloud Native Buildpacks to the Cloud Native Sandbox
CNCF Sandbox, the home for evolving cloud native projects, accepts Google's OpenMetrics Project
Google Cloud hands over Kubernetes project operations to CNCF, grants $9M in GCP credits

Fedora 30 Beta released with desktop environment options, GNOME 3.32, and much more

Amrata Joshi
04 Apr 2019
2 min read
Just two days ago, the team at Fedora announced the release of Fedora 30 Beta across its six variants: Workstation, Server, Silverblue, Spins, Labs, and ARM. This release comes with GNOME 3.32, improved performance, and much more. https://twitter.com/mattdm/status/1113079013696856065

What's new in Fedora 30 Beta?

Desktop environment options
This release features two new desktop environment options: DeepinDE, a user-friendly desktop from Deepin Technology Co., and Pantheon Desktop, best known from elementary OS and deliberately light on customization.

Improved DNF performance
This release features zchunk, a new compression format designed for highly efficient deltas. All DNF (Dandified YUM) repository metadata is now compressed with zchunk in addition to xz or gzip. Because Fedora's metadata is compressed using zchunk, DNF downloads only the differences between earlier copies of the metadata and the current version.

GNOME 3.32
This release comes with GNOME 3.32, the latest version of GNOME 3, featuring an updated visual style, an improved user interface, new icons, and much more.

Testing needed
Since this is a beta release, users might encounter bugs or find features missing. Issues encountered during testing can be reported to the Fedora QA team via the mailing list or in #fedora-qa on Freenode.

Updated packages
This release includes updated versions of many popular packages, including Golang, the GNU C Library, the Bash shell, Python, and Perl.

Major changes
Binary support for deprecated and unsafe functions has been removed from libcrypt.
The Python 2 package has been removed from this release.
Language support groups in the comps file have been replaced by rich dependencies in the langpacks package.
Obsolete scriptlets have been removed from this release.

Some users are excited about this release, but others are still running into bugs and dependency issues, as this is the beta version. https://twitter.com/YanivKaul/status/1113132353096953857

To know more about this news, check out the official post by Fedora Magazine.

GNOME 3.32 released with fractional scaling, improvements to desktop, web and much more
Fedora 31 will now come with Mono 5 to offer open-source .NET support
Fedora 29 released with Modularity, Silverblue, and more


Microsoft makes F# 4.6 and F# tools for Visual Studio 2019 generally available

Bhagyashree R
03 Apr 2019
2 min read
Last week, Microsoft announced the general availability of F# 4.6 and the F# tools for Visual Studio 2019. This release brings a new record type called anonymous records, along with a few updates to the F# Core library. The team followed an open RFC process for the updates and new features in F# 4.6.

Anonymous records
Writing named record types in F# was not really easy in previous versions, and to address exactly that, a new type called anonymous records has been introduced. These record types do not have an explicit name and can be declared in an ad-hoc fashion (a sketch appears below).

Updates in the F# Core library
The 'ValueOption' type has been updated: it now carries the DebuggerDisplay attribute, which helps in debugging, and the IsNone, IsSome, None, Some, op_Implicit, and ToString members have been added. In addition, there is now a 'ValueOption' module with the same functions the Option module has.

F# tools for Visual Studio 2019
A lot of focus has been put on improving the performance of the F# tools for Visual Studio, especially for larger solutions, where the F# compiler and tools previously caused a lot of memory and CPU usage. To address this, the team made updates to the F# parser, reduced cache sizes, significantly reduced allocations when processing format strings, and more. This release also ships a feature that intelligently indents pasted code based on where your cursor is; it is controlled by the Smart Indent setting under Tools > Options > Text Editor > F# > Tabs and is on automatically.

Read the entire list of updates in F# 4.6 and F# tools for Visual Studio 2019 on Microsoft's blog.

Microsoft releases TypeScript 3.4 with an update for faster subsequent builds, and more
Microsoft, Adobe, and SAP share new details about the Open Data Initiative
Microsoft introduces Pyright, a static type checker for the Python language written in TypeScript
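A small F# sketch of both additions (the values are illustrative; the anonymous-record syntax is from the release, and ValueOption.map stands in for the new module's functions):

```fsharp
// An anonymous record: fields declared ad hoc, no named type required.
let point = {| X = 1.0; Y = 2.5 |}
printfn "x=%f y=%f" point.X point.Y

// The new ValueOption module mirrors the Option module's functions.
let v = ValueSome 21
let doubled = v |> ValueOption.map (fun x -> x * 2)
printfn "%A" doubled  // ValueSome 42
```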