Tech News


Introducing ESPRESSO, an open-source, PyTorch-based, end-to-end neural automatic speech recognition (ASR) toolkit for distributed training across GPUs

Amrata Joshi
24 Sep 2019
5 min read
Last week, researchers from the USA and China released a paper titled "ESPRESSO: A fast end-to-end neural speech recognition toolkit." In it, they introduce ESPRESSO, an open-source, modular, end-to-end neural automatic speech recognition (ASR) toolkit built on the PyTorch library and FAIRSEQ, the neural machine translation toolkit. ESPRESSO supports distributed training across GPUs and computing nodes, as well as decoding approaches commonly employed in ASR, such as look-ahead word-based language model fusion. It decodes 4 to 11 times faster than similar systems like ESPnet, and it achieves state-of-the-art ASR performance on datasets such as LibriSpeech, WSJ, and Switchboard.

Limitations of ESPnet

ESPnet, an end-to-end speech processing toolkit, has some limitations:
- Its code is not easily extensible, and it has portability issues due to its mixed dependency on two deep learning frameworks, PyTorch and Chainer.
- Its decoder is based on a slow beam search algorithm that is not fast enough for quick turnaround of experiments.

To address these problems, the researchers introduced ESPRESSO. With ESPRESSO, it is possible to plug new modules into the system by extending standard PyTorch interfaces. The research paper reads, "We envision that ESPRESSO could become the foundation for unified speech + text processing systems, and pave the way for future end-to-end speech translation (ST) and text-to-speech synthesis (TTS) systems, ultimately facilitating greater synergy between the ASR and NLP research communities."

ESPRESSO is built on design goals

The researchers implemented ESPRESSO with certain design goals in mind. First, they used pure Python/PyTorch to enable modularity and extensibility. To speed up experiments, they implemented parallelization along with distributed training and decoding.
They achieved compatibility with the Kaldi/ESPnet data format in order to reuse proven data preparation pipelines, and they made ESPRESSO interoperable with the existing FAIRSEQ codebase to ease future joint research between speech and NLP.

ESPRESSO's dataset classes

Speech data in ESPRESSO follows the format of Kaldi, a speech recognition toolkit in which utterances are stored in the Kaldi-defined SCP format. Following ESPnet, the researchers use the 80-dimensional log Mel features along with additional pitch features (83 dimensions per frame). ESPRESSO also follows FAIRSEQ's concept of "datasets," which hold a set of training samples behind a common abstraction. Based on this concept, the researchers created the following dataset classes in ESPRESSO:

data.ScpCachedDataset: contains the real-valued acoustic features extracted from the speech utterances. A training batch drawn from this dataset is a real-valued tensor of shape [BatchSize × TimeFrameLength × FeatureDims], which is fed to the neural speech encoder. Because the acoustic features are too large to be loaded into memory all at once, the researchers also implemented sharded loading, where the next bulk of features is pre-loaded once the previous bulk has been consumed for training/decoding. This balances the file system's I/O load as well as memory usage.

data.TokenTextDataset: contains the gold speech transcripts as text, where training batches are integer-valued tensors of shape [BatchSize × SequenceLength].

data.SpeechDataset: a container for the two datasets above. A sample drawn from this dataset contains two fields, source and target, pointing to the speech utterance and the gold transcripts respectively.

Achieving state-of-the-art ASR performance on LibriSpeech, WSJ, and Switchboard datasets

ESPRESSO provides running recipes for a variety of datasets.
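The sharded loading scheme used by data.ScpCachedDataset can be sketched in plain Python. This is a hypothetical illustration, not the toolkit's actual code: class and method names are made up, and real feature loading (SCP files, tensors) is abstracted behind a loader callable.

```python
# Hypothetical sketch of sharded feature loading (not ESPRESSO's actual code):
# only one shard of acoustic features is held in memory at a time, and the
# next shard is loaded lazily once an utterance from it is requested.

class ShardedFeatureDataset:
    def __init__(self, shards, loader):
        self.shards = shards          # e.g. lists of utterance ids per shard
        self.loader = loader          # callable: shard -> {utt_id: features}
        self._cache_idx = None
        self._cache = None

    def _load_shard(self, idx):
        # Load a shard only on demand, replacing the previously cached
        # shard so memory usage stays bounded to one shard.
        if idx != self._cache_idx:
            self._cache = self.loader(self.shards[idx])
            self._cache_idx = idx
        return self._cache

    def get(self, shard_idx, utt_id):
        return self._load_shard(shard_idx)[utt_id]
```

Because each shard is read from disk exactly once per pass, file-system I/O is amortized over many utterances, which is the balance between I/O load and memory usage the paper describes.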
The researchers give details of their recipes on the Wall Street Journal (WSJ), an 80-hour English newspaper speech corpus; Switchboard (SWBD), a 300-hour English telephone speech corpus; and LibriSpeech, a corpus of approximately 1,000 hours of English speech. Each dataset has its own extra text corpus that is used for training language models. These models are optimized using Adam, a method for stochastic optimization, with an initial learning rate of 10^-3. The rate is halved if the metric on the validation set at the end of an epoch shows no improvement over the previous epoch, and training stops once the learning rate falls below 10^-5. Curriculum learning is used in the early epochs on LibriSpeech and WSJ/SWBD, as it prevents training divergence and improves performance. NVIDIA GeForce GTX 1080 Ti GPUs are used for training and evaluating the models; all models in the paper are trained on 2 GPUs using FAIRSEQ's built-in distributed data parallelism.

To conclude, the researchers present the ESPRESSO toolkit and provide ASR recipes for the LibriSpeech, WSJ, and Switchboard datasets. The paper reads, "By sharing the underlying infrastructure with FAIRSEQ, we hope ESPRESSO will facilitate future joint research in speech and natural language processing, especially in sequence transduction tasks such as speech translation and speech synthesis." To know more about ESPRESSO in detail, check out the paper.

Other interesting news in programming

Nim 1.0 releases with improved library, backward compatibility and more
Dgraph releases Ristretto, a fast, concurrent and memory-bound Go cache library
.NET Core 3.0 is now available with C# 8, F# 4.7, ASP.NET Core 3.0 and general availability of EF Core 3.0 and EF 6.3
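The learning-rate schedule this article describes (start at 10^-3, halve on no validation improvement, stop below 10^-5) can be sketched as a small Python function. This is an assumed reconstruction of the rule as stated, not the authors' training code.

```python
# Sketch of the learning-rate schedule described above (assumed logic, not
# the paper's code): start at 1e-3, halve whenever the validation metric
# fails to improve over the best seen so far, stop once lr < 1e-5.

def run_schedule(val_metrics, init_lr=1e-3, min_lr=1e-5):
    """val_metrics: per-epoch validation losses (lower is better).
    Returns the learning rate used at each completed epoch."""
    lr, best, history = init_lr, float("inf"), []
    for metric in val_metrics:
        if lr < min_lr:          # stopping criterion
            break
        history.append(lr)
        if metric >= best:       # no improvement -> halve the rate
            lr /= 2
        else:
            best = metric
    return history
```

In a real PyTorch setup the same behavior is usually obtained with a plateau-based scheduler rather than a hand-rolled loop; the function above only makes the decision rule explicit.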


CentOS 8 released!

Savia Lobo
24 Sep 2019
3 min read
Today, the CentOS community released the much-awaited CentOS 8 (1905). RHEL 8 was released in May this year at the Red Hat Summit 2019, and users have been eagerly anticipating this CentOS 8 rebuild. In CentOS 8, the community has partnered more closely with Fedora and will be sharing git repos with the Fedora system.

Highlights of CentOS 8

As the CentOS Linux distribution is a platform derived from the sources of Red Hat Enterprise Linux (RHEL), it conforms fully with Red Hat's redistribution policy and aims for full functional compatibility with the upstream product.

Version control systems and database servers: CentOS 8 provides version control systems such as Git 2.18, Mercurial 4.8, and Subversion 1.10. Database servers such as MariaDB 10.3, MySQL 8.0, PostgreSQL 10, PostgreSQL 9.6, and Redis 5 are included.

GNOME Shell: GNOME Shell has been rebased to version 3.28. The GNOME session and the GNOME Display Manager use Wayland as their default display server. The X.Org server, the default display server in RHEL 7, is available as well.

Cryptographic policies: System-wide cryptographic policies, which configure the core cryptographic subsystems covering the TLS, IPsec, SSH, DNSSEC, and Kerberos protocols, are applied by default.

Python updates: As Python 3.6 is the default Python implementation in RHEL 8, CentOS may get similar Python defaults, along with limited support for Python 2.7. No version of Python is installed by default. To know about all the highlights in detail, read the upstream Release Notes.

Deprecated functionality

Following the deprecations in RHEL 8, similar CentOS 8 features have been deprecated:
- The --interactive option of the ignoredisk Kickstart command has been deprecated.
- NFSv3 over UDP has been disabled.
- The Digital Signature Algorithm (DSA) and network scripts have been deprecated.
- TLS 1.0 and TLS 1.1 are deprecated.

To know more about the deprecated functionality, read the upstream documentation.
Removed security functionality

- The Clevis HTTP pin has been removed.
- shadow-utils no longer allows all-numeric user and group names.
- securetty is now disabled by default.

To know more about the other removed security functionality, read the upstream documentation.

Known issues in CentOS 8

- If you plan to install CentOS 8 in a VirtualBox guest, do not select "Server with a GUI" (the default) during installation.
- Support for some adapters has been removed in CentOS 8. ELRepo offers driver update disks (DUD) for some of those still commonly used; for the list of device IDs provided by the ELRepo packages, please see here. Once CentOS 8 is installed, you can use the centosplus kernel (kernel-plus), which supports those devices.
- When installing with the boot.iso over NFS, the automatic procedure for adding the AppStream repo will fail. You have to disable it and add the correct NFS path manually.
- To install and use CentOS 8 (1905), a minimum of 2 GB of RAM is required; community members recommend at least 4 GB for it to run smoothly.

To know more about CentOS 8 in detail, read the CentOS wiki page.

Other news in Tech

.NET Core 3.0 is now available with C# 8, F# 4.7, ASP.NET Core 3.0 and general availability of EF Core 3.0 and EF 6.3
Introducing Weld, a runtime written in Rust and LLVM for cross-library optimizations
Nim 1.0 releases with improved library, backward compatibility and more


Nim 1.0 releases with improved library, backward compatibility and more

Amrata Joshi
24 Sep 2019
2 min read
Yesterday, the team behind Nim, a general-purpose, compiled programming language focused on efficiency, readability and flexibility, announced Nim version 1.0.

Major changes in Nim 1.0

Backward compatibility: The -d:nimBinaryStdFiles switch has been removed in this release, and stdin/stdout/stderr are binary files again. The language definition and compiler are now stricter about gensym'ed symbols in hygienic templates.

Library changes: The team has removed unicode.Rune16, as the name 'Rune16' was wrong. encodings.getCurrentEncoding now distinguishes between the OS's encoding and the console's encoding. The json.parseJsonFragments iterator can speed up JSON processing. Oid usage has been enabled in hashtables, and the std/monotimes module has been added, implementing monotonic timestamps.

Compiler: The Nim compiler now warns about unused module imports; users can add a top-level {.used.} pragma to make a module importable without triggering the warning. The compiler also no longer recompiles a Nim project via nim c -r if no dependent Nim file has changed.

Users seem to be excited about this news and appreciate the team's efforts. A user commented on Hacker News, "Great! I love this language, so simple and powerful, so fast executables!" Another user commented, "I would have never thought to live long enough to see this happening! I started using Nim in 2014, but abandoned it after a few years, frustrated by the instability of the language and what I perceived as a lack of vision. (In 2014, release 1.0 was said to be 'behind the corner.') This release makes me eager to try it again. I remember that the language impressed me a lot: easy to learn, well-thought, and very fast to compile.
Congratulations to the team!"

Other interesting news in programming

How Quarkus brings Java into the modern world of enterprise tech
LLVM 9 releases with official RISC-V target support, asm goto, Clang 9, and more
Twitter announces to test 'Hide Replies' feature in the US and Japan, after testing it in Canada


Dgraph releases Ristretto, a fast, concurrent and memory-bound Go cache library

Amrata Joshi
24 Sep 2019
4 min read
Last week, the team at Dgraph released Ristretto, a fast, fixed-size, concurrent, memory-bound Go cache library. It is contention-proof and focuses on both throughput and hit ratio performance.

Dgraph needed a memory-bound, concurrent Go cache, so the team first used a sharded map with shard-level eviction to release memory, but that led to memory issues. They then repurposed Groupcache's LRU with mutex locks for thread safety, only to realize that this cache suffered from severe contention: removing it improved their query latency by 5-10x, since the cache was slowing down the process. The team concluded that the concurrent cache story in Go is broken and needs to be fixed. The official page reads, "In March, we wrote about the State of Caching in Go, mentioning the problem of databases and systems requiring a smart memory-bound cache which can scale to the multi-threaded environment Go programs find themselves in."

Ristretto is built on 3 key principles

Ristretto is built on three key principles: fast accesses, high concurrency and contention resistance, and memory bounding. Let's discuss the principles and how the team achieved them.

Fast hash with runtime.memhash

Experimenting with the store interface within Ristretto, the team found that sync.Map performs well for read-heavy workloads but deteriorates for write-heavy ones. Since there was no thread-local storage, the team worked with sharded, mutex-wrapped Go maps, which gave good performance results. They used 256 shards to ensure that performance would not suffer even on a 64-core server. With a shard-based approach, the team also needed a quick way to determine which shard a key should go in. Long keys consumed too much memory, so the team used uint64 hashes for keys instead of storing the entire key.
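The sharded, mutex-wrapped map described above can be illustrated with a small Python analogue. Ristretto itself is written in Go and uses runtime.memhash; here Python's built-in hash stands in for it, and all names are made up for illustration.

```python
# Illustrative Python analogue of a sharded, mutex-wrapped map (Ristretto
# itself is Go; names here are hypothetical). Each key hashes to one of 256
# shards, so concurrent writers rarely contend on the same lock, and only
# the 64-bit hash is stored rather than the full key.

import threading

NUM_SHARDS = 256

class ShardedMap:
    def __init__(self):
        self._shards = [dict() for _ in range(NUM_SHARDS)]
        self._locks = [threading.Lock() for _ in range(NUM_SHARDS)]

    def _index(self, key):
        # Stand-in for runtime.memhash: any fast 64-bit hash works here.
        h = hash(key) & 0xFFFFFFFFFFFFFFFF
        return h % NUM_SHARDS, h

    def set(self, key, value):
        idx, h = self._index(key)
        with self._locks[idx]:      # contention limited to one shard
            self._shards[idx][h] = value

    def get(self, key):
        idx, h = self._index(key)
        with self._locks[idx]:
            return self._shards[idx].get(h)
```

The design choice is the same as in the article: a single global lock serializes every writer, while 256 per-shard locks let writers touching different shards proceed in parallel.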
The hash of the key was needed in multiple places, and to generate it quickly, the team borrowed runtime.memhash from the Go runtime. The runtime.memhash function uses assembly code to generate a hash quickly.

Handling concurrency and contention resistance with batching

The team wanted to achieve high hit ratios, which requires managing metadata about what is currently in the cache and what will be needed in it. They took inspiration from the paper BP-Wrapper, which explains two ways of mitigating contention: prefetching and batching. The team used only batching, lowering contention by not acquiring a mutex lock for every metadata mutation. Under heavy concurrent load, Ristretto performs well but may lose some metadata in exchange for better throughput. The page reads, "Interestingly, that information loss doesn't hurt our hit ratio performance because of the nature of key access distributions. If we do lose metadata, it is generally lost uniformly while the key access distribution remains non-uniform. Therefore, we still achieve high hit ratios and the hit ratio degradation is small as shown by the following graph."

Key cost

Workloads usually have variable-sized values: one value can cost a few bytes, another a few kilobytes, and yet another a few megabytes, so it is not possible to assign the same memory cost to all of them. In Ristretto, a cost is attached to every key-value pair; users specify that cost when calling the Set function, and it is counted against the MaxCost of the cache. The page reads, "When the cache is operating at capacity, a heavy item could displace many lightweight items.
This mechanism is nice in that it works well for all different workloads, including the naive approach where each key-value costs 1." To know more about Ristretto and its key principles in detail, check out the official post.

Other interesting news in programming

How Quarkus brings Java into the modern world of enterprise tech
LLVM 9 releases with official RISC-V target support, asm goto, Clang 9, and more
Twitter announces to test 'Hide Replies' feature in the US and Japan, after testing it in Canada
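The cost accounting described in this article, where each Set carries an explicit cost counted against the cache's MaxCost, can be sketched as follows. This is a deliberately naive Python sketch: Ristretto's real admission and eviction policies are more sophisticated, and the oldest-first eviction here is only a placeholder.

```python
# Minimal sketch of cost-bounded caching as described above (hypothetical,
# not Ristretto's actual policy). Every Set carries an explicit cost, and
# entries are evicted until the total cost fits under MaxCost, so one heavy
# item may displace many lightweight ones.

from collections import OrderedDict

class CostBoundedCache:
    def __init__(self, max_cost):
        self.max_cost = max_cost
        self.total = 0
        self._items = OrderedDict()   # key -> (value, cost), oldest first

    def set(self, key, value, cost):
        if cost > self.max_cost:
            return False              # item can never fit
        if key in self._items:
            self.total -= self._items.pop(key)[1]
        # Evict oldest entries until the new item fits.
        while self.total + cost > self.max_cost:
            _, (_, old_cost) = self._items.popitem(last=False)
            self.total -= old_cost
        self._items[key] = (value, cost)
        self.total += cost
        return True

    def get(self, key):
        item = self._items.get(key)
        return item[0] if item else None
```

Setting every cost to 1 reduces this to a plain entry-count bound, which is the "naive approach" the quoted passage mentions.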


ImageNet Roulette: New viral app trained using ImageNet exposes racial biases in artificial intelligence systems

Sugandha Lahoti
24 Sep 2019
4 min read
A new facial recognition app is going viral on Twitter under the hashtag #ImageNetRoulette, for all the wrong reasons. The app, ImageNet Roulette, uses artificial intelligence to analyze each face and describe what it sees. The tags this AI produces, however, speak volumes about the spread of biased artificial intelligence systems. Some people are tagged as "orphan" or "nonsmoker"; Black and ethnic minority people were tagged with labels such as "negroid" or "black person."

https://twitter.com/imy/status/1173868441599709185
https://twitter.com/lostblackboy/status/1174112872638689281

The idea behind ImageNet Roulette was to make people aware of biased AI

The app was designed by American artist Trevor Paglen and Kate Crawford, Microsoft researcher and co-founder and Director of Research at the AI Now Institute. ImageNet Roulette was trained using the popular image recognition database ImageNet: it uses a neural network trained on the "Person" categories of the ImageNet dataset, which has over 2,500 labels used to classify images of people. The idea behind the app, Paglen said, was to expose racist and sexist flaws in artificial intelligence systems and to suggest that similar biases may be present in facial recognition systems used by big companies. The app's website notes in bold, "ImageNet Roulette regularly returns racist, misogynistic and cruel results." Paglen and Crawford explicitly state that the project is a "provocation designed to help us see into the ways that humans are classified in machine learning systems." "We object deeply to stereotypical classifications, yet we think it is important that they are seen, rather than ignored and tacitly accepted.
Our hope was that we could spark in others the same sense of shock and dismay that we felt as we studied ImageNet and other benchmark datasets over the last two years." "Our project," they add, "highlights why classifying people in this way is unscientific at best, and deeply harmful at worst."

ImageNet removes 600,000 images

The ImageNet team has been working since the beginning of this year to address bias in AI systems and submitted a paper on these efforts in August. As the app went viral, ImageNet posted an update on September 17 stating, "Over the past year, we have been conducting a research project to systematically identify and remedy fairness issues that resulted from the data collection process in the people subtree of ImageNet." Among the 2,382 people subcategories, the researchers have decided to remove the 1,593 deemed 'unsafe' and 'sensitive'; a total of 600,000 images will be removed from the database.

Crawford and Paglen applauded the ImageNet team for taking this first step. However, they feel such "technical debiasing" of the training data will not resolve the deep issues of facial recognition bias. The researchers state, "There needs to be a substantial reassessment of the ethics of how AI is trained, who it harms, and the inbuilt politics of these 'ways of seeing.'"

ImageNet Roulette will be taken off the internet on Friday, September 27th, 2019, though it will remain in circulation as a physical art installation, currently on view at the Fondazione Prada Osservatorio in Milan until February 2020. In recent months, a number of biases have been found in facial recognition services offered by companies like Amazon, Microsoft, and IBM. Researchers like those behind ImageNet Roulette call on big tech giants to examine how opinion, bias, and offensive points of view can drive the creation of artificial intelligence.
Other interesting news in Tech

Facebook suspends tens of thousands of apps amid an ongoing investigation into how apps use personal data
Twitter announces to test 'Hide Replies' feature in the US and Japan, after testing it in Canada
Media manipulation by Deepfakes and cheap fakes require both AI and social fixes, finds a Data and Society report


Mozilla introduces Neqo, a Rust implementation of QUIC, the new HTTP transport protocol

Fatema Patrawala
24 Sep 2019
3 min read
Two months ago, Mozilla introduced Neqo, a Rust implementation of QUIC, a new protocol for the web built on top of UDP instead of TCP. As per the GitHub page, web developers who want to test HTTP 0.9 programs using neqo-client and neqo-server can run:

cargo build
./target/debug/neqo-server 12345 -k key --db ./test-fixture/db
./target/debug/neqo-client http://127.0.0.1:12345/ -o --db ./test-fixture/db

Developers who want to test HTTP 3 programs using neqo-client and neqo-http3-server should instead run:

cargo build
./target/debug/neqo-http3-server [::]:12345 --db ./test-fixture/db
./target/debug/neqo-client http://127.0.0.1:12345/ --db ./test-fixture/db

What is QUIC and why is it important for web developers

According to Wikipedia, QUIC is a next-generation, encrypted-by-default transport layer network protocol designed by Jim Roskind at Google to secure and accelerate web traffic on the Internet. It was implemented and deployed in 2012, announced publicly in 2013 as experimentation broadened, and described to the IETF. While still an Internet Draft, QUIC is used by more than half of all connections from the Chrome web browser to Google's servers. As per QUIC's official website, "QUIC is an IETF Working Group that is chartered to deliver the next transport protocol for the Internet." One user on Hacker News commented, "QUIC is an entirely new protocol for the web developed on top of UDP instead of TCP. UDP has the advantage that it is not dependent on the order of the received packets, hence non-blocking unlike TCP. If QUIC is used, the TCP/TLS/HTTP2 stack is replaced to UDP/QUIC stack." The user further comments, "If QUIC features prove effective, those features could migrate into a later version of TCP and TLS (which have a notably longer deployment cycle).
So basically, QUIC wants to combine the speed of the UDP protocol, with the reliability of the TCP protocol." Additionally, the Rust community on Reddit was asked whether QUIC is royalty free, to which one Rust developer responded, "Yes, it is being developed and standardized by a working group (under the IETF) and the IETF respectively. So it will become an internet standard just like UDP, TCP, HTTP, etc." If you are interested to know more about Neqo and QUIC, check out the official GitHub page.

Other interesting news in web development

Chrome 78 beta brings the CSS Properties and Values API, the native file system API, and more!
Apple releases Safari 13 with opt-in dark mode support, FIDO2-compliant USB security keys support, and more!
Inkscape 1.0 beta is available for testing

.NET Core 3.0 is now available with C# 8, F# 4.7, ASP.NET Core 3.0 and general availability of EF Core 3.0 and EF 6.3

Sugandha Lahoti
24 Sep 2019
5 min read
Yesterday, at the ongoing .NET Conference 2019, .NET Core 3.0 was released along with ASP.NET Core 3.0 and Blazor updates. C# 8 and F# 4.7 are also part of this release, and Entity Framework Core 3.0 and Entity Framework 6.3 are now generally available.

What's new in .NET Core 3.0

.NET Core 3.0 adds Windows Forms and WPF (Windows Presentation Foundation) support, new JSON APIs, support for ARM64, and performance improvements across the board. Here are the key highlights:

Support for Windows desktop apps: .NET Core now supports Windows desktop apps built with both Windows Forms and WPF (both open source). The WPF designer is part of Visual Studio 2019 16.3, which was also released yesterday, and includes new templates, an updated XAML designer, and XAML Hot Reload. The Windows Forms designer is still in preview and available as a VSIX download.

Support for C# 8 and F# 4.7: C# 8 was released last week and adds async streams, ranges/indices, more patterns, and nullable reference types. F# 4.7 was released in parallel with .NET Core 3.0, focusing on infrastructural changes to the compiler and core library and relaxing some previously onerous syntax requirements. It also includes support for LangVersion, and ships nameof and opening of static classes in preview. Read also: Getting started with F# for .NET Core application development [Tutorial]

Executables by default: .NET Core apps can now be launched with an app-specific executable, like myapp or ./myapp, depending on the operating system.

New JSON APIs: High-performance JSON APIs have been added for reader/writer, object model, and serialization scenarios. These APIs minimize allocations, resulting in faster performance and much less work for the garbage collector.

Support for Raspberry Pi and Linux ARM64 chips: These chips enable IoT development with the remote Visual Studio debugger.
You can deploy apps that listen to sensors and print messages or images on a display, all using the new GPIO APIs, and ASP.NET can expose the data as an API or as a site for configuring an IoT device. Read also: .NET Core 3.0 Preview 6 is available, packed with updates to compiling assemblies.

.NET Core 3.0 is a 'current' release and will be available with RHEL 8. It will be superseded by .NET Core 3.1, targeted for November 2019. If you're on .NET Core 2.2, you have until the end of the year to update to 3.1, which will be an LTS release. You can read a detailed report of all .NET Core 3.0 features.

What's new in ASP.NET Core 3.0

ASP.NET Core 3.0, for building web apps, was released in parallel with .NET Core. Notably, ASP.NET Core 3.0 includes Blazor, a new framework for building interactive client-side web UI with .NET. With Blazor, you can create rich interactive UIs using C# instead of JavaScript, and you can share server-side and client-side app logic written in .NET. Blazor renders the UI as HTML and CSS for wide browser support, including mobile browsers.

Other updates in ASP.NET Core 3.0:
- High-performance backend services with gRPC.
- SignalR support for automatic reconnection and client-to-server streaming.
- Endpoint routing integrated throughout the framework.
- HTTP/2 enabled by default in Kestrel.
- Authentication support for web APIs and single-page apps, integrated with IdentityServer.
- Support for certificate and Kerberos authentication.
- A new generic host that sets up common hosting services like dependency injection (DI), configuration, and logging.
- A new Worker Service template for building long-running services.

For a full list of features, visit Microsoft Docs.

Entity Framework Core 3.0 and Entity Framework 6.3 are now generally available with C# 8

As part of the .NET Core 3.0 release, Entity Framework Core 3.0 and Entity Framework 6.3 are now generally available on nuget.org.
New updates in EF Core 3.0 include:
- A newly architected LINQ provider that translates more query patterns into SQL, generates more efficient queries in more cases, and prevents inefficient queries from going undetected.
- Cosmos DB support, letting developers familiar with the EF programming model easily target Azure Cosmos DB as an application database.

EF 6.3 brings the following improvements:
- With support for .NET Core 3.0, the EF 6.3 runtime package now targets .NET Standard 2.1 in addition to .NET Framework 4.0 and 4.5.
- Support for SQL Server hierarchyid.
- Improved compatibility with Roslyn and NuGet PackageReference.
- A new ef6.exe utility for enabling, adding, scripting, and applying migrations from assemblies, replacing migrate.exe.

.NET Core 3.0 is a major new release of .NET Core, and developers have widely welcomed the announcement.

https://twitter.com/dotMorten/status/1176172319598759938
https://twitter.com/robertmclaws/status/1176206536546357248
https://twitter.com/JaypalPachore/status/1176200191021473792

Interested developers can start updating their existing projects to target .NET Core 3.0. The release is compatible with earlier .NET Core versions, which makes updating easier.

Other interesting news in Tech

Introducing Weld, a runtime written in Rust and LLVM for cross-library optimizations.
Chrome 78 beta brings the CSS Properties and Values API, the native File Systems API and more.
LLVM 9 releases with official RISC-V target support, asm goto, Clang 9 and more.


GitLab 12.3 releases with web application firewall, keyboard shortcuts, productivity analytics, system hooks and more

Amrata Joshi
23 Sep 2019
3 min read
Yesterday, the team at GitLab released GitLab 12.3, a DevOps lifecycle tool and Git repository manager. This release includes a Web Application Firewall, Productivity Analytics, a new Environments section, and much more.

What's new in GitLab 12.3?

Web Application Firewall: GitLab 12.3 ships the first iteration of the Web Application Firewall built into the GitLab SDLC platform. It focuses on monitoring and reporting security concerns related to Kubernetes clusters.

Productivity Analytics: Starting with GitLab 12.3, Productivity Analytics helps teams and their leaders discover best practices for better productivity, letting them drill into the data and derive insights for future improvements. The group-level analytics workspace can provide insight into performance, productivity, and visibility across multiple projects.

Environments section: The cluster page now has an "Environments" section giving an overview of all projects that use the Kubernetes cluster.

License compliance: The License Compliance feature can disallow a merge when a blacklisted license is found in a merge request.

Keyboard shortcuts: New 'n' and 'p' keyboard shortcuts move to the next and previous unresolved discussions in merge requests.

System hooks: System hooks allow automation by triggering requests whenever a variety of events take place in GitLab.

Multiple IP subnets: Instead of specifying a single range, it is now possible for large organizations to specify multiple IP subnets and restrict incoming traffic to their specific needs.

GitLab Runner 12.3: Yesterday, the team also released GitLab Runner 12.3, an open-source project used for running CI/CD jobs and sending the results back to GitLab.
Audit logs
In this release, audit logs for push events are disabled by default to prevent performance degradation on GitLab instances.

A few GitLab users are unhappy that some features of this release, including Productivity Analytics, are available to Premium or Ultimate users only.

https://twitter.com/gav_taylor/status/1175798696769916932

To know more about this news, check out the official page.

Other interesting news in cloud and networking

Kubernetes 1.16 releases with Endpoint Slices, general availability of Custom Resources, and other enhancements

DevOps platform for coding, GitLab reached more than double valuation of $2.75 billion than its last funding and way ahead of its IPO in 2020

Istio 1.3 releases with traffic management, improved security, and more!


Chrome 78 beta brings the CSS Properties and Values API, the native file system API, and more!

Bhagyashree R
23 Sep 2019
3 min read
Last week, Google announced the release of Chrome 78 beta. Its stable version is scheduled for release in October this year. Chrome 78 will ship with a couple of new APIs, including the CSS Properties and Values API and the Native File System API.

Key updates in Chrome 78 beta

The CSS Properties and Values API
Houdini’s CSS Properties and Values API will be supported in Chrome 78. The Houdini task force consists of engineers from Mozilla, Apple, Opera, Microsoft, HP, Intel, and Google. In CSS, developers can define user-controlled properties using CSS custom properties, also known as CSS variables. However, CSS custom properties have a few limitations that make them difficult to work with. The CSS Properties and Values API addresses these limitations by allowing the registration of properties that have a value type, an initial value, and a defined inheritance behavior.

The Native File System API
Chrome 78 will support the Native File System API, which will enable web applications such as IDEs, photo and video editors, and text editors to interact with files on the user’s local device. After permission to access local files is granted, the API will allow web applications to read or save changes directly to files and folders on the user’s device.

The SMS Receiver API
Websites send a randomly generated one-time password (OTP) to verify a phone number. This way of verification is cumbersome as it requires the user to manually enter or copy and paste the password into a form. Starting with Chrome 78, users will be able to skip this manual interaction completely with the help of the SMS Receiver API.
It gives websites the ability to programmatically obtain OTPs from SMS as a solution “to ease the friction and failure points of manual user input of SMS codes, which is prone to error and phishing.”

Origin trials
Chrome 78 also brings new origin trials, which allow developers to try new features and share their feedback on “usability, practicality, and effectiveness to the web standards community.” Developers can register to enable an origin trial feature for all users on their origin for a fixed period of time. To know which features are available as an origin trial, check out the Origin Trials dashboard.

Among the deprecations are disallowing synchronous XHR during page dismissal and the removal of the XSS Auditor.

In a discussion on Hacker News, users had mixed reactions to the new Native File System API. A user commented, “I’m not sure about how to think about the file system API. On one hand, is great to see that secure file system access is possible in-browser, which allows most electron apps to be converted into PWAs. That’s great, I no longer need to run 5 different chromium instances. On the other hand, I’m really not sure if I like the future of editing Microsoft Office documents in the browser. I heavily believe that apps should have an integrated UX (with appropriate OS-specific widgets) because it allows coherency and familiarity.”

To know what else is coming in Chrome 78, check out the official announcement by Google.

Other news in Web Development

Safari Technology Preview 91 gets beta support for the WebGPU JavaScript API and WSL

New memory usage optimizations implemented in V8 Lite can also benefit V8

GitHub updates to Rails 6.0 with an incremental approach
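To make the CSS Properties and Values API description above concrete, here is a minimal sketch of registering a typed custom property. The property name `--theme-color` is an arbitrary example, and since `CSS.registerProperty` is a browser-only API, the sketch guards for environments where it is absent:

```javascript
// Hedged sketch of the CSS Properties and Values API: registering a custom
// property with a value type, an initial value, and a defined inheritance
// behavior. Browser-only; degrades to a message elsewhere.
function registerThemeColor() {
  if (typeof CSS === "undefined" || typeof CSS.registerProperty !== "function") {
    return "CSS.registerProperty unavailable in this environment";
  }
  CSS.registerProperty({
    name: "--theme-color",
    syntax: "<color>",             // value type
    inherits: false,               // defined inheritance behavior
    initialValue: "rebeccapurple", // initial value
  });
  return "registered --theme-color";
}

console.log(registerThemeColor());
```

Once registered, an invalid value assigned to `--theme-color` falls back to the declared `initialValue` instead of being treated as an untyped string.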


Facebook suspends tens of thousands of apps amid an ongoing investigation into how apps use personal data

Sugandha Lahoti
23 Sep 2019
4 min read
In a blog post on Friday, Facebook revealed that it has suspended tens of thousands of apps as part of its ongoing app developer investigation. Facebook’s app suspension began in March 2018, in response to the Cambridge Analytica scandal. According to the investigation, these apps mishandled users’ personal data. Facebook says it now also identifies apps based on signals associated with an app’s potential to abuse its policies. The apps suspended by Facebook come from just 400 developers.

“The review is ongoing,” said Facebook, “and comes from hundreds of contributors, including attorneys, external investigators, data scientists, engineers, policy specialists, and teams within Facebook.” However, the company failed to provide details about what the apps had done wrong, or their names, instead stating they were targeted for a “variety of reasons.”

“App developers remain a vital part of the Facebook ecosystem,” said the company in a blog post. “They help to make our world more social and more engaging. But people need to know we’re protecting their privacy. And across the board, we’re making progress.”

Facebook has also banned the app myPersonality, which shared information with researchers and companies with only limited protections in place and refused to participate in an audit. It has also taken legal action against Rankwave, a South Korean data analytics company, and filed an action against LionMobi and JediMobi, two companies that used their apps to infect users’ phones with malware in a profit-generating scheme. Facebook says this is part of an ongoing investigation and is just a progress report.

Facebook was fined a record $5bn in July 2019 for data breaches and revelations of illegal data sharing. Facebook’s new agreement with the FTC will bring its own set of requirements for bringing oversight to app developers. It requires developers to annually certify compliance with Facebook’s policies.
Any developer that doesn’t comply with these requirements will be held accountable. Facebook has also developed new rules to more strictly control a developer’s access to user data, including suspending or revoking a developer’s access to any API that has not been used in the past 90 days.

Facebook’s app suspension sheds light on broader privacy issues

The extent of how many apps Facebook had suspended was revealed later on Friday in new court documents from Massachusetts’ attorney general, which has been probing Facebook’s data-collection practices for months. Per these documents, Facebook had suspended 69,000 apps. It also “identified approximately 10,000 applications that may also have misappropriated and/or misused consumers’ personal data.” The court filings say 6,000 apps had a “large number of installing users,” and 2,000 exhibited behaviors that “may suggest data misuse.”

Experts still believe that the social-networking giant has escaped tough consequences for its past privacy abuses. Per NYT, Facebook’s announcement was “a tacit admission that the scale of its data privacy issues was far larger than it had previously acknowledged.”

Ron Wyden, U.S. Senator from Oregon, tweeted on Facebook’s app suspension, “This wasn’t some accident. Facebook put up a neon sign that said “Free Private Data,” and let app developers have their fill of Americans’ personal info. The FTC needs to hold Mark Zuckerberg personally responsible.”

David Heinemeier Hansson, creator of Ruby on Rails, also talked about Facebook’s app suspension. “Another day, another Facebook privacy scandal. Tens of thousands of apps had improper access to data ala Cambridge Analytica. FB has previously claimed only hundreds did. If you still use FB or IG, ask yourself, is any scandal enough to make you quit?”, he tweeted.

The company’s lack of detail about the said disclosures is also likely to reignite calls for heightened data regulation of Facebook.
It also shows that the company’s privacy practices remain a work in progress.

Other news in Tech

France and Germany reaffirm blocking Facebook’s Libra cryptocurrency

The House Judiciary Antitrust Subcommittee asks Facebook, Apple for details including private emails in the wake of antitrust investigations

Media manipulation by Deepfakes and cheap fakes require both AI and social fixes, Data and Society reports
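The 90-day API-inactivity rule mentioned above is easy to picture as a simple policy check. The helper below is purely illustrative: the data shape and function are invented for this sketch, not Facebook's implementation.

```javascript
// Hypothetical sketch of a 90-day inactivity policy: given per-API last-used
// timestamps for a developer, return the APIs whose access would be revoked.
const NINETY_DAYS_MS = 90 * 24 * 60 * 60 * 1000;

function apisToRevoke(lastUsedByApi, now = Date.now()) {
  return Object.entries(lastUsedByApi)
    .filter(([, lastUsed]) => now - lastUsed > NINETY_DAYS_MS) // stale grants
    .map(([api]) => api)
    .sort();
}

const now = Date.parse("2019-09-20T00:00:00Z");
console.log(
  apisToRevoke(
    {
      graph_user_profile: Date.parse("2019-09-01T00:00:00Z"), // recent: kept
      marketing_api: Date.parse("2019-05-01T00:00:00Z"),      // stale: revoked
    },
    now
  )
); // [ 'marketing_api' ]
```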

LLVM 9 releases with official RISC-V target support, asm goto, Clang 9, and more

Vincy Davis
20 Sep 2019
5 min read
Yesterday, the LLVM team announced the stable release of LLVM 9, though it missed its planned release date of 28th August. LLVM 9.0 RC3 was made available earlier this month. With LLVM 9, the RISC-V target is now out of experimental mode and turned on by default. Other changes include improved support for asm goto in the MIPS target, assembly-level support added for the Armv8.1-M architecture, a new immarg parameter attribute added to the LLVM IR, and more. LLVM 9 also brings many bug fixes, optimizations, and diagnostics improvements, as well as experimental support for C++ for OpenCL in Clang 9.

What’s new in LLVM 9

Two new extension points, called EP_FullLinkTimeOptimizationEarly and EP_FullLinkTimeOptimizationLast, are available as plugins for specializing the legacy pass manager full LTO pipeline.

New support for COFF object files/executables in llvm-objcopy/llvm-strip, covering the most common copying/stripping options.

LLVM_ENABLE_Z3_SOLVER has replaced the CMake parameter CLANG_ANALYZER_ENABLE_Z3_SOLVER.

LLVM 9.0 has finally made the “experimental” RISC-V LLVM backend official, and it is enabled by default. This means that it no longer needs to be enabled by LLVM_EXPERIMENTAL_TARGETS_TO_BUILD. The RISC-V target has full codegen support for the RV32I and RV64I based RISC-V instruction set variants, along with the MAFDC standard extensions.

Explaining the significance of this update, Alex Bradbury, CTO and co-founder of lowRISC, said, “As well as being more convenient for end users, this also makes it significantly easier for e.g. Rust/Julia/Swift and other languages using LLVM for code generation to do so using the system-provided LLVM libraries. This will make life easier for those working on RISC-V ports of Linux distros encountering issues with Rust dependencies.”

New support for target-independent hardware loops is added in IR, along with PowerPC and Arm implementations.
Other changes in LLVM 9

LLVM IR: A new immarg parameter attribute is added. It indicates that an intrinsic parameter is required to be a simple constant. atomicrmw xchg now allows floating-point types, and atomicrmw now supports fadd and fsub.

ARM Backend: Assembly-level support is added for the Armv8.1-M architecture, including the M-Profile Vector Extension (MVE). A new pipeline model is also added for the Cortex-M4.

MIPS Target: Improved experimental support for the GlobalISel instruction selection framework, plus new support for the .cplocal assembler directive, the sge, sgeu, sgt, and sgtu pseudo instructions, and the asm goto constraint.

PowerPC Target: Improved handling of TOC pointer spills for indirect calls and better precision of square root reciprocal estimates.

SystemZ Target: New support for the arch13 architecture is added. The builtins for the new vector instructions can be enabled using the -mzvector option.

What’s new in Clang 9?

With the stable release of LLVM 9, the official Clang 9 release was also made available. The major new feature in Clang 9 is experimental support for C++ for OpenCL. Clang 9 also adds new compiler flags: -ftime-trace and -ftime-trace-granularity=N.

C language improvements in Clang 9

The __FILE_NAME__ macro is added as a Clang-specific extension supported in all C-family languages. Clang 9 also provides initial support for asm goto statements, which allow control flow from inline assembly to labels. The main consumers of this construct are the Linux kernel (CONFIG_JUMP_LABEL=y) and glib. With the addition of asm goto support, the mainline Linux kernel for x86_64 is now buildable and bootable with Clang 9. The release notes also mention an issue that could not be fixed before the LLVM 9 release: “PR40547 Clang gets miscompiled by GCC 9.”

C++ language improvements in Clang 9

Experimental support for C++ for OpenCL is added. Clang 9 also brings backward compatibility with OpenCL C v2.0.
Other implemented features include:

Improved address space behavior in the majority of C++ features, like template parameters and arguments, reference types, type deduction, and more.

OpenCL-specific types like images, samplers, events, and pipes are now accepted.

The OpenCL standard header in Clang can be compiled in C++ mode.

Users are happy with the LLVM 9 features, especially the support for asm goto. A user on Hacker News comments, “This is big. Support for asm goto was merged into the mainline earlier this year, but now it's released [1]. Aside from the obvious implications of this - being able to build the kernel with LLVM - working with eBPF/XDP just got way easier.”

Another user says, “The support for asm goto is great for Linux, no longer being dependent on a single compiler for one of the most popular ISAs can only be a good thing for the overall health of the project.”

For the complete list of changes, check out the official LLVM 9 release notes.

Other news in Programming

Dart 2.5 releases with the preview of ML complete, the dart:ffi foreign function interface and improvements in constant expressions

Microsoft releases Cascadia Code version 1909.16, the latest monospaced font for Windows Terminal and Visual Studio Code

Linux 5.3 releases with support for AMD Navi GPUs, Zhaoxin x86 CPUs and power usage improvements


Apple releases Safari 13 with opt-in dark mode support, FIDO2-compliant USB security keys support, and more!

Bhagyashree R
20 Sep 2019
3 min read
Yesterday, Apple released Safari 13 for iOS 13, macOS 10.15 (Catalina), macOS Mojave, and macOS High Sierra. This release comes with opt-in dark mode support, FIDO2-compliant USB security key support, updated Intelligent Tracking Prevention, and much more.

Key updates in Safari 13

Desktop-class browsing for iPad users
Starting with Safari 13, iPad users will have the same browsing experience as macOS users. In addition to displaying websites the same as desktop Safari, it will also provide the same capabilities, including more keyboard shortcuts, a download manager with background downloads, and support for top productivity websites.

Updates related to authentication and passwords
Safari 13 will prompt users to strengthen their passwords when they sign into a website. On macOS, users will be able to use FIDO2-compliant USB security keys in Safari. Support is also added for “Sign in with Apple” in Safari and WKWebView.

Read also: W3C and FIDO Alliance declare WebAuthn as the web standard for password-free logins

Security and privacy updates
A new permission API is added for DeviceMotionEvent and DeviceOrientationEvent on iOS. The DeviceMotionEvent class encapsulates details like the measurements of the interval, rotation rate, and acceleration of a device, whereas the DeviceOrientationEvent class encapsulates the angles of rotation (alpha, beta, and gamma) in degrees and heading. Other updates include changing third-party iframes to prevent them from automatically navigating the page, and Intelligent Tracking Prevention is updated to prevent cross-site tracking through referrer and link decoration.

Performance-specific updates
While using Safari 13, iOS users will find that the initial rendering time for web pages is reduced. The memory consumed by JavaScript, including for non-web clients, is also reduced.

WebAPI updates
Safari 13 comes with a new Pointer Events API to enable consistent access to mouse, trackpad, touch, and Apple Pencil events.
It also supports the Visual Viewport API, which adjusts web content to avoid overlays such as the onscreen keyboard.

Deprecated features in Safari 13
WebSQL and Legacy Safari Extensions are no longer supported. To replace previously provided Legacy Safari Extensions, Apple offers two options. First, you can configure your Safari App Extension to provide an upgrade path that will automatically remove the previous Legacy Safari Extension when it is installed. Second, you can manually convert your Legacy Safari Extension to a Safari App Extension.

In a discussion on Hacker News, users were pleased with the support for the Pointer Events API. A user commented, “The Pointer Events spec is a real joy. For example, if you want to roll your own "drag" event for a given element, the API allows you to do this without reference to document or a parent container element. You can just declare that the element currently receiving pointer events capture subsequent pointer events until you release it. Additionally, the API naturally lends itself to patterns that can easily be extended for multi-touch situations.”

Others expressed concern regarding the deprecation of Legacy Safari Extensions. A user added, “It really, really is a shame that they removed proper extensions. While Safari never had a good extension story, it was at least bearable, and in all other regards its simply the best Mac browser. Now I have to take a really hard look at switching back to Firefox, and that would be a downgrade in almost every regard I care about. Pity.”

Check out the official release notes of Safari 13 to know more in detail.

Other news in web development

New memory usage optimizations implemented in V8 Lite can also benefit V8

5 pitfalls of React Hooks you should avoid – Kent C. Dodds

Firefox 69 allows default blocking of third-party tracking cookies and cryptomining for all users
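The “roll your own drag” pattern praised in the Pointer Events discussion above can be sketched as follows. The tracker itself is plain state handling and runs anywhere; the `setPointerCapture` wiring shown in the comment is the browser-only part, and the element and handler names are illustrative:

```javascript
// Hedged sketch of a drag tracker for the Pointer Events API. With pointer
// capture, the element keeps receiving pointermove/pointerup events, so the
// drag logic only needs the pointer coordinates, not document-level listeners.
function makeDragTracker() {
  let start = null;
  return {
    down(e) { start = { x: e.clientX, y: e.clientY }; },
    move(e) {
      if (!start) return null; // no drag in progress
      return { dx: e.clientX - start.x, dy: e.clientY - start.y };
    },
    up() { start = null; },
  };
}

// Browser wiring (sketch):
//   el.addEventListener("pointerdown", (e) => {
//     el.setPointerCapture(e.pointerId); // later moves go to el until release
//     tracker.down(e);
//   });

const tracker = makeDragTracker();
tracker.down({ clientX: 10, clientY: 10 });
console.log(tracker.move({ clientX: 25, clientY: 18 })); // { dx: 15, dy: 8 }
```

Because pointer events unify mouse, touch, and stylus input, the same tracker works for all three without separate event paths.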


Twitter announces to test ‘Hide Replies’ feature in the US and Japan, after testing it in Canada

Amrata Joshi
20 Sep 2019
4 min read
Yesterday, the team at Twitter announced that it is testing a new feature called “Hide Replies” in the US and Japan, after testing it in Canada. Twitter’s Hide Replies feature lets users hide unwanted trolling, abusive, or bullying replies to their tweets. The company aims to foster more civil conversations on Twitter and to give users more control. Users can now decide which replies will be hidden from other users, but those who choose to view the hidden replies will still be able to see them by clicking on an icon that brings up all the hidden tweets. Users can hide replies on both the app and desktop versions of the website.

Observations from the Canadian ‘Hide Replies’ feature test

In July this year, the Twitter team tested the ‘Hide Replies’ feature in Canada and tried to understand how conversations on the platform change when the person who starts a conversation hides the replies. The team observed that users often hide replies that they consider irrelevant, unintelligible, or abusive. According to their survey, those who used this feature found it helpful, and users were more likely to reconsider their interactions when their tweets were hidden: around 27% of the users who had their tweets hidden said they would reconsider how they interact with others in the future.

Hiding someone’s replies can also lead to confusion if it is misunderstood, so Twitter now checks whether the user also wants to block the account. The official post reads, “People were concerned hiding someone’s reply could be misunderstood and potentially lead to confusion or frustration. As a result, now if you tap to hide a Tweet, we’ll check in with you to see if you want to also block that account.”

According to the team, the Canadian test showed positive results, as the feature helped users have better conversations.
In an announcement regarding the feature’s Canada launch, the company said, “Everyday, people start important conversations on Twitter, from #MeToo and #BlackLivesMatter, to discussions around #NBAFinals or their favorite television shows. These conversations bring people together to debate, learn, and laugh. That said we know that distracting, irrelevant, and offensive replies can derail the discussions that people want to have. Ultimately, the success of ‘hide replies’ will depend on how people use it, but it could mean friendlier — and more filtered — conversations.”

Twitter’s Hide Replies feature: will it really improve conversations?

The Hide Replies feature is a welcome addition to Twitter’s existing block and mute options, but it could also become a slight restriction on freedom of speech. If a reply is not abusive or offensive but simply expresses strong views about a subject, and the author still decides to hide it, the user who replied might not understand the reason behind hiding it. The good thing is that other users can still opt to see the hidden replies, so the hidden responses aren’t completely silenced; it just takes an extra click to view them. At the same time, if the platform still shows the hidden replies, the purpose of hiding them is partly defeated.

It is also not clear how Twitter will curtail abusive comments or bullying in a thread with this feature, since it doesn’t delete such replies but simply hides them. A few Twitter users are unhappy with the feature and think it is pointless if replies a user hides simply reappear when someone clicks the option to see them.
https://twitter.com/QWongSJ/status/1174795321211158528
https://twitter.com/scott_satzer/status/1174890804143374336
https://twitter.com/CartridgeGames/status/1174857548777885697
https://twitter.com/camimosas/status/1174850022694952960
https://twitter.com/KyleTWN/status/1174828502769471488
https://twitter.com/iFireMonkey/status/1174791634736861207

To know more about this news, check out the official post.

Other interesting news in programming

Dart 2.5 releases with the preview of ML complete, the dart:ffi foreign function interface and improvements in constant expressions

Microsoft releases Cascadia Code version 1909.16, the latest monospaced font for Windows Terminal and Visual Studio Code

DevOps platform for coding, GitLab reached more than double valuation of $2.75 billion than its last funding and way ahead of its IPO in 2020

Click2Gov software vulnerable for the second time; breach hits 8 US cities

Savia Lobo
20 Sep 2019
4 min read
Click2Gov, a vulnerable piece of municipal software, is known to be part of a breach involving eight cities last month, Threatpost reports. The Click2Gov software is used in self-service bill-paying portals by utilities and community development organizations, for example for paying parking tickets online. This is not the first time the software’s vulnerability has affected a large number of people. The flaw was first discovered in December 2018, when hackers used the vulnerable software to compromise over 300,000 payment card records from dozens of cities across the United States and Canada between 2017 and late 2018.

Also read: Researchers reveal a vulnerability that can bypass payment limits in contactless Visa card

Hackers are taking a second aim at Click2Gov

The team of researchers at Gemini Advisory who covered the breach in 2018 have now observed a second wave of Click2Gov breaches, beginning in August 2019 and affecting over 20,000 records from eight cities across the United States. The portals of six of the eight cities had already been compromised in the initial breach. The researchers also revealed that these user records have been offered for sale online via illicit markets. The impacted towns are Deerfield Beach, Fla.; Palm Bay, Fla.; Milton, Fla.; Coral Springs, Fla.; Bakersfield, Calif.; Pocatello, Ida.; Broken Arrow, Okla.; and Ames, Iowa.

“While many of the affected cities have patched their systems since the original breach, it is common for cybercriminals to strike the same targets twice. Thus, several of the same cities were affected in both waves of breaches,” the Gemini Advisory researchers write in their official post. The researchers said, “Analysts confirmed that many of the affected towns were operating patched and up-to-date Click2Gov systems but were affected nonetheless.
Given the success of the first campaign, which generated over $1.9 million in illicit revenue, the threat actors would likely have both the motive and the budget to conduct a second Click2Gov campaign,” they further added.

Also read: Apple Card, iPhone’s new payment system, is now available for select users

According to a FireEye report published last year, in the 2018 attack the attackers compromised the Click2Gov web server. Due to the vulnerability, the attacker was able to install a web shell, SJavaWebManage, and then upload a tool that allowed them to parse log files, retrieve payment card information, and remove all log entries. Superion (now CentralSquare Technologies, the owner of the Click2Gov software) acknowledged directly to Gemini Advisory that, despite broad patch deployment, the system remains vulnerable for an unknown reason.

Regarding this year’s attack, the researchers say “the portal remains a viable attack surface. These eight cities were in five states, but cardholders in all 50 states were affected. Some of these victims resided in different states but remotely transacted with the Click2Gov portal in affected cities, potentially due to past travels or to owning property in those cities.”

Map depicting cities affected only by the original Click2Gov breach (yellow) and those affected by the second wave of Click2Gov breaches (blue). Source: Gemini Advisory

These eight towns were contacted by Threatpost, and most of them did not respond. However, some towns confirmed the breach in their Click2Gov utility payment portals, and some even took their Click2Gov portals offline shortly after being contacted. CentralSquare Technologies did not immediately comment on this scenario. To know more about this news in detail, read Gemini Advisory’s official post.
Other news in security

MITRE’s 2019 CWE Top 25 most dangerous software errors list released

Emotet, a dangerous botnet spams malicious emails, “targets 66,000 unique emails for more than 30,000 domain names” reports BleepingComputer

An unsecured Elasticsearch database exposes personal information of 20 million Ecuadoreans including 6.77M children under 18


Dart 2.5 releases with the preview of ML complete, the dart:ffi foreign function interface and improvements in constant expressions

Vincy Davis
20 Sep 2019
4 min read
Last week, Michael Thomsen, the Project Manager for Dart, announced the stable release of the Dart 2.5 SDK (Software Development Kit). This release includes two technical previews: ML Complete, and the dart:ffi foreign function interface for calling C code directly from Dart. Dart 2.5 also brings improved support for constant expressions.

Preview of ML Complete

In his blog, Thomsen regards ML Complete as a “powerful addition” to Dart’s existing suite of productivity tools, like hot reload, customizable static analysis, and Dart DevTools. It works by training a model of likely member occurrences in a given context, analyzed from the available open-source Dart code on GitHub. The trained model uses TensorFlow Lite tools to predict the next probable symbol while the developer is editing. Since ML Complete is built directly into the Dart analyzer, it is available in all Dart-enabled editors, including Android Studio, IntelliJ, and VS Code. As it is still in preview, developers are advised to use the Flutter dev channel or the Dart dev channel to try this feature.

Preview of the dart:ffi foreign function interface

The dart:ffi feature enables users to take advantage of existing native APIs on the platforms where Dart code is already running, as well as existing cross-platform native libraries written in C. Currently, support for calling C directly from Dart is limited to deep integration with the Dart VM using native extensions. The new dart:ffi foreign function interface is based on a new mechanism that offers better performance, an easier approach, and works across many Dart-supported platforms and compilers. Dart-C interop covers two main scenarios:

Calling a C-based system API on the host operating system (OS)

For calling a C-based system API, the C system function on Linux is used, which allows the execution of any system command.
The argument is essentially passed to the shell/terminal and run there. To implement this with dart:ffi, the Dart code needs to represent the C function, the types of its arguments, and its return type, as well as the corresponding Dart function and its types. Both representations are expressed by defining two typedefs for the command.

Calling a C-based library for a single OS or cross-platform

The dart:ffi feature can also be used to invoke C-based frameworks and components. For example, it allows running TensorFlow across all the operating systems where code completion is needed, while offering the high performance of the native TensorFlow implementation. Thomsen adds, “We also expect that the ability to call C-based libraries will be of great use to Flutter apps. You can imagine calling native libraries such as Realm or SQLite, and we think dart:ffi will be valuable for enabling plugins for Flutter desktop.”

Developers are advised to use the Flutter master channel or a Dart dev channel to quickly try out the changes and improvements in the dart:ffi feature.

Read also: Dart 2.2 is out with support for set literals and more!

Improvements in constant expressions

In earlier versions, the abilities of constant expressions were limited; Dart 2.5 changes this. Constant expressions can now be defined in many new ways, including the ability to use casts and the new control-flow and collection spread features.

Users love the Dart 2.5 features.

https://twitter.com/geek_timofey/status/1171507571372380167
https://twitter.com/Fredrikkerlund2/status/1171461649217097728
https://twitter.com/RashidG92908642/status/1171910418807369728

To know more about this announcement in detail, visit Michael Thomsen’s blog on Medium.
Other Interesting News in Programming

Microsoft releases Cascadia Code version 1909.16, the latest monospaced font for Windows Terminal and Visual Studio Code

Microsoft open-sources its C++ Standard Library (STL) used by MSVC tool-chain and Visual Studio

Linux 5.3 releases with support for AMD Navi GPUs, Zhaoxin x86 CPUs and power usage improvements