
Tech News


OpenCV 4.0 is on schedule for July release

Pavan Ramchandani
10 Apr 2018
3 min read
There has been some exciting news from OpenCV: developer Vadim Pisarevsky announced the development of OpenCV 4 on the project's GitHub repository and explained why the time is right for the release. OpenCV 3 was released in 2015, six years after OpenCV 2 in 2009, and was built around the C++98 standard. Rewriting the library in a more recent version of C++, such as C++11 or later, would mean breaking binary compatibility, which is why the project has to move beyond the promises made for OpenCV 3.

Two concepts matter here: binary compatibility and source compatibility. OpenCV had promised to stay binary-compatible across versions, meaning new releases would remain compatible with library calls compiled against previous versions. Moving from C++98 to a recent C++ standard breaks that promise. However, the OpenCV team looked into the matter and concluded that the migration would cause little harm, so the new release relaxes binary compatibility in favor of source compatibility.

Beyond migrating to the latest C++ standards, the library needs refactoring, as well as new modules for deep learning and neural networks, given how heavily OpenCV is now used in machine learning. Developers can expect some big revisions to functions and modules. Here is a quick summary of what you might expect in this major release of OpenCV 4.0:

- Hardware-accelerated Video I/O module: maximizes OpenCV performance using the software and hardware accelerators available on the machine, so calling this module in OpenCV 4 harnesses that acceleration.
- HighGUI module (revised): lets you efficiently read video from a camera or file and write video back out, with a lot of functionality for media I/O operations.
- Graph API module: adds support for efficiently reading and writing graphs from images.
- Point Cloud module: contains algorithms such as feature estimation, model fitting, and segmentation, which can be used to filter noisy data, stitch 3D point clouds, and segment parts of an image, among other tasks.
- Tracking, calibration, and stereo modules, among other features that will benefit image processing with OpenCV.

You can find the full list of new modules that might be added in OpenCV 4 on the issues page of the OpenCV repo. The OpenCV community is relying on its large developer base to close the open issues before the planned release date of July 2018; functionality that doesn't make it into the OpenCV 4 release will be rolled into the OpenCV 4.x releases. While you wait for OpenCV 4, enjoy these OpenCV 3 tutorials:

- New functionality in OpenCV 3.0
- Fingerprint detection using OpenCV 3
- OpenCV Primer: What can you do with Computer Vision and how to get started?
- Image filtering techniques in OpenCV
- Building a classification system with logistic regression in OpenCV
- Exploring Structure from Motion Using OpenCV


A Bitwise study presented to the SEC reveals that 95% of CoinMarketCap’s BTC trading volume report is fake

Savia Lobo
25 Mar 2019
2 min read
A research report released last week by Bitwise Asset Management revealed that 95% of the Bitcoin trading volume reported by CoinMarketCap.com is fake, artificially created by unregulated exchanges. This matters because CoinMarketCap is the most widely cited source for bitcoin volume and is used by most major media outlets. CoinMarketCap hasn't yet responded to the findings.

"Despite its widespread use, the CoinMarketCap.com data is wrong. It includes a large amount of fake and/or non-economic trading volume, thereby giving a fundamentally mistaken impression of the true size and nature of the bitcoin market", the Bitwise report states. The report also claims that only 10 cryptocurrency exchanges have actual volume, including major names like Binance, Coinbase, Kraken, Gemini, and Bittrex. https://twitter.com/BitwiseInvest/status/1109114656944209921

The key takeaways of the report:

- 95% of reported BTC spot volume is fake; the likely motive is listing fees (which can run $1-3M)
- Real daily spot volume is ~$270M
- 10 exchanges make up almost all of the real trading volume, and the majority of them are regulated
- Spreads are <0.10%, so arbitrage is super efficient

CoinMarketCap.com (CMC) originally reported a combined $6 billion in average daily trading volume. However, the 226-slide presentation by Bitwise to the U.S. Securities and Exchange Commission (SEC) revealed that only $273 million of CMC's reported BTC trading volume was legitimate. The report also includes a detailed breakdown of every exchange that reports more than $1 million in daily trading volume on CoinMarketCap.

Matthew Hougan, the global head of Bitwise's research division, said, "People looked at cryptocurrency and said this market is a mess; that's because they were looking at data that was manipulated". Bitwise also posted on its official Twitter account, "Arbitrage between the 10 real exchanges has improved significantly. The avg price deviation of any one exchange from the aggregate price is now less than 0.10%! Well below the arbitrage band considering exchange-level fees (0.10-0.30%) & hedging costs." https://twitter.com/BitwiseInvest/status/1109114686635687936

To know more about this in detail, head over to the complete Bitwise report.

- 200+ Bitcoins stolen from Electrum wallet in an ongoing phishing attack
- Can Cryptocurrency establish a new economic world order?
- Crypto-cash is missing from the wallet of dead cryptocurrency entrepreneur Gerald Cotten – find it, and you could get $100,000
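The headline 95% figure follows directly from the two volume numbers the report gives; a quick back-of-the-envelope check:

```python
# Reported vs. legitimate average daily BTC volume, per the article.
reported_daily_volume = 6_000_000_000  # CoinMarketCap's combined figure
real_daily_volume = 273_000_000        # Bitwise's estimate of real volume

real_share = real_daily_volume / reported_daily_volume
fake_share = 1 - real_share
print(f"real: {real_share:.1%}, fake: {fake_share:.1%}")
```

The real share comes out to roughly 4.5% of reported volume, which is where the "95% fake" headline figure comes from.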


Someone made a program to make it look like you're typing on Slack when someone else is

Richard Gall
01 May 2018
2 min read
Slack: productivity and collaboration tool, or platform for procrastination, in-jokes and GIFs? We couldn't possibly say here at Packt. For most of us, the only thing worse than wasting time on Slack is looking like you're never on Slack at all. While you'd like to tell people it's because you're busy, you can see your colleagues eyeing you with suspicion, convinced that if you're not procrastinating in the same manner they are, you really can't be doing anything at all. Luckily, someone has invented a tool for dealing with exactly this problem.

Take a bow, Will Leinweber (@leinweber) - you have made something to make us look busy. Or, at the very least, thoughtful and ready to contribute to the channel chat at any moment. https://twitter.com/leinweber/status/989267343002951680

Will has put the project on GitHub. You can find it here. The only disappointment with the tool is that Will didn't include the additional feature that "asks other people what they're typing whenever they're typing." The results were pretty hilarious, and likely too distracting for anyone to do any work at all... https://twitter.com/leinweber/status/989285775165423616

Needless to say, there was a pretty strong reaction to Will's program. https://twitter.com/snail_5/status/989271471766757376 https://twitter.com/LittleMxSurly/status/989315676325085184 https://twitter.com/CodeTheWebBlog/status/990008655394189313 https://twitter.com/shandrew/status/989395097249693698

Truly, software is being used for incredible things in 2018. These are the projects we need if we're to survive a hostile and unforgiving future, forever typing into the abyss at each other, doomed to search out reaction GIFs for every rude email and hostile expression that comes our way. What other novelty software projects have you seen recently? Let us know in the comments, and we'll do some investigative work*

*Have a look on Twitter.

Read more: Creating slash commands for Slack using Bottle


What is a full-stack developer?

Richard Gall
28 Mar 2018
3 min read
Full stack developer has been named one of the most common developer roles in the latest Stack Overflow survey. But what exactly does a full stack developer do, and what does a typical full stack developer job description look like?

Full stack developers bridge the gap between the front end and back end

Full stack developers deal with the full spectrum of development, from the back end to the front end. They are hugely versatile technical professionals, and because they work on both the client and server side, they need to be able to learn new frameworks, libraries and tools very quickly. There's a common misconception that full stack developers are experts in every area of web development. They're not - they're often generalists with broad knowledge that doesn't necessarily run deep. However, this lack of depth isn't necessarily a disadvantage. Because they have experience in both back end and front end development, they know how to provide solutions that work across both. Most importantly, as Agile becomes integral to modern development practices, developers who can properly understand and move between the front and back ends are vital. From an economic perspective it also makes sense - with a team of full stack developers, you have a team of people able to perform multiple roles.

What a full stack developer job description looks like

Every full stack developer job description looks different. The role is continually evolving, and different organizations will require different skills. Here are some of the things you're likely to see:

- HTML / CSS
- JavaScript
- JavaScript frameworks like Angular or React
- Experience of UI and API design
- SQL and experience with other databases
- At least one backend programming language (Python, Ruby, Java, etc.)
- Backend framework experience (for example, ASP.NET Core or Flask)
- Build and release management or automation tools such as Jenkins
- Virtualization and containerization knowledge (and today possibly serverless too)

Essentially, it's up to the individual to build on their knowledge by learning new technologies in order to become an expert full stack developer.

Full stack developers need soft skills

Soft skills are also important for full stack developers. Being able to communicate effectively and manage projects and stakeholders is essential. Knowledge of Agile and Scrum is always in demand; being collaborative is also vital, as software development is never really a solitary exercise. Similarly, commercial awareness is highly valued - a full stack developer who understands that they are solving business problems, not just software problems, is invaluable.


LLVM 9 releases with official RISC-V target support, asm goto, Clang 9, and more

Vincy Davis
20 Sep 2019
5 min read
Yesterday, the LLVM team announced the stable release of LLVM 9, though LLVM 9.0 missed its planned release date of 28th August. LLVM 9.0 RC3 was made available earlier this month. With LLVM 9, the RISC-V target is out of experimental mode and turned on by default. Other changes include improved support for asm goto in the MIPS target, assembly-level support for the Armv8.1-M architecture, a new immarg parameter attribute in the LLVM IR, and more. LLVM 9 also includes many bug fixes, optimizations, and diagnostics improvements, as well as experimental support for C++ for OpenCL in Clang 9.

What's new in LLVM 9

- Two new extension points, called EP_FullLinkTimeOptimizationEarly and EP_FullLinkTimeOptimizationLast, are available for plugins specializing the legacy pass manager's full LTO pipeline.
- llvm-objcopy/llvm-strip gain support for COFF object files/executables, covering the most common copying/stripping options.
- LLVM_ENABLE_Z3_SOLVER has replaced the CMake parameter CLANG_ANALYZER_ENABLE_Z3_SOLVER.
- LLVM 9.0 has finally made the "experimental" RISC-V LLVM backend official: it is enabled by default and no longer needs to be enabled via LLVM_EXPERIMENTAL_TARGETS_TO_BUILD. The RISC-V target has full codegen support for the RV32I and RV64I instruction set variants, along with the MAFDC standard extensions. Explaining the significance of this change, Alex Bradbury, CTO and co-founder of lowRISC, said, "As well as being more convenient for end users, this also makes it significantly easier for e.g. Rust/Julia/Swift and other languages using LLVM for code generation to do so using the system-provided LLVM libraries. This will make life easier for those working on RISC-V ports of Linux distros encountering issues with Rust dependencies."
- New support for target-independent hardware loops in IR, along with PowerPC and Arm implementations.

Other changes in LLVM 9

- LLVM IR: A new immarg parameter attribute indicates that an intrinsic parameter is required to be a simple constant. atomicrmw xchg now allows floating-point types, and atomicrmw supports fadd and fsub.
- ARM backend: Assembly-level support for the Armv8.1-M architecture, including the M-Profile Vector Extension (MVE). A new pipeline model for the Cortex-M4 has also been added.
- MIPS target: Improved experimental support for the GlobalISel instruction selection framework, plus new support for the .cplocal assembler directive, the sge, sgeu, sgt and sgtu pseudo-instructions, and the asm goto constraint.
- PowerPC target: Improved handling of TOC pointer spills for indirect calls and better precision of square root reciprocal estimates.
- SystemZ target: New support for the arch13 architecture; the builtins for the new vector instructions can be enabled using the -mzvector option.

What's new in Clang 9?

With the stable release of LLVM 9, the official Clang 9 release was also made available. The major new feature in Clang 9 is experimental support for C++ for OpenCL. Clang 9 also adds new compiler flags: -ftime-trace and -ftime-trace-granularity=N.

C language improvements in Clang 9

The __FILE_NAME__ macro is added as a Clang-specific extension supported in all C-family languages. Clang 9 also provides initial support for asm goto statements, which transfer control flow from inline assembly to labels; the main consumers of this construct are the Linux kernel (CONFIG_JUMP_LABEL=y) and glib. With the addition of asm goto support, the mainline Linux kernel for x86_64 is now buildable and bootable with Clang 9. The release notes also mention an issue that could not be fixed before the LLVM 9 release: "PR40547 Clang gets miscompiled by GCC 9."

C++ language improvements in Clang 9

Experimental support for C++ is added for OpenCL, and Clang 9 is backward-compatible with OpenCL C v2.0. Other implemented features include:

- Improved address space behavior in the majority of C++ features, such as template parameters and arguments, reference types, and type deduction
- OpenCL-specific types like images, samplers, events, and pipes are now accepted
- The OpenCL standard header in Clang can be compiled in C++ mode

Users are happy with the LLVM 9 features, especially the support for asm goto. A user on Hacker News comments, "This is big. Support for asm goto was merged into the mainline earlier this year, but now it's released [1]. Aside from the obvious implications of this - being able to build the kernel with LLVM - working with eBPF/XDP just got way easier" Another user says, "The support for asm goto is great for Linux, no longer being dependent on a single compiler for one of the most popular ISAs can only be a good thing for the overall health of the project."

For the complete list of changes, check out the official LLVM 9 release notes.

Other news in programming

- Dart 2.5 releases with the preview of ML complete, the dart:ffi foreign function interface and improvements in constant expressions
- Microsoft releases Cascadia Code version 1909.16, the latest monospaced font for Windows Terminal and Visual Studio Code
- Linux 5.3 releases with support for AMD Navi GPUs, Zhaoxin x86 CPUs and power usage improvements


Amazon re:Invent 2018: AWS Snowball Edge comes with a GPU option and more computing power

Bhagyashree R
27 Nov 2018
2 min read
Amazon re:Invent 2018 commenced yesterday in Las Vegas. This five-day event comprises various sessions, chalk talks, and hackathons covering core AWS topics. Amazon is also launching several new products and making some crucial announcements. Adding to this list, yesterday Amazon announced that AWS Snowball Edge will now come in two options: Snowball Edge Storage Optimized and Snowball Edge Compute Optimized. Snowball Edge Compute Optimized, in addition to more computing power, comes with optional GPU support.

What is AWS Snowball Edge?

AWS Snowball Edge is a physical appliance used for data migration and edge computing. It supports specific Amazon EC2 instance types and AWS Lambda functions. With Snowball Edge, customers can develop and test in AWS, then deploy applications on remote devices to collect, pre-process, and return data. Common use cases include data migration, data transport, image collation, IoT sensor stream capture, and machine learning.

What is new in Snowball Edge?

Snowball Edge now comes in two options:

- Snowball Edge Storage Optimized: provides 100 TB of capacity and 24 vCPUs, well suited for local storage and large-scale data transfer.
- Snowball Edge Compute Optimized: available with or without a GPU. Both variations come with 42 TB of S3-compatible storage and 7.68 TB of NVMe SSD storage, and you can run any combination of instances that consume up to 52 vCPUs and 208 GiB of memory.

The main highlight here is the support for an optional GPU. With Snowball Edge with GPU, you can do things like real-time full-motion video analysis and processing, machine learning inferencing, and other highly parallel compute-intensive work. To gain access to the GPU, you need to launch an sbe-g instance. You can select the "with GPU" option using the console, and the full instance specifications are listed in Amazon's announcement.

You can read more about the re:Invent announcements regarding Snowball Edge on the AWS website.

- AWS updates the face detection, analysis and recognition capabilities in Amazon Rekognition
- AWS announces more flexibility in its Certification Exams, drops its exam prerequisites
- Introducing Automatic Dashboards by Amazon CloudWatch for monitoring all AWS Resources

Linux drops Code of Conflict and adopts new Code of Conduct

Richard Gall
17 Sep 2018
4 min read
Prior to news of Linus Torvalds' self-imposed leave from the project, Linux leaders - including Torvalds - revised the project's Code of Conflict, replacing it with a Code of Conduct. The new Linux Code of Conduct was submitted by Greg Kroah-Hartman on Saturday 15 September. Kroah-Hartman wrote that "the Code of Conflict is not achieving its implicit goal of fostering civility and the spirit of 'be excellent to each other.'" Read the new Linux Code of Conduct here.

The change was committed yesterday (16 September) by Torvalds. Other leading figures in the Linux project also put their names behind the move, including Olof Johansson and Steve Rostedt. It's not immediately clear to what extent the new Code of Conduct has something to do with Torvalds' hiatus, but it's impossible to avoid making a connection between the two.

What's new in the Linux Code of Conduct?

Linux's Code of Conflict always felt combative. The naming makes clear that disagreement is part and parcel of open source development, and that "critique and criticism" are simply part of what it means to be in the Linux community:

"The Linux kernel development effort is a very personal process compared to "traditional" ways of developing software. Your code and ideas behind it will be carefully reviewed, often resulting in critique and criticism. The review will almost always require improvements to the code before it can be included in the kernel. Know that this happens because everyone involved wants to see the best possible solution for the overall success of Linux."

By switching to a Code of Conduct, Linux immediately places the emphasis on how contributors and maintainers work together to cultivate an open and safe community that people want to be involved in. Contrast the Code of Conflict passage above with the new Code of Conduct's pledge:

"In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, religion, or sexual identity and orientation."

The Code of Conduct then goes on to outline specific examples of what is and isn't acceptable. "Using welcoming and accepting language" and "showing empathy to other community members" are just two examples of how the code suggests community members can help to create a positive working environment. It also details the responsibilities of Linux maintainers, who are presented as custodians or stewards of the project: they are responsible for "clarifying the standards of acceptable behavior" and are expected to take "appropriate and fair corrective action in response to any instances of unacceptable behavior."

The reaction to the new Linux Code of Conduct

The news - coupled with Linus Torvalds' apology today - has caused considerable reaction on Twitter and across the open source community. For some, this is an example of politics entering open source code - with some suggesting it could be detrimental to the Linux project overall. Of course, the link between a more positive, inclusive and respectful community environment and a weaker project seems strange to say the least. Taken alongside news last week that Python is dumping 'master' and 'slave' in its documentation, it would seem that open source projects are starting to take inclusivity and accessibility seriously. Some in the community might see that as a threat to them - but, if we really do think 'be excellent to each other' is the philosophy we should live by, shouldn't we do everything to make sure we're always held to that standard?


OpenAI’s gradient checkpointing: A package that makes huge neural nets fit into memory

Savia Lobo
17 Jan 2018
5 min read
OpenAI has released a Python/TensorFlow package for gradient checkpointing. Gradient checkpointing lets you fit neural nets up to 10x larger into memory, at the cost of roughly 20% additional computation time. The tools in this package, a joint development of Tim Salimans and Yaroslav Bulatov, help rewrite a TensorFlow model to use less memory.

Computing the gradient of the loss by backpropagation is the memory-intensive part of training deep neural networks. By checkpointing nodes in the computation graph defined by your model, and recomputing the parts of the graph between those nodes during backpropagation, it is possible to calculate this gradient at reduced memory cost. For a deep feed-forward neural network of n layers, memory consumption can be reduced to O(sqrt(n)), at the cost of performing one additional forward pass. A graph in the announcement compares the memory used while training TensorFlow's official CIFAR10 ResNet example with the regular tf.gradients function versus the optimized gradient function.

To see how it works, take a simple feed-forward neural network, where:

- f: the activations of the neural network layers
- b: the gradient of the loss with respect to the activations and parameters of these layers

All these nodes are evaluated in order during the forward pass and in reverse order during the backward pass. The results of the 'f' nodes are required to compute the 'b' nodes. Hence, after the forward pass, all the f nodes are kept in memory and can be erased only when backpropagation has progressed far enough to have computed all dependencies, or children, of an f node. This means that in simple backpropagation, the memory required grows linearly with the number of neural net layers n.

Graph 1: Vanilla backpropagation. Vanilla backpropagation computes each node exactly once, but recomputing nodes can save a lot of memory. At the other extreme, we can simply recompute every node from the forward pass as and when required (Graph 2: Backpropagation with poor memory). With this strategy, the memory required to compute gradients is constant in the number of layers n, which is optimal in terms of memory. However, the number of node evaluations now scales as n^2, where previously it scaled as n: each of the n nodes is recomputed on the order of n times. The computation graph therefore becomes much slower to evaluate for deep networks, making this method impractical for deep learning.

To strike a balance between memory and computation, OpenAI has come up with a strategy that allows nodes to be recomputed, but not too often: mark a subset of the neural net activations as checkpoint nodes. These checkpoint nodes are kept in memory after the forward pass, while the remaining nodes are recomputed at most once. After recomputation, the non-checkpoint nodes are kept in memory only until they are no longer required. For a simple feed-forward neural net, all neuron activation nodes are graph separators, or articulation points, of the graph defined by the forward pass. This means the nodes between a b node and the last checkpoint preceding it need to be recomputed when computing that b node during backprop. Once backprop has progressed far enough to reach a checkpoint node, all nodes that were recomputed from it can be erased from memory (Graph 3: Checkpointed backpropagation).

The package implements checkpointed backprop by taking the graph for standard/vanilla backprop (Graph 1) and automatically rewriting it using the TensorFlow graph editor. For graphs that contain articulation points (single-node graph separators), checkpoints are automatically selected using the sqrt(n) strategy, giving sqrt(n) memory usage for feed-forward networks. For more general graphs that only contain multi-node graph separators, the checkpointed backprop implementation still works, but the checkpoints currently have to be selected manually by the user.

Summing up, the biggest advantage of gradient checkpointing is that it can save a lot of memory for large neural network models. But the package has some limitations too:

- The provided code does all graph manipulation in Python before running your model, which slows down the process for large graphs.
- The current algorithm for automatically selecting checkpoints is purely heuristic and is expected to fail on some models outside of the class that has been tested. In such cases, manual checkpoint selection is preferable.

To learn more about gradient checkpointing, with further explanation of computation graphs, memory usage, and gradient computation strategies, read Yaroslav Bulatov's Medium post on gradient-checkpointing.
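The memory trade-off described above can be sketched with a toy counting model. This is not the package's actual implementation, just a back-of-the-envelope illustration of why checkpointing every ~sqrt(n)-th layer gives O(sqrt(n)) peak memory instead of O(n):

```python
import math

def vanilla_peak_activations(n):
    # Vanilla backprop: all n forward activations stay in memory
    # until backprop consumes them, so peak storage is linear in n.
    return n

def checkpointed_peak_activations(n):
    # Checkpoint roughly every sqrt(n)-th layer. During backprop we
    # hold the ~sqrt(n) checkpoints plus the recomputed activations
    # of one segment (~sqrt(n) nodes) at a time.
    segment = max(1, round(math.sqrt(n)))
    checkpoints = math.ceil(n / segment)
    return checkpoints + segment

for n in (16, 256, 4096):
    print(n, vanilla_peak_activations(n), checkpointed_peak_activations(n))
```

For a 4096-layer toy network, vanilla backprop must hold 4096 activations at peak, while the checkpointed strategy holds only 128 (64 checkpoints plus one 64-node segment), at the cost of one extra forward pass worth of recomputation.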


Google open sources BERT, an NLP pre-training technique

Prasad Ramesh
05 Nov 2018
2 min read
Google open-sourced Bidirectional Encoder Representations from Transformers (BERT) last Friday for NLP pre-training. Natural language processing (NLP) covers tasks like sentiment analysis, language translation, and question answering. Large NLP datasets containing millions, or billions, of annotated training examples are scarce. Google says that with BERT, you can train your own state-of-the-art question answering system in 30 minutes on a single Cloud TPU, or in a few hours on a single GPU. The source code is built on top of TensorFlow, and a number of pre-trained language representation models are included.

BERT features

BERT improves on recent work in pre-training contextual representations, including semi-supervised sequence learning, generative pre-training, ELMo, and ULMFiT. BERT is different from these models: it is the first deeply bidirectional, unsupervised language representation, pre-trained using only a plain text corpus - Wikipedia. Context-free models like word2vec generate a single word embedding representation for every word. Contextual models, on the other hand, generate a representation of each word based on the other words in the sentence. BERT is deeply bidirectional because it considers both the previous and the next words.

Bidirectionality

It is not possible to train bidirectional models by simply conditioning each word on the words before and after it: doing so would allow the word being predicted to indirectly see itself in a multi-layer model. To solve this, Google researchers used a straightforward technique: mask out some of the words in the input and condition each word bidirectionally in order to predict the masked words. The idea is not new, but BERT is the first technique to use it successfully to pre-train a deep neural network.

Results

On the Stanford Question Answering Dataset (SQuAD) v1.1, BERT achieved a 93.2% F1 score, surpassing the previous state-of-the-art score of 91.6% and the human-level score of 91.2%. BERT also improves the state of the art by 7.6% absolute on the very challenging GLUE benchmark, a set of 9 diverse Natural Language Understanding (NLU) tasks. For more details, visit the Google Blog.

- Intel AI Lab introduces NLP Architect Library
- FAT Conference 2018 Session 3: Fairness in Computer Vision and NLP
- Implement Named Entity Recognition (NER) using OpenNLP and Java
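The masking technique behind BERT's pre-training can be sketched in a few lines of plain Python. This is a deliberate simplification: real BERT masks about 15% of tokens and, for a fraction of those, swaps in a random or unchanged token instead of [MASK], and it operates on WordPiece subwords rather than whole words.

```python
import random

def mask_tokens(tokens, mask_prob=0.15, mask_token="[MASK]", seed=0):
    # Replace a random subset of tokens with [MASK]; the model is then
    # trained to predict the original token at each masked position
    # using both left and right context (bidirectionally).
    rng = random.Random(seed)
    masked, targets = [], []
    for tok in tokens:
        if rng.random() < mask_prob:
            masked.append(mask_token)
            targets.append(tok)    # the model must recover this token
        else:
            masked.append(tok)
            targets.append(None)   # no prediction needed here
    return masked, targets

sentence = "the quick brown fox jumps over the lazy dog".split()
masked, targets = mask_tokens(sentence)
print(masked)
```

Because the prediction target is hidden from the input, the model can safely attend to words on both sides of each position, which is exactly what makes deep bidirectionality trainable.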

Bhagyashree R
01 Mar 2019
2 min read

Mozilla engineer shares the implications of rewriting browser internals in Rust

Yesterday, Diane Hosfelt, a Research Engineer at Mozilla, shared what she and her team learned from rewriting Firefox internals in Rust. Taking Quantum CSS as a case study, she touched upon the security vulnerabilities that could have been prevented if it had been written in Rust from the very beginning.

Why did Mozilla decide to rewrite Firefox internals in Rust?

Quantum CSS is part of Mozilla's Project Quantum, under which it is rewriting Firefox internals to make the browser faster. One of the major parts of this project is Servo, an engine designed to provide better concurrency and parallelism. To achieve these goals, Mozilla decided to write Servo in Rust rather than C++. Rust is similar to C++ in some ways, while differing in the abstractions and data structures it uses. It was created by Mozilla with concurrency safety in mind: its type system and memory safety make programs written in Rust thread-safe.

What types of bugs does Rust prevent?

By default, Rust prevents bugs involving memory safety, out-of-bounds access, null or uninitialized variables, and integer overflow. Hosfelt mentioned in her blog post, "Due to the overlap between memory safety violations and security-related bugs, we can say that Rust code should result in fewer critical CVEs (Common Vulnerabilities and Exposures)." However, there are some types of bugs that Rust does not address, such as correctness bugs. According to Hosfelt, Rust is a good option in the following cases:

When your program must process untrusted input safely
When you want to use parallelism for better performance
When you are integrating isolated components into an existing codebase

You can go through the blog post by Diane Hosfelt on Mozilla's website.

Mozilla shares key takeaways from the Design Tools survey
Mozilla partners with Scroll to understand consumer attitudes for an ad-free experience on the web
Mozilla partners with Ubisoft to Clever-Commit its code, an artificial intelligence assisted assistant
Sugandha Lahoti
31 Oct 2018
3 min read

Google AdaNet, a TensorFlow-based AutoML framework

Google researchers have come up with a new AutoML framework that can automatically learn high-quality models with minimal expert intervention. Google AdaNet is a fast, flexible, and lightweight TensorFlow-based framework for learning a neural network architecture, and for learning to ensemble models to obtain even better ones.

How does Google AdaNet work?

AdaNet automatically searches over neural architectures and learns to combine the best ones into a high-quality model. It implements an adaptive algorithm for learning a neural architecture as an ensemble of subnetworks. It can add subnetworks of different depths and widths to create a diverse ensemble, and it trades performance improvement off against the number of parameters. This saves ML engineers the time spent selecting optimal neural network architectures.

Source: Google

AdaNet: built on TensorFlow

AdaNet implements the TensorFlow Estimator interface, which simplifies machine learning programming by encapsulating training, evaluation, prediction, and export for serving. AdaNet also integrates with open-source tools like TensorFlow Hub modules, TensorFlow Model Analysis, and Google Cloud's Hyperparameter Tuner. TensorBoard integration helps to monitor subnetwork training, ensemble composition, and performance; TensorBoard is one of the best TensorFlow features for visualizing model metrics during training. When AdaNet is done training, it exports a SavedModel that can be deployed with TensorFlow Serving.

How to extend AdaNet to your own projects

Machine learning engineers and enthusiasts can define their own adanet.subnetwork.Builder using high-level TensorFlow APIs like tf.layers. Users who have already integrated a TensorFlow model into their system can use the adanet.Estimator to boost model performance while obtaining learning guarantees.
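The adaptive algorithm sketched above — repeatedly adding whichever candidate subnetwork most improves the ensemble — can be illustrated with a deliberately tiny, framework-free toy. This models only the outer loop of the idea and is not the AdaNet API; the candidate functions, the fixed mixture weight, and the squared-error objective are invented for illustration:

```python
def ensemble_predict(ensemble, x):
    """An ensemble is a list of (weight, subnetwork) pairs; predictions are summed."""
    return sum(w * f(x) for w, f in ensemble)

def loss(ensemble, data):
    """Mean squared error of the ensemble over (x, y) pairs."""
    return sum((ensemble_predict(ensemble, x) - y) ** 2 for x, y in data) / len(data)

def grow_ensemble(candidates, data, rounds=2, weight=0.5):
    """Each round, add the candidate subnetwork that most reduces the loss."""
    ensemble = []
    for _ in range(rounds):
        best = min(candidates, key=lambda f: loss(ensemble + [(weight, f)], data))
        ensemble.append((weight, best))
    return ensemble

# Toy target y = 2x, and three fixed candidate "subnetworks".
data = [(x, 2 * x) for x in range(1, 5)]
candidates = [lambda x: x, lambda x: 2 * x, lambda x: 3 * x]
ensemble = grow_ensemble(candidates, data)
```

The real framework additionally learns the mixture weights and penalizes subnetwork complexity, so the trade-off between performance and parameter count is made explicitly rather than with a fixed weight as in this sketch.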
Users are also invited to use their own custom loss functions via canned or custom tf.contrib.estimator.Heads in order to train regression, classification, and multi-task learning problems. Users can also fully define the search space of candidate subnetworks to explore by extending the adanet.subnetwork.Generator class.

Experiments: NASNet-A versus AdaNet

Google researchers took an open-source implementation of a NASNet-A CIFAR architecture and transformed it into a subnetwork. After eight AdaNet iterations, they were able to improve upon the published CIFAR-10 results, and the model achieves this with fewer parameters.

Performance of a NASNet-A model versus AdaNet learning to combine small NASNet-A subnetworks on CIFAR-10 (Source: Google)

You can check out the GitHub repo and walk through the tutorial notebooks for more details. You can also have a look at the research paper.

Top AutoML libraries for building your ML pipelines
Anatomy of an automated machine learning algorithm (AutoML)
AmoebaNets: Google's new evolutionary AutoML

Natasha Mathur
31 May 2018
4 min read

Oracle Apex 18.1 is here!

Oracle announced the much-awaited Oracle APEX 18.1 today. Oracle Application Express (APEX) is a free development tool from Oracle that allows developers to quickly create web-based applications on an Oracle database using only a web browser. With Oracle APEX 18.1, developers can easily combine data from REST services with data from SQL queries within an Oracle database to build scalable applications. The new release also includes high-quality features for creating applications without the need for coding. Let's have a look at some of the major features and improvements in Oracle APEX 18.1.

Key features and updates

Application features
High-level application features such as access control, email reporting, feedback, activity reporting, and dynamic user interface selection can be added to your app. An application can also be created with a "cards" report interface, a timeline report, or a dashboard.

REST Enabled SQL support
APEX 18.1 allows you to build charts, calendars, reports, and trees, and to invoke processes against Oracle REST Data Services (ORDS)-provided REST Enabled SQL services. There is no need for a database link to include data from remote database objects within your APEX application; REST Enabled SQL handles it for you.

Web Source Modules
Different REST endpoints can be used to declaratively access data, such as ordinary REST data feeds, REST services from Oracle REST Data Services, and Oracle Cloud Applications REST services. The feature also provides the ability to shape REST data source results using industry-standard SQL.

REST Workshop
The REST workshop has been updated. Apart from helping with creating REST services against Oracle database objects, the new REST workshop adds the ability to generate Swagger documentation against REST definitions with a single button click.

Application Builder improvements
Oracle APEX 18.1 allows developers to create components quickly, as wizards are now streamlined with fewer steps and smarter defaults. Usability enhancements have been made to Page Designer, including an advanced color palette, graphics on page elements, and Sticky Filter, all of which improve developer productivity.

Social authentication
Oracle APEX 18.1 comes with a native authentication scheme for social sign-in. Developers can create applications in APEX using authentication methods such as Oracle Identity Cloud Service, Facebook, Google, generic OAuth2, and generic OpenID Connect, all without coding.

Charts
The Oracle JET 4.2 engine is new in APEX 18.1. It brings updated charts and APIs, and adds chart types such as Box Plot, Gantt, and Pyramid, with support for multi-series sparse data sets.

Mobile UI
New component types, namely ListView, Reflow Report, and Column Toggle, have been introduced for creating mobile applications. Mobile-focused improvements have been made to the APEX Universal Theme, so page headers and footers display consistently on mobile devices, and floating item label templates optimize the information presented on a mobile screen. Oracle APEX 18.1 also offers declarative support for touch-based dynamic actions such as tap, double tap, press, swipe, and pan.

Font APEX
The new release includes a set of 32 x 32 high-resolution icons that automatically selects the right icon size.

Accessibility
Accessibility mode is deprecated; the latest release instead relies on the APEX Advisor, which includes a set of tests to identify the most common accessibility issues.

These are the major updates and improvements in Oracle APEX 18.1. Existing Oracle APEX customers just need to install the APEX 18.1 version to get all the latest upgrades. To know more about Oracle APEX 18.1, be sure to check out the official Oracle APEX blog.

Xamarin Forms 3, the popular cross-platform UI Toolkit, is here!
Firefox 60 arrives with exciting updates for web developers: Quantum CSS engine, new Web APIs and more
Will Oracle become a key cloud player, and what will it mean to development & architecture community?

Bhagyashree R
06 Sep 2018
2 min read

OpenSky is now a part of the Alibaba family

Yesterday, Chris Keane, the General Manager of OpenSky, announced that OpenSky has been acquired by the Alibaba Group. OpenSky is a network of businesses that empowers modern global trade for SMBs and helps people discover, buy, and share unique goods that match their individual taste.

OpenSky will join Alibaba Group in two capacities: one of OpenSky's teams will become part of Alibaba.com's North America B2B business to serve US-based buyers and suppliers, while the other will become a wholly-owned subsidiary of Alibaba Group consisting of OpenSky's marketplace and SaaS businesses.

In 2015, Alibaba Group acquired a minority stake in OpenSky. In 2017, OpenSky collaborated with Alibaba's B2B leadership team to solve the challenges faced by small businesses. According to Chris, both companies share a common interest, which is to help small businesses: "It was thrilling to discover that our counterparts at Alibaba share our obsession with helping SMBs. We've quickly aligned on a global vision to provide access to markets and resources for businesses and entrepreneurs, opening new doors and knocking down obstacles."

In the announcement, Chris also mentioned that they will be introducing powerful new concepts to serve small businesses everywhere in the near future. To know more, read the official announcement on LinkedIn.

Alibaba Cloud partners with SAP to provide a versatile, one-stop cloud computing environment
Digitizing the offline: How Alibaba's FashionAI can revive the waning retail industry
Why Alibaba cloud could be the dark horse in the public cloud race
Prasad Ramesh
28 Aug 2018
2 min read

AMD ROCm GPUs now support TensorFlow v1.8, a major milestone for AMD’s deep learning plans

AMD has announced support for TensorFlow v1.8 on its ROCm-enabled GPUs, including the Radeon Instinct MI25. ROCm stands for Radeon Open Compute; it is an open-source, programming-language-independent Hyperscale-class (HPC) platform for GPUs. This is a major milestone in AMD's efforts towards accelerating deep learning.

ROCm, the Radeon Open Ecosystem, is AMD's open-source software foundation for GPU computing on Linux. Mayank Daga, Director, Deep Learning Software, AMD stated: "Our TensorFlow implementation leverages MIOpen, a library of highly optimized GPU routines for deep learning."

A pre-built whl package is available for a simple install, similar to installing generic TensorFlow on Linux, and a pre-built Docker image is provided for fast installation.

In addition to supporting TensorFlow v1.8, AMD is working towards upstreaming all the ROCm-specific enhancements to the TensorFlow master repository. While that work is underway, AMD will release and maintain future ROCm-enabled TensorFlow versions, such as v1.10. In the post, Daga stated, "We believe the future of deep learning optimization, portability, and scalability has its roots in domain-specific compilers. We are motivated by the early results of XLA, and are also working towards enabling and optimizing XLA for AMD GPUs."

Current CPUs which support PCIe Gen3 + PCIe Atomics are:

AMD Ryzen CPUs
AMD EPYC CPUs
Intel Xeon E7 v3 or newer CPUs
Intel Xeon E5 v3 or newer CPUs
Intel Xeon E3 v3 or newer CPUs
Intel Core i7 v4, Core i5 v4, Core i3 v4 or newer CPUs (i.e. Haswell family or newer)

The installation is simple. First, you'll need the open-source ROCm stack.
Then the ROCm libraries are installed via APT:

sudo apt update
sudo apt install rocm-libs miopen-hip cxlactivitylogger

Finally, you install TensorFlow itself via AMD's pre-built whl package:

sudo apt install wget python3-pip
wget http://repo.radeon.com/rocm/misc/tensorflow/tensorflow-1.8.0-cp35-cp35m-manylinux1_x86_64.whl
pip3 install ./tensorflow-1.8.0-cp35-cp35m-manylinux1_x86_64.whl

For more details on how to get started, visit the GitHub repository. There are also examples of image recognition, audio recognition, and multi-GPU training on ImageNet on the GPUOpen website.

Nvidia unveils a new Turing architecture: "The world's first ray tracing GPU"
AMD open sources V-EZ, the Vulkan wrapper library
Sugar operating system: A new OS to enhance GPU acceleration security in web apps

Sugandha Lahoti
23 Aug 2019
4 min read

Turbo: Google’s new color palette for data visualization addresses shortcomings of the common rainbow palette, 'Jet'

Google has released a new color palette, named Turbo, to address some of the shortcomings of the popular rainbow palette Jet. These shortcomings include false detail, banding, and color-blindness ambiguity. According to the blog post, Turbo provides better depth perception in data visualizations. Google's aim with Turbo is to provide a color map that is uniform and color-blind-accessible, but also practical for day-to-day tasks where the requirements are not as stringent. The blog post specifies that Turbo is meant to be used in cases where perceptual uniformity is not critical, but one still wants a high-contrast, smooth visualization of the underlying data.

Google researchers created a simple interface to interactively adjust the sRGB curves using a 7-knot cubic spline while comparing the result on a selection of sample images as well as other well-known color maps. "This approach," the blog post reads, "provides control while keeping the curve C2 continuous. The resulting color map is not 'perceptually linear' in the quantitative sense, but it is more smooth than Jet, without introducing false detail."

Comparison of Turbo with other color maps

Viridis and Inferno are two linear color maps that fix most issues of Jet and are generally recommended when false color is needed. However, some feel that they can be harsh on the eyes, which hampers visibility when used for extended periods. Turbo, on the other hand, mimics the lightness profile of Jet, going from low to high and back down to low, without banding. Turbo's lightness slope is generally double that of Viridis, allowing subtle changes to be seen more easily. "This is a valuable feature," the researchers note, "since it greatly enhances detail when color can be used to disambiguate the low and high ends."

[Figure: Lightness plots generated by converting the sRGB values to CIECAM02-UCS and displaying the lightness value (J) in greyscale. The black line traces the lightness value from the low end of the color map (left) to the high end (right). Source: Google blog]

The lightness plots show the Viridis and Inferno curves to be linear and Jet's to be erratic and peaky, while Turbo has an asymmetric profile similar to Jet's, with the lows darker than the highs. Although the low-high-low curve increases detail, it comes at the cost of lightness ambiguity, which makes Turbo inappropriate for grayscale printing and for people with achromatopsia, the rare condition of total color blindness. For semantic layers, compared to Jet, Turbo is much smoother and has no "false layers" due to banding. Google argues that because our attention system prioritizes hue, it is easier to judge differences in color than differences in lightness. Turbo can be used as a diverging colormap as well. The researchers tested Turbo with a color blindness simulator and found that, for all conditions except achromatopsia, the map remains distinguishable and smooth.

NASA data viz lead argues Turbo comes with flaws

Joshua Stevens, data visualization and cartography lead at NASA, has posted a detailed Twitter thread pointing out certain flaws in Google's Turbo color map. He points out that "Color palettes should change linearly in lightness. However, Turbo admittedly does not do this. While it avoids the 'peaks' and banding of Jet, Turbo's luminance curve is still humped. Moreover, the slopes on either side are not equal, the curve is still irregular, and it starts out darker than it finishes."

He also contradicts Google's claim that "our attention system prioritizes hue": the paper that Google links to specifies that experimental results showed brightness and saturation levels to be more important than the hue component in attracting attention. He clarifies further, "This is not to say that Turbo is not an improvement over Jet. It is! But there is too much known about visual perception to reimagine another rainbow. The effort is stellar, but IMO Turbo is a crutch that further slows adoption of more sensible palettes."

Google has made the color map data and usage instructions for Python and C/C++ available. There is also a polynomial approximation, for cases where a look-up table may not be desirable.

DeOldify: Colorising and restoring B&W images and videos using a NoGAN approach
Implementing color and shape-based object detection and tracking with OpenCV and CUDA [Tutorial]
Matplotlib 3.0 is here with new cyclic colormaps, and convenience methods
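Google distributes Turbo as a 256-entry RGB lookup table (plus a polynomial approximation), and applying any such table to scalar data follows the same indexing-and-interpolation pattern. The sketch below uses a made-up 3-entry palette rather than the real Turbo data, purely to show the pattern:

```python
# A tiny stand-in palette (three RGB triples); the real Turbo table
# published by Google has 256 such entries.
PALETTE = [(0.19, 0.07, 0.23), (0.16, 0.73, 0.56), (0.48, 0.01, 0.01)]

def colorize(value, palette=PALETTE):
    """Map a value in [0, 1] to an RGB triple by linear interpolation
    between neighbouring palette entries."""
    v = min(max(value, 0.0), 1.0)          # clamp out-of-range inputs
    pos = v * (len(palette) - 1)           # fractional index into the table
    lo = int(pos)
    hi = min(lo + 1, len(palette) - 1)
    t = pos - lo
    return tuple((1 - t) * a + t * b for a, b in zip(palette[lo], palette[hi]))
```

With Google's published table, `PALETTE` would simply be replaced by the 256 Turbo triples; the clamping and interpolation logic stays the same.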