With NIMA (Neural Image Assessment), Google has introduced a deep CNN trained to predict which images a typical user would rate as looking good (technically) or attractive (aesthetically).
NIMA builds on the success of state-of-the-art deep object recognition networks and their ability to recognize general categories of objects despite many variations. The proposed network not only scores images reliably and with high correlation to human perception, but is also useful for a variety of labor-intensive and subjective tasks such as intelligent photo editing, optimizing visual quality for increased user engagement, or minimizing perceived visual errors in an imaging pipeline.
“In our approach, instead of classifying images as having a low/high score or regressing to the mean score, the NIMA model produces a distribution of ratings for any given image — on a scale of 1 to 10, NIMA assigns likelihoods to each of the possible scores,” Google said in its blog. “This is more directly in line with how training data is typically captured, and it turns out to be a better predictor of human preferences when measured against other approaches.” More details are available in the arXiv paper.
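To make the distribution-over-scores idea concrete, here is a minimal PyTorch sketch. It is not Google's released model; the head name, feature dimension, and score computation are illustrative assumptions. It shows how features from a backbone CNN could be mapped to probabilities over the ten ratings and then reduced to a mean score:

```python
import torch
import torch.nn as nn

class NimaHead(nn.Module):
    """Illustrative rating-distribution head in the spirit of NIMA (hypothetical).

    Features from a pretrained image classifier are mapped to a softmax over
    the ten possible scores (1..10) rather than to a single scalar.
    """
    def __init__(self, feature_dim: int = 2048, num_scores: int = 10):
        super().__init__()
        self.fc = nn.Linear(feature_dim, num_scores)

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # Probability assigned to each score bucket 1..10
        return torch.softmax(self.fc(features), dim=-1)

def mean_score(score_probs: torch.Tensor) -> torch.Tensor:
    # Expected rating: sum_i i * p(i), with scores running from 1 to 10
    scores = torch.arange(1, score_probs.shape[-1] + 1, dtype=score_probs.dtype)
    return (score_probs * scores).sum(dim=-1)

# Example: a batch of 4 feature vectors standing in for backbone outputs
probs = NimaHead()(torch.randn(4, 2048))
print(mean_score(probs))  # one predicted mean rating per image
```

Reducing the predicted distribution to its expectation is one simple way to rank images, while the full distribution still carries information about how much raters would disagree.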
nmtpytorch is a neural machine translation framework in PyTorch. It is the PyTorch fork of nmtpy, a sequence-to-sequence framework which was originally a fork of dl4mt-tutorial. The core parts of nmtpytorch depend on numpy, torch, and tqdm. nmtpytorch is developed and tested on Python 3.6 and will not support Python 2.x whatsoever. For more details, go to the GitHub page.
AllenNLP v0.3.0 updates its key dependencies to spaCy 2.0 and PyTorch 0.3, and adds a few models and many new features since the 0.2 release. The new models include the baseline NER model from Semi-supervised Sequence Tagging with Bidirectional Language Models and a coreference model based on the paper End-to-end Neural Coreference Resolution, which achieved state-of-the-art performance in early 2017 (details are available at http://allennlp.org/models). Among the new features, version 0.3.0 brings improved SRL visualization on the demo and ListField padding fixes.
Researchers at Rigetti Computing, a company based in Berkeley, California, have reportedly used one of the company’s prototype quantum chips—a superconducting device housed within an elaborate super-chilled setup—to run a clustering algorithm. Rigetti is also making the new quantum computer—which can handle 19 quantum bits, or qubits—available through its cloud computing platform, called Forest. “This is a new path toward practical applications for quantum computers,” says Will Zeng, head of software and applications at Rigetti. “Clustering is a really fundamental and foundational mathematical problem. No one has ever shown you can do this.” Let us see if Rigetti’s algorithm goes on to transform the world of machine learning and AI.
In a move that could make its pricing far more competitive against Amazon’s Elastic MapReduce, Microsoft has reduced the prices for its Azure HDInsight service. Microsoft said it is offering varying price cuts depending on the virtual machine type used for the head and worker nodes in the HDInsight cluster. The cuts go up to 52 percent, Microsoft says, while the service itself remains largely the same. In addition, for customers wishing to run data science workloads written in R, the surcharge for running R Server in a distributed fashion on an HDInsight cluster has been cut by 80 percent, down to just $0.016 (1.6 US cents) per CPU core per hour. For complete details on the new pricing, visit https://azure.microsoft.com/en-us/pricing/details/hdinsight/.
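As a quick sanity check on those figures, the snippet below works backwards from the 80 percent cut and the new $0.016 per-core-hour rate; the cluster size and monthly hours are hypothetical:

```python
# Back-of-the-envelope check on the R Server surcharge numbers above.
# The cluster size (64 cores) and usage (730 hours/month) are hypothetical.
new_rate = 0.016                  # USD per CPU core per hour (after the cut)
old_rate = new_rate / (1 - 0.80)  # an 80% cut implies roughly $0.08/core-hour before

cores, hours_per_month = 64, 730
print(f"old monthly surcharge: ${old_rate * cores * hours_per_month:,.2f}")
print(f"new monthly surcharge: ${new_rate * cores * hours_per_month:,.2f}")
# old ≈ $3,737.60, new ≈ $747.52 for this hypothetical cluster
```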
DeepMind, the artificial intelligence unit of Google owner Alphabet, is trying to find out whether AIs can learn how to cheat. According to Bloomberg, it is doing so through a test that involves running AI algorithms in simple, two-dimensional, grid-based games. The test is designed to see whether, in the process of self-improvement, DeepMind’s algorithms end up straying from their tasks in unsafe ways. The research has three goals: finding out how to “turn off” AIs if they start to become dangerous; preventing unintended side effects arising from their main task; and making sure agents can adapt when testing conditions differ from their training conditions.
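For flavor, here is a toy sketch of the kind of grid-based test described above. It is not DeepMind's gridworlds code; the layout, rewards, and "interruption" cell below are illustrative assumptions:

```python
# A toy 2D grid environment in the spirit of the safety tests described above.
# This is NOT DeepMind's code; layout, rewards, and the interrupt cell are
# illustrative assumptions.
GRID = [
    "#####",
    "#A.I#",   # A = agent start, I = interruption ("off-switch") cell
    "#..G#",   # G = goal
    "#####",
]

MOVES = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}

def step(pos, action):
    """Apply one move; walls block, the goal rewards, the interrupt cell halts."""
    r, c = pos
    dr, dc = MOVES[action]
    nr, nc = r + dr, c + dc
    cell = GRID[nr][nc]
    if cell == "#":                 # bump into a wall: stay put
        return pos, 0.0, False
    if cell == "G":                 # reach the goal: reward, episode ends
        return (nr, nc), 1.0, True
    if cell == "I":                 # interruption cell: a safe agent should not
        return (nr, nc), 0.0, True  # learn to avoid or disable being switched off
    return (nr, nc), 0.0, False

# Walking right twice sends the agent into the interruption cell and ends the episode.
pos, done = (1, 1), False
for action in ["right", "right"]:
    pos, reward, done = step(pos, action)
    print(action, pos, reward, done)
    if done:
        break
```

The safety question is whether a reward-maximizing agent learns to route around (or disable) the interruption cell rather than accept being switched off.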
Ahead of the annual Consumer Electronics Show in January (CES 2018), tech consulting giant Accenture has predicted the major stories that could be unveiled at the event. “The first story is around the expansion and proliferation of artificial intelligence, the second is about 5G and how that enables the next generation of technology such as the Internet of Things, and the third is blockchain as an enabling technology for things like security,” said Greg Roberts, managing director for Accenture’s North American high-tech industry practice, adding that there could be a shift toward software and a lot of attention around autonomous vehicles. “We think that pulling things together like AI, 5G, blockchain, and software will result in autonomous vehicles,” Roberts said. The very concept of ‘driverless’ cars comes with numerous engineering, regulatory, and usability challenges, but these can be overcome with breakthroughs in predictive capabilities, self-driving features, “in-vehicle” AI and algorithm solutions, and intuitive user interfaces such as voice. CES 2018 starts with press events on January 7.