
Tech News - Data

1208 Articles

20th Dec.' 17 - Headlines

Packt Editorial Staff
20 Dec 2017
5 min read
Kibana's new version, Twitter's enterprise API, Google's speech generation tool Tacotron 2, and a new model selection system, Auto-Tuned Models (ATM), are among today's top stories in artificial intelligence and data science news.

Kibana 6.1.1 released

Kibana removes the math aggregation feature in v6.1.1

Open-source data visualization tool Kibana has released version 6.1.1 with a fix for a high-severity security vulnerability in the Time Series Visual Builder. According to the official release announcement, all administrators of Kibana 6.1.0 are asked to upgrade immediately to 6.1.1; versions prior to 6.1.0 are not affected. "If you had any Kibana 6.1.0 instances on Elastic Cloud, we've automatically upgraded them, so no further action is required," Tech Lead Court Ewing said in the release announcement. "For folks that cannot upgrade from 6.1.0 at this time, you can disable time series visual builder entirely by specifying metrics.enabled: false in kibana.yml and restarting Kibana. Note, this will require a full 'optimize' run, which can take a few minutes." To recap, Kibana 6.1.0 had introduced a "math aggregations" feature in the Time Series Visual Builder, which allowed users to apply mathematical operations to their TSVB results. It was later found that the feature had a vulnerability that could allow an attacker to execute arbitrary code on the Kibana server. Given the severity of the issue, Kibana decided to remove the feature. "We do want to have this sort of math capability in Kibana at some point, but we need to take a more holistic view on its security before releasing it again," the release added. For complete details about all other bug fixes in this release, refer to the release notes.

Twitter promised to streamline relations with the developer community, and it has made a start!

Twitter launches a new enterprise API to power customer service and chatbots

Earlier this year, Twitter unveiled its long-term vision to revamp and streamline its API platform, leveraging its investment in Gnip, which it acquired in 2014. As part of that broader plan, Twitter has announced a new enterprise-level API to provide access to real-time activities like tweets, retweets, likes, and follows. Specifically, the API is designed to help developers build apps that can power customer service, chatbots, and brand engagement on Twitter. Alongside this launch, Twitter is bringing a suite of developer tools for Direct Messages out of beta. These features include Quick Replies, Welcome Messages, Buttons on messages, Custom Profiles, and Customer Feedback Cards. Brands like Samsung, MTV, TBS, Wendy's, and Patrón have used these tools with their chatbots, while Tesco and Evernote have been using the previously announced tools for customer service.

Generating human-like speech from text

Tacotron 2: Google's next speech generation tool that combines the best of WaveNet and Tacotron

Google has unveiled a new method for training a neural network to produce realistic speech from text that requires almost no grammatical expertise. Named Tacotron 2, the new technique combines the best of Google's previous speech generation projects, WaveNet and Tacotron. It uses text and the corresponding narration to learn the linguistic rules that would otherwise have to be specified to the system: the text is converted into the original Tacotron-style "mel-scale spectrogram" for rhythm and emphasis, and the words are then generated using a WaveNet-style system. Google researchers have submitted the project for consideration at the IEEE International Conference on Acoustics, Speech and Signal Processing. The full paper is available at arXiv.

Auto-tuning data science

Auto-Tuned Models (ATM): Researchers propose automated machine learning technique for model selection in data science

A new paper, "ATM: A distributed, collaborative, scalable system for automated machine learning," was presented recently at the IEEE International Conference on Big Data, where researchers from MIT and Michigan State University proposed a new system that automates the model selection step in data science. The system, called Auto-Tuned Models (ATM), takes advantage of cloud-based computing to perform a high-throughput search over modeling options and find the best possible modeling technique for a particular problem. ATM tests thousands of models in parallel, evaluates each, and allocates more computational resources to those techniques that show promise. Poor solutions wither out in the process, while the best options rise to the top. Rather than blindly choosing the "best" model and handing it to the user, ATM displays results as a distribution, allowing different methods to be compared side by side. The researchers have open sourced ATM, including provisions to add new model selection techniques and improve on the platform. ATM can run on a single machine, local computing clusters, or on-demand clusters in the cloud, and can work with multiple datasets and multiple users simultaneously.

Soon, 'smart' robots will taste and smell!

Aromyx, Rewired team up to inject the sense of taste and smell into robotics

Rewired and Aromyx have entered a partnership to digitize and explore novel applications of taste and smell for smart robots. Rewired is a venture studio for robotics development, while Aromyx is the maker of EssenceChip, a digital platform for measuring taste and scent. EssenceChip is a disposable biosensor that places human taste and olfactory receptors into a biochip. The technology is targeted at the food and beverage, flavor and fragrance, consumer packaged goods, chemical, and agricultural industries. Rewired invests in machine perception technologies that will "unlock the next generation of smart robotics." The studio focuses on the sensors, software, and systems that help autonomous machines interact with unpredictable environments and collaborate with humans. "The next generation of machines must be able to gather diverse data about their surroundings and holistically interpret that data in order to model the world and productively interact with it," said Andy Hickl, Venture Partner at Rewired. "Applying Aromyx's technology to robotics will help us detect and capture new modalities of data, that will inform decision making in autonomous machines and inspire the development of new learning models."

AI learns to talk naturally with Google’s Tacotron 2

Sugandha Lahoti
20 Dec 2017
3 min read
Google has been one of the leading forces in the area of text-to-speech (TTS) conversion. The company has leaped further ahead in this domain with the launch of Tacotron 2. The new technique is a combination of Google's WaveNet and the original Tacotron, Google's previous speech generation projects.

WaveNet is a generative model of time-domain waveforms. It produces natural-sounding audio and is already used in some complete TTS systems. However, the inputs to WaveNet need significant domain expertise to produce, as they require elaborate text-analysis systems and a detailed pronunciation guide. Tacotron is a sequence-to-sequence architecture for producing magnitude spectrograms from a sequence of characters, i.e. it synthesizes speech directly from words. It uses a single neural network trained from data alone to produce the linguistic and acoustic features. Tacotron relies on the Griffin-Lim algorithm for phase estimation, which produces characteristic artifacts and lower audio fidelity than approaches like WaveNet. So although Tacotron captured patterns of rhythm and sound well, it wasn't really suited to producing a final speech product.

Tacotron 2 is a conjunction of the two approaches. It features a Tacotron-style recurrent sequence-to-sequence feature prediction network that generates mel spectrograms, followed by a modified version of WaveNet that generates time-domain waveform samples conditioned on the generated mel spectrogram frames. (Architecture diagram source: https://arxiv.org/pdf/1712.05884.pdf)

In contrast to Tacotron, Tacotron 2 uses simpler building blocks, with vanilla LSTM and convolutional layers in the encoder and decoder. Also, each decoder step corresponds to a single spectrogram frame. The original WaveNet used linguistic features, phoneme durations, and log F0 at a frame rate of 5 ms; however, these lead to significant pronunciation issues when predicting spectrogram frames spaced this closely. Hence, the WaveNet architecture used in Tacotron 2 works with 12.5 ms feature spacing by using only 2 upsampling layers in the transposed convolutional network.

Here's how it works: Tacotron 2 uses a sequence-to-sequence model optimized for TTS to map a sequence of letters to a sequence of features that encode the audio. This sequence of features includes an 80-dimensional audio spectrogram with frames computed every 12.5 milliseconds, capturing word pronunciations and various other qualities of human speech such as volume, speed, and pitch. Finally, these features are converted to a 24 kHz waveform using a WaveNet-like architecture.

The Tacotron 2 system can be trained directly from data without relying on complex feature engineering, and it achieves state-of-the-art sound quality close to that of natural human speech. The model achieves a mean opinion score (MOS) of 4.53, comparable to a MOS of 4.58 for professionally recorded speech. Google has also provided some Tacotron 2 audio samples that demonstrate the results of its TTS system. In the future, Google plans to improve the system so that it can pronounce complex words, generate audio in real time, and direct generated speech to sound happy or sad. The entire paper is available on arXiv.
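As a rough back-of-the-envelope check on the figures quoted above, the sketch below works out the frame and sample bookkeeping for a hypothetical one-second utterance (the utterance length is an assumption for illustration, not a number from the paper):

    # Back-of-the-envelope bookkeeping for the figures quoted above.
    # The one-second utterance length is a hypothetical example, not from the paper.
    frame_spacing_s = 0.0125   # 12.5 ms between mel spectrogram frames
    mel_bins = 80              # 80-dimensional mel spectrogram per frame
    sample_rate_hz = 24_000    # WaveNet-style vocoder output rate

    utterance_s = 1.0
    n_frames = int(utterance_s / frame_spacing_s)   # 80 frames per second of speech
    n_samples = int(utterance_s * sample_rate_hz)   # 24,000 waveform samples
    samples_per_frame = n_samples // n_frames       # 300 samples conditioned on each frame

    print(f"{n_frames} mel frames of {mel_bins} bins -> {n_samples} samples "
          f"({samples_per_frame} samples per frame)")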

Rigetti develops a new quantum algorithm to supercharge unsupervised Machine Learning

Sugandha Lahoti
19 Dec 2017
2 min read
Rigetti Computing, a startup based in Fremont, California, is on a mission to build the world's most powerful computer. It is a full-stack quantum computing company that uses quantum mechanics to solve machine learning and artificial intelligence problems. To augment classical computing resources and traditional learning techniques, Rigetti has hybridized these practices with quantum processing capabilities: the company has trained a 19-qubit gate-model processor to solve a clustering problem. Clustering is a machine learning technique used to organize data into similar groups and is a foundational challenge in unsupervised learning.

The 19Q quantum computer, available through Rigetti's cloud computing platform Forest, uses a quantum approximate optimization algorithm. The Forest platform is used for controlling the quantum computer and accessing the data it generates. The algorithm is combined with gradient-free Bayesian optimization to train the quantum machine. This hybridization relies on Bayesian optimization of classical parameters within the quantum circuit, and it reaches an optimal solution in fewer steps than would be expected by drawing cluster assignments uniformly at random. The runtime for 55 Bayesian optimization steps with N = 2500 measurements per step is approximately 10 minutes, and Rigetti's algorithm was generally able to reach the optimum in fewer than 55 steps (only about 25% of runs did not reach the optimum within 55 steps). The algorithm can also be applied to other combinatorial optimization problems such as image recognition and machine scheduling.

Rigetti's demonstration uses the largest number of qubits of any algorithm run on a gate-based quantum processor to date, and the algorithm showed robustness to realistic noise. The entire algorithm is implemented in Python, leveraging the pyQuil library for describing parameterized quantum circuits in the quantum instruction language Quil. The Bayesian optimizer is provided by the open source package BayesianOptimization, also written in Python.

The above demonstration is just a basic example of how quantum computers can help solve machine learning problems. Hybrid approaches like this one form the basis of valuable applications for the first quantum computers. However, beating the best classical benchmarks will require more qubits and better performance. Apart from developing new algorithms for quantum computing in machine learning, Rigetti Computing builds hardware and software to store and process quantum information. You can learn more about their research on their blog.
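The work pairs a parameterized quantum circuit with the open source BayesianOptimization package; the sketch below shows only the shape of that hybrid loop, with a cheap noisy classical function standing in for the 19-qubit circuit evaluation. None of Rigetti's actual pyQuil circuit code is implied, and the parameter bounds, step counts, and objective are illustrative assumptions.

    # Sketch of the hybrid quantum-classical loop described above: a Bayesian
    # optimizer tunes circuit parameters against a measured objective.
    # The objective here is a noisy classical stand-in, NOT Rigetti's actual
    # 19-qubit clustering circuit (which runs on Forest via pyQuil).
    import math
    import random

    from bayes_opt import BayesianOptimization  # pip install bayesian-optimization

    def objective(theta0, theta1):
        # Pretend "measurement": in the real setup this would average N shots of a
        # parameterized quantum circuit; bayes_opt maximizes, so a cost function
        # would be passed in negated.
        noise = random.gauss(0.0, 0.02)
        return math.cos(theta0) * math.cos(theta1) + noise

    optimizer = BayesianOptimization(
        f=objective,
        pbounds={"theta0": (0.0, 2 * math.pi), "theta1": (0.0, 2 * math.pi)},
        random_state=42,
    )
    # The article reports roughly 55 Bayesian optimization steps; mirrored loosely here.
    optimizer.maximize(init_points=5, n_iter=50)
    print(optimizer.max)

In the real system, the objective would compile and run a Quil program on the 19Q chip and return an averaged measurement statistic for each parameter setting.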

Google introduces NIMA: A Neural Image Assessment model

Savia Lobo
19 Dec 2017
4 min read
Google recently introduced NIMA (Neural Image Assessment), a deep convolutional neural network trained to predict which images a user would consider technically or aesthetically attractive. Like object recognition networks, it is able to generalize across object categories despite many variations. It can be used to score images reliably with high correlation to human perception, and also in other labor-intensive and subjective tasks such as intelligent photo editing, optimizing visual quality for improved user engagement, or minimizing perceived visual errors within an imaging pipeline.

Assessment of image quality and aesthetics has been a persistent problem in image processing and computer vision. Image quality assessment deals with measuring pixel-level degradations such as noise, blur, and compression artifacts, whereas aesthetic assessment captures semantic-level characteristics associated with emotion and beauty in images. In recent times, deep CNNs trained on human-labeled data have been used to assess the subjective quality of some specific classes of images, for instance landscapes. But such an approach is limited, as it categorizes images into only two classes, high and low. NIMA, on the contrary, predicts the distribution of ratings. This leads to higher-quality predictions with a higher correlation to the ground-truth ratings, and it can be applied to images in general rather than only to specific categories.

Let's explore some applications of the NIMA model.

Distribution of ratings: Instead of classifying images with a low/high score or regressing to the mean score, the NIMA model produces a distribution of ratings for any given image, on a scale of 1 to 10, with 10 being the highest aesthetic score. It assigns likelihoods to each of the possible scores, which is more directly in line with how training data is typically captured. As a result, it turns out to be a better predictor of human preferences when measured against other approaches.

Ranking photos aesthetically: Various functions of the NIMA score vector, such as the mean, can be used to rank photos aesthetically. Test photos from the large-scale Aesthetic Visual Analysis (AVA) dataset were used, where each AVA photo is scored by an average of 200 people in response to photography contests. After training on the aesthetic ranking of these photos, the NIMA model closely matched the mean scores given by human raters, so NIMA is likely to perform equally well on other datasets, with predicted quality scores close to human ratings.

NIMA scoring for detecting the quality of an image: NIMA scores can also be used to differentiate between the quality of images that have the same subject but have been distorted in different ways. For instance, the predicted mean scores can be used to qualitatively rank photos from the TID2013 test set, which contains various types and levels of distortions. (Source: https://arxiv.org/pdf/1709.05424.pdf)

Perceptual image enhancement: Quality and aesthetic scores can be used to perceptually tune image enhancement operators. In other words, maximizing the NIMA score as part of a loss function can increase the likelihood of enhancing the perceptual quality of an image (its quality as interpreted through human senses). NIMA can be used as a training loss to tune a tone enhancement algorithm: baseline aesthetic ratings can be improved by contrast adjustments directed by the NIMA score, and the NIMA model is able to guide a deep CNN filter to find aesthetically near-optimal settings of its parameters, such as brightness, highlights, and shadows.

To summarize, with NIMA Google suggests that quality assessment models based on machine learning may be capable of a wide range of useful functions, for instance improved image capture or the ability to sort the best pictures out of many. For a deeper understanding of the workings of NIMA, you can go through the research paper.
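To make the "distribution of ratings" idea concrete, here is a minimal sketch of reducing a predicted 10-bin distribution to the mean (and spread) used for ranking; the probabilities are invented illustrative values, not actual NIMA outputs:

    # Minimal illustration of scoring from a predicted rating distribution.
    # The probabilities below are invented for illustration; a real NIMA model
    # would produce them from its final softmax layer.
    import numpy as np

    scores = np.arange(1, 11)                     # possible ratings 1..10
    p = np.array([0.01, 0.02, 0.04, 0.08, 0.15,
                  0.25, 0.20, 0.13, 0.08, 0.04])  # predicted likelihoods
    p = p / p.sum()                               # normalize defensively

    mean_score = float((scores * p).sum())        # the value used to rank photos
    std_score = float(np.sqrt(((scores - mean_score) ** 2 * p).sum()))
    print(f"mean aesthetic score: {mean_score:.2f} +/- {std_score:.2f}")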

19th Dec.' 17 - Headlines

Packt Editorial Staff
19 Dec 2017
5 min read
NIMA, nmtpytorch, and AllenNLP v0.3.0 are among today's top stories in machine learning, artificial intelligence, and data science news.

Now a CNN to predict if an image is aesthetic!

Introducing NIMA: Neural Image Assessment

With NIMA, or Neural Image Assessment, Google is introducing a deep CNN that is trained to predict which images a typical user would rate as looking good (technically) or attractive (aesthetically). NIMA relies on the success of state-of-the-art deep object recognition networks, building on their ability to understand general categories of objects despite many variations. The proposed network can be used not only to score images reliably and with high correlation to human perception, but it is also useful for a variety of labor-intensive and subjective tasks such as intelligent photo editing, optimizing visual quality for increased user engagement, or minimizing perceived visual errors in an imaging pipeline. "In our approach, instead of classifying images to a low/high score or regressing to the mean score, the NIMA model produces a distribution of ratings for any given image — on a scale of 1 to 10, NIMA assigns likelihoods to each of the possible scores," Google said in its blog. "This is more directly in line with how training data is typically captured, and it turns out to be a better predictor of human preferences when measured against other approaches." More details are available in the arXiv paper.

Introducing nmtpytorch

nmtpytorch open sourced on GitHub

nmtpytorch is a neural machine translation framework in PyTorch. It is the PyTorch fork of nmtpy, a sequence-to-sequence framework which was originally a fork of dl4mt-tutorial. The core parts of nmtpytorch depend on numpy, torch, and tqdm. nmtpytorch is developed and tested on Python 3.6 and will not support Python 2.x whatsoever. For more details, go to the GitHub page.

Announcing AllenNLP 0.3

Open-source NLP research library AllenNLP releases version 0.3.0

AllenNLP v0.3.0 comes with key dependencies updated to spaCy 2.0 and PyTorch 0.3, a few additional models, and many new features since the 0.2 release. The new models include the baseline NER model from Semi-supervised Sequence Tagging with Bidirectional Language Models, and a coreference model based on the publication End-to-end Neural Coreference Resolution, which achieved state-of-the-art performance in early 2017 (details are available at http://allennlp.org/models). Among the new features, version 0.3.0 comes with improved SRL visualization on the demo and ListField padding fixes.

New algorithm to perform clustering tasks on a quantum machine

A startup named Rigetti is using Quantum Computing to boost Machine Learning

Researchers at Rigetti Computing, a company based in Berkeley, California, have reportedly used one of its prototype quantum chips—a superconducting device housed within an elaborate super-chilled setup—to run a clustering algorithm. Rigetti is also making the new quantum computer—which can handle 19 quantum bits, or qubits—available through its cloud computing platform, called Forest. "This is a new path toward practical applications for quantum computers," says Will Zeng, head of software and applications at Rigetti. "Clustering is a really fundamental and foundational mathematical problem. No one has ever shown you can do this." Let us see if Rigetti's algorithm goes on to transform the world of machine learning and AI.

Azure HDInsight cheaper by 52%

Microsoft's cloud Big Data service cuts pricing on HDInsight by up to 52% and slashes additional charges for R Server by 80%

In a move that could make its pricing far more competitive against Amazon's Elastic MapReduce, Microsoft has reduced the prices for its Azure HDInsight service. Microsoft said it is offering varying price cuts depending on the virtual machine type used for the head and worker nodes in the HDInsight cluster. The price cuts are up to 52 percent, Microsoft says, while the service itself remains largely the same. In addition, for customers wishing to run data science workloads with code written in R, the surcharge for running R Server in a distributed fashion on an HDI cluster has been cut by 80 percent, down to just $0.016 (i.e. 1.6 US cents) per CPU core, per hour. For complete details on the new pricing, visit https://azure.microsoft.com/en-us/pricing/details/hdinsight/.

This could allay Elon Musk and Stephen Hawking's fears!

Google DeepMind is using Games to discover if Artificial Intelligence can overpower and kill us all one day

DeepMind, the artificial intelligence unit of Google owner Alphabet, is trying to find out whether AIs can learn how to cheat. According to Bloomberg, it is doing so through a test that involves running AI algorithms in simple, two-dimensional, grid-based games. The test is designed to see if, in the process of self-improvement, DeepMind's algorithms end up straying from the safety of their tasks. There are three goals to this research: finding out how to "turn off" AIs if they start to become dangerous; preventing unintended side effects arising from their main task; and making sure agents can adapt when testing conditions vary from their training conditions.

CES 2018: Predictions

Accenture predicts the top tech stories of the upcoming annual Consumer Electronics Show

Ahead of the annual Consumer Electronics Show in January (CES 2018), tech consulting giant Accenture has predicted the major stories that could be unveiled at the event. "The first story is around the expansion and proliferation of artificial intelligence, the second is about 5G and how that enables the next generation of technology such as the Internet of Things, and the third is blockchain as an enabling technology for things like security," said Greg Roberts, managing director for Accenture's North American high-tech industry practice, adding that there could be a shift toward software and a lot of attention around autonomous vehicles. "We think that pulling things together like AI, 5G, blockchain, and software will result in autonomous vehicles," Roberts said. The very concept of 'driverless' cars comes with numerous engineering, regulatory, and usability challenges, but they can be overcome with breakthroughs in predictive capabilities, self-driving features, "in-vehicle" AI and algorithm solutions, and intuitive user interfaces such as voice. CES 2018 starts with press events on January 7.

Landing.ai: Andrew Ng’s next flight to bring Artificial Intelligence to factory floors

Sugandha Lahoti
19 Dec 2017
3 min read
Andrew Ng, the Google Brain co-founder, in a recent post talked about his desire to build an artificial intelligence company that will help other enterprises transform for the age of AI. With this in mind, he has announced Landing.ai, a new AI- and ML-powered startup, and its first stop is manufacturing.

As per Andrew, the IT industry has seen firsthand the benefits of modern AI. However, he wants to build an AI-powered society. In his own words, "One in which our physical needs, health care, transportation, food, and lodging are more accessible through AI, and where every person is freed from repetitive mental drudgery." Landing.ai aims to help organizations and enterprises with in-house AI solutions and employee training, so that they can successfully adapt their organizational structure and develop the strategies and technologies needed to transform in the age of AI. The startup will initially focus on the manufacturing industry. Foxconn, a multinational electronics contract manufacturing company, is Landing.ai's first strategic partner and has been working with the startup since July this year. The partnership gives Landing.ai a platform to jointly develop and deploy AI solutions and training globally.

Here are some of the focus areas within manufacturing that Landing.ai will work on:

- AI-powered adaptive manufacturing, automated quality control, and predictive maintenance.
- AI-powered solutions built around visual inspection, controlling and automating, calibration and tuning, and automated issue identification for large manufacturer partners.
- Solutions for retraining current or displaced workers to make them AI-ready.
- Using AI to improve quality control, shorten design cycles, remove supply-chain bottlenecks, reduce materials and energy waste, and improve production yields.
- Dedicating resources to improving organizational structure and helping companies adopt the most effective AI technologies and processes.

At a press briefing in San Francisco, Andrew demonstrated an example of using AI for visual inspection in a factory's quality control efforts. Using a circuit board placed beneath a digital camera, a computer identified defective parts on an assembly line, a task normally done by a worker. Moreover, Landing.ai has developed a learning algorithm that needs only five training images, compared with the thousands of images required by other computer vision systems.

Looking toward the future, Andrew said that Landing.ai could move into other sectors such as logistics, apart from manufacturing. He would also work on providing more public education resources to help a broader population adopt these new technologies. For more information and updates, you can visit their website landing.ai.

18th Dec.' 17 - Headlines

Packt Editorial Staff
18 Dec 2017
3 min read
MonkeyType, Chainer Chemistry, and the future possibility of Python in Excel are among today's top stories in AI, ML, and data science news.

Soon developers could use their favorite programming language inside Excel!

Microsoft considers adding Python as an official scripting language to Excel

Microsoft is considering adding Python as one of the official Excel scripting languages, according to a topic on Excel's feedback hub opened last month. Since it was opened, the topic has become the most voted feature request, with double the votes of the second-ranked proposition. "Let us do scripting with Python! Yay! Not only as an alternative to VBA, but also as an alternative to field functions (=SUM(A1:A2))," reads the feature request, as opened by one of Microsoft's users. In response to the buzz, Microsoft put up a survey to gather more information and learn how users would like to use Python inside Excel. If approved, Excel users would be able to use Python scripts to interact with Excel documents, their data, and some of Excel's core functions, similar to how Excel currently supports VBA scripts.

Generating type annotations from sampled production types

Let your code type-hint itself: introducing open source MonkeyType

Instagram Engineering has announced that it is open sourcing MonkeyType, its tool for automatically adding type annotations to Python 3 code via runtime tracing of the types seen (a small hypothetical before/after sketch follows at the end of this digest). "Our first forays into manually adding type annotations were discouraging. It can take hours to annotate a single module, sometimes painstakingly tracing through multiple layers of function calls and objects to understand the possible types at some call site. So we built MonkeyType," the team said. "Instead of guessing or spelunking for the right types, let your test suite or (better!) your production system tell you what the real types are." MonkeyType collects the runtime types of function arguments and return values, and can automatically generate stub files or even add draft type annotations directly to your Python code based on the types collected at runtime. MonkeyType requires Python 3.6+ and the retype library (for applying type stubs to code files). It generates only Python 3 type annotations (no type comments). To install MonkeyType with pip: pip install MonkeyType. For more details, read the full documentation.

This library will help you easily apply deep learning on molecular structures

Announcing Chainer Chemistry: A library for Deep Learning in Biology and Chemistry

Chainer Chemistry is a collection of tools to train and run neural networks for tasks in biology and chemistry using Chainer. It supports various state-of-the-art deep learning models (especially Graph Convolution Neural Networks) for chemical molecule property prediction. The library was developed during the PFN 2017 summer internship, and part of it was implemented by an internship student, Hirotaka Akita of Kyoto University. For more information, you can refer to the documentation. You can install this library via PyPI: pip install chainer-chemistry
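To illustrate the kind of draft annotations MonkeyType produces, here is a hypothetical before/after pair; the function, the observed call types, and the resulting hints are invented for illustration and are not taken from Instagram's codebase:

    # Hypothetical example of MonkeyType's effect. Before tracing, a function
    # is unannotated:
    def scale(values, factor):
        return [v * factor for v in values]

    # After running the code under the monkeytype CLI and applying the traces
    # it collected, the draft annotations would look roughly like this (the
    # exact types depend on the calls actually observed at runtime):
    from typing import List

    def scale_annotated(values: List[int], factor: int) -> List[int]:
        return [v * factor for v in values]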

Is Microsoft planning to make Python an official Scripting Language for its Excel package?

Savia Lobo
18 Dec 2017
3 min read
A shout out to all Pythonistas! Microsoft has something in store for you if you enjoy scripting in Excel.

Python to be among the official Excel scripting languages

According to a topic on Excel's feedback hub opened last month, Microsoft is considering adding Python to Excel. The topic has become the most highly voted feature request since it was put up on the hub. Microsoft recently rolled out a survey to gather a detailed understanding of how users would like to make use of Python within Excel.

Python turns complexity to simplicity

Python is one of the most popular programming languages among developers, thanks to its simplicity and versatility. As for its ranking among other programming languages, Python ranks second in the PYPL programming language ranking, third in the RedMonk Programming Language Rankings, and fourth in the TIOBE index.

If Python for Excel is approved by Microsoft, users could work with Excel documents, Excel data, and Excel's core functions using Python scripts in place of the current VBA scripts. Python scripting would serve not only as a substitute for VBA but also as an alternative to field functions such as =SUM(A1:A2). The idea of having Python as an official Excel scripting language was highly appreciated by many users. These users also pointed out that if Microsoft goes ahead with wiring Python into Excel, they would want Python in other Microsoft Office apps as well. In a discussion on the Hacker News forum, one user posted: "Much as I would love for the power of Python in Excel it is important that whatever is done is consistent across the office experience. Some of us old enough to remember the multiple versions of VB-whatever across Excel, Word, Access and that in itself was a blow to productivity." The user added that Microsoft should definitely choose Python, and in the process decide whether it would be Python with a .NET library (which has separate standard and core libraries) or IronPython, done in a way that lets the exact same libraries and user-written code work the same way across other Microsoft Office products.

Though Microsoft may well be keen to add such a feature for its users, not much is known about the project yet, so users will have to wait and see what Microsoft delivers.
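Native Python scripting inside Excel remains a proposal at this stage, so there is no Microsoft API to show. As a rough illustration of what Python-driven workbook manipulation looks like today, here is a short sketch using the third-party openpyxl library, a deliberate stand-in that is not part of any Microsoft plan:

    # Illustration only: driving a workbook from Python with the third-party
    # openpyxl library. This is NOT Microsoft's proposed in-Excel scripting
    # API, which has not been specified.
    from openpyxl import Workbook

    wb = Workbook()
    ws = wb.active
    ws["A1"] = 10
    ws["A2"] = 32
    ws["A3"] = "=SUM(A1:A2)"  # the same kind of field function mentioned above
    wb.save("demo.xlsx")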

Week at a Glance (9th Dec – 15th Dec 2017): Top News from Data Science

Aarthi Kumaraswamy
16 Dec 2017
4 min read
This week saw conversations spark around ethics, regulation, governance, and formulation of standards for ML systems even as nations, tech giants and large government agencies pick up momentum in forming partnerships and plans to dominate the AI race. The trend of making ML/AI development more accessible as well as making ML/AI products more competitive remains on top for all tech companies, from Google, Amazon, and Microsoft to Kubernetes to newcomers like Landing.ai.

NIPS Special Coverage - Part 2

- A deep dive into Deep Bayesian and Bayesian Deep Learning with Yee Whye Teh
- How machine learning for genomics is bridging the gap between research and clinical trial success by Brendan Frey
- 6 Key Challenges in Deep Learning for Robotics by Pieter Abbeel

For the complete coverage, visit here.

News Highlights

- NASA's Kepler discovers a new exoplanet using Google's Machine Learning
- Q# 101: Getting to know the basics of Microsoft's new quantum computing language
- Generative Adversarial Networks: Google open sources TensorFlow-GAN (TFGAN)
- "The future is quantum" — Are you excited to write your first quantum computing code using Microsoft's Q#?
- Is space the final frontier for AI? NASA to reveal Kepler's latest AI backed discovery
- AlphaZero: The genesis of machine intuition
- "The Blockchain to Fix All Blockchains" – Overledger, the meta blockchain, will connect all existing blockchains

In other News

15th Dec.' 17 – Headlines

- Andrew Ng starts "Landing.ai" to bring AI into manufacturing, Foxconn joins in
- BlockSci version 0.3 announced with big performance gains, SegWit support, and numerous bugfixes
- Announcing Apache Hadoop v3.0.0
- Announcing free trials for IBM Spectrum Conductor Deep Learning Impact version 1.1
- Industrial Internet Consortium and Plattform Industrie 4.0 collaborate to facilitate interoperability of Industrial Internet systems
- Microsoft introduces "Insights" in Excel, "Acronyms" on Word, "Time to leave" on Outlook, Microsoft Whiteboard Preview, and "Text in image search" features into Office 365 suite
- Microsoft's "AI Factory" initiative may boost the emergence of artificial intelligence market in France
- ARM releases pre-trained TensorFlow models and code for their speech keyword recognition code
- China develops three-year action plan to accelerate the industrial development of artificial intelligence

14th Dec.' 17 – Headlines

- Microsoft makes Azure Bot Service and LUIS generally available for developers to build better conversational AI tools
- Microsoft announces new AI-powered search features for Bing with improved image search and object recognition, machine reading, and a more conversational auto-complete function
- Denso, Toyota conducting a trial to see if quantum computers can be used to analyze IoT data with commercial applications
- Introducing Model Server for Apache MXNet (MMS)
- IEEE releases version 2 of Ethically Aligned Design for Autonomous and Intelligent Systems (A/IS), invites industry inputs

13th Dec.' 17 – Headlines

- IBM releases 120 code patterns for building AI, blockchain, IoT and chatbots
- NVIDIA announces release of CUDA Toolkit version 9.1
- Accelerite launches ShareInsights 2.0, an end-to-end, self-service approach for big data analysis
- Google reduces prices of its cloud machine learning offerings
- Seattle plans for new machine learning apps and algorithms seeking inspiration from a Facebook Hackathon

12th Dec.' 17 – Headlines

- Bitcoin price steadies, albeit a little, as first futures contracts begin trading on Cboe exchange
- Introducing Kubeflow to bring composable, easier to use stacks with more control and portability for Kubernetes deployments for all ML, not just TensorFlow
- Lexalytics launches new pipeline for building machine learning-based, artificial intelligence applications for natural language processing

11th Dec.' 17 – Headlines

- Practix using IoT tracking device in its Gyms for real-time data analytics
- Numba 0.36.1 announced with LLVM 5, the stencil decorator, and built with Anaconda Distribution 5 compilers
- Apple open sources 'Turi Create' machine learning framework on Github
- Gensim 3.2.0 released: new Poincare embeddings, speed up of FastText, pre-trained models for download, Linux/Windows/MacOS wheels and performance improvements

15th Dec.' 17 - Headlines

Packt Editorial Staff
15 Dec 2017
8 min read
AI veteran Andrew Ng's startup Landing.ai, BlockSci 0.3, Hadoop 3, IBM Spectrum Conductor Deep Learning Impact V1.1, and Microsoft's new AI features for Office 365 are among today's top stories in artificial intelligence and data science news.

Google Brain co-founder Andrew Ng wants to expand artificial intelligence beyond the traditional IT world

Andrew Ng starts "Landing.ai" to bring AI into manufacturing, Foxconn joins in

Andrew Ng has announced a new startup, "Landing.ai," that focuses on bringing artificial intelligence into the manufacturing sector. Under the venture, Foxconn has become the first strategic partner. Noting that the fruits of AI and machine learning should reach every industry instead of being confined to internet companies, Ng said his initiative Landing.ai is working on a number of "AI transformation programs" that include introducing new technologies to companies and training employees, among others. As for the Foxconn partnership, Ng said that he's been working with them to develop "AI technologies, talent and systems that build on the core competencies of the two companies."

BlockSci v0.3 released

BlockSci version 0.3 announced with big performance gains, SegWit support, and numerous bugfixes

BlockSci, a high-performance platform for blockchain science and exploration, has released version 0.3, which includes many bug fixes and a massive 5x performance improvement, in addition to SegWit support. Arvind Narayanan, who co-created BlockSci at Princeton University, said that in the updated version, traversing every transaction input and output on the bitcoin blockchain takes only 1 second. "If you want to start using BlockSci immediately, we have made available an EC2 image: ami-7cf38706. We recommend using an instance with 60 GB of memory or more for optimal performance (r4.2xlarge). On boot, a Jupyter Notebook running BlockSci will launch immediately," the platform founders said. "To run BlockSci locally, you must be running a full node (such as bitcoind or an altcoin node) since BlockSci requires the full serialized blockchain data structure which full nodes produce." Details are available in the release notes.

Hadoop 3 released

Announcing Apache Hadoop v3.0.0

The Apache Software Foundation has announced the general availability of Apache Hadoop v3.0.0, with new features that improve the efficiency, scalability, and reliability of the open source framework. "This latest release unlocks several years of development from the Apache community," said Chris Douglas, Vice President of Apache Hadoop. "The platform continues to evolve with hardware trends and to accommodate new workloads beyond batch analytics, particularly real-time queries and long-running services. At the same time, our Open Source contributors have adopted Apache Hadoop to a wide range of deployment environments, including the Cloud." Among the highlights, the new version has improved capabilities for cloud storage systems such as Amazon S3 (S3Guard), Microsoft Azure Data Lake, and Aliyun Object Storage System. HDFS erasure coding halves the storage cost of HDFS while also improving data durability, and the YARN Timeline Service v.2 (preview) improves the scalability, reliability, and usability of the Timeline Service. "Hadoop 3 is a major milestone for the project, and our biggest release ever," said Andrew Wang, Apache Hadoop 3 release manager. The team announced that Apache Hadoop will be in action at the Strata Data Conference in San Jose, CA, on 5-8 March 2018.

Building a distributed deep learning environment in hours

Announcing free trials for IBM Spectrum Conductor Deep Learning Impact version 1.1

Built on IBM Spectrum Conductor with Spark, IBM Spectrum Conductor Deep Learning Impact provides robust, end-to-end workflow support for deep learning application logic. This includes complete lifecycle management, from installation and configuration, data ingest and preparation, and building, optimizing, and training the model, through to inference and testing. IBM Spectrum Conductor Deep Learning Impact V1.1 adds on to IBM Spectrum Conductor with Spark V2.2.1, bringing deep learning capabilities to a new or existing IBM Spectrum Conductor with Spark environment. It is supported on IBM Power servers with NVLink and NVIDIA GPUs, as well as on x86 systems leveraging GPUs. IBM Spectrum Conductor Deep Learning Impact helps monitor deep learning training jobs by capturing logs from the underlying deep learning frameworks. It also presents the training progress of a job with monitoring data charts and summary information, and offers advice on how to optimize the training job. The full version of IBM Spectrum Conductor Deep Learning Impact 1.1 is available on IBM Passport Advantage. To evaluate IBM Spectrum Conductor Deep Learning Impact, download the free trial.

Advancing the Industrial Internet of Things (IIoT)

Industrial Internet Consortium and Plattform Industrie 4.0 collaborate to facilitate interoperability of Industrial Internet systems

The Industrial Internet Consortium and Plattform Industrie 4.0 have aligned their internal architectures to accelerate the adoption of the Industrial Internet on a global scale with a cross-industry oriented approach. Representatives from both organizations began meeting in 2015, independently developing their own reference architectures for the Industrial Internet: Plattform Industrie 4.0's Reference Architecture Model for Industrie 4.0 (RAMI4.0) and the Industrial Internet Reference Architecture (IIRA). After subsequent discussions, the organizations merged the two approaches, creating an initial draft mapping showing the direct relationships between elements of the models. They jointly developed a clear roadmap to ensure future interoperability. More information will be available in the coming days.

Microsoft rolls out new Artificial Intelligence features to Office 365

Microsoft introduces "Insights" in Excel, "Acronyms" in Word, "Time to leave" in Outlook, Microsoft Whiteboard Preview, and "Text in image search" features in the Office 365 suite

Microsoft has announced the preview of Insights in Excel, a new service that automatically highlights patterns it detects, which makes it easier for everyone to explore and analyze their data. Powered by machine learning, Insights helps identify trends, outliers, and other useful visualizations, providing new and useful perspectives on data. Next, the company announced a new Word feature called Acronyms. Using machine learning, Acronyms helps people understand shorthand that is commonly used in their own workplaces by leveraging the Microsoft Graph to surface definitions of terms that have been previously defined across emails and documents. Acronyms will begin rolling out to Word Online for Office 365 commercial subscribers in 2018.

In another new addition, Microsoft is bringing Cortana to the Outlook mobile app, so that when it is time to leave for an appointment, Outlook will send a notification with directions for both driving and public transit, taking into account the current location, the event location, and real-time traffic information. Time to leave in Outlook is rolling out to iOS users this month in markets where Cortana is available. Microsoft also announced the preview of Microsoft Whiteboard for Windows 10 devices, a freeform digital canvas where people, ideas, and content can come together. Microsoft Whiteboard Preview is built for teams who ideate and work together across multiple devices and locations. It is now available in preview from the Windows Store. Finally, Microsoft has launched a new Text in images feature with intelligent search. The new feature will "automatically extract searchable text" from whiteboards, screenshots, receipts, business cards, and more. Text in image search is currently rolling out and will be available to all Office 365 commercial subscribers by the end of December.

"AI Factory" in France

Microsoft's "AI Factory" initiative may boost the emergence of the artificial intelligence market in France

Microsoft has created the "AI Factory" program at Station F, the largest startup campus in the world, hosting around 1,000 startups in Paris. Designed to "stimulate the emergence of French champions of artificial intelligence," the program supports seven start-ups: Recast.AI, AB Tasty, DCbrain, Scortex, AI Craft, Case Law Analytics, and Hugging Face. Microsoft is especially focusing on leveraging artificial intelligence in the marketing sector, using chatbots, CRM, predictive analysis, and more. The company said it launched the program to help all players in the French Tech space address the shift to artificial intelligence and to make France a world reference in the field.

In other data science news

ARM releases pretrained TensorFlow models and code for their speech keyword recognition work

ARM has released pretrained TensorFlow models and code for speech keyword recognition that are small enough to run on M-class DSPs. The details are published in this arXiv paper. The code, model definitions, and pretrained models are available on GitHub.

China develops three-year action plan to accelerate the industrial development of artificial intelligence

China's Ministry of Industry and Information Technology has released a three-year action plan (2018-2020) to promote the development of a new generation of artificial intelligence. The move is aimed at boosting "Made in China 2025" and the "new generation of artificial intelligence development plan," which China wants to integrate into its mainstream economy in the coming years.

NASA’s Kepler discovers a new exoplanet using Google’s Machine Learning

Sugandha Lahoti
15 Dec 2017
3 min read
Earlier this week, we wrote that NASA's Kepler had made a major breakthrough discovery and was going to announce it in a press conference soon. Well, the day has finally arrived! NASA scientists have discovered a new planet outside our solar system. Kepler-90i is part of the eight-planet solar system around the star Kepler-90. Like Earth, the new planet is the third rock from its sun. Kepler-90i orbits the star Kepler-90 and is 2,545 light years away from Earth. However, it is much closer to its sun and so has a surface temperature of 427 degrees Celsius, making it unlikely that life could exist there.

An interesting thing to note here is that this discovery was made by machines and not humans! NASA partnered with Google to harness machine learning to crunch the data collected from the Kepler telescope. The experts from Google taught the ML system how to identify planets around faraway stars.

The AI behind discovering Kepler-90i

NASA's Kepler observed about 200,000 stars for four years, taking a picture every 30 minutes and creating about 14 billion data points. Those 14 billion data points translated to around 2 quadrillion possible planet orbits. To analyze this huge number of data points, Google experts turned to machine learning. They created a TensorFlow model to distinguish planets from non-planets, built using a dataset of more than 15,000 labeled Kepler signals. (Source: https://blog.google/topics/machine-learning/hunting-planets-machine-learning/)

The model is a deep convolutional neural network that recognizes the patterns created by the light curves of actual planets, versus light curves and patterns caused by other objects like starspots or binary stars. They considered three types of neural networks for classifying Kepler's data points as either "planets" or "not planets": a linear architecture as the base model, a fully connected architecture as the second, and finally a 1-dimensional CNN with max pooling. The model was able to distinguish between a planet and a non-planet with 96% accuracy when fed signals it had never seen before. The model was then run over data from 670 stars known to have two or more exoplanets, and it succeeded in discovering two new planets: Kepler-80g and Kepler-90i.

Google scientists now plan to use their model to search the other 200,000 stars in the Kepler data for undiscovered exoplanets. They also plan on incorporating new machine learning techniques to help fuel celestial discoveries for many years to come. Jessie Dotson, a NASA project scientist for the Kepler telescope, said, "As the application of neural networks to Kepler data matures, who knows what might be discovered. I'm on the edge of my seat."
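For readers curious what a "1-dimensional CNN with max pooling" classifier over light curves looks like in code, here is a minimal Keras sketch; the input length and layer sizes are illustrative assumptions rather than the exact architecture used by the Google and NASA teams:

    # Minimal sketch of a 1-D CNN planet/non-planet classifier for light curves.
    # The input length (2001 brightness bins) and layer sizes are illustrative
    # assumptions, not the exact architecture used by the Google/NASA team.
    import tensorflow as tf

    model = tf.keras.Sequential([
        tf.keras.layers.Conv1D(16, 5, activation="relu", input_shape=(2001, 1)),
        tf.keras.layers.MaxPooling1D(pool_size=5, strides=2),
        tf.keras.layers.Conv1D(32, 5, activation="relu"),
        tf.keras.layers.MaxPooling1D(pool_size=5, strides=2),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # planet vs. not-planet
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    model.summary()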

14th Dec.' 17 - Headlines

Packt Editorial Staff
14 Dec 2017
5 min read
Microsoft's new AI features on Azure and Bing, Amazon's Model Server for Apache MXNet, and Toyota's trial of a quantum computing system for IoT are among today's top stories in artificial intelligence and data science news.

Microsoft's new AIs around 'conversational bots'

Microsoft makes Azure Bot Service and LUIS generally available for developers to build better conversational AI tools

Microsoft has announced the general availability of its Azure Bot Service and Cognitive Language Understanding service (known as LUIS). "Making these two services generally available on Azure simultaneously extends the capabilities of developers to build custom models that can naturally interpret the intentions of people conversing with bots," Lili Cheng, corporate vice president at the Microsoft AI and Research division, wrote in a company blog post on the announcement. Cheng further says Microsoft has designed the bot framework to be "as flexible" as possible. "You don't even need to host it on Azure if you don't want to," she said, adding that the company's mission is "to bring conversational AI tools and capabilities to every developer and every organization on the planet." Azure-based bots using LUIS are at the foundation of conversational AI services offered by ICS, a UK-based AI company. ICS.ai has embraced the development of customisable and ever-improving conversational bots to support public sector organisations in their digital transformation journeys. A core component of using AI in the public domain is adopting AI compliance and ethics that include trust, privacy, inclusion, and accessibility.

Can Artificial Intelligence revive Bing's fortunes?

Microsoft announces new AI-powered search features for Bing with improved image search and object recognition, machine reading, and a more conversational auto-complete function

Microsoft has announced a set of artificial intelligence functionalities for its Bing search engine to make it "more conversational" and context-sensitive. The new features upgrade Bing with better use of object recognition, machine reading for parsing text and extracting meaning, and other techniques improved with AI training methods. Earlier, in September, Microsoft had attempted to cut down on fake news, misinformation, and other distorted stories from manipulative information sources with a new fact-check feature in Bing search results. Now it has further enhanced Bing search results to show multiple perspectives collated from a list of pre-approved news sources. Next, Microsoft has entered into a new partnership with the social news site Reddit, under which Bing search results will source information by using algorithms to read and analyze the user-generated text across Reddit. Besides, Bing has also introduced a new machine learning-driven conversational search function, deploying a more advanced form of auto-complete to help users better phrase a query to get a more desirable answer on the first search. Additionally, Bing's image search function will be able to better identify objects in photos, having been trained on object recognition models. While the conversational and intelligent search features will be available from next month, the improved search functionality is already effective, Microsoft said.

Analyzing IoT data with quantum computers

Denso, Toyota conducting a trial to see if quantum computers can be used to analyze IoT data with commercial applications

Denso and Toyota Tsusho will jointly conduct a test run using a quantum computer to process data from a traffic IoT platform. The test will include vehicle location and travel data from over 130,000 commercial vehicles in Thailand. During the trial, the companies hope to guide application development to make transportation more efficient, including traffic decongestion and route optimization for emergency vehicles. Canada-based D-Wave Systems has built the quantum computer that will be used in the test to collect and analyze information from taxis and trucks. While TSquare, a traffic information service application from Toyota Tsusho, will provide quantum computer-based data analysis and processing technologies, Denso will create an algorithm to process and analyze quantum computer-based data within an app on TSquare. Toyota Tsusho is a member of the Toyota Group.

AWS further makes deploying deep learning models easier

Introducing Model Server for Apache MXNet (MMS)

Amazon Web Services has announced the availability of Model Server for Apache MXNet (MMS). Built on top of Apache MXNet, Model Server for Apache MXNet is an open source component designed to simplify the task of deploying deep learning models for inference at scale. MMS is available through a PyPI package, or directly from the Model Server GitHub repository, and it runs on Mac and Linux. For scalable production use cases, AWS recommends using the pre-configured Docker images that are provided in the MMS GitHub repository. The only prerequisite to get started is Python, and then PyPI can be used to install MMS as follows: $ pip install mxnet-model-server. For more information, refer to the server documentation.

#ieeeEAD: Announcing Ethically Aligned Design v2

IEEE releases version 2 of Ethically Aligned Design for Autonomous and Intelligent Systems (A/IS), invites industry inputs

IEEE has announced the publication of the second version of Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems (EADv2). In this regard, The IEEE Global Initiative has created its "Ethics in Action" campaign (#ieeeEAD on Twitter) to encourage the general public to provide feedback on the document or join Standards Working Groups. Details on how to submit feedback to Ethically Aligned Design, Version 2 are available via the Submission Guidelines on The IEEE Global Initiative's website. All comments and input will be made publicly available and should be sent no later than 12 March 2018. For more information, please visit the landing page of The IEEE Global Initiative for Ethical Considerations in Autonomous and Intelligent Systems.

Q# 101: Getting to know the basics of Microsoft's new quantum computing language

Sugandha Lahoti
14 Dec 2017
5 min read
A few days back we posted about the preview of Microsoft's development toolkit with a new quantum programming language, simulator, and supporting tools. The development kit contains the tools that allow developers to build their own quantum computing programs and experiments. A major component of the Quantum Development Kit preview is the Q# programming language. According to Microsoft, "Q# is a domain-specific programming language used for expressing quantum algorithms. It is to be used for writing sub-programs that execute on an adjunct quantum processor, under the control of a classical host program and computer."

The Q# programming language is foundational for any developer of quantum software. It is deeply integrated with Microsoft Visual Studio, which makes programming quantum computers easier for developers already well-versed in Visual Studio. An interesting feature of Q# is that it supports a basic procedural model (read: loops and if/then statements) for writing programs. The top-level constructs in Q# are user-defined types, operations, and functions.

The Type models

Q# provides several kinds of types. There are primitive types such as the Qubit type and the Pauli type. The Qubit type represents a quantum bit, or qubit. A quantum computer stores information in the form of qubits, which can hold superpositions of 1 and 0 at the same time. Qubits can either be tested for identity (equality) or passed to another operation; actions on qubits are implemented by calling operations in the Q# standard library.

The Pauli type represents an element of the single-qubit Pauli group. The Pauli group on one qubit is the 16-element matrix group consisting of the 2 × 2 identity matrix and all of the Pauli matrices. This type has four possible values: PauliI, PauliX, PauliY, and PauliZ. (The underlying matrices are reproduced at the end of this article for reference.)

There are also array and tuple types for creating new, structured types. It is possible to create arrays of tuples, tuples of arrays, tuples of sub-tuples, and so on. Tuple instances are immutable, i.e. the contents of a tuple can't be changed once created. Q# does not support rectangular multi-dimensional arrays.

Q# also has user-defined types, which may be used anywhere. It is possible to define an array of a user-defined type and to include a user-defined type as an element of a tuple type.

newtype TypeA = (Int, TypeB);
newtype TypeB = (Double, TypeC);
newtype TypeC = (TypeA, Range);

Operations and Functions

A Q# operation is a quantum subroutine, i.e. a callable routine that contains quantum operations. A Q# function is the traditional subroutine used within a quantum algorithm: it contains no quantum operations. You may pass operations or qubits to functions for processing; however, functions cannot allocate or borrow qubits, or call operations. Operations and functions are together known as callables.

A functor in Q# is a factory that specifies a new operation from another operation. An important feature of functors is that they have access to the implementation of the base operation when defining the implementation of the new operation. This means that functors can express transformations of an operation (such as its adjoint or controlled version) that an ordinary classical higher-order function could not.

Comments

Comments begin with two forward slashes, //, and continue until the end of the line. A comment may appear anywhere in a Q# source file, including where statements are not valid. However, end-of-line comments in the middle of an expression are not supported, although the expression itself can span multiple lines.
Comments can also begin with three forward slashes, ///. Their contents are treated as documentation for the defined callable or user-defined type when they appear immediately before an operation, function, or type definition.

Namespaces

Q# follows the same rules for namespaces as other .NET languages. Every Q# operation, function, and user-defined type is defined within a namespace. However, Q# does not support nested namespaces.

Control Flow

Control flow in Q# consists of the for loop, the repeat-until-success loop, the return statement, and the conditional statement.

For-Loop

Like the traditional for loop, Q# uses the for statement for iteration through an integer range. The statement consists of the keyword for, followed by an identifier, the keyword in, a Range expression, and a statement block.

// Measure each of the first n-1 qubits in the X basis and record the results.
for (index in 0 .. n-2) {
    set results[index] = Measure([PauliX], [qubits[index]]);
}

Repeat-until-success Loop

The repeat statement supports the quantum "repeat until success" pattern. It consists of the keyword repeat, followed by a statement block (the loop body), the keyword until, a Boolean expression, the keyword fixup, and another statement block (the fixup).

using ancilla = Qubit[1] {
    repeat {
        let anc = ancilla[0];
        H(anc);
        T(anc);
        CNOT(target, anc);
        H(anc);
        let result = M([anc], [PauliZ]);
    } until result == Zero
    fixup {
        ();  // nothing to undo on failure in this example
    }
}

The Conditional statement

Similar to the if-then conditional statement in most programming languages, the if statement in Q# supports conditional execution. It consists of the keyword if, followed by a Boolean expression and a statement block (the then block). This may be followed by any number of else-if clauses, each of which consists of the keyword elif, followed by a Boolean expression and a statement block (the else-if block).

if (result == One) {
    X(target);
} else {
    Z(target);
}

Return Statement

The return statement ends execution of an operation or function and returns a value to the caller. It consists of the keyword return, followed by an expression of the appropriate type, and a terminating semicolon.

return 1;
OR
return ();
OR
return (results, qubits);

File Structure

A Q# file consists of one or more namespace declarations. Each namespace declaration contains definitions for user-defined types, operations, and functions.

You can download the Quantum Development Kit here. You can learn more about the features of the Q# language here.
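For reference, the four single-qubit Pauli matrices behind the PauliI, PauliX, PauliY, and PauliZ values described earlier are the standard textbook definitions (not something specific to the Q# documentation):

I = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \quad
X = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \quad
Y = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \quad
Z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}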
Generative Adversarial Networks: Google open sources TensorFlow-GAN (TFGAN)

Abhishek Jha
13 Dec 2017
11 min read
If you have played the game Prince of Persia, you know what it is like defending yourself from the 'shadow' which tries to kill you. It's a conundrum: if you kill the shadow you die; if you don't do anything, you definitely die!

For all their merits, Generative Adversarial Networks (GANs) have faced a similar problem with differentiation. Most deep learning experts who endorse GANs mix their support with a little caution: there is a stability issue! You may call it a holistic convergence problem. The discriminator and generator are at loggerheads, while still being dependent on each other for efficient training. If one of them fails, the entire system fails. So you have to ensure they don't explode.

The Prince of Persia is an interesting analogy. To begin with, neural networks were designed to replicate the human brain (albeit artificially). They have succeeded in recognizing objects and processing natural language. But thinking and acting like humans at that neurological level, let us admit, is a far cry still. Which is why Generative Adversarial Networks became a hot topic in machine learning. It's a relatively new architecture, but it has gone on to revolutionize deep learning by accurately modeling real-world data in ways no other model has done before. After all, GANs introduced a new model for training a neural net, with not one but two independent nets that work separately (and act as adversaries!) as discriminator and generator. Such an architecture for an unsupervised neural network yields far better performance when compared to traditional nets.

But the fact is, we have barely scratched the surface. The challenge from here onwards is training GANs. Training comes with its own problems, such as failing to differentiate how many of a particular object should occur at a location, failing to adapt to 3D objects (the model doesn't understand the perspectives of front view and back view), not being able to understand real-life holistic structures, and so on. Substantial research has been taking place to address these problems, and new models have been proposed that give more accurate results than previous techniques.

Now Google intends to make Generative Adversarial Networks easier to experiment with. The company has just open sourced TFGAN, a lightweight TensorFlow library designed to make it easy to train and evaluate GANs.

https://www.youtube.com/watch?v=f2GF7TZpuGQ

According to Google, TFGAN provides the infrastructure to easily train a GAN, provides well-tested loss and evaluation metrics, and gives easy-to-use examples that highlight the expressiveness and flexibility of TFGAN. "We've also released a tutorial that includes a high-level API to quickly get a model trained on your data," Google said in its announcement.

Source: research.googleblog.com

The above image demonstrates the effect of an adversarial loss on image compression. The top row shows image patches from the ImageNet dataset. The middle row shows the results of compressing and uncompressing an image through an image compression neural network trained on a traditional loss. The bottom row shows the results from a network trained with a traditional loss and an adversarial loss. The GAN-loss images are sharper and more detailed, even if they are less like the original.

TFGAN offers simple function calls for the majority of GAN use-cases (users can get a model running in a few lines of code), but it's also built in a modular way that covers more sophisticated GAN designs.
"You can just use the modules you want — loss, evaluation, features, training, etc. are all independent. TFGAN’s lightweight design also means you can use it alongside other frameworks, or with native TensorFlow code," Google says, adding that GAN models written using TFGAN will easily benefit from future infrastructure improvements. That users can select from a large number of already-implemented losses and features without having to rewrite their own. Most importantly, Google is assuring us that the code is well-tested: "You don’t have to worry about numerical or statistical mistakes that are easily made with GAN libraries." Source: research.googleblog.com Most neural text-to-speech (TTS) systems produce over-smoothed spectrograms. When applied to the TacotronTTS system, Google says, a GAN can recreate some of the realistic-texture reducing artifacts in the resulting audio. And then, there is no harm in reiterating that when Google has open sourced a project, it must be absolute production ready! "When you use TFGAN, you’ll be using the same infrastructure that many Google researchers use, and you’ll have access to the cutting-edge improvements that we develop with the library," the tech giant added. To Start With import tensorflow as tf tfgan = tf.contrib.gan Why TFGAN? Easily train generator and discriminator networks with well-tested, flexible library calls. You can mix TFGAN, native TF, and other custom frameworks Use already implemented GAN losses and penalties (ex Wasserstein loss, gradient penalty, mutual information penalty, etc) Monitor and visualize GAN progress during training, and evaluate them Use already-implemented tricks to stabilize and improve training Develop based on examples of common GAN setups Use the TFGAN-backed GANEstimator to easily train a GAN model Improvements in TFGAN infrastructure will automatically benefit your TFGAN project Stay up-to-date with research as we add more algorithms What are the TFGAN components? TFGAN is composed of several parts which were designed to exist independently. These include the following main pieces (explained in detail below). core: provides the main infrastructure needed to train a GAN. Training occurs in four phases, and each phase can be completed by custom-code or by using a TFGAN library call. features: Many common GAN operations and normalization techniques are implemented for you to use, such as instance normalization and conditioning. losses: Easily experiment with already-implemented and well-tested losses and penalties, such as the Wasserstein loss, gradient penalty, mutual information penalty, etc evaluation: Use Inception Score or Frechet Distance with a pretrained Inception network to evaluate your unconditional generative model. You can also use your own pretrained classifier for more specific performance numbers, or use other methods for evaluating conditional generative models. examples and tutorial: See examples of how to use TFGAN to make GAN training easier, or use the more complicated examples to jumpstart your own project. These include unconditional and conditional GANs, InfoGANs, adversarial losses on existing networks, and image-to-image translation. Training a GAN model Training in TFGAN typically consists of the following steps: Specify the input to your networks. Set up your generator and discriminator using a GANModel. Specify your loss using a GANLoss. Create your train ops using a GANTrainOps. Run your train ops. There are various types of GAN setups. 
For instance, you can train a generator to sample unconditionally from a learned distribution, or you can condition on extra information such as a class label. TFGAN is compatible with many setups, and a few are demonstrated below.

Examples

Unconditional MNIST generation

This example trains a generator to produce handwritten MNIST digits. The generator maps random draws from a multivariate normal distribution to MNIST digit images. See 'Generative Adversarial Networks' by Goodfellow et al.

# Set up the input.
images = mnist_data_provider.provide_data(FLAGS.batch_size)
noise = tf.random_normal([FLAGS.batch_size, FLAGS.noise_dims])

# Build the generator and discriminator.
gan_model = tfgan.gan_model(
    generator_fn=mnist.unconditional_generator,  # you define
    discriminator_fn=mnist.unconditional_discriminator,  # you define
    real_data=images,
    generator_inputs=noise)

# Build the GAN loss.
gan_loss = tfgan.gan_loss(
    gan_model,
    generator_loss_fn=tfgan_losses.wasserstein_generator_loss,
    discriminator_loss_fn=tfgan_losses.wasserstein_discriminator_loss)

# Create the train ops, which calculate gradients and apply updates to weights.
train_ops = tfgan.gan_train_ops(
    gan_model,
    gan_loss,
    generator_optimizer=tf.train.AdamOptimizer(gen_lr, 0.5),
    discriminator_optimizer=tf.train.AdamOptimizer(dis_lr, 0.5))

# Run the train ops in the alternating training scheme.
tfgan.gan_train(
    train_ops,
    hooks=[tf.train.StopAtStepHook(num_steps=FLAGS.max_number_of_steps)],
    logdir=FLAGS.train_log_dir)

Conditional MNIST generation

This example trains a generator to generate MNIST images of a given class. The generator maps random draws from a multivariate normal distribution and a one-hot label of the desired digit class to an MNIST digit image. See 'Conditional Generative Adversarial Nets' by Mirza and Osindero.

# Set up the input.
images, one_hot_labels = mnist_data_provider.provide_data(FLAGS.batch_size)
noise = tf.random_normal([FLAGS.batch_size, FLAGS.noise_dims])

# Build the generator and discriminator.
gan_model = tfgan.gan_model(
    generator_fn=mnist.conditional_generator,  # you define
    discriminator_fn=mnist.conditional_discriminator,  # you define
    real_data=images,
    generator_inputs=(noise, one_hot_labels))

# The rest is the same as in the unconditional case.
...

Adversarial loss

This example combines an L1 pixel loss and an adversarial loss to learn to autoencode images. The bottleneck layer can be used to transmit compressed representations of the image. Neural networks trained with a pixel-wise loss alone tend to produce blurry results, so the GAN can be used to make the reconstructions more plausible. See 'Full Resolution Image Compression with Recurrent Neural Networks' by Toderici et al for an example of neural networks used for image compression, and 'Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network' by Ledig et al for a more detailed description of how GANs can sharpen image output.

# Set up the input pipeline.
images = image_provider.provide_data(FLAGS.batch_size)

# Build the generator and discriminator.
gan_model = tfgan.gan_model(
    generator_fn=nets.autoencoder,  # you define
    discriminator_fn=nets.discriminator,  # you define
    real_data=images,
    generator_inputs=images)

# Build the GAN loss and standard pixel loss.
gan_loss = tfgan.gan_loss(
    gan_model,
    generator_loss_fn=tfgan_losses.wasserstein_generator_loss,
    discriminator_loss_fn=tfgan_losses.wasserstein_discriminator_loss,
    gradient_penalty=1.0)
l1_pixel_loss = tf.norm(gan_model.real_data - gan_model.generated_data, ord=1)

# Modify the loss tuple to include the pixel loss.
gan_loss = tfgan.losses.combine_adversarial_loss(
    gan_loss, gan_model, l1_pixel_loss, weight_factor=FLAGS.weight_factor)

# The rest is the same as in the unconditional case.
...

Image-to-image translation

This example maps images in one domain to images of the same size in a different domain. For example, it can map segmentation masks to street images, or grayscale images to color. See 'Image-to-Image Translation with Conditional Adversarial Networks' by Isola et al for more details.

# Set up the input pipeline.
input_image, target_image = data_provider.provide_data(FLAGS.batch_size)

# Build the generator and discriminator.
gan_model = tfgan.gan_model(
    generator_fn=nets.generator,  # you define
    discriminator_fn=nets.discriminator,  # you define
    real_data=target_image,
    generator_inputs=input_image)

# Build the GAN loss and standard pixel loss.
gan_loss = tfgan.gan_loss(
    gan_model,
    generator_loss_fn=tfgan_losses.least_squares_generator_loss,
    discriminator_loss_fn=tfgan_losses.least_squares_discriminator_loss)
l1_pixel_loss = tf.norm(gan_model.real_data - gan_model.generated_data, ord=1)

# Modify the loss tuple to include the pixel loss.
gan_loss = tfgan.losses.combine_adversarial_loss(
    gan_loss, gan_model, l1_pixel_loss, weight_factor=FLAGS.weight_factor)

# The rest is the same as in the unconditional case.
...

InfoGAN

Train a generator to generate specific MNIST digit images, and control for digit style without using any labels. See 'InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets' for more details.

# Set up the input pipeline.
images = mnist_data_provider.provide_data(FLAGS.batch_size)

# Build the generator and discriminator.
gan_model = tfgan.infogan_model(
    generator_fn=mnist.infogan_generator,  # you define
    discriminator_fn=mnist.infogan_discriminator,  # you define
    real_data=images,
    unstructured_generator_inputs=unstructured_inputs,  # you define
    structured_generator_inputs=structured_inputs)  # you define

# Build the GAN loss with mutual information penalty.
gan_loss = tfgan.gan_loss(
    gan_model,
    generator_loss_fn=tfgan_losses.wasserstein_generator_loss,
    discriminator_loss_fn=tfgan_losses.wasserstein_discriminator_loss,
    gradient_penalty=1.0,
    mutual_information_penalty_weight=1.0)

# The rest is the same as in the unconditional case.
...

Custom model creation

Train an unconditional GAN to generate MNIST digits, but manually construct the GANModel tuple for more fine-grained control.

# Set up the input pipeline.
images = mnist_data_provider.provide_data(FLAGS.batch_size)
noise = tf.random_normal([FLAGS.batch_size, FLAGS.noise_dims])

# Manually build the generator and discriminator.
with tf.variable_scope('Generator') as gen_scope:
    generated_images = generator_fn(noise)
with tf.variable_scope('Discriminator') as dis_scope:
    discriminator_gen_outputs = discriminator_fn(generated_images)
with variable_scope.variable_scope(dis_scope, reuse=True):
    discriminator_real_outputs = discriminator_fn(images)
generator_variables = variables_lib.get_trainable_variables(gen_scope)
discriminator_variables = variables_lib.get_trainable_variables(dis_scope)

# Depending on what TFGAN features you use, you don't always need to supply
# every `GANModel` field. At a minimum, you need to include the discriminator
# outputs and variables if you want to use TFGAN to construct losses.
gan_model = tfgan.GANModel(
    generator_inputs,
    generated_data,
    generator_variables,
    gen_scope,
    generator_fn,
    real_data,
    discriminator_real_outputs,
    discriminator_gen_outputs,
    discriminator_variables,
    dis_scope,
    discriminator_fn)

# The rest is the same as the unconditional case.
...

Google has allowed anyone to contribute to the GitHub repositories to facilitate code-sharing among machine learning users. For more examples on TFGAN, see tensorflow/models on GitHub.
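As referenced in the 'Why TFGAN?' list above, here is a minimal, hypothetical sketch of training through the TFGAN-backed GANEstimator. The generator_fn, discriminator_fn, train_input_fn, learning rates, model directory, and step count are placeholders, and the exact constructor arguments should be checked against the tf.contrib.gan.estimator documentation for your TensorFlow version rather than taken from this sketch.

import tensorflow as tf

tfgan = tf.contrib.gan

# Hypothetical placeholders: you would define your own generator_fn,
# discriminator_fn, and train_input_fn for your data.
gan_estimator = tfgan.estimator.GANEstimator(
    model_dir='/tmp/tfgan_mnist',
    generator_fn=generator_fn,          # you define
    discriminator_fn=discriminator_fn,  # you define
    generator_loss_fn=tfgan.losses.wasserstein_generator_loss,
    discriminator_loss_fn=tfgan.losses.wasserstein_discriminator_loss,
    generator_optimizer=tf.train.AdamOptimizer(0.001, 0.5),
    discriminator_optimizer=tf.train.AdamOptimizer(0.0001, 0.5))

# Train like any other tf.estimator.Estimator.
gan_estimator.train(train_input_fn, max_steps=10000)

The appeal of the Estimator route is that it hides the GANModel/GANLoss/GANTrainOps plumbing shown above behind the standard Estimator train/evaluate/predict interface.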

13th Dec.’17 – Headlines

Packt Editorial Staff
13 Dec 2017
4 min read
New code patterns from IBM, ShareInsights 2.0 a new analytics platform, NVIDIA's CUDA Toolkit version 9.1, Google's TFGAN, and more in today's top stories in AI, machine learning and data science news.

Google open sources TFGAN, for easy training and evaluation of GANs

Google has open sourced TFGAN to make it easy to train and evaluate GANs. TFGAN is a lightweight library which provides an easy infrastructure to train a GAN, along with well-tested loss and evaluation metrics and easy-to-use examples that highlight its flexibility. TFGAN provides simple function calls that cover the majority of GAN use-cases. Because it is lightweight, developers can use it alongside other frameworks, or with native TensorFlow code. Developers can also select from a large number of already-implemented losses and features. The code is well-tested, so the numerical and statistical mistakes often associated with GAN libraries are avoidable. Google has also released a tutorial that includes a high-level API to quickly get a model trained on user-provided data.

IBM releases 120 code patterns for building AI, blockchain, IoT and chatbots

IBM has released more than 120 code patterns for streamlining the development process in areas such as IoT, AI, blockchain, analytics, and DevOps. These code patterns include packages of code, one-click GitHub repositories, documentation, and other resources. With this move, IBM aims to help developers automate their tasks, giving them more time to innovate and build. In addition, IBM has also unveiled Bot Asset Exchange, an open source repository, to help developers create easily deployable chatbots compatible with the Watson Conversation Service.

NVIDIA announces release of CUDA Toolkit version 9.1

CUDA 9 is one of the most powerful software platforms for GPU-accelerated applications. NVIDIA has added new algorithms and optimizations in CUDA Toolkit version 9.1 for speeding up AI and HPC apps on Volta GPUs. The release also includes compiler optimizations, support for new developer tool versions, and bug fixes. Using new functions in NVIDIA Performance Primitives, developers can now build image augmentation algorithms for deep learning. They can also run batched neural machine translation and sequence modeling operations on Volta Tensor Cores using new APIs in cuBLAS. The new release also has core optimizations that launch kernels up to 12x faster. Using new heuristics in cuFFT, developers can solve large 2D and 3D FFT problems more efficiently on multi-GPU systems.

Accelerite launches ShareInsights 2.0, an end-to-end, self-service approach for big data analysis

ShareInsights 2.0 from Accelerite, a cloud management vendor, is a single, end-to-end, collaborative analytics platform for data preparation, online analytical processing, and visualization. The software runs natively atop a Hadoop cluster and uses existing Apache Spark instances for machine learning applications. It offers more than 50 connectors and over 100 analytical widgets, ranging from simple aggregation tasks to more complex machine learning. Drag-and-drop access to source data on the native cluster is available through the self-service data discovery feature. It has its own visualization interface and also supports other data visualization tools such as Tableau.

Google reduces prices of its cloud machine learning offerings

Google has hit back at Amazon following the release of Amazon SageMaker by slashing the prices of its own cloud machine learning service.
According to a report by VentureBeat, customers using basic-tier compute for training a machine learning system will now pay 43 percent less than they did earlier. Google is also providing per-hour pricing for all of the different types of training machines available. The price reductions were first described in a blog post that appeared briefly on Google's website. The announcement comes just a few weeks after Amazon released SageMaker, a rival to Google's ML Engine service. In addition to the price cuts, the leaked blog post also mentioned Google's plans to make its online prediction feature generally available.

Seattle plans new machine learning apps and algorithms, taking inspiration from a Facebook hackathon

A recent Facebook hackathon invited Seattle coders to solve the city's civic tech problems using machine learning. Taking inspiration from the hackathon, the city's open data leaders have decided to make machine learning an integral part of the city's open data program. The two winning projects at the hackathon were Find 'n Park, an app that uses machine learning to help motorists find parking, and Contractor 5, software that helps people find building contractors and estimate costs for new construction or remodeling. Following these two use cases, David Doyle, Seattle's open data program manager, has encouraged all departments to think about leveraging machine learning opportunities in their current and future open datasets.