
Tech News - Data


Data Science News Daily Roundup – 15th March 2018

Packt Editorial Staff
15 Mar 2018
2 min read
Tensorflow 1.7.0-rc0, Google’s NSynth Super, Microsoft’s machine translation system, and more in today’s top stories around machine learning, deep learning, and data science news.

Top Data Science News Stories of the Day

- Tensorflow 1.7.0-rc0 arrives close on the heels of Tensorflow 1.6.0!
- Google launches NSynth Super, a hardware companion to its NSynth AI tool to algorithmically create new sounds.
- Microsoft releases Windows 10 SDK Preview Build 17115 with Machine Learning APIs.

Other Data Science News at a Glance

- Microsoft researchers have created the first machine translation system that they claim can translate sentences of news articles from Chinese to English with the same quality and accuracy as a real person. Read more on the Windows blog.
- NetBase unveiled next-gen artificial intelligence with image analysis capabilities. The AI can analyze visual posts to identify brand logos and keywords, classify images by facial emotion, and measure the impact of images on Instagram and other visual channels. Read more on MarTech Series.
- Elastic has updated the 2.x, 5.x, and 6.x versions of its .NET clients to use JSON.NET 11.0.1. Read more in the GitHub release notes for versions 2.5.8, 5.6.1, and 6.0.1.
- Intel has issued the latest version of its Math Kernel Library in an effort to help developers leverage instruction sets, improve hardware and software performance, and democratize data science tools. Read more on SiliconANGLE.
- ObjectRocket for MongoDB is now on Microsoft Azure Cloud. With this new global offering via Microsoft Azure Cloud, businesses of all sizes can keep their data as close to their application as possible, reducing latency and improving data performance. Read more on ObjectRocket.

Microsoft releases Windows 10 SDK Preview Build 17115 with Machine Learning APIs

Sugandha Lahoti
15 Mar 2018
2 min read
The Windows 10 SDK Preview Build 17115 is out now. With this preview, Microsoft has added Windows Machine Learning APIs, Gaze Input API improvements, bug fixes, and other development changes to the API surface area. These API updates and additions are adaptive, so they run correctly on the widest possible range of Windows 10 devices. The entire list of API additions in the Windows 10 SDK Preview Build 17115 is available on the Windows Blogs.

In addition to these machine learning APIs, this release also includes the C++/WinRT headers and the cppwinrt compiler (cppwinrt.exe). This compiler comes in handy when a user wants to consume a third-party WinRT component or author their own WinRT components with C++/WinRT. However, the authoring support is currently experimental and subject to change. The easiest way to get started is to install the Windows Insider Preview SDK, then open the Visual Studio Developer Command Prompt and run the compiler in that environment.

Another exciting feature of this build is the addition of new MIDL keywords. These keywords are added to the midlrt tool as part of the “modernizing IDL” effort. The new keywords are: event, set, get, partial, unsealed, overridable, protected, and importwinmd. If any of these keywords is used as an identifier, it will generate a build failure indicating a syntax error. To fix this, add an “@” prefix in front of the offending identifier; that causes MIDL to treat it as an identifier instead of a keyword.

The Windows 10 SDK Preview Build 17115 can be downloaded from the developer section on Windows Insider. More information about this release is available on the Windows blog.
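As a hedged illustration of the escaping rule above (the type and member names here are hypothetical, not taken from the release notes), a MIDL declaration that collides with one of the new keywords would be fixed like this:

```idl
// "event" is now a MIDL keyword, so using it as a member name fails to build.
// Prefixing it with "@" makes midlrt treat it as a plain identifier again.
runtimeclass Telemetry
{
    void @event(String name);   // compiled as the identifier "event"
}
```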

Tensorflow 1.7.0-rc0 arrives close on the heels of Tensorflow 1.6.0!

Sugandha Lahoti
15 Mar 2018
2 min read
It’s only been a few days since we witnessed the release of Tensorflow 1.6.0, and the first release candidate of Tensorflow 1.7.0 is already here! There are quite a few major features and improvements in this release candidate, but no breaking changes. With Tensorflow 1.7.0-rc0, the TensorBoard Debugger Plugin, the graphical user interface (GUI) of the TensorFlow Debugger (tfdbg), is now in alpha. Also, Eager mode is moving out of contrib. Other major features include:

- Graph rewrites emulating fixed-point quantization compatible with TensorFlow Lite, supported by the new tf.contrib.quantize package.
- Easily customized gradient computation with tf.custom_gradient.
- The new tf.contrib.data.SqlDataset, which provides experimental support for reading a SQLite database as a Dataset.
- A distributed mutex/critical section, added as tf.contrib.framework.CriticalSection.
- Better text processing with tf.regex_replace.
- Easy, efficient sequence input with tf.contrib.data.bucket_by_sequence_length.

Apart from these, there is a myriad of bug fixes and small changes. Some of these include:

- MaxPoolGradGrad support added for Accelerated Linear Algebra (XLA).
- The CSE pass from TensorFlow is now disabled.
- tf.py_func now reports the full stack trace if an exception occurs.
- TPUClusterResolver is now integrated with GKE's integration for Cloud TPUs.
- A new library for statistical testing of samplers.
- Helpers to stream data from the GCE VM to a Cloud TPU.
- ClusterResolvers integrated with TPUEstimator.
- The metropolis_hastings interface unified with the HMC kernel.
- LIBXSMM convolutions moved to a separate --define flag so that they are disabled by default.
- A MomentumOptimizer lambda fixed.
- tfp.layers boilerplate reduced via programmable docstrings.
- auc_with_confidence_intervals, a method for computing the AUC and confidence interval with linearithmic time complexity.
- regression_head now accepts a customized link function, so users can define their own link function if array_ops.identity does not meet their requirements.
- initialized_value and initial_value behaviors fixed for ResourceVariables created from VariableDef protos.
- TensorSpec added to represent the specification of Tensors.
- The constant folding pass is now deterministic.

To know about other bug fixes and changes, visit the Tensorflow 1.7.0-rc0 GitHub repo.
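To give a feel for what the fixed-point quantization emulated by tf.contrib.quantize does, here is a minimal pure-Python sketch of the general technique (an illustration, not code from TensorFlow): values are clamped to a range, snapped to a uniform 8-bit grid, and mapped back to floats.

```python
def fake_quantize(x, xmin, xmax, bits=8):
    """Simulate fixed-point quantization: clamp x to [xmin, xmax],
    round it to the nearest of 2**bits evenly spaced levels, and
    return the corresponding float value."""
    levels = (1 << bits) - 1             # 255 steps for 8 bits
    scale = (xmax - xmin) / levels       # width of one quantization step
    clamped = min(max(x, xmin), xmax)
    q = round((clamped - xmin) / scale)  # nearest integer level
    return xmin + q * scale
```

Quantization-aware training inserts operations like this into the graph so the model learns weights that survive the precision loss of 8-bit inference.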

Google launches NSynth Super, a hardware companion to its NSynth AI tool to algorithmically create new sounds

Savia Lobo
15 Mar 2018
2 min read
Magenta, a research project by Google that creates music using machine learning, has launched a new instrument: NSynth Super. It is an open source experimental physical interface for the NSynth, or Neural Synthesizer, machine learning algorithm.

About NSynth

NSynth (Neural Synthesizer) is an algorithm that generates new sounds by combining the features of existing sounds. It takes different sounds as input, uses a deep neural network to learn the characteristics of those sounds, and then creates a completely new sound based on these characteristics. Magenta developed NSynth using WaveNet, a neural network developed by Google’s own DeepMind to make artificial speech sound more natural. WaveNet allows NSynth to simulate musical instruments that would be impossible in the real world. Watch this video to listen to some of the music created with NSynth Super: https://www.youtube.com/watch?time_continue=3&v=iTXU9Z0NYoU

By learning directly from the data, NSynth provides artists with intuitive control over timbre and dynamics, and the ability to explore new sounds that would be difficult or impossible to produce with a hand-tuned synthesizer.

About NSynth Super

NSynth Super is an open source experimental instrument which gives musicians the ability to make music using completely new sounds generated by the NSynth algorithm from four different source sounds. It features touch screen and dial controls, along with an OLED display and a custom-designed printed circuit board. (Source: NSynth Super website)

NSynth Super is built using open source libraries, including TensorFlow and openFrameworks. This allows a wider community of artists, coders, and researchers to experiment with machine learning within their creative sphere. The open source version of the NSynth Super prototype, which includes all of the source code, schematics, and design templates, is available for download on GitHub.
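Conceptually, NSynth blends sounds by encoding each source into a latent feature vector and interpolating between those vectors before decoding. A minimal pure-Python sketch of that interpolation step (illustrative only; the real model interpolates learned embeddings of audio, and the vectors below are hypothetical):

```python
def interpolate(z_a, z_b, alpha):
    """Linearly blend two latent vectors: alpha=0 gives z_a, alpha=1 gives z_b."""
    return [(1 - alpha) * a + alpha * b for a, b in zip(z_a, z_b)]

# Hypothetical 4-dimensional embeddings of two source sounds.
flute = [0.0, 1.0, 0.5, 0.2]
snare = [1.0, 0.0, 0.5, 0.8]
hybrid = interpolate(flute, snare, 0.5)  # halfway between the two timbres
```

Decoding a point partway along this line is what yields a sound that is neither one instrument nor the other.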
Read more about this exciting project on NSynth Super’s official website.

AWS makes Amazon Rekognition, its image recognition AI, available for Asia-Pacific developers

Savia Lobo
14 Mar 2018
1 min read
Amazon Rekognition, one of AWS’ artificial intelligence (AI) services, is now available in the AWS Asia Pacific (Sydney) Region. With this, Australian developers can add visual analysis and recognition to their applications. Amazon Rekognition is a deep learning-based service that makes it easy to add image and video analysis to applications. The Rekognition Image API allows you to detect objects, scenes, faces, and inappropriate content, extract text, and search and compare faces within images. One can also use Rekognition Video to detect objects, scenes, activities, and inappropriate content, and to search faces in video stored in Amazon S3 in the AWS Asia Pacific (Sydney) region. With the Rekognition API, developers can easily build an application that measures the likelihood that faces in two images are of the same person, and thereby verify a user against a reference photo in near real time. Developers can also create collections of millions of faces (detected in images) and search a collection for a face similar to a reference image. Amazon Rekognition has no minimum fees or upfront commitment and works on a pay-per-use model. To learn more, and to see the other regions where these APIs are available, read the Amazon documentation.
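The face-comparison workflow described above maps onto Rekognition's CompareFaces API. A minimal sketch of the call, wrapped so any client object can be injected; in real use you would pass `boto3.client('rekognition', region_name='ap-southeast-2')` (boto3 itself is assumed and not shown):

```python
def face_similarities(client, source_bytes, target_bytes, threshold=80.0):
    """Return the similarity scores (percent) of faces in the target image
    that match the largest face in the source image."""
    response = client.compare_faces(
        SourceImage={'Bytes': source_bytes},   # reference photo
        TargetImage={'Bytes': target_bytes},   # photo to verify
        SimilarityThreshold=threshold,         # drop weaker matches
    )
    return [match['Similarity'] for match in response.get('FaceMatches', [])]
```

A score close to 100 for the top match is what a near-real-time user-verification flow would check against.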

Data Science News Daily Roundup – 14th March 2018

Packt Editorial Staff
14 Mar 2018
2 min read
Big Squid, Inc. releases its Kraken platform, Baidu’s Machine Reading Comprehension Challenge, Apache Kylin’s new version supports SparkSQL, and more in today’s top stories around machine learning, deep learning, and data science news.

Top Data Science News Stories of the Day

- Anaconda version 5.1.1 released!
- AWS releases image recognition AI for Asia-Pacific developers

Other Data Science News at a Glance

1. “Released the Kraken”, announces Big Squid, Inc. The Kraken platform, through self-service machine learning, aims to bring powerful analytical insights to customers, extending organizations’ ability to gain insight into future trends affecting their business and to make decisions on those forecasts with greater certainty, maximizing their data and business intelligence investments. Read more on PRWeb.
2. Baidu Research launches a Machine Reading Comprehension Challenge to advance the state of the art in natural language processing (NLP). Contestants will have access to the world’s largest Chinese MRC dataset and a chance to win 100k RMB. Read more on Baidu’s challenge page.
3. Progress launches an AI-driven chatbot platform, Progress NativeChat, for creating and deploying chatbots. NativeChat is based on patent-pending CognitiveFlow technology that can be trained with goals, examples, and data from existing backend systems, similar to the process used for training new customer service agents. Read more on Digital Journal.
4. Apache Kylin has been updated with a new version that supports SparkSQL for building intermediate flat Hive tables. Kylin is an open source distributed analytics engine designed to provide a SQL interface and multi-dimensional analysis (OLAP) on Apache Hadoop. Read more on the I Programmer blog.
5. Google open sources its exoplanet-hunting AI for Kepler data. Google is saving everyone the time of training a neural network on Kepler data by releasing its code freely. You can get the TensorFlow code on GitHub. Read more on ExtremeTech.

Anaconda Enterprise version 5.1.1 released!

Savia Lobo
14 Mar 2018
2 min read
Anaconda, a Python-based tool for encapsulating, running, and reproducing data science projects, has released its Enterprise version 5.1.1. This release includes both administrator-facing and user-facing changes. Following are some of the changes included in Anaconda Enterprise 5.1.1:

Administrator-facing changes

- Ability to specify a custom UID for the service account at install time (default UID: 1000).
- Added pre-flight checks for kernel modules, kernel settings, and filesystem options when installing or adding nodes.
- Improved consistency between GUI- and CLI-based installation paths.
- Improved security and isolation of the internal database from user sessions and deployments.
- Added the capability to configure a custom trust store and LDAPS certificate validation.
- Simplified installer packaging using a single tarball and consistent naming.
- Updated documentation for system requirements, including XFS filesystem requirements and kernel modules/settings.
- Added documentation for configuring AE to point to online Anaconda repositories and for securing the internal database, plus updated documentation for mirroring packages from channels.
- Added documentation for configuring RBAC, role mapping, and access control, and for LDAP federation and identity management.
- Fixed issues related to deleting related versions of custom Anaconda parcels, the default admin role (ae-admin), using special characters with AE Ops Center accounts/passwords, the Administrator Console link in the menu, and more.
- Added a command to remove channel permissions.

User-facing changes

- Improvements to the collaborative workflow, such as notification of changes made to a project, the ability to pull changes, and resolving conflicting changes when saving or pulling changes into a project.
- Additional documentation and examples for connecting to remote data and compute sources: Spark, Hive, Impala, and HDFS.
- Optimized startup time for Spark and SAS project templates.
- Improved initial startup time for project creation, sessions, and deployments by pre-pulling images after installation.
- Increased the project upload limit from 100 MB to 1 GB.
- Added the capability to sudo yum install system packages from within project sessions.
- Fixed the R kernel in the R project template, and issues related to loading sparklyr in Spark projects, displaying kernel names, and Spark project icons.
- Improved performance when rendering large numbers of projects, packages, etc.
- Improved rendering of long version names in environments and projects.
- Full names are now rendered when sharing projects and deployments with collaborators.

Read more on these and other changes in the Anaconda Enterprise documentation.

Google open sources DeepLab-v3+: A model for Semantic Image Segmentation using TensorFlow

Savia Lobo
13 Mar 2018
2 min read
DeepLab-v3+, Google’s latest and best-performing semantic image segmentation model, is now open sourced! DeepLab is a state-of-the-art deep learning model for semantic image segmentation, where the goal is to assign semantic labels (e.g., person, dog, cat, and so on) to every pixel in the input image. Assigning these semantic labels imposes much stricter localization accuracy requirements than other visual entity recognition tasks, such as image-level classification or bounding-box-level detection. Examples of semantic image segmentation in products include the synthetic shallow depth-of-field effect shipped in the portrait mode of the Pixel 2 and Pixel 2 XL smartphones, and mobile real-time video segmentation. DeepLab-v3+ is implemented in TensorFlow and has its models built on top of a powerful convolutional neural network (CNN) backbone architecture for the most accurate results, intended for server-side deployment. (Source: Google Research blog)

Let’s have a look at some of the highlights of DeepLab-v3+:

- Google has extended DeepLab-v3 to include a simple yet effective decoder module to refine the segmentation results, especially along object boundaries. In this encoder-decoder structure, one can arbitrarily control the resolution of the extracted encoder features by atrous convolution to trade off precision against runtime.
- Google has also shared its TensorFlow model training and evaluation code, along with models already pre-trained on the Pascal VOC 2012 and Cityscapes benchmark semantic segmentation tasks.
- This version also adopts two network backbones, MobileNetv2 and Xception. MobileNetv2 is a fast network structure designed for mobile devices, while Xception is a powerful network structure intended for server-side deployment.

You can read more about this announcement on the Google Research blog.
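The atrous (dilated) convolution mentioned above enlarges a filter's receptive field by sampling the input with gaps, without adding parameters. A one-dimensional pure-Python sketch of the idea (illustrative of the technique, not DeepLab's implementation):

```python
def atrous_conv1d(signal, kernel, rate):
    """1-D atrous convolution: apply `kernel` to `signal`, sampling input
    values `rate` steps apart. rate=1 is ordinary valid-padding convolution."""
    k = len(kernel)
    span = (k - 1) * rate + 1  # receptive field width grows with the rate
    return [
        sum(kernel[j] * signal[i + j * rate] for j in range(k))
        for i in range(len(signal) - span + 1)
    ]
```

With kernel [1, 1] and rate 2, each output sums inputs two steps apart: the same two-tap filter now covers a window of three samples, which is exactly the precision/runtime dial the encoder uses.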

Microsoft adds artificial intelligence features to Microsoft Teams as it turns one

Sugandha Lahoti
13 Mar 2018
2 min read
Microsoft has announced several new enhancements to Microsoft Teams, centered around artificial intelligence, on its first anniversary. Microsoft Teams is a chat-based workspace that consolidates all the people, content, and tools a team needs to be engaged and effective.

One of the major features in this AI-centric rollout is integration with the Cortana virtual assistant. Cortana voice interactions for Teams-enabled devices will allow workers to use spoken commands in Microsoft Teams. The initial voice controls will make it possible to join a call, start a new one, and add colleagues to a teleconference already underway. Microsoft said the Cortana integration will work not just in the native interface but also with compatible devices such as conference phones.

Other machine learning features include:

- Proximity detection for Teams meetings: makes it easy for workers to discover and add a nearby, available Skype Room System to any meeting.
- Cloud recording: one-click meeting recordings with automatic transcription and time coding, giving all team members the ability to read captions, search within the conversation, and play back all or part of the meeting. It may also include facial recognition capabilities, so remarks can be attributed to specific meeting attendees.
- Message translation and transcription: speakers of different languages will be able to communicate fluidly with one another by translating posts in channels and chat.
- Mobile sharing in meetings: meeting attendees will be able to share a live video stream, photos, or the screen from their mobile device.

The new features will begin rolling out in the second quarter. More information on the new Microsoft Teams release is available on the official Microsoft blog.

Data Science News Daily Roundup – 13th March 2018

Packt Editorial Staff
13 Mar 2018
2 min read
Microsoft Teams to get AI features, Google’s semantic image segmentation model DeepLab-v3+, the 3rd Annual Postgres Vision Conference, and more in today’s top stories around machine learning, deep learning, and data science news.

Top Data Science News Stories of the Day

- Microsoft adds artificial intelligence features to Microsoft Teams as it turns one.
- Google open sources DeepLab: a model for semantic image segmentation using TensorFlow.

Other Data Science News at a Glance

- The 3rd Annual Postgres Vision Conference will take place June 5-6, 2018, at the Royal Sonesta Hotel on the Charles River. The conference assembles innovators in open source data management. Read more on PR Newswire.
- SRAX’s blockchain identification graph platform, BIG, announced the release of the alpha version of its consumer data management and distribution system to a limited, invitation-only group of users. Read more on PR Newswire.
- NXG Logic introduces two new Windows-based products: the Explorer package for machine learning and statistical analysis, and the Instructor package for generating biostatistical learning and teaching materials. Read more on Digital Journal.
- Evernote is to launch Spaces, a note-taking app for easier collaboration, which uses AI to deliver better search results and suggest relevant tasks. Read more on Engadget.
- Thomson Reuters has launched version 3.0 of its MarketPsych Indices (TRMI). This includes its first sentiment data feed for Bitcoin, in addition to enhanced market sentiment data for several asset classes, new user capabilities, and additional coverage. Read more on Finextra.

How to improve interpretability of machine learning systems

Sugandha Lahoti
12 Mar 2018
6 min read
Advances in machine learning have greatly improved products, processes, and research, and how people interact with computers. One capability machine learning systems still lack is the ability to explain their predictions. The inability to properly explain results leads end users to lose trust in the system, which ultimately acts as a barrier to the adoption of machine learning. Hence, alongside the impressive results from machine learning, it is also important to understand why and where it works, and when it won’t. In this article, we will talk about some ways to increase machine learning interpretability and make predictions from machine learning models understandable.

3 interesting methods for interpreting machine learning predictions

According to Miller, interpretability is the degree to which a human can understand the cause of a decision. Interpretable predictions lead to better trust and provide insight into how the model may be improved. The machine learning developments happening at present rely on complex models, which lack interpretability. Simpler models (e.g. linear models), on the other hand, often give a correct interpretation of a prediction model’s output, but they are often less accurate than complex models, creating a tension between accuracy and interpretability. Complex models are less interpretable because their relationships generally cannot be concisely summarized. However, if we focus on a prediction made for a particular sample, we can describe the relationships more easily. Balancing the trade-off between model complexity and interpretability lies at the heart of research on interpretable deep learning and machine learning models. We will discuss a few methods that increase the interpretability of complex ML models by summarizing model behavior with respect to a single prediction.
- LIME, or Local Interpretable Model-Agnostic Explanations, is a method developed in the paper “Why Should I Trust You?” for interpreting individual model predictions by locally approximating the model around a given prediction. LIME uses two approaches to explain specific predictions: perturbation and linear approximation. With perturbation, LIME takes a prediction that requires explanation and systematically perturbs its inputs. These perturbed inputs become new, labeled training data for a simpler approximate model. LIME then performs local linear approximation by fitting a linear model to describe the relationships between the (perturbed) inputs and outputs. A simple linear model thus approximates the more complex, nonlinear function in the neighborhood of the prediction.
- DeepLIFT (Deep Learning Important FeaTures) is a recursive prediction explanation method for deep learning. It decomposes the output prediction of a neural network on a specific input by backpropagating the contributions of all neurons in the network to every feature of the input. DeepLIFT assigns contribution scores based on the difference between the activation of each neuron and its ‘reference activation’. By optionally giving separate consideration to positive and negative contributions, DeepLIFT can also reveal dependencies that are missed by other approaches.
- Layer-wise relevance propagation is another method for interpreting the predictions of deep learning models. It determines which features in a particular input vector contribute most strongly to a neural network’s output, and defines a set of constraints from which a number of different relevance propagation functions can be derived.

Thus we saw three different ways of summarizing model behavior around a single prediction to increase model interpretability. Another important avenue for interpreting machine learning models is to understand (and rethink) generalization.
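LIME's perturb-then-fit idea can be sketched for a one-dimensional input in pure Python (a toy: real LIME perturbs many features and weights samples by proximity to the original input): perturb the input around the point of interest, query the black-box model, and fit a local linear slope by ordinary least squares.

```python
import random

def local_slope(black_box, x0, n_samples=200, radius=0.5, seed=0):
    """Approximate a black-box model near x0 by the slope of a linear
    fit to its predictions on randomly perturbed inputs."""
    rng = random.Random(seed)
    xs = [x0 + rng.uniform(-radius, radius) for _ in range(n_samples)]
    ys = [black_box(x) for x in xs]               # query the model
    mean_x = sum(xs) / n_samples
    mean_y = sum(ys) / n_samples
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    return cov / var  # local linear explanation of the model around x0

# Near x0 = 3, the nonlinear model x**2 behaves like a line of slope ~6.
slope = local_slope(lambda x: x * x, 3.0)
```

The fitted slope is the "explanation": it says how the complex model responds to the input in this neighborhood, even though globally it is nonlinear.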
What is generalization and how does it affect machine learning interpretability?

Machine learning algorithms are trained on certain datasets, called training sets. During training, a model learns intrinsic patterns in the data and updates its internal parameters to better understand it. Once training is over, the model is tried on test data to predict results based on what it has learned. In an ideal scenario, the model would always accurately predict the results for the test data. In reality, the model identifies the relevant information in the training data, but sometimes fails when presented with new data. This difference between “training error” and “test error” is called the generalization error.

The ultimate aim of turning a machine learning system into a scalable product is generalization. Every task in ML aims to create a generalized algorithm that acts in the same way for all kinds of distributions. The ability to distinguish models that generalize well from those that do not will not only help make ML models more interpretable, but might also lead to more principled and reliable model architecture design. According to conventional statistical theory, small generalization error is due either to properties of the model family or to the regularization techniques used during training. A recent paper at ICLR 2017, “Understanding deep learning requires rethinking generalization”, shows that current theoretical frameworks fail to explain the impressive results of deep learning approaches, and why understanding deep learning requires rethinking generalization. The authors support their findings through extensive systematic experiments.

Developing human understanding through visualizing ML models

Interpretability also means creating models that support human understanding of machine learning.
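The gap between training error and test error can be made concrete with an extreme toy case (an illustration, not a method from the article): a "model" that memorizes its training set achieves zero training error yet fails on anything unseen.

```python
def memorizer(training_pairs, default=0.0):
    """A 'model' that memorizes (input, label) pairs and predicts the
    stored label, or a default value for inputs it has never seen."""
    table = dict(training_pairs)
    return lambda x: table.get(x, default)

def mean_abs_error(model, pairs):
    return sum(abs(model(x) - y) for x, y in pairs) / len(pairs)

train_set = [(x, 2 * x) for x in range(5)]     # true relation: y = 2x
test_set = [(x, 2 * x) for x in range(5, 10)]  # unseen inputs

model = memorizer(train_set)
train_error = mean_abs_error(model, train_set)  # 0.0: perfect recall
test_error = mean_abs_error(model, test_set)    # large: no generalization
```

The generalization error here is the whole test error: the model captured the training data exactly but none of the underlying pattern, which is precisely the failure mode the rethinking-generalization literature probes in deep networks.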
Human interpretation is enhanced when visual and interactive diagrams and figures are used to explain the results of ML models. This is why a tight interplay of UX design with machine learning is essential for increasing machine learning interpretability. Walking along the lines of human-centered machine learning, researchers at Google, OpenAI, DeepMind, YC Research, and others have come up with Distill. This open science journal features articles with clear expositions of machine learning concepts using excellent interactive visualization tools. Most of these articles are aimed at understanding the inner workings of various machine learning techniques. Some of these include:

- An article on attention and Augmented Recurrent Neural Networks, which has a beautiful visualization of attention distribution in RNNs.
- Another on feature visualization, which talks about how neural networks build up their understanding of images.

Google has also launched the PAIR initiative to study and design the most effective ways for people to interact with AI systems. It helps researchers understand ML systems through work on interpretability, and expands the community of developers. R2D3 is another website, which provides an excellent visual introduction to machine learning. Facets is another tool for visualizing and understanding training datasets, providing a human-centered approach to ML engineering.

Conclusion

Human-centered machine learning is all about increasing the interpretability of ML systems and developing human understanding of them. It is about ML and AI systems understanding how humans reason, communicate, and collaborate. As algorithms are used to make decisions in more corners of everyday life, it’s important for data scientists to train them thoughtfully to ensure the models make decisions for the right reasons.
As more progress is made in this area, ML systems will stop making commonsense errors, violating user expectations, or placing themselves in situations that can lead to conflict and harm, making such systems safer to use. As research continues, machines may soon be able to fully explain their decisions and results in a human-understandable way.

FAE (Fast Adaptation Engine): iOlite's tool to write Smart Contracts using machine translation

Savia Lobo
12 Mar 2018
2 min read
iOlite Labs has developed the FAE (Fast Adaptation Engine), a machine-translation engine in the spirit of Google Translate. The engine quickly adapts to any known language as its input and outputs results in the user's desired programming language. At present, the iOlite Labs team is focusing on the huge demand for smart contract development by targeting Solidity on the Ethereum blockchain. This means iOlite is set to dissolve existing technical learning boundaries: programmers can write smart contracts using their existing skills in languages such as Python, C, and JavaScript, while non-programmers can write smart contracts in natural languages such as English.

Although the engine is free to use, it encourages collaboration between intermediate programmers and expert developers in two ways: by auditing the writing process of an author's smart contract, and by developing and optimizing features. Developers receive small fees in the form of iLT tokens each time they audit a smart contract or when a feature they have built is used. This covers two of the three actors in the ecosystem: regular users (authors or customers) and contributors (developers/auditors).

Currently, iOlite is focused on smart contracts, entering the market through that niche, but many other applications are possible, including insurance underwriting, legal services, financial services, and business automation. As a collective macro-system, iOlite is a knowledge generator: it inherently lets the best features win through market forces. As the project advances, it aims to tackle further language-system problems, such as formal languages in mathematics, and perhaps even bridge the gap between natural and formal definitions in fields like neuropsychology.

Read more about this tool, along with real-world examples, in iOlite's whitepaper.
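iOlite's FAE itself is proprietary, but the core idea it describes, mapping a natural-language sentence onto a smart-contract template, can be illustrated with a toy, hand-written rule. The following Python sketch is purely hypothetical (the pattern, template, and contract are inventions for illustration, not FAE's actual output):

```python
import re

# A single hard-coded English pattern mapped to a Solidity template.
# Real translation engines learn many such mappings; this is one toy rule.
TEMPLATE = """pragma solidity ^0.4.21;

contract Transfer {{
    function pay() public payable {{
        require(msg.value == {amount} ether);
        {recipient}.transfer(msg.value);
    }}
}}"""

def translate(sentence: str) -> str:
    """Match one English pattern and emit Solidity source code."""
    m = re.match(r"send (\d+) ether to (0x[0-9a-fA-F]{40})", sentence)
    if m is None:
        raise ValueError("sentence not covered by any rule")
    return TEMPLATE.format(amount=m.group(1), recipient=m.group(2))

print(translate("send 5 ether to 0x" + "ab" * 20))
```

A production engine would replace the single regex with learned mappings and have experts audit the generated contract, which is where iOlite's contributor incentives come in.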

Sugandha Lahoti
12 Mar 2018
2 min read

Crypto-ML, a machine learning powered cryptocurrency platform

Crypto-ML is a machine learning powered price-prediction service for cryptocurrency traders. It currently supports Bitcoin, Litecoin, and Bitcoin Cash trading.

Putting individuals on a level with enterprises

Individuals have had to rely on outdated, speculative technical indicators, whereas enterprises have sophisticated machine-learning technologies at their disposal to enhance their trading results. Outdated methods have reduced reliability due to human error, emotional inputs, selection bias, lag, and changing market dynamics. Crypto-ML focuses on bringing newer machine learning technology to individuals, helping to level the playing field.

How does Crypto-ML work?

Traditional technical indicators generally provide mediocre results, particularly in crypto markets, which are often hectic. They are also subject to interpretation, can conflict with one another, and often lag rather than predict. Crypto-ML instead uses vast data sets to build proprietary models for predicting future price movement, using machine learning to generate triggers, or signals, in three categories: buy, sell, or hold. These signals come from an end-to-end, systematic machine-learning pipeline. Crypto-ML has historically opened an average of 12 trades per year (24 buy/sell signals), though since the models are continuously optimized, the frequency of triggers may change.

The use of ML algorithms eliminates human emotion and error. Moreover, as the crypto markets undergo constant change and flux, the Crypto-ML models are trained and evaluated every day. However, the service predicts future outcomes from past data; changes in market conditions, including, but not limited to, micro, macro, and global conditions, may invalidate existing models or cause exceptions on one or more days. The team at Crypto-ML therefore says the tool is to be "used for informational purposes only. Each individual has unique risk tolerances and is responsible for their own investment decisions. It provides no warranties of any kind."

Crypto-ML's early-access service is currently $19 per month, on a month-to-month membership with no commitment; users can cancel their account from the membership site at any time. To know more, visit the Crypto-ML official website.
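Crypto-ML's models are proprietary, but the general shape of a signal generator, price history in, one of three labels (buy / hold / sell) out, can be sketched with a simple moving-average crossover standing in for a trained model. Everything below (window sizes, the 1% band) is an illustrative assumption, not Crypto-ML's method:

```python
def moving_average(prices, window):
    """Mean of the most recent `window` prices."""
    return sum(prices[-window:]) / window

def signal(prices, short=3, long=6):
    """Emit 'buy' when the short-term average is clearly above the
    long-term one, 'sell' when clearly below, else 'hold'."""
    if len(prices) < long:
        return "hold"  # not enough history yet
    fast = moving_average(prices, short)
    slow = moving_average(prices, long)
    if fast > slow * 1.01:   # 1% band to avoid whipsaw triggers
        return "buy"
    if fast < slow * 0.99:
        return "sell"
    return "hold"

print(signal([100, 101, 103, 106, 110, 115]))  # rising prices -> buy
```

A real system would replace the crossover with a model retrained daily, exactly the weakness the article notes: any predictor built on past data can be invalidated by a change in market regime.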
Packt Editorial Staff
12 Mar 2018
2 min read

Data Science News Daily Roundup – 12th March 2018

Pydbgen lightweight Python library, Crypto-ML's machine learning powered cryptocurrency platform, and more in today's top stories around machine learning, deep learning, and data science news.

Top Data science News Stories of the Day

FAE (Fast Adaptation Engine): iOlite's tool to write smart contracts using machine translation

Crypto-ML, a machine learning powered cryptocurrency platform

Other Data Science News at a Glance

1. Big data market to hit $103B by 2027; services are key, say analysts. Read more on SiliconANGLE.

2. InfoSum Ltd., a British analytics startup, launched a new platform that enables companies to complete the work in as little as a few minutes by automating key steps. Read more on SiliconANGLE.

3. Introducing Pydbgen, a lightweight Python library. It generates random useful entries (e.g. name, address, credit card number) and saves them in a Pandas dataframe object, as a SQLite table in a database file, or in an MS Excel file. Read more on Medium.

4. MyRocks Storage Engine in MariaDB is now a release candidate with last week's release of MariaDB Server 10.3.5 (RC). The MyRocks storage engine was introduced in MariaDB Server 10.2 as an alpha plugin. Note that the maturity of plugins is tracked separately from the database. Read more on the MariaDB blog.
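The Pydbgen item above describes fabricating plausible records and persisting them. Pydbgen's own API is not shown here; as a minimal stand-in, the same idea can be sketched with only the standard library, writing random records into a SQLite table (the name/street pools and schema are invented for illustration):

```python
import random
import sqlite3

FIRST = ["Alice", "Bob", "Carol", "Dave"]
LAST = ["Smith", "Jones", "Patel", "Garcia"]

def fake_record(rng):
    """Fabricate one (name, address, card-number) tuple."""
    name = f"{rng.choice(FIRST)} {rng.choice(LAST)}"
    street = f"{rng.randint(1, 999)} Main St"
    card = " ".join(str(rng.randint(1000, 9999)) for _ in range(4))
    return (name, street, card)

def build_table(n, seed=0):
    """Insert n fabricated records into an in-memory SQLite table."""
    rng = random.Random(seed)  # seeded for reproducibility
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE people (name TEXT, address TEXT, card TEXT)")
    conn.executemany("INSERT INTO people VALUES (?, ?, ?)",
                     (fake_record(rng) for _ in range(n)))
    return conn

conn = build_table(5)
print(conn.execute("SELECT COUNT(*) FROM people").fetchone()[0])  # 5
```

Swapping `:memory:` for a filename yields the on-disk database file the article mentions; Pydbgen additionally targets Pandas dataframes and Excel output.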

Packt Editorial Staff
09 Mar 2018
2 min read

Data Science News Daily Roundup – 9th March 2018

Microsoft's updates to Azure services, Snips NLU, Google's AstroNet, and more in today's top stories around machine learning, deep learning, and data science news.

Top Data science News Stories of the Day

Microsoft updates Azure services for SQL Server, MySQL, and PostgreSQL.

Snips open sources Snips NLU, its Natural Language Understanding engine.

Other Data Science News at a Glance

1. Google open sources AstroNet, a neural network for identifying exoplanets in light curves. This code was used to find two exoplanets by training a neural network to analyze data from NASA's Kepler space telescope and then accurately identify the most promising planet signals. Read more on the Google research blog.

2. A new database, VeritasDB, has been launched. It is a key-value store that guarantees data integrity to the client even in the presence of exploits or implementation bugs in the database server. Read more on the Cryptology ePrint Archive.

3. Anexinet has announced the release of ListenLogic 3.0 with AI and ensemble machine learning capabilities, including advanced topic extraction using AI and machine learning, natural language processing, and regex classifiers. Read more on KMWorld.

4. Baidu announced that it will launch a quantum computing institute, led by Runyao Duan, a professor at the University of Technology Sydney, with the aim of building devices that can be used in other parts of the business within the next five years. Read more on MIT Technology Review.

5. MariaDB MaxScale 2.2 is now generally available. New capabilities include self-healing automation, high availability, hardened database security, and more. Read more on the MariaDB blog.
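The VeritasDB item above describes integrity guarantees despite an untrusted server; the paper's actual design is more involved (it uses a trusted proxy with authenticated data structures), but the underlying principle can be sketched in a few lines: the client keeps a small trusted digest per key, so a tampered value returned by the server is detected on read. The class and store below are illustrative inventions:

```python
import hashlib

class VerifyingClient:
    """Toy client that detects tampering by an untrusted key-value store."""

    def __init__(self, server_store):
        self.server = server_store   # untrusted storage (e.g. a remote DB)
        self.digests = {}            # trusted client-side state

    def put(self, key, value):
        self.server[key] = value
        self.digests[key] = hashlib.sha256(value.encode()).hexdigest()

    def get(self, key):
        value = self.server[key]
        # Recompute the digest and compare with the trusted copy.
        if hashlib.sha256(value.encode()).hexdigest() != self.digests[key]:
            raise RuntimeError(f"integrity check failed for {key!r}")
        return value

store = {}
client = VerifyingClient(store)
client.put("balance", "100")
store["balance"] = "999"         # server-side tampering
try:
    client.get("balance")
except RuntimeError as e:
    print(e)
```

Keeping one digest per key does not scale to large stores, which is why systems like VeritasDB use tree-structured authentication so the trusted state stays small.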