
Tech News - Data

MySQL 8.0 is generally available with added features

Amey Varangaonkar
23 Apr 2018
2 min read
The long-awaited general release of MySQL 8.0 was finally announced last week. MySQL, as we all know, is the world's most popular open source database. Despite the growing adoption of NoSQL databases, MySQL continues to be widely used across the industry. The new features introduced in version 8.0 bring significant upgrades in performance, security, database development, and administration.

What's new in MySQL 8.0 server?

Let's take a quick overview of the new features and additions introduced in MySQL 8.0, and how they may affect developers and DBAs:

- SQL window functions, including major enhancements to table expressions, indexes, regular expressions, and more
- New JSON functions and performance enhancements for working with JSON values
- GIS support, which means MySQL 8.0 can now handle geographic data efficiently; spatial data types, indexes, and functions have been introduced
- Better reliability, with DDL statements becoming atomic and crash-safe
- Enhancements to InnoDB, so metadata is now stored more securely and is easier to work with
- Significant enhancements to the performance schema, configuration variables, and error logging
- New security enhancements, with improvements to OpenSSL, SQL roles, and changes to authentication and privileges
- Performance improvements, with InnoDB now handling read/write workloads better and optimizing resources more effectively

There are a lot more enhancements to the MySQL database, covering replication, MySQL Shell, and the different DevAPI-based connectors. To learn about the newly added features in MySQL 8.0 in detail, you can check out the official blog page. Download the 8.0 release to try the new features, and consult the official MySQL 8.0 documentation to upgrade an existing installation from a previous version.
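Window functions compute an aggregate over a partition of rows while keeping every row in the result, rather than collapsing them. As a rough illustration of what a query like `SUM(amount) OVER (PARTITION BY account ORDER BY day)` produces, here is a minimal pure-Python sketch (the table and column names are hypothetical, chosen only for this example):

```python
# Minimal sketch of a SQL window function: a per-partition running total,
# analogous to SUM(amount) OVER (PARTITION BY account ORDER BY day).
# Table and column names are hypothetical, for illustration only.

def running_total_per_partition(rows, partition_key, order_key, value_key):
    """Return rows annotated with a per-partition running total."""
    out = []
    totals = {}
    for row in sorted(rows, key=lambda r: (r[partition_key], r[order_key])):
        key = row[partition_key]
        totals[key] = totals.get(key, 0) + row[value_key]
        out.append({**row, "running_total": totals[key]})
    return out

rows = [
    {"account": "a", "day": 1, "amount": 10},
    {"account": "a", "day": 2, "amount": 5},
    {"account": "b", "day": 1, "amount": 7},
]
result = running_total_per_partition(rows, "account", "day", "amount")
# account "a" accumulates 10 then 15; account "b" stays at 7
```

Unlike a `GROUP BY`, every input row appears in the output, each carrying its partition's running aggregate.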
Read More:
- Top 10 MySQL 8 performance benchmarking aspects to know
- 12 most common MySQL errors you should be aware of
Read more
  • 0
  • 0
  • 2543


Data science news bulletin - Monday 23 April

Richard Gall
23 Apr 2018
2 min read
Welcome to the new week. Here is the data science bulletin with the latest data science news and software releases. The new TensorFlow is here (not long after the last version...), there's news of Apple open sourcing FoundationDB, and another Blockchain product from one of tech's biggest organizations.

Data science news from the Packt Hub
- JupyterLab v0.32.0 releases.

Data science news from across the web

Apple has open sourced FoundationDB. Apple purchased the database company in 2015, with a strategic goal of making FoundationDB "the foundation of the next generation of distributed databases." The team went on to explain: "The vision of FoundationDB is to start with a simple, powerful core and extend it through the addition of 'layers'. The key-value store, which is open sourced today, is the core, focused on incorporating only features that aren't possible to write in layers. Layers extend that core by adding features to model specific types of data and handle their access patterns." You can now access the FoundationDB source code on GitHub.

AWS has announced a new Blockchain product. Blockchain Templates makes setting up cryptocurrency networks easier for app developers working on AWS. The announcement is an important move for Amazon as it seeks to compete with the likes of Oracle and IBM, who have also recently made plays in the Blockchain space.

New software releases and updates
- SciPy 1.1.0rc1 has been released.
- TensorFlow 1.8.0-rc1 has been released. We reported last week that a new release was going to drop, and here it is. Clearly the TensorFlow community has been busy...
- OpenAI releases evolved policy gradients, a metalearning algorithm that "evolves the loss function of learning agents, which can enable fast training on novel tasks."
- MySQL 8.0 is now out on general availability. The new version of MySQL contains a range of interesting updates that are going to help consolidate its position in the database world. https://mysqlserverteam.com/whats-new-in-mysql-8-0-generally-available/
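The "layers" design that the FoundationDB team describes — a small ordered key-value core, with richer data models built entirely on top of it — can be sketched in a few lines. This is an illustrative toy, not FoundationDB's actual API:

```python
# Toy illustration of the "layers" architecture described above: a minimal
# key-value core, plus a higher-level table model built purely on top of it.
# This is NOT FoundationDB's API, just a sketch of the idea.

class KVCore:
    """The simple, powerful core: get/set/range over string keys."""
    def __init__(self):
        self._data = {}

    def set(self, key, value):
        self._data[key] = value

    def get(self, key):
        return self._data.get(key)

    def range(self, prefix):
        """Ordered scan of all keys sharing a prefix."""
        return [(k, v) for k, v in sorted(self._data.items())
                if k.startswith(prefix)]

class TableLayer:
    """A 'layer': models rows of a named table using only key prefixes."""
    def __init__(self, core, table):
        self.core, self.prefix = core, f"table/{table}/"

    def insert(self, row_id, value):
        self.core.set(self.prefix + row_id, value)

    def scan(self):
        return [v for _, v in self.core.range(self.prefix)]

core = KVCore()
users = TableLayer(core, "users")
users.insert("1", "alice")
users.insert("2", "bob")
# users.scan() returns the rows in key order
```

The layer adds a data model (tables and rows) without the core knowing anything about it, which is the separation of concerns the FoundationDB team is advocating.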


JupyterLab v0.32.0 releases

Pravin Dhandre
19 Apr 2018
2 min read
JupyterLab has announced another beta release, v0.32.0, with numerous breaking changes alongside enhancements and bug fixes. This announcement follows closely on the heels of the initial JupyterLab beta release announcement made just two months ago. JupyterLab is steadily approaching its 1.0 release, with exciting components and features such as the notebook, terminal, text editor, a powerful UI, and various third-party extensions. With the Jupyter team putting their entire focus on this project, the full and final v1.0 release is expected by June-July of this year. Let's have a quick look at what's new in this round of release.

Major features and improvements

New feature additions
- Better provision for handling corrupted and invalid state databases.
- A new option to save documents automatically.
- More commands for scrolling and kernel restart in the notebook context menu.
- Proactive checking for completion metadata from kernels.
- A new, separate "Shutdown all" button in the Running panel for Terminals/Notebooks.
- Support for rotating, flipping, and inverting images in the image viewer.
- The kernel banner is now displayed in the console during kernel restart.

Improvements
- Performance improvements: non-focused documents poll the server less.
- Performance improvements for rendering text streams, especially around progress bars.
- Major performance improvements for viewing large CSV files.
- The context menu is always visible in the file browser, even for an empty directory.
- Asynchronous comm messages in the services library are handled more correctly.

Bug fixes and miscellaneous changes
- Fixed the file dirty status indicator.
- Changed the keyboard shortcut for single-document mode.
- The "Restart Kernel" cancellation task functions correctly.
- Fixed UI with better error handling.

You can download the source code to access all the exciting features of JupyterLab v0.32.0.


Data science news bulletin - Monday 16 April

Richard Gall
16 Apr 2018
3 min read
Welcome to the new week! Here is the data science news you need to know. There's a theme emerging this week, with machine learning popping up in new areas such as IoT and security. This is something we're probably going to see more of in the weeks and months to come...

Data science news from the Packt Hub
- MongoDB goes relational with 4.0 release
- TensorFlow 1.8.0 has just been released in beta
- Amazon SageMaker is going to make machine learning on the cloud easier

Data science news from across the web

The UK's House of Lords report on AI raises concerns around ethics and monopolization.

Symantec has opened its machine learning tools - "Targeted Attack Analytics" - to the public. The cybersecurity company has been using machine learning in its research into state-sponsored cybercrime, and believes it to be incredibly effective in identifying and tackling threats. Algorithms capable of processing a huge range of behaviors are used to identify suspect activity. One of the researchers behind the project claimed that it helped them identify threats they had never seen before - clearly, this could be big news for many companies battling cybercrime.

NeoPulse Framework 2.0 launched in a bid to make artificial intelligence more accessible. Created by AI startup Dimensional Mechanics, NeoPulse Framework 2.0 is a toolkit that allows developers - and others - to solve business problems using artificial intelligence and machine learning with relative ease. Even for those with AI experience and knowledge, the company suggests that NeoPulse Framework 2.0 will "reduce the amount of code required to build AI models by up to an amazing 85%."

Splunk gets on board with industrial IoT. The big data analytics company has announced Splunk Industrial Asset Intelligence, a tool that will allow businesses to better process and analyze IoT data. It will be formally announced on April 23 at the Hannover Messe conference and made available as a limited release; it's expected to be opened up later this year.

Nvidia and Arm partner to enter the IoT marketplace. Nvidia's Deep Learning Accelerator (NVDLA) is going to be integrated into Arm's Project Trillium platform. This will make it easier to embed deep learning within IoT systems.

New software releases and updates
- pgAdmin 4 v3.0 released.
- EdgeDB is slated for release very soon. "In the next few weeks we will release the first public technology preview of EdgeDB," say the team behind it on their blog. EdgeDB describes itself as a 'new generation of database' that aims to solve the "inconsistency between relational databases and modern programming."
- OmniDB 2.7 has been released. 2ndQuadrant, the PostgreSQL company, has released an update to its browser-based database management tool, offering improved debugging and security features.


TensorFlow 1.8.0-rc0 releases

Pravin Dhandre
16 Apr 2018
2 min read
Just 15 days after TensorFlow's major 1.7.0 release, the TensorFlow community is energetically preparing to ship 1.8.0 in the coming days. The team has announced the release candidate 1.8.0-rc0 with numerous exciting features and bug fixes. This newer version pays particular attention to GPU memory support, running on multiple GPUs, and cloud performance.

Major features and improvements in TensorFlow 1.8.0-rc0:
- Adds Gradient Boosted Trees as pre-made Estimators: BoostedTreesClassifier and BoostedTreesRegressor.
- Adds a 3rd-generation pipeline config for Cloud TPUs for performance and usability improvements.
- Support for running an Estimator model on multiple GPUs by passing tf.contrib.distribute.MirroredStrategy() to tf.estimator.RunConfig().
- Support for prefetching to GPU memory using tf.contrib.data.prefetch_to_device().
- Moves Bayesian computation (tf.contrib.bayesflow) to its own dedicated repository.
- Allows generic proto parsing and RPC communication with tf.contrib.{proto,rpc}.

Bug fixes and other changes in TensorFlow 1.8.0-rc0:
- Enabled support for prefetching dataset elements to GPU memory with tf.contrib.data.prefetch_to_device.
- Allows automatic tuning of prefetch buffer sizes with tf.contrib.data.AUTOTUNE.
- Added support for building datasets from CSV files with tf.contrib.data.make_csv_dataset.
- Iterators can be created with both Dataset.__iter__() and Dataset.make_one_shot_iterator() in eager execution mode.
- Enabled automatic device placement.
- tf.GradientTape moved out of contrib.
- Added new data preprocessing functions and the Fashion-MNIST dataset to tf.keras.
- Accelerated Linear Algebra (XLA) updates.
- Allows exclusion of nodes in tensor-filter operations.
- Fixed spurious background colors in text terminals.
- Fixed batch dimension reshaping with BatchReshape.
- Explicit gradient checkpointing on TPU with tf.contrib.layers.recompute_grad now works.

For the complete list of bug fixes and improvements, you can read TensorFlow's GitHub page.

Miscellaneous changes:
- Easier calling into the TensorFlow C API.
- Descriptions of shapes and a pointer added to the tf.distributions.Distribution tutorial notebook.
- Scatter operations extended with tf.scatter_min and tf.scatter_max.
- cuDNN RNN operations moved into the TensorFlow codebase.
- Added float64 support for Conv2d, Conv2dBackpropInput, Conv2dBackpropFilter, and AvgPool/AvgPoolGrad.
- Graph name scope made thread-local for multi-threaded environments.
- Updated the nsync synchronization library to avoid slow primitives on Linux.
- Fixed non-uniformity of orthogonal matrices.
- Multi-image Estimator eval summaries now display correctly.

You can download the source code to access all the exciting features of TensorFlow 1.8.0-rc0.
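The idea behind prefetch_to_device — staging the next batch in the background so the consumer never waits on data preparation — can be illustrated with a small stdlib-only sketch. This mimics the concept, not TensorFlow's actual implementation:

```python
# Stdlib sketch of the prefetching idea behind tf.contrib.data.prefetch_to_device:
# a background thread stages upcoming batches into a bounded buffer so the
# consumer overlaps data preparation with computation. Illustrative only.
import queue
import threading

def prefetch(iterable, buffer_size=2):
    """Yield items from `iterable`, preparing them ahead in a background thread."""
    buf = queue.Queue(maxsize=buffer_size)
    done = object()  # sentinel marking the end of the stream

    def producer():
        for item in iterable:
            buf.put(item)  # blocks when the buffer is full (bounded staging)
        buf.put(done)

    threading.Thread(target=producer, daemon=True).start()
    while True:
        item = buf.get()
        if item is done:
            return
        yield item

batches = ([i, i + 1] for i in range(3))
out = list(prefetch(batches))
# out == [[0, 1], [1, 2], [2, 3]]
```

The bounded queue is the key design choice: it caps memory use while still letting production run ahead of consumption, which is exactly the trade-off a device-side prefetch buffer makes.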


MongoDB going relational with 4.0 release

Amey Varangaonkar
16 Apr 2018
2 min read
MongoDB is, without a doubt, the most popular NoSQL database today. Per the Stack Overflow Developer Survey, more developers have wanted to work with MongoDB than with any other database over the last two years. With the upcoming MongoDB 4.0 release, it plans to up the ante by adding support for multi-document transactions and ACID (Atomicity, Consistency, Isolation, Durability) guarantees.

Poised to be released this summer, MongoDB 4.0 will combine the speed, flexibility, and efficiency of the document model - the features that make MongoDB such a great database to use - with the assurance of transactional integrity. This new addition should give the database a more relational feel and would suit large applications with high data-integrity needs, regardless of how the data is modeled. MongoDB has also ensured that support for multi-document transactions will not affect the speed and performance of unrelated workloads running concurrently.

MongoDB has been working on this transactional integrity feature for over three years now, ever since it incorporated the WiredTiger storage engine. The 4.0 release should also see the introduction of other important features such as snapshot isolation, a consistent view of data, the ability to roll back transactions, and other ACID features. Per the 4.0 product roadmap, 85% of the work is already done, and the release seems on track to hit the market on time.

You can read more about the announcement on MongoDB's official page. You can also join the beta program to test out the newly added features in 4.0.
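A multi-document transaction means a group of writes either all become visible or none do. A minimal sketch of that all-or-nothing behavior (illustrative pure Python; this is not MongoDB's transaction API):

```python
# Sketch of all-or-nothing multi-document transaction semantics: writes are
# buffered and applied atomically on commit, or discarded on rollback.
# Illustrative only - this is not MongoDB's actual transaction API.

class Transaction:
    def __init__(self, store):
        self.store, self.pending = store, {}

    def write(self, doc_id, doc):
        self.pending[doc_id] = doc       # buffered, not yet visible

    def commit(self):
        self.store.update(self.pending)  # all writes become visible together
        self.pending = {}

    def rollback(self):
        self.pending = {}                # nothing ever touched the store

store = {}

txn = Transaction(store)
txn.write("order:1", {"total": 40})
txn.write("stock:widget", {"count": 9})
txn.rollback()
# store is untouched: no half-applied order without its stock update

txn2 = Transaction(store)
txn2.write("order:1", {"total": 40})
txn2.write("stock:widget", {"count": 9})
txn2.commit()
# both documents are now visible together
```

The point of the guarantee is that a reader can never observe the order without the matching stock decrement, which is what "multi-document" adds over MongoDB's long-standing single-document atomicity.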

OpenAI charter puts safety, standards, and transparency first

Richard Gall
10 Apr 2018
3 min read
OpenAI, the non-profit that promotes the development of artificial intelligence, has released a charter. In it, the organization outlines the core principles it believes should govern the development and management of artificial intelligence. The OpenAI charter represents an important step in initiating a broader discussion around the ethical considerations of artificial intelligence.

Revealed in a short blog post, the organization explains that the OpenAI charter is a summation of the development of its strategy over the last two years. Its mission remains central to the charter, however: ensuring that the development of artificial intelligence benefits all of humanity.

What's inside the OpenAI charter?

The charter is broken down into four areas:
- Broadly distributed benefits - OpenAI claims its primary duty is to humanity
- Long-term safety
- Technical leadership - OpenAI places itself at the cutting edge of the technology that will drive AI forward
- Cooperative orientation - working with policy-makers and institutions

Core concerns the OpenAI charter aims to address

A number of core concerns lie at the heart of the charter. One of the most prominent is what OpenAI sees as the competitive race to create AGI "without time for adequate safety precautions". It's because of this that OpenAI seeks cooperation with "other research and policy institutions" - essentially ensuring that AI doesn't become a secretive corporate arms race. Clearly, for OpenAI, transparency will be key to creating artificial intelligence that is 'safe'. OpenAI also says it will publish its most recent AI research. But perhaps even more interestingly, the charter goes on to say that "we expect that safety and security concerns will reduce our traditional publishing in the future, while increasing the importance of sharing safety, policy, and standards research." There appears to be a tacit recognition of a tension between AI innovation and the ethics around such innovation.

A question nevertheless remains over how easy it is for an organization to be at the cutting edge of AI technology while taking part in conversations around safety and ethics. As the last decade of technical development has shown, innovation and standards can seem diametrically opposed rather than mutually supportive. The charter might prove important in moving beyond that apparent opposition.

Check out other popular posts:
- 'If tech is building the future, let's make that future inclusive and representative of all of society' - an interview with Charlotte Jee
- What your organization needs to know about GDPR
- 20 lessons on bias in machine learning systems by Kate Crawford at NIPS 2017
- Mark Zuckerberg's Congressional testimony: 5 things we learned
- The Cambridge Analytica scandal and ethics in data science


Deeplearning4j 1.0.0-alpha arrives!

Sunith Shetty
09 Apr 2018
4 min read
The Skymind team has announced a milestone release of Eclipse Deeplearning4j (DL4J), an open-source library for deep learning. DL4J 1.0.0-alpha has some breakthrough changes which will ease the development of deep learning applications using Java and Scala. From a developer's perspective, the roadmap provides an exciting opportunity to perform complex numerical computations, with major updates to each module of Deeplearning4j.

DL4J is a distributed neural network library for Java and Scala which allows distributed training on Hadoop and Spark. It provides powerful data processing that enables efficient use of CPUs and GPUs. With new features, bug fixes, and optimizations in the toolkit, Deeplearning4j provides excellent capabilities for advanced deep learning tasks. Here are some of the significant changes in DL4J 1.0.0-alpha:

Deeplearning4j: new changes made to the framework
- Enhanced and new layers added to the DL4J toolkit.
- Lots of new API changes to optimize training, building, and deploying neural network models in production.
- A considerable number of bug fixes and optimizations in the DL4J toolkit.
- Keras 2 import support: you can now import Keras 2 models into DL4J, while keeping backward compatibility for Keras 1. The older DL4J-keras module and the Model API from DL4J 0.9.1 have been removed; KerasModelImport is now the only entry point for importing Keras models. Refer to the DL4J Keras import documentation for the complete list of updates.

ND4J: new features
A powerful library for scientific and numerical computing on the JVM:
- Hundreds of new operations and features added to ease scientific computing, an essential building block for deep learning tasks.
- NVIDIA CUDA 9.0/9.1 support added. CUDA 8.0 remains supported, but CUDA 7.5 support has been dropped.
- New API changes to the ND4J library.

ND4J: SameDiff
There is a new alpha release of SameDiff, an auto-differentiation engine for ND4J. It supports two execution modes for serialized graphs: Java-driven execution and native execution. It also supports importing TensorFlow and ONNX graphs for inference. You can find all the other new features in the SameDiff release notes.

DataVec: new features
An effective ETL library for getting data into the pipeline so neural networks can understand it:
- New features and bug fixes for efficient and powerful ETL processes.
- New API changes incorporated in the DataVec library.

Arbiter: new features
A package for efficient optimization of neural networks to obtain good performance:
- New workspace support for hyperparameter optimization of machine learning models.
- New layers and API changes.
- Bug fixes and improvements for optimized tuning performance.
A complete list of changes is available in the Arbiter release notes.

RL4J: new features
A reinforcement learning framework integrated with Deeplearning4j for the JVM:
- Support for LSTM layers in asynchronous advantage actor-critic (A3C) models.
- You can now use the latest version of VizDoom, since MDP for Doom has been updated.
- Lots of fixes and improvements in the RL4J framework.

ScalNet
A Scala wrapper for DL4J offering a Keras-like API for deep learning:
- The new ScalNet Scala API, very similar to the Keras API, has been released. It supports Keras-style sequential models, and the module closely resembles both the DL4J model-import module and Keras.
Refer to the ScalNet release notes if you would like to know more.

ND4S: N-dimensional arrays for Scala
Open-source Scala bindings for ND4J:
- ND4S now supports Scala 2.12.

Possible issues with the DL4J 1.0.0-alpha release
Since this is an alpha release, you may encounter performance-related and other issues compared to DL4J 0.9.1; these will be addressed and rectified in the next release. Support for training a Keras model in DL4J is still very limited and will be handled in the next release; to know more, you can refer to the Keras import bug report. Major new operations added in ND4J do not use the GPU yet, and the same applies to the new auto-differentiation engine for ND4J.

We can expect more improvements and new features on the DL4J 1.0.0 roadmap. For the full list of updates, you can refer to the release notes.

Check out other popular posts:
- Top 6 Java Machine Learning/Deep Learning frameworks you can't miss
- Top 10 Deep learning frameworks


Is Comet the new Github for Artificial Intelligence?

Pravin Dhandre
09 Apr 2018
2 min read
Comet.ml is an infrastructure-agnostic machine learning (ML) platform which is simple, fast, and free for open source projects. It has launched the first platform for data science and machine learning users to track, monitor, and optimize their machine learning models. Comet allows data science teams to track their code, experiments, and results on machine learning projects. The newly launched platform lets users optimize their machine learning and artificial intelligence models and tweak the hyperparameters of their experiments. The platform also provides dashboards which help teams collaborate on ML research code and results. It allows researchers to view results in intuitive graphs and compare various aspects and versions of their machine learning experiments. Comet also works with popular machine learning libraries such as Keras, TensorFlow, PyTorch, scikit-learn, and Theano. The platform allows teammates to collaborate in real time without affecting the mobility and adaptability of their datasets and production models.

Key features of Comet:
- Single-line tracking - start tracking with just a single line in your training code. It works on any machine and with any type of model.
- Compare experiments - compare different experiments and observe the code differences, hyperparameters, and various other data points.
- Integration with Git - Comet integrates with GitHub and other Git service providers. After you finalize an experiment, it automatically generates a pull request with the best-accuracy model to your GitHub repository.
- Collaboration - share multiple projects with team members and stakeholders, along with visibility and insights into project team performance.
- Documentation - provides a Notes section allowing you to add and manage documentation for all projects and training experiments.

Comet has already been adopted by more than 30 industry leaders and research universities, with more than 6,000 large-scale machine learning models.

Check out the video to know more about the platform's functionality: https://www.youtube.com/watch?v=LlsRMQjV__c&feature=youtu.be

Other latest news for a quick read:
- Deeplearning4j 1.0.0-alpha arrives!
- How greedy algorithms work?
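The "single-line tracking" and "compare experiments" features described above can be sketched with a toy tracker. This illustrates the workflow only; Comet's real client API may differ, and the class and method names here are hypothetical:

```python
# Toy sketch of experiment tracking and comparison as described above.
# Illustrative only; the names below are hypothetical, not Comet's API.

class Experiment:
    _all = []  # registry of every experiment, so they can be compared

    def __init__(self, name, hyperparams):
        self.name = name
        self.hyperparams = dict(hyperparams)
        self.metrics = {}
        Experiment._all.append(self)

    def log_metric(self, key, value):
        self.metrics[key] = value

    @classmethod
    def best(cls, metric):
        """Compare experiments: return the one with the highest metric."""
        return max(cls._all, key=lambda e: e.metrics.get(metric, float("-inf")))

# "Single-line tracking": one constructor call at the top of a training script.
a = Experiment("run-a", {"lr": 0.01})
a.log_metric("accuracy", 0.91)

b = Experiment("run-b", {"lr": 0.001})
b.log_metric("accuracy", 0.94)

# Comparing runs picks out the best-performing hyperparameter setting.
```

In the real platform, this registry lives on a server with dashboards on top; the sketch only shows why recording hyperparameters alongside metrics makes runs comparable.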


Google Employees Protest against the use of Artificial Intelligence in Military

Amey Varangaonkar
06 Apr 2018
3 min read
Thousands of Google employees have raised concerns regarding the use of artificial intelligence for military purposes. The employees, who include many senior engineers, have signed a petition requesting that Google CEO Sundar Pichai pull Google out of Project Maven - a Pentagon-backed project using AI to improve military technology. Employees also urged Pichai to establish and enforce strict policies keeping Google and its subsidiaries out of 'the business of war'.

What does the petition say?

The letter, signed by over 3,000 Google employees, argues that collaborating with the government on military projects is strictly against Google's core ideology that technology must be used for human welfare, not the destruction of mankind. It argues that backing the military could backfire tremendously by creating a negative image of Google in the minds of customers, and could also affect recruitment. The concerned employees believe that since Google is engaged in serious competition with many other companies to hire the best possible talent, some candidates could be put off by Google's military connections with the government.

What is Project Maven?

Project Maven is a Pentagon-backed initiative announced in May 2017. Its main purpose is to integrate artificial intelligence with various defense programs to make them smarter. Backed by Google's technology, the program aims to improve the image and video processing capabilities of drones to accurately pick out human targets for strikes, while identifying innocent civilians to reduce or prevent accidental killings. Google has declared its participation in the program to be in a 'non-offensive capacity', and has maintained that its products and technology will not be used to create autonomous weapons that operate without human intervention.

Connections with the Pentagon

It is also interesting to note that some of Google's top executives are connected to the Pentagon in some capacity. Eric Schmidt, the former executive chairman of Google who is still a member of the executive board of Google's parent company Alphabet, serves on the Defense Innovation Board, a Pentagon advisory body. Milo Medin, Vice President of Access Services at Google Capital, is also a part of this body.

What about Amazon and Microsoft?

When it comes to connections with the Pentagon, Google isn't the only one involved. Amazon has collaborated with the Department of Defense through the Amazon Rekognition API for image recognition. Microsoft has also announced a collaboration with the US government, providing IaaS (Infrastructure as a Service) and PaaS (Platform as a Service) capabilities to meet the government's data storage and security needs.

The news of the dispute and the subsequent petition was initially reported by Gizmodo earlier this March. Considering the project is expected to cost close to $70 million in its first year, the petitioners are aiming to discourage Google from getting into more lucrative contracts as demand for AI in defense and military applications grows.

CockroachDB 2.0 is out!

Sunith Shetty
05 Apr 2018
2 min read
CockroachDB has announced version 2.0, with notable new features in its armory. This breakthrough version brings it one step closer to making data accessible to everyone.

CockroachDB is an open source, cloud-native SQL database which allows you to build global, large, and resilient cloud applications. It automatically scales, recovers, and repairs itself, allowing the database to survive critical disasters. It has excellent support for popular orchestration tools such as Kubernetes and Mesosphere DC/OS to simplify and automate operations.

Some of the noteworthy changes in CockroachDB 2.0:

Adjusting to customers' changing requirements
- CockroachDB 2.0's support for JSON brings more flexibility and consistency. You can handle both structured and semi-structured data, allowing you to use multiple data models within the same database.
- Better project handling to cope with changing customer requirements, and rapid prototyping for large-scale systems.
- You can now perform in-place transactions and use inverted indexes to accelerate queries on large volumes of data using CockroachDB 2.0's Postgres-compatible JSON.

Performance and scalability improvements
- Developers prefer an agile methodology when building real-world applications. CockroachDB 2.0 offers better scalability and performance to deal with increasing amounts of data and growing application needs.
- New operators handle growing user request volumes with ease.

Managing multi-regional workloads
- CockroachDB 2.0 is more efficient at managing multi-regional data, delivering low-latency applications.
- A new cluster dashboard helps you visualize globally distributed clusters, so you can keep a close watch on performance bottlenecks and stability problems.
- You can now deliver excellent customer service by adapting to multi-regional needs, binding data to the respective customers' data centers in the same region using a compelling new feature called geo-partitioning.

For the full list of updates, you can refer to the release notes.
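An inverted index — mentioned above as the mechanism that accelerates queries over JSON data — maps each value back to the documents containing it, so a query looks up matches directly instead of scanning every document. A minimal pure-Python sketch (illustrative only, not CockroachDB's implementation):

```python
# Minimal sketch of an inverted index over JSON-like documents: queries
# look up (field, value) pairs directly rather than scanning every document.
# Illustrative only - not CockroachDB's actual index implementation.

def build_inverted_index(docs):
    """Map each (field, value) pair to the set of document ids containing it."""
    index = {}
    for doc_id, doc in docs.items():
        for field, value in doc.items():
            index.setdefault((field, value), set()).add(doc_id)
    return index

docs = {
    1: {"city": "london", "tier": "gold"},
    2: {"city": "paris", "tier": "gold"},
    3: {"city": "london", "tier": "silver"},
}
index = build_inverted_index(docs)

# Which documents have city == "london"? A single dictionary lookup answers it.
london_docs = index[("city", "london")]
```

The trade-off is classic: the index costs extra space and write-time maintenance, in exchange for turning a full scan into a direct lookup on semi-structured fields.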


Paper in Two minutes: Attention Is All You Need

Sugandha Lahoti
05 Apr 2018
4 min read
A paper on a new simple network architecture, the Transformer, based solely on attention mechanisms The NIPS 2017 accepted paper, Attention Is All You Need, introduces Transformer, a model architecture relying entirely on an attention mechanism to draw global dependencies between input and output. This paper is authored by professionals from the Google research team including Ashish Vaswani, Noam Shazeer, Niki Parmar,  Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. The Transformer – Attention is all you need What problem is the paper attempting to solve? Recurrent neural networks (RNN), long short-term memory networks(LSTM) and gated RNNs are the popularly approaches used for Sequence Modelling tasks such as machine translation and language modeling. However, RNN/CNN handle sequences word-by-word in a sequential fashion. This sequentiality is an obstacle toward parallelization of the process. Moreover, when such sequences are too long, the model is prone to forgetting the content of distant positions in sequence or mix it with following positions’ content. Recent works have achieved significant improvements in computational efficiency and model performance through factorization tricks and conditional computation. But they are not enough to eliminate the fundamental constraint of sequential computation. Attention mechanisms are one of the solutions to overcome the problem of model forgetting. This is because they allow dependency modelling without considering their distance in the input or output sequences. Due to this feature, they have become an integral part of sequence modeling and transduction models. However, in most cases attention mechanisms are used in conjunction with a recurrent network. Paper summary The Transformer proposed in this paper is a model architecture which relies entirely on an attention mechanism to draw global dependencies between input and output. 
The Transformer allows for significantly more parallelization and tremendously improves translation quality after being trained for as little as twelve hours on eight P100 GPUs.

Neural sequence transduction models generally have an encoder-decoder structure. The encoder maps an input sequence of symbol representations to a sequence of continuous representations; the decoder then generates an output sequence of symbols, one element at a time. The Transformer follows this overall architecture, using stacked self-attention and point-wise, fully connected layers for both the encoder and decoder.

The authors motivate the use of self-attention with three criteria. One is the total computational complexity per layer. Another is the amount of computation that can be parallelized, as measured by the minimum number of sequential operations required. The third is the path length between long-range dependencies in the network.

The Transformer uses two types of attention functions:

Scaled Dot-Product Attention computes the attention function on a set of queries simultaneously, packed together into a matrix.
Multi-Head Attention allows the model to jointly attend to information from different representation subspaces at different positions.

A self-attention layer connects all positions with a constant number of sequentially executed operations, whereas a recurrent layer requires O(n) sequential operations. In terms of computational complexity, self-attention layers are faster than recurrent layers when the sequence length is smaller than the representation dimensionality, which is often the case in machine translation.

Key Takeaways

This work introduces the Transformer, a novel sequence transduction model based entirely on attention mechanisms. It replaces the recurrent layers most commonly used in encoder-decoder architectures with multi-headed self-attention.
The Transformer can be trained significantly faster than architectures based on recurrent or convolutional layers for translation tasks. On both the WMT 2014 English-to-German and WMT 2014 English-to-French translation tasks, the model achieves a new state of the art; on the former task it outperforms all previously reported ensembles.

Future Goals

So far, the Transformer has only been applied to transduction tasks. In the near future, the authors plan to use it for other problems involving input and output modalities other than text, and to apply attention mechanisms to efficiently handle large inputs and outputs such as images, audio and video.

The Transformer architecture has gained major traction since its release because of major improvements in translation quality and other NLP tasks. Recently, the NLP research group at Harvard released a post presenting an annotated version of the paper as a line-by-line implementation. It is accompanied by around 400 lines of library code, written in PyTorch in the form of a notebook, accessible on GitHub or on Google Colab with free GPUs.
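The scaled dot-product attention described in the summary above can be sketched in a few lines of NumPy. This is an illustrative sketch, not the paper's reference implementation: the shapes and variable names are assumptions, and masking and batching are omitted for brevity.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Compute softmax(Q K^T / sqrt(d_k)) V for a set of queries."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)              # (n_queries, n_keys) similarities
    # Softmax over the key dimension, stabilized by subtracting the row max
    scores = scores - scores.max(axis=-1, keepdims=True)
    weights = np.exp(scores)
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ V                           # weighted sum of value vectors

# Toy example: 3 queries attending over 4 key/value pairs of dimension 8
rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 8))
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (3, 8)
```

Multi-head attention, the second function described in the paper, would apply several such attention operations in parallel over learned linear projections of Q, K and V, then concatenate the results.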

Introducing MapD Cloud, the first Analytics Platform with GPU Acceleration on Cloud

Pravin Dhandre
05 Apr 2018
2 min read
California-based big data analytics provider MapD Technologies announced MapD Cloud, a SaaS analytics platform giving data scientists and big data practitioners one-click access to GPU-accelerated visual analytics. The platform is built and engineered for high scalability and speedy operations on hefty volumes of structured data. Backed by its enterprise technology, MapD Core, and NVIDIA GPUs, the cloud platform equips users with the capability of mining billions of records in just a few milliseconds. No more waiting to get a real-time analytics experience.

Capabilities and offerings of MapD Cloud:

Multisource dashboards and multi-layer geo charts
Cross filtering
Geospatial context
Dashboard auto-refresh
Advanced security
High availability and distributed scale-out
API access to Apache Sqoop™, Apache Kafka® import, Apache Thrift™ API, ODBC/JDBC and DB-API

Beyond this, the cloud platform keeps you updated with its upgrades and new features over time. With MapD Cloud, you no longer need heavy GPU hardware for your machine learning and deep learning initiatives. The platform provides complete security for your sensitive and rich datasets, with 24/7 non-stop cloud availability.

The co-founder of npm Inc, the world's largest software registry firm, adds: "With more than 18 billion downloads per month and doubling every 9 months, we have a ton of data to sift through to spot trends and diagnose problems. MapD Cloud lets us answer questions about our community and explore trends in all the different dimensions of our data in real time. We're excited about MapD Cloud, which will give us all that power in a convenient, scalable way."

MapD Cloud is available at several price points, with subscriptions starting as low as $150 a month for structured data sets of up to 10 million rows. It also offers a free two-week trial for data sets of up to 100 million rows.
To know more about MapD Cloud, visit the MapD website.

Coinbase Commerce API launches

Richard Gall
05 Apr 2018
2 min read
Coinbase announced the launch of its Coinbase Commerce API on April 3rd. This represents an interesting step in the cryptocurrency world, as it will allow eCommerce merchants to accept multiple cryptocurrencies in a "user controlled wallet". This means that stores wanting to accept cryptocurrencies will no longer have to expend energy developing their own payment platform. With platforms like Stripe recently ending support for Bitcoin payments, it's a chance for Coinbase to position itself as an essential component in the continued growth of cryptocurrency. A number of well-established eCommerce platforms, including Shopify, will have Coinbase Commerce integration.

The Coinbase team explained in a post on Medium: "Starting today, instead of manually creating payment buttons or hosted pages to accept cryptocurrency payments, you can dynamically generate them using our API..."

Coinbase Commerce and Reddit

However, this product launch hasn't been straightforward for Coinbase. The social media platform Reddit announced at the end of March that it would stop accepting Bitcoin as a means of paying for premium membership. This was revealed to be due, in part, to the launch of the new product. On Reddit, admin emoney04 said: "Yup that's right. The upcoming Coinbase change, combined with some bugs around the Bitcoin payment option that were affecting purchases for certain users, led us to remove Bitcoin as a payment option."

While this news may be frustrating for Coinbase, Reddit is open to accepting Bitcoin payments again: emoney04 went on to say that "we're going to take a look at demand and watch the progression of Coinbase Commerce before making a decision on whether to reenable".

Community misgivings about Coinbase Commerce

However, as an article on BTCManager notes, there are some misgivings within the community that Coinbase Commerce represents a move away from the original peer-to-peer philosophy behind cryptocurrency.
BTCManager quoted Reddit user Bitcoin-Yoda: "A purely peer-to-peer version of electronic cash would allow online payments to be sent directly from one party to another without going through a financial institution. Any intermediary between your BTC payment and the merchant is violating the definition of Bitcoin and your privacy."

How to get started with Coinbase Commerce

You can learn more about Coinbase Commerce here. There's also developer documentation if you want a closer look at how to get started.
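To give a flavour of the "dynamically generate them using our API" workflow Coinbase describes, here is a rough sketch of building a charge request. The endpoint, header names and payload fields below follow Coinbase Commerce's public documentation at the time of writing, but treat them as assumptions and check the current docs; the request itself is only shown in a comment so the snippet stays self-contained.

```python
import json

# Endpoint and headers per the Coinbase Commerce docs (assumed, verify against
# the current documentation before use).
API_URL = "https://api.commerce.coinbase.com/charges"
headers = {
    "Content-Type": "application/json",
    "X-CC-Api-Key": "<YOUR_API_KEY>",   # issued from the Commerce dashboard
    "X-CC-Version": "2018-03-22",
}

# A hypothetical fixed-price charge for a $10 product
charge = {
    "name": "Example widget",
    "description": "One widget, payable in cryptocurrency",
    "pricing_type": "fixed_price",
    "local_price": {"amount": "10.00", "currency": "USD"},
}

body = json.dumps(charge)
print(body)
# To actually create the charge you would POST `body` to API_URL, e.g.
#   requests.post(API_URL, headers=headers, data=body)
# and the response would include a hosted payment page the customer can use.
```

This replaces the manual payment-button setup: each checkout can generate its own charge on the fly rather than reusing a static hosted page.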

Huawei launches HiAI

Richard Gall
04 Apr 2018
2 min read
Huawei launched the P20 to considerable acclaim. But the launch features news that's even more exciting, particularly if you're a machine learning developer or aficionado: the Chinese telecoms giant has launched HiAI, its artificial intelligence engine, to coincide with the P20's release.

What is HiAI?

HiAI is Huawei's AI engine. It will power applications on the new P20, giving users an experience built on some of the most exciting artificial intelligence capabilities on the planet. More importantly, it will also open up new opportunities for mobile developers and machine learning engineers, who can now download the Driver Development Kit (DDK), IDE and SDK to begin using HiAI.

HiAI's key features

Huawei has made sure HiAI brings a range of artificial intelligence features, and it certainly looks like enough to compete with other innovators in the space. Here are some of the key features:

Automatic Speech Recognition (ASR) - turns human speech into text. This isn't available outside of China at the moment.
Natural Language Understanding engine - complements the work done by the ASR engine: it allows a computer to 'interpret' various dimensions of human language and 'act' accordingly.
Computer vision engine - computer vision is what makes a number of popular mobile apps possible, from face-aging apps to Snapchat-style filters. HiAI's computer vision engine is capable of facial and object recognition.

HiAI will only make Huawei's new phone better: the more applications that are able to utilize artificial intelligence, the more attractive the phone will be to consumers. Certainly, Huawei is an underrated giant of the telecoms space, particularly when it comes to consumer tech.
With its new artificial intelligence engine, Huawei may have created the beginning of more success, and greater market share, outside of China. Learn more on Huawei's website. Source: XDA