Tech News - Data

JetBrains announces Pycharm 2018.2

Savia Lobo
27 Jul 2018
4 min read
JetBrains' PyCharm is back with its Q2 release for this year, PyCharm 2018.2. The release of PyCharm 2018.1 back in March included several major features, such as code cells, partial commits, SSH remote Python interpreters, and more. This quarter's release adds new features such as pipenv support, pytest fixtures, and much more. PyCharm's motto is to help developers 'develop quickly, and with confidence'. Known as one of the most popular Python IDEs for professional developers, PyCharm is an integrated development environment with support for all Python tools in one place. It is available in two editions: PyCharm Professional Edition and PyCharm Community Edition.

PyCharm 2018.2 includes new features classified into several broad categories.

Python improvements

Pipenv support: Pipenv makes an application's dependency management extremely simple. PyCharm 2018.2 will auto-create a pipenv when one opens a project with a Pipfile, and makes it easy to create new projects with pipenvs.

New and improved quick documentation: PyCharm's quick documentation is easier to read and appears right in the editor. Simply press Ctrl-Q (Ctrl-J on macOS) to see exactly the documentation required.

pytest fixtures and pytest-bdd: pytest makes testing your code a cakewalk, and version 2018.2 includes upgraded pytest support with BDD and code intelligence for fixtures. Fixture support is available in both the Community and Professional editions of PyCharm 2018.2; BDD support, however, is only available in the Professional Edition.

reST preview and attrs support: With a PyCharm plugin, one can see what a Markdown document will look like; this and other functionality is now available for reStructuredText. PyCharm 2018.2 also adds support for the attrs library, which offers functionality similar to the new dataclasses in Python 3.7.

Improvements in code insight: PyCharm's aim is to help you write better Python code, faster. To that end, code insight has been improved: it now checks more type hints, verifies that function calls in asynchronous code are correctly awaited, and offers quick fixes.
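To make the fixture support above concrete, here is a minimal pytest sketch; the fixture and test names are illustrative rather than taken from JetBrains' announcement, and PyCharm 2018.2 can offer completion and navigation for fixtures written this way.

```python
import pytest

@pytest.fixture
def user_record():
    # Illustrative fixture: builds a small piece of test data that the IDE
    # can now recognise, autocomplete, and navigate to.
    return {"name": "Ada", "active": True}

def test_user_is_active(user_record):
    # pytest injects the fixture by matching the argument name.
    assert user_record["active"]
```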
VCS (version control) improvements

Support for multiple GitHub accounts: This version of PyCharm makes switching between several GitHub accounts more convenient. Multiple tabs and diff preview in the Log tab: Version 2018.2 lets you split your project history across multiple tabs, making it easier to learn from it. Browse the entire repository at a specific revision: This improvement makes it easy to browse the project repository at a specific revision.

Database improvements

Create query plan diagrams: To keep an application performant, its query plans need regular checking. In PyCharm 2018.2 one can visually inspect where a query needs tweaking or an index added. This feature is only available in the Professional edition.

New icons

PyCharm 2018.2 looks sleeker than ever before. The new design philosophy reduces the use of color to where it is semantically important, making it easier to find what you need at a glance.

Support for the MacBook Pro Touch Bar

PyCharm now provides context-sensitive Touch Bar controls for running your code, debugging, VCS, and more on MacBook Pro models with a Touch Bar.

JavaScript improvements (Professional edition only)

Code coverage for code running in the browser: Code maintenance is rarely a developer's favorite task, and deleting unused code makes the job quicker. PyCharm now helps find unused client-side JavaScript code. Faster indexing for Angular: Indexing new Angular projects is now twice as fast in this version. New intentions and refactorings, such as Extract React Component: You can now refactor JavaScript with confidence in PyCharm 2018.2 by extracting a React component, implementing an interface, generating cases for a TypeScript switch statement over an enum, and more. Code completion for Vue events and event modifiers: Most Vue templates have event code attached to them, and PyCharm now makes it easier to hook up handlers to the correct event.

Read more about PyCharm 2018.2 on its official website.

Read next:
What is interactive machine learning?
Cryptocurrency-based firm, Tron acquires BitTorrent
Google Cloud Next: Fei-Fei Li reveals new AI tools for developers

Announcing Databricks Runtime 4.2!

Pravin Dhandre
25 Jul 2018
2 min read
Databricks has announced Databricks Runtime 4.2, with numerous updates, added components for Spark internals and Databricks Delta, and improvements over the previous version. Databricks Runtime 4.2 is powered by Apache Spark 2.3, and quick adoption is recommended ahead of the upcoming GA release of Databricks Delta. Databricks Runtime is a set of software artifacts that runs on clusters of machines and improves the usability and performance of big data analytics.

New features in Databricks Runtime 4.2

- Multi-cluster writing support, enabling users to use the transactional writing features of Databricks Delta.
- Streams can now be written directly to a Databricks Delta table registered in the Hive metastore, using df.writeStream.table(...).
- A new streaming foreachBatch() for Scala, which lets you define a function that processes the output of every micro-batch using DataFrame operations.
- Support for streaming foreach() in Python, which was earlier available only in Scala.
- from_avro/to_avro functions for reading and writing Avro data within a DataFrame.

Improvements

- All commands and queries of Databricks Delta support referring to a table using its path as an identifier (that is, delta.`/path/to/table`).
- DESCRIBE HISTORY now includes the commit ID and is ordered newest to oldest by default.

Bug fixes

- Partition-based filtering predicates now operate correctly in special cases, such as when the predicates differ from the table.
- Fixed a missing-column AnalysisException when performing equality checks on boolean columns in Databricks Delta tables (i.e. booleanValue = true).
- CREATE TABLE no longer modifies the transaction log when creating a pointer to an existing table; this prevents unnecessary conflicts with concurrent streams and allows creating a metastore pointer to tables where the user only has read access to the data.
- Calling display() on a stream with large amounts of data no longer causes an out-of-memory error in the driver.
- Fixed truncation of long lineages, which was earlier causing a StackOverflowError while updating the state of a Databricks Delta table.

For more details, please read the release notes officially documented by Databricks.
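The Python streaming foreach() support mentioned above follows PySpark's DataStreamWriter.foreach API. Below is a minimal, hedged sketch: the rate source, row handling, and timings are illustrative, and it assumes a running SparkSession on a cluster with this feature available (Databricks Runtime 4.2 / Spark 2.3).

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("foreach-sketch").getOrCreate()

# Illustrative streaming source: a rate stream that emits (timestamp, value) rows.
stream_df = spark.readStream.format("rate").option("rowsPerSecond", "5").load()

def handle_row(row):
    # Called once per row of every micro-batch; a real job might write the
    # row to an external system instead of printing it.
    print(row.value)

query = (
    stream_df.writeStream
    .foreach(handle_row)        # Python foreach sink described in the release
    .start()
)
query.awaitTermination(10)      # run briefly for illustration
query.stop()
```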
Read next:
Databricks open sources MLflow, simplifying the end-to-end machine learning lifecycle
Project Hydrogen: Making Apache Spark play nice with other distributed machine learning frameworks
Apache Spark 2.3 now has native Kubernetes support!

NumPy 1.15.0 release is out!

Savia Lobo
24 Jul 2018
2 min read
NumPy 1.15.0 is out, and it includes a large number of changes, including several cleanups, deprecations of old functions, and improvements to many existing functions. The Python versions supported by NumPy 1.15.0 are 2.7 and 3.4 to 3.7.

Highlights of this release

NumPy has switched to pytest for testing, as the nose framework it previously relied on is no longer maintained; however, the old nose-based interface is still available for downstream projects. A new numpy.printoptions context manager can set print options temporarily for the scope of a with block, for example: with np.printoptions(precision=2): print(np.array([2.0]) / 3) prints [0.67]. There are improvements to the histogram functions, including numpy.histogram_bin_edges, a function to get the edges of the bins used by a histogram without needing to calculate the histogram itself. Other highlights include support for Unicode field names in Python 2.7, improved support for PyPy, and fixes and improvements to numpy.einsum, which evaluates the Einstein summation convention on its operands.

New features in NumPy 1.15.0

Added np.gcd and np.lcm ufuncs for integer and object types: np.gcd and np.lcm compute the greatest common divisor and the lowest common multiple respectively. They work on all the NumPy integer types, as well as the built-in arbitrary-precision Decimal and long types.

Support for cross-platform builds for iOS: The build system has been modified to add support for the _PYTHON_HOST_PLATFORM environment variable, used by distutils when compiling on one platform for another. This makes it possible to compile NumPy for iOS targets.

Addition of the return_indices keyword for np.intersect1d: The new return_indices keyword returns the indices of the two input arrays that correspond to the common elements.

Build system: This version adds experimental support for the 64-bit RISC-V architecture.

Future changes expected in upcoming versions

NumPy 1.16 and NumPy 1.17 will drop support for Python 3.4 and Python 2.7 respectively.

Read more about this release in detail on its GitHub page.
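The new functions called out above can be exercised directly. A quick sketch, assuming NumPy 1.15.0 is installed (the array values are illustrative):

```python
import numpy as np

# Temporarily change print options inside the with block (new context manager).
with np.printoptions(precision=2):
    print(np.array([2.0]) / 3)             # -> [0.67]

# New gcd/lcm ufuncs for integer types.
print(np.gcd(np.array([12, 18, 27]), 9))   # -> [3 9 9]
print(np.lcm(4, 6))                        # -> 12

# return_indices keyword for intersect1d.
a = np.array([1, 3, 5, 7])
b = np.array([5, 7, 9])
common, idx_a, idx_b = np.intersect1d(a, b, return_indices=True)
print(common, idx_a, idx_b)                # -> [5 7] [2 3] [0 1]

# Bin edges without computing the histogram itself.
print(np.histogram_bin_edges(np.arange(10), bins=5))
```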
Read next:
Implementing matrix operations using SciPy and NumPy
NumPy: Commonly Used Functions
Installing NumPy, SciPy, matplotlib, and IPython

Tensorflow 1.10 RC0 released

Amey Varangaonkar
24 Jul 2018
2 min read
Continuing the recent trend of rapid updates introducing significant fixes and new features, Google has released the first release candidate for TensorFlow 1.10. TensorFlow 1.10 RC0 brings improvements to model training and evaluation, and to how TensorFlow runs in a local environment. This is TensorFlow's fifth update release in just over a month, spanning two major version updates, the previous one being TensorFlow 1.9.

What's new in TensorFlow 1.10 RC0?

The tf.contrib.distributions module will be deprecated in this version; this module is primarily used to work with statistical distributions. An upgrade to NCCL 2.2 is mandatory in order to perform GPU computing with this version of TensorFlow, for added performance and efficiency. Model training speed can now be optimized by improving communication between the model and TensorFlow resources; for this, the RunConfig settings have been updated in this version. The TensorFlow development team also announced support for Bazel, a popular build and test automation tool, and deprecated support for CMake starting with TensorFlow 1.11. This version also incorporates bug fixes and performance improvements to tf.data, tf.estimator, and other related modules.

To get full details on the features in this release candidate, you can check out TensorFlow's official release page on GitHub.
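As a rough illustration of tuning worker communication through RunConfig in the TF 1.x estimator API, here is a minimal, hedged sketch: the device filters, model directory, and toy model_fn are illustrative assumptions, not the release's exact defaults.

```python
import tensorflow as tf  # TensorFlow 1.10.x

# Restrict which devices a worker communicates with, so training does not
# stall waiting on unrelated workers (illustrative filters).
session_config = tf.ConfigProto(
    device_filters=["/job:ps", "/job:worker/task:0"]
)

run_config = tf.estimator.RunConfig(
    model_dir="/tmp/model",          # hypothetical checkpoint directory
    save_checkpoints_steps=500,
    session_config=session_config,
)

def model_fn(features, labels, mode):
    # Trivial linear model y = w * x, just to make the estimator complete.
    w = tf.get_variable("w", shape=[], initializer=tf.zeros_initializer())
    predictions = w * features["x"]
    loss = tf.losses.mean_squared_error(labels, predictions)
    train_op = tf.train.GradientDescentOptimizer(0.1).minimize(
        loss, global_step=tf.train.get_global_step())
    return tf.estimator.EstimatorSpec(
        mode, predictions=predictions, loss=loss, train_op=train_op)

estimator = tf.estimator.Estimator(model_fn=model_fn, config=run_config)
```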
No news on TensorFlow 2.0 yet

Many developers were expecting the next major release of TensorFlow, TensorFlow 2.0, to arrive in late July or August. However, the announcement of this release candidate, and the mention of the next version update (1.11), means they will have to wait a while longer before they learn more about the next breakthrough release.

Read next:
Why Twitter (finally!) migrated to Tensorflow
Python, Tensorflow, Excel and more – Data professionals reveal their top tools
Can a production ready Pytorch 1.0 give TensorFlow a tough time?

Seaborn v0.9.0 brings better data visualization with new relational plots, theme updates, and more

Sugandha Lahoti
24 Jul 2018
3 min read
Seaborn, the popular data visualization library, has become a timely and relevant tool for data professionals seeking to enhance their visualizations. Recognizing this, the team behind Seaborn has pushed out Seaborn v0.9.0. This version is a major release with several substantial features and notable API name changes for better consistency with matplotlib 2.0.

Three new relational plots

Seaborn v0.9.0 features three new plotting functions: relplot(), scatterplot(), and lineplot(). These functions bring the high-level API of the categorical plotting functions to more general plots. They can visualize a relationship between two numeric variables and map up to three additional variables by modifying hue, size, and style semantics. relplot() is a figure-level interface to the two plotting functions and combines them with a FacetGrid. The lineplot() function has support for statistical estimation and replaces the older tsplot function; it is also better aligned with the API of the rest of the library and more flexible in showing relationships across additional variables. For a detailed explanation of these functions with examples of the various options, go through the API reference and the relational plot tutorial.

Notable API name changes

Seaborn has renamed a few functions and changed some of their default parameters. The factorplot function has been renamed to catplot(). The catplot() function shows the relationship between a numerical variable and one or more categorical variables using one of several visual representations; this change is expected to make catplot() easier to discover and to define its role better. The lvplot function has been renamed to boxenplot(); the new name makes the plot more discoverable by describing its format (it plots multiple boxes, also known as "boxen"). The size parameter has been renamed to height in the multi-plot grid objects (FacetGrid, PairGrid, and JointGrid) and in the functions that use them (factorplot, lmplot(), pairplot(), and jointplot()). This avoids conflicts with the size parameter used in the scatterplot and lineplot functions and also makes the meaning of the parameter a bit clearer. The default diagonal plots in pairplot() now use kdeplot() when a "hue" dimension is used. Also, the statistical annotation component of JointGrid is deprecated.

Themes and palettes updates

Several changes have been made to the Seaborn style themes, context scaling, and color palettes to make them more consistent with the style updates in matplotlib 2.0. Some axes_style()/plotting_context() parameters have been reorganized and updated to take advantage of improvements in the matplotlib 2.0 update. The Seaborn palettes ("deep", "muted", "colorblind", etc.) have been updated to correspond with the new 10-color matplotlib defaults, and a few individual colors have been tweaked for better consistency, aesthetics, and accessibility. The base font sizes in plotting_context() and the scaling factors for the "talk" and "poster" contexts have been slightly increased. Calling set() will now call set_color_codes() to re-assign the single-letter color codes by default.

Apart from that, the introduction to the library in the documentation has been rewritten to provide more information and examples. These are just a select few major updates; for a full list of features, upgrades, and improvements, read the changelog.
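As a quick, hedged sketch of the new relational API, the code below uses seaborn's bundled fmri example dataset (downloaded on first use, so it assumes an internet connection); the column choices are illustrative.

```python
import seaborn as sns
import matplotlib.pyplot as plt

sns.set()  # in 0.9.0 this also re-assigns the single-letter color codes

# Example dataset bundled with seaborn.
fmri = sns.load_dataset("fmri")

# Figure-level relational plot: lines with hue/style semantics and facets.
sns.relplot(x="timepoint", y="signal",
            hue="region", style="event",
            col="event", kind="line", data=fmri)

# Axes-level equivalent with statistical estimation (mean and CI per x value).
sns.lineplot(x="timepoint", y="signal", hue="region", data=fmri)
plt.show()
```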
Read next:
What is Seaborn and why should you use it for data visualization?
Visualizing univariate distribution in Seaborn
8 ways to improve your data visualizations

Facebook is investigating data analytics firm Crimson Hexagon over misuse of data

Richard Gall
23 Jul 2018
2 min read
Facebook has suspended Boston-based data analytics firm Crimson Hexagon following concerns that the company has misused data. The decision was made after the Wall Street Journal reported that the company has contracts with government agencies and "a Russian nonprofit with ties to the Kremlin."

Back in March 2017, Facebook banned the use of its data to develop surveillance tools. It's this ruling for which Crimson Hexagon is being investigated. A Facebook spokesperson, speaking to CNN Money on Friday, said: "We don't allow developers to build surveillance tools using information from Facebook or Instagram... We take these allegations seriously, and we have suspended these apps while we investigate."

Crimson Hexagon CTO responds with a blog post

Crimson Hexagon hasn't explicitly responded to its suspension, but CTO Chris Bingham did write a blog post, "Understanding the Role of Public Online Data in Society." He writes that "the real conversation is not about a particular social media analytics provider, or even a particular social network like Facebook. It is about the broader role and use of public online data in the modern world."

Although the investigation is ongoing, it's worth noting, as TechCrunch has, that Crimson Hexagon isn't quite as opaque in its relationships and operations as Cambridge Analytica. It has, for example, done data analytics projects for the likes of Adidas, the BBC, and Samsung.

Read next:
Google, Microsoft, Twitter, and Facebook team up for Data Transfer Project
Is Facebook planning to spy on you through your mobile's microphones?
Did Facebook just have another security scare?

Baidu announces ClariNet, a neural network for text-to-speech synthesis

Sugandha Lahoti
23 Jul 2018
2 min read
Text-to-speech synthesis has been a booming research area, with Google, Facebook, DeepMind, and other tech giants showcasing interesting research and trying to build better TTS models. Now Baidu has stolen the show with ClariNet, the first fully end-to-end TTS model, which directly converts text to a speech waveform in a single neural network. Classical TTS models such as DeepMind's WaveNet usually have separate text-to-spectrogram and waveform synthesis models, and having two models may result in suboptimal performance. ClariNet combines the two into one fully convolutional neural network, and Baidu claims its text-to-wave model significantly outperforms the previous separate TTS models.

Baidu's ClariNet consists of four components:
- Encoder, which encodes textual features into an internal hidden representation.
- Decoder, which decodes the encoder representation into a log-mel spectrogram in an autoregressive manner.
- Bridge-net, an intermediate processing block which processes the hidden representation from the decoder, predicts a log-linear spectrogram, and upsamples the hidden representation from frame level to sample level.
- Vocoder, a Gaussian autoregressive WaveNet that synthesizes the waveform, conditioned on the upsampled hidden representation from the bridge-net.

(Figure: ClariNet's architecture)

Baidu has also proposed a new parallel wave generation method based on the Gaussian inverse autoregressive flow (IAF). This mechanism generates all samples of an audio waveform in parallel, speeding up waveform synthesis dramatically compared to traditional autoregressive methods. To train a parallel waveform synthesizer, they use a Gaussian autoregressive WaveNet as the teacher net and the Gaussian IAF as the student net. The Gaussian autoregressive WaveNet is trained with maximum likelihood estimation (MLE), and the Gaussian IAF is distilled from it by minimizing the KL divergence between their peaked output distributions, which stabilizes the training process.

For more details on ClariNet, you can check out Baidu's paper and audio samples.
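The distillation objective mentioned above compares two Gaussian output distributions. As a hedged illustration, the sketch below computes the standard closed form for the KL divergence between two univariate Gaussians; the parameter values are made up, and this is not Baidu's full regularized objective.

```python
import numpy as np

def kl_gaussian(mu_q, sigma_q, mu_p, sigma_p):
    """KL( N(mu_q, sigma_q^2) || N(mu_p, sigma_p^2) ), closed form."""
    return (np.log(sigma_p / sigma_q)
            + (sigma_q ** 2 + (mu_q - mu_p) ** 2) / (2.0 * sigma_p ** 2)
            - 0.5)

# Illustrative per-sample distributions predicted by a student (IAF) and
# a teacher (autoregressive WaveNet) network for a short waveform segment.
mu_student, sigma_student = np.array([0.10, -0.02]), np.array([0.05, 0.04])
mu_teacher, sigma_teacher = np.array([0.12, -0.01]), np.array([0.06, 0.05])

loss = kl_gaussian(mu_student, sigma_student, mu_teacher, sigma_teacher).mean()
print(loss)
```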
Read next:
How Deep Neural Networks can improve Speech Recognition and generation
AI learns to talk naturally with Google's Tacotron 2

Google, Microsoft, Twitter, and Facebook team up for Data Transfer Project

Richard Gall
21 Jul 2018
2 min read
Marvel's Infinity War might well have been 'the most ambitious crossover in history', but there's a new crossover that might just beat it. In a blog post, Google has revealed that it's working with Microsoft, Facebook, and Twitter on something called the 'Data Transfer Project'.

The Data Transfer Project is, according to Google, "an open source initiative dedicated to developing tools that will enable consumers to transfer their data directly from one service to another, without needing to download and re-upload it." Essentially, the project is about making data more portable for users. For anyone who has ever tried to move data from one source to another, that could save some massive headaches.

Standardizing and securing data with the Data Transfer Project

The tools being developed by Google, Microsoft, Facebook, and Twitter should be able to transform a proprietary API into a standardized format. Google explains that "this makes it possible to transfer data between any two providers using existing industry-standard infrastructure and authorization mechanisms, such as OAuth." Tools for adapting data from 7 different services and 5 different data formats have already been developed.

With trust and security being two key issues for consumers in tech, Google was also keen to point out how the Data Transfer Project is fully committed to data security: "Services must first agree to allow data transfer between them, and then they will require that individuals authenticate each account independently. All credentials and user data will be encrypted both in transit and at rest. The protocol uses a form of perfect forward secrecy where a new unique key is generated for each transfer. Additionally, the framework allows partners to support any authorization mechanism they choose. This enables partners to leverage their existing security infrastructure when authorizing accounts."

Google urges the developer community to get involved. You can find the source code for the project here and learn more about the history of the project in a white paper here.
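To make the idea of an adapter concrete, here is a minimal, purely hypothetical sketch of converting one provider's proprietary export into a standardized record; none of the class or field names below come from the actual Data Transfer Project codebase.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class StandardPhoto:
    # Hypothetical "industry-standard" record that adapters would target.
    title: str
    url: str
    album: str

class ExampleServiceExporter:
    """Hypothetical exporter wrapping one provider's proprietary API output."""

    def __init__(self, api_items: List[dict]):
        self.api_items = api_items

    def export(self) -> List[StandardPhoto]:
        # Map provider-specific field names onto the common format.
        return [
            StandardPhoto(title=item["caption"],
                          url=item["media_url"],
                          album=item.get("collection", "default"))
            for item in self.api_items
        ]

# Another provider's importer would consume List[StandardPhoto], so data
# moves service-to-service without a download/re-upload step.
photos = ExampleServiceExporter(
    [{"caption": "Beach", "media_url": "https://example.com/1.jpg"}]).export()
print(photos)
```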
Read next:
Why Twitter (finally!) migrated to Tensorflow
5 reasons government should regulate technology
Microsoft's Brad Smith calls for facial recognition technology to be regulated

DeepMind, Elon Musk, and others pledge not to build lethal AI

Richard Gall
18 Jul 2018
3 min read
Leading researchers and figures from across the tech industry have signed a pledge agreeing not to develop lethal weapons with AI. The pledge, which was published today (18 July) to coincide with the International Joint Conference on Artificial Intelligence in Sweden, asserts that "the decision to take a human life should never be delegated to a machine."

The pledge was coordinated by the Future of Life Institute, a charity which 'mitigates existential risks to humanity'. The organization was previously behind an unsuccessful letter calling on the UN to ban "killer robots"; that letter included some, but not all, of the signatories on the current pledge.

Who signed the AI pledge?

The pledge includes signatories from some of the leading names in the world of AI. DeepMind has thrown its support behind it, along with founders Demis Hassabis and Shane Legg. Elon Musk has also signed, taking time out from his spat with members of the Thai cave rescue mission, and Skype founder Jaan Tallinn is also lending his support. Elsewhere, the pledge has support from a significant number of academics working on AI, including Stuart Russell from UC Berkeley and Yoshua Bengio from the University of Montreal.

Specifically, the pledge focuses on weapons that use AI to remove human decision-making from lethal force. However, what this means in practice isn't straightforward, which makes legislating against such weapons incredibly difficult. As a piece in Wired argued last year, banning autonomous weapons simply may not be practical. It's also worth noting that the pledge does not cover the use of artificial intelligence for non-lethal purposes.

Speaking to The Verge, military analyst Paul Scharre was critical of the pledge: "What seems to be lacking is sustained engagement from AI researchers in explaining to policymakers why they are concerned about autonomous weapons," he's quoted as saying.

Here's how the pledge ends: "We, the undersigned, call upon governments and government leaders to create a future with strong international norms, regulations and laws against lethal autonomous weapons. These currently being absent, we opt to hold ourselves to a high standard: we will neither participate in nor support the development, manufacture, trade, or use of lethal autonomous weapons. We ask that technology companies and organizations, as well as leaders, policymakers, and other individuals, join us in this pledge."

While the crux of the message represents the right step from industry leaders, whether this amounts to real change is another matter. With pledges and letters coming thick and fast over the last few years, perhaps it's time for concrete action.

Read next:
Google Employees Protest against the use of Artificial Intelligence in Military
5 reasons government should regulate technology
The New AI Cold War Between China and the USA

Anaconda Enterprise 5.2 releases with special focus on machine learning in production

Pravin Dhandre
18 Jul 2018
2 min read
Just a month after the release of Anaconda Distribution 5.2, the team at Anaconda, Inc. has announced the enterprise release, Anaconda Enterprise 5.2. The new version boosts the platform's capabilities with GPU acceleration, cloud-native model management, and support for scaling machine learning models. This release is expected to give enterprises the high-speed digital interactions required in artificial intelligence and machine learning operations.

New features in Anaconda Enterprise 5.2:

GPU acceleration with cloud-native support - Enables GPU computation for complex and heavy deep learning workloads. It provides secure and efficient utilization of GPU clusters, delivering an efficient way to perform large machine learning operations at scale.

Job scheduling - Allows priority jobs to be allocated sufficient CPU and GPU resources, along with supporting regular deployments of recurring jobs.

Easy Git integration - Provides support for collaborating with your existing version control and continuous integration tools, such as Bitbucket and Git.

With these new features, data scientists can take their machine learning projects from training to production with complete security and governance. The open source Anaconda Distribution already helps over 6 million data science users develop and optimize their machine learning models on large datasets. Anaconda Enterprise is pitched as the only product on the market that allows data scientists to go from a laptop for model development, to a 1,000-node GPU cluster for training, to production deployment, all with full reproducibility and governance.

According to Anaconda, Inc., "Anaconda Enterprise is the only platform to combine core AI technologies, automated governance, and reproducibility, and cloud-native approaches to make data science teams as productive as possible. Anaconda Enterprise empowers organizations to develop, govern, and automate ML/AI pipelines from laptop to production, quickly delivering insights into the hands of business leaders and decision-makers."

To learn more about the administrator-facing and backend changes, you can read the release notes, which will soon be available in the Anaconda documentation. Until then, you can refer to the official announcement about this release on Anaconda's blog.

Read next:
How Amazon is reinventing Speech Recognition and Machine Translation with AI
Alibaba introduces AI copywriter
Nvidia and AI researchers create AI agent Noise2Noise that can denoise images

Attention designers, Artificial Intelligence can now create realistic virtual textures

Sugandha Lahoti
18 Jul 2018
2 min read
A team of researchers has developed a new tool that can assist designers and animators in creating more realistic virtual textures. The tool makes use of a deep learning technique called Generative Adversarial Networks (GANs). The GAN trains a neural network to learn to expand small textures into larger ones that bear a resemblance to the original sample.

Texture synthesis has always been a challenging job for designers. Designing accurate real-world textures such as water ripples in a river, concrete walls, or patterns of leaves is highly intricate and prone to errors, and current techniques for texture creation are tedious and time-consuming. The work of Yang Zhou et al., however, simplifies the texture synthesis process for texture artists designing video games, virtual reality, and animation.

Their method uses a generator to produce a texture, usually larger than the input, that closely resembles the visual characteristics of the sample input. The visual similarity between the newly created texture and the sample input is assessed using a discriminative network (discriminator). As is typical of GANs, the discriminator is trained in parallel with the generator to distinguish between the actual and the desired output.

The researchers tested their method on complex examples such as peacock feathers and tree trunk ripples, which are seemingly endless in their repetitive patterns. The results are realistic designs produced in high resolution, efficiently, and at a much larger scale. The team also intends to train a "universal" model on a large-scale texture dataset, as well as to increase user control, as part of their future work.

Zhou and his collaborators will present their work at SIGGRAPH 2018, to be held on 12-16 August in Vancouver, British Columbia. This annual gathering showcases the work of professionals and academics practicing in CG, animation, VR, games, digital art, mixed reality, and emerging technologies.
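As a hedged, toy illustration of the adversarial setup described above (the scores below are made-up numbers, not outputs of the paper's actual networks), the generator and discriminator objectives can be written as binary cross-entropy losses:

```python
import numpy as np

def bce(scores, labels):
    """Binary cross-entropy on discriminator scores in (0, 1)."""
    eps = 1e-7
    scores = np.clip(scores, eps, 1 - eps)
    return -np.mean(labels * np.log(scores) + (1 - labels) * np.log(1 - scores))

# Hypothetical discriminator outputs for patches of the real sample texture
# and for patches cut from the generator's expanded texture.
d_real = np.array([0.9, 0.8, 0.95])   # should be pushed towards 1
d_fake = np.array([0.3, 0.2, 0.4])    # should be pushed towards 0

# Discriminator loss: label real patches 1, generated patches 0.
d_loss = bce(d_real, np.ones_like(d_real)) + bce(d_fake, np.zeros_like(d_fake))

# Generator loss: it wants the discriminator to call its patches real.
g_loss = bce(d_fake, np.ones_like(d_fake))

print(d_loss, g_loss)
```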
Read next:
Nvidia and AI researchers create AI agent Noise2Noise that can denoise images
Adobe to spot fake images using Artificial Intelligence
How Google's DeepMind is creating images with artificial intelligence

Hortonworks Data Platform 3.0 is now generally available

Pravin Dhandre
17 Jul 2018
3 min read
Hortonworks has announced the eagerly awaited full release of its data platform, Hortonworks Data Platform 3.0. With businesses becoming more data-driven, Hortonworks Data Platform 3.0 (HDP 3.0) is a major step in the company's plan for the big data ecosystem: it makes major changes to the stack and expands the ecosystem to include trending technologies like deep learning.

With this GA release, HDP 3.0 equips businesses with enterprise-grade functionality, enabling speedy application deployment and managing machine learning workloads alongside real-time database management. The platform is designed to provide complete security and governance for business applications. It adds support for GPU computing, containerization, NameNode federation, and erasure coding, all built on Hadoop 3.1. The platform supports both on-premise and cloud deployment, including the major cloud platforms Amazon Web Services, Microsoft Azure, and Google Cloud. It also ships with Apache Ranger and Apache Atlas to provide a secure and trusted data lake infrastructure. To keep the stack lean, the new release deprecates various Apache components, including Falcon, Mahout, Flume, and Hue.

Key features of HDP 3.0:

Agile application deployment - Enables application developers to deploy their applications using containerization technology. Developers can test new versions and create new features without damaging old ones, resulting in speedy application deployment along with optimum utilization of the resources at hand.

Deep learning support - With deep learning becoming the backbone of modern intelligent applications, HDP 3.0 provides complete support for GPU computing and deep learning workloads. The platform provides both GPU pooling and GPU isolation, so GPU resources can be shared at the optimal level or reserved exclusively for a specific application based on its priority and complexity.

Cloud optimization - Automated cloud provisioning simplifies the deployment of big data applications, with support for the major cloud object stores such as Amazon S3, Azure Data Lake, and Google Cloud Storage. The platform also delivers speedy query performance with the support of cloud connectors, including Apache HBase and Apache Spark.

This newly revamped big data platform can help businesses achieve faster insights and better decision-making in today's competitive business environment. For more detailed information on HDP 3.0, please visit the official product page.

Read next:
Hortonworks partner with Google Cloud to enhance their Big Data strategy
Why AWS is the preferred cloud platform for developers working with big data?
Getting to know different Big data Characteristics

Tech, unregulated: Washington Post warns Robocalls could get worse

Sugandha Lahoti
16 Jul 2018
3 min read
Automated calls are annoying; there is rarely a day when you aren't greeted with an unanticipated call from an unrecognized number. To make matters worse, robocalls are expected to surge in number in the future, and the problem gets even worse during tax season. The online site AllAreaCodes.com looked at the number of complaints filed with the Federal Trade Commission and found that phone scams increase by 20 percent in March and April as the tax filing deadline approaches.

According to a report by the Washington Post, financial corporations and retailers are trying to persuade the current US administration to make it easier to send robocalls and texts to the masses. If the government approves this request, industries would soon be permitted to send unlimited texts and calls without the consumer having a say in the matter.

Robocalls are fine as long as they come from legitimate businesses. For instance, an alert from a drugstore that a prescription is ready to pick up, or an automated voice call from your bank noting a credit card bill's due date, is useful. The sanctity of robocalls is maintained as long as telemarketing calls and alerts from companies are genuine and consented to. Per the Post, almost three-quarters of the more than four billion robocalls placed in June, according to data published by YouMail, came from companies the customers had a genuine relationship with.

However, some of these calls come from scammers who abuse automated systems. According to the Post, Adrian Abramovich was fined $120 million for placing 96 million unsolicited calls offering fake travel deals in just three months. Businesses are equally guilty: Navient, a financial services company, has been highly active in voicing its opinion on loosening restrictions on robocalls. Multiple lawsuits have hit Navient, as reported by the Post, for harassing consumers with automated calls, and last year consumer groups requested that the Federal Communications Commission look into the company's practices.

The task of regulating these robocalls lies in the hands of the Federal Communications Commission. Per the Post, the FCC, along with its sister organization the Federal Trade Commission, is figuring out how to rethink the robocall rules while focusing on combating fraud and abuse. The last time the FCC tried to implement laws to cut down robocalls, it was throttled by the trade association of debt collectors. This time as well, the Post says, businesses led by the US Chamber of Commerce have opposed the idea of adding any federal laws that would restrict how businesses communicate with their customers.

"We are at serious risk of seeing the existing robocall problem, which is already serious, get far, far worse," said Margot Saunders, a senior attorney at the National Consumer Law Center.

Read next:
YouTube has a $25 million plan to counter fake news and misinformation
Microsoft's Brad Smith calls for facial recognition technology to be regulated
Too weird for Wall Street: Broadcom's value drops after purchasing CA Technologies

Announcing Tableau Prep 2018.2.1!

Sunith Shetty
16 Jul 2018
3 min read
The Tableau team has announced a new version, Tableau Prep 2018.2.1, with lots of new features for easy enterprise deployment and more data connector options for customers. This update also brings many user experience improvements focused on helping you manage data more efficiently and easily.

Tableau Prep is a brand-new product introduced in April 2018, specifically designed to quickly combine, shape, and clean data for easy-to-complex data analysis tasks. The product gives everyone an appealing visual experience that makes data prep easier, and it is seamlessly integrated with the existing Tableau analytical workflow, delivering a smooth path from data prep to analysis.

Some of the noteworthy changes in Tableau Prep 2018.2.1 are:

Activate and deactivate Tableau Prep from the command line: New command line options let you deploy Tableau Prep easily on hundreds to thousands of machines in an enterprise environment. You can use a key management system and silent activation and deactivation capabilities, just like in Tableau Desktop. To learn more about the feature, you can refer to the Deploy Tableau Desktop blog post.

Virtual desktop support to optimize Tableau Prep installations: Just like Tableau Desktop, the new virtual desktop support allows you to optimize Tableau Prep installations for non-persistent virtual desktops. The Tableau-hosted Authorization to Run (ATR) service can be used to automatically deactivate Tableau Prep licenses after a predetermined period of time. To configure this option, refer to the Configure Virtual Desktop Support blog post.

Union summary to quickly align fields: You often need to align or merge fields that represent the same data under different names, especially while combining data from multiple sources. With this new feature, Tableau Prep offers a summary of the mismatched fields and automatically recommends potential matches based on attributes such as similar data types and field names, making it quicker to align your data.

Use ISO-8601 date parts: As seen on the Ideas forum, many customers, particularly in Europe, wanted an easier way to extract ISO-8601 date parts, especially week numbers. With new native support in the calculation language, you can get the data you want by creating a simple calculation: [Week Number] = DATEPART("iso-week", [Order Date]). You no longer need to write the complex date calculations that were previously involved.

Group and filter data values: New techniques for grouping and filtering your data are now available. You just need to select the field values and right-click to group them, or select a group and right-click to ungroup the field values. You can also create new filters based on wildcard matches without writing a calculation.

New data connectors: New support has been added for the following connectors to help you connect to various cloud data sources and Hadoop Hive: MapR Hadoop Hive, Apache Drill, SparkSQL, Snowflake, Amazon EMR Hadoop Hive, Cloudera Hadoop (Hive and Impala), and Hortonworks Hadoop Hive. To learn more about how to connect Tableau Prep to your data, see the Supported Connectors blog post.

These are the key new features offered by Tableau Prep 2018.2.1, but there are more updates and user experience improvements. To find the complete list of new features, please refer to the what's new blog post.
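The ISO-8601 week number that the new DATEPART("iso-week", ...) calculation extracts is the same value Python exposes through datetime.isocalendar(). As a small illustration, shown here in Python rather than Tableau's calculation language (the dates are arbitrary examples):

```python
from datetime import date

order_dates = [date(2018, 1, 1), date(2018, 12, 31)]

for d in order_dates:
    iso_year, iso_week, iso_weekday = d.isocalendar()
    # 2018-01-01 falls in ISO week 1 of 2018, while 2018-12-31 already
    # belongs to ISO week 1 of 2019 -- exactly why a dedicated "iso-week"
    # date part is handy for European reporting.
    print(d, "->", iso_year, "week", iso_week)
```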
Read next:
A tale of two tools: Tableau and Power BI
"Tableau is the most powerful and secure end-to-end analytics platform": An interview with Joshua Milligan
Visualizing BigQuery Data with Tableau

Microsoft's Brad Smith calls for facial recognition technology to be regulated

Richard Gall
13 Jul 2018
3 min read
It has been a tough few months for some of the U.S.' biggest tech companies. Political upheaval has placed a big focus on data, privacy, and government contracts, not only in the tech industry but across the public too. You could easily argue that these organizations have been quiet in spite of considerable noise, but that might be changing. Brad Smith, Microsoft President and Chief Legal Officer, today (13 July) wrote a blog post arguing in favor of regulation of facial recognition technology.

What Brad Smith argues in his blog post

In his blog post, Brad Smith sets the context clearly. He argues that while facial recognition technology can be "both positive and potentially even profound," it also "raises a critical question: what role do we want this type of technology to play in everyday society?" For Smith, that question can't be answered by tech companies alone. "As a general principle," he writes, "it seems more sensible to ask an elected government to regulate companies than to ask unelected companies to regulate such a government."

As Smith also notes, we shouldn't actually be that surprised at a company asking for government regulation; in other industries, it's actually common. Smith cites the aviation, food, and pharmaceutical industries as areas in which corporations work closely with government to develop regulation and legislation. However, Smith doesn't outline anything specific with regard to legislation. Instead he is much more keen to urge a "thoughtful approach" from government, as a challenge to ways of using facial recognition technology that could be exploitative. He does, however, argue for Congress to form a bipartisan expert commission to properly assess the uses (and abuses) of facial recognition technology. "The purpose of such a commission should include advice to Congress on what types of new laws and regulations are needed, as well as stronger practices to ensure proper congressional oversight of this technology across the executive branch," he explains.

Does Brad Smith think tech companies still have responsibility?

In short, yes. Although Smith believes the debate and discussion around facial recognition technology has to take place in a civic space away from the demands of industry and government, he is forthright in asserting technology companies' responsibility in developing and deploying new technologies. He outlines four things the tech industry can do to ensure that facial recognition technology is used ethically:
- Working to minimize bias in machine learning and artificial intelligence systems
- Providing more transparency in how facial recognition technology is being developed and used
- Being more cautious in how facial recognition technology is applied
- Actively participating in public policy discussions around facial recognition technology

Are things changing in tech?

Although it's only a blog post, this is one of the first instances of a senior figure at a tech company talking positively about working with government. Contrasted with Mark Zuckerberg's Congressional testimony, it couldn't look more different. At a time when conversation around the ethics of technology has never felt more visible and urgent, Brad Smith's intervention is welcome. How the wider tech world and government respond is another matter.

Read next:
Microsoft condemns ICE activity at U.S. border but still faces public and internal criticism
Tech's culture war: entrepreneur egos v. engineer solidarity
Amazon is selling facial recognition technology to police