
Tech News - Data

1208 Articles

Fitness app Polar reveals military secrets

Richard Gall
09 Jul 2018
3 min read
You might remember that back in January, fitness app Strava was revealed to be giving away military secrets: when used by military personnel, the app exposed potentially sensitive locations. Well, it's happening again. This time another fitness app, Polar, is unwittingly giving up sensitive military locations. The digital investigation organization Bellingcat was able to scrape data from 200 sites around the world, gaining information on the exercises of nearly 6,500 Polar users. The level of detail Bellingcat obtained was remarkable: it was not only able to learn more about military locations, information that could be critical to national security, but also a startling amount about the people who work on them.

The investigation echoes the Strava data leak, and it underlines the disturbing privacy issues that fitness tracking applications have been unable to confront. But Bellingcat explains that Polar is actually one of the worst apps for publicizing private data. On Strava and Garmin, for example, it's only possible to see individual exercises done by users. "Polar makes it far worse by showing all the exercises of an individual done since 2014, all over the world on a single map."

Polar reveals dangerous levels of detail about its users

Some of the information found by Bellingcat is terrifying. For example:

"A high-ranking officer of an airbase known to host nuclear weapons can be found jogging across the compound in the morning. From a house not too far from that base, he started and finished many more runs on early Sunday mornings. His favorite path is through a forest, but sometimes he starts and ends at a car park further away. The profile shows his full name."

The investigators also revealed they were able to cross-reference profiles with social media accounts. This could allow someone to build up a very detailed picture of a member of the military or security personnel. Some of these people have access to nuclear weapons.

Bellingcat's advice to fitness app users

Bellingcat offers some clear advice to anyone using fitness tracking apps like Polar. Most of it sounds obvious, but evidently even people who should be particularly careful aren't following it: "As always, check your app-permissions, try to anonymize your online presence, and, if you still insist on tracking your activities, start and end sessions in a public space, not at your front door."

The results of the investigation are, perhaps, just another piece in a broader story of techno-scepticism emerging this year. Problems with tech have always existed; it's only now that they are really surfacing and seem to be taking on a new urgency. This will certainly have implications for the military, but it is also likely to affect the way these applications are built in the future.

Read next

The risk of wearables – How secure is your smartwatch?
Computerizing our world with wearables and IoT
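Bellingcat's warning about starting and ending runs at your front door can be illustrated with a short, framework-free sketch (the coordinates, field names, and data here are entirely hypothetical; this is not Bellingcat's tooling): repeated start/end points across many public sessions trivially reveal a likely home location.

```python
from collections import Counter

# Hypothetical exercise sessions scraped from a public fitness profile.
# Each session records rounded (lat, lon) start and end points.
sessions = [
    {"start": (52.0840, 4.3124), "end": (52.0840, 4.3124)},  # front door
    {"start": (52.0840, 4.3124), "end": (52.1015, 4.2890)},
    {"start": (52.0840, 4.3124), "end": (52.0840, 4.3124)},
    {"start": (52.0700, 4.3000), "end": (52.0840, 4.3124)},  # car park start
]

def likely_home(sessions):
    """Return the most frequent start/end coordinate across all sessions."""
    points = Counter()
    for s in sessions:
        points[s["start"]] += 1
        points[s["end"]] += 1
    return points.most_common(1)[0]

home, hits = likely_home(sessions)
print(home, hits)  # the dominant coordinate appears in 6 of 8 endpoints
```

Starting from a public space instead of the front door breaks exactly this aggregation, which is why it is the advice Bellingcat leads with.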


Alibaba introduces AI copywriter

Pravin Dhandre
09 Jul 2018
2 min read
Alibaba, the multinational ecommerce conglomerate, has surprised the advertising market with a smart copywriting tool, AI-CopyWriter. The artificial intelligence-powered tool was developed by Alimama, Alibaba's digital marketing and big data arm. Backed by deep learning and natural language processing technology, it runs through tens of millions of sampled data points in the back end to generate copy for products in seconds: more than 20,000 lines of copy per second, reducing the repetitive copywriting workload of advertising and marketing teams.

The product is simple to use. One simply inserts the URL of a product page, and at the click of a button the smart copy engine returns numerous copy ideas. According to Alimama, the AI copywriting tool has been validated through the Turing test and can generate tens of thousands of lines of copy in one second. It can produce tone-specific copy, such as funny, loving, poetic, or promotional lines, along with adjustments to their length.

Prior to this tool, the team at Alimama built a smart banner designer for small and mid-sized businesses, which they can use to redesign and resize advertising banners on ecommerce platforms with just a slide of the mouse. The team also recently released an AI-powered smart video editing tool, through which advertising and promotion teams can generate a 20-second video in less than a minute. The tools have already proven successful with the renowned apparel chain Esprit, US fashion brand Dickies, and marketplaces such as Taobao and Tmall.

"The AI copywriter is a really amazing tool. Based on a massive database of existing copy and advanced AI technologies, the tool can reduce the repetitive and tedious copywriting workload for our teams," says Esprit's Head of E-Commerce for the Asia Pacific market.

Google's translation tool is now offline
Adobe to spot fake images using Artificial Intelligence
Microsoft starts AI School to teach Machine Learning and Artificial Intelligence


AI beats Chinese doctors in a tumor diagnosis competition

Amey Varangaonkar
06 Jul 2018
2 min read
AI has trumped humans yet again, this time in diagnosing tumors accurately and quickly. The BioMind AI system, designed especially for neuroimaging recognition, has managed to beat a team of 15 top Chinese doctors by a margin of two to one. Diagnosing brain tumors across 225 test cases, BioMind completed the task in just 15 minutes, as opposed to the 30 minutes the doctors took for the same task. BioMind correctly detected tumors with a staggering accuracy of 87%, compared to the doctors' 66%.

What is BioMind?

BioMind is an artificially intelligent system developed at the Artificial Intelligence Research Centre for Neurological Disorders at Beijing Tiantan Hospital, in collaboration with the Capital Medical University. To train the system, it was fed thousands of images of nervous system-related diseases, especially those of the brain, that the hospital had archived over 10 years. Such a large dataset allowed the AI to diagnose common and not-so-common neurological diseases with an accuracy of approximately 90%, comparable to a senior doctor at the hospital.

AI is not something to be afraid of, says hospital vice president

Wang Yongjun, the executive vice president of Tiantan Hospital, which conducted the competition, expressed no concern over the AI's comprehensive victory. He said the contest was not intended to pit humans against AI, but to help doctors interact better with the technology. One of the doctors who lost to BioMind, Dr. Lin Yi, said she welcomes Artificial Intelligence as a friend and thinks it will reduce doctors' workload; it will also push them to keep learning and improve their skills.

The tumor diagnosis competition demonstrated a well-trained AI's capability to outperform humans yet again. Only recently, a team of AI algorithms beat a team of the world's best human Dota 2 players in the multiplayer mode, proving AI is capable of working in teams. These events further fuel the debate over whether AI will replace human effort or augment human expertise to accomplish tasks more efficiently.

Read more

Meet CIMON, the first AI robot to join the astronauts aboard ISS
7 Popular Applications of Artificial Intelligence in Healthcare
Top languages for Artificial Intelligence development


TensorFlow 1.9.0-rc2 releases!

Natasha Mathur
05 Jul 2018
3 min read
After the 1.9.0-rc0 release early last month, the TensorFlow team is out with another update, 1.9.0-rc2, unveiling major features and updates. The new release includes the latest improvements, bug fixes, and other changes. Let's have a look at the noteworthy features in TensorFlow 1.9.0-rc2.

Key features and improvements

The docs for tf.keras, namely the new Keras-based "get started" page and the programmer's guide page, have been updated.
The layers tf.keras.layers.CuDNNGRU and tf.keras.layers.CuDNNLSTM have been added.
The Python interface for the TFLite Optimizing Converter has been expanded, and the command line interface (AKA: toco, tflite_convert) is once again included in the standard pip installation.
Data loading and text processing have improved with tf.decode_compressed, tf.string_strip, and tf.strings.regex_full_match.
Headers used for custom ops have moved from site-packages/external into site-packages/tensorflow/include/external.
When opening empty variable scopes, replace variable_scope('', ...) with variable_scope(tf.get_variable_scope(), ...).

Bug fixes

tfe.Network has been deprecated; you can now inherit from tf.keras.Model.
Layered variable names have changed in the following conditions: using tf.keras.layers with custom variable scopes, and using tf.layers in a subclass of tf.keras.Model.
In tf.data, the DatasetBase::DebugString() method is now const, and the tf.contrib.data.sample_from_datasets() API for randomly sampling from multiple datasets has been added.
In tf.contrib, tf.contrib.data.choose_from_datasets() has been added, and tf.contrib.data.make_csv_dataset() now supports line breaks in quoted strings; two arguments were removed from make_csv_dataset.
tf.contrib.framework.zero_initializer supports ResourceVariable.
"constrained_optimization" has been added to tensorflow/contrib.

Other changes

GCS Configuration Ops have been added.
The signature of MakeIterator has been changed to enable propagation of error status.
The bug in the tf.reduce_prod gradient has been fixed for complex dtypes.
The benchmark for tf.scan has been updated to match ranges across eager and graph modes.
An optional args argument has been added to Dataset.from_generator().
Ids in nn.embedding_lookup_sparse have been made unique, which helps reduce the RPC calls made to look up embeddings when there are repeated ids in the batch.
tf.train.Checkpoint has been added for reading/writing object-based checkpoints.

To get more information on the new updates and features in the latest TensorFlow 1.9.0-rc2 release, check out the official release notes.

Use TensorFlow and NLP to detect duplicate Quora questions [Tutorial]
TensorFlow.js 0.11.1 releases!
Build and train an RNN chatbot using TensorFlow [Tutorial]
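The motivation for making ids unique in nn.embedding_lookup_sparse can be shown with a framework-free sketch (pure Python with invented helper names, not TensorFlow's implementation): deduplicating repeated ids means each embedding is fetched only once, which is exactly what cuts the RPC count the release notes mention.

```python
def lookup_embeddings(ids, fetch):
    """Fetch one embedding per *unique* id, then fan results back out.

    `fetch` stands in for an expensive remote call (an RPC to a
    parameter server, in TensorFlow's case).
    """
    unique = sorted(set(ids))
    table = {i: fetch(i) for i in unique}   # one fetch per unique id
    return [table[i] for i in ids]

calls = []
def fake_fetch(i):
    calls.append(i)            # record each simulated RPC
    return [float(i)] * 4      # dummy 4-dimensional embedding

batch = [7, 3, 7, 7, 3, 9]     # repeated ids in the batch
vectors = lookup_embeddings(batch, fake_fetch)
print(len(calls), len(vectors))  # 3 fetches serve all 6 batch entries
```

With a naive lookup, the batch above would trigger six fetches; after deduplication it triggers three, and the saving grows with the amount of repetition in real batches.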


ggplot2 3.0.0 releases!

Sunith Shetty
05 Jul 2018
3 min read
The ggplot2 team has announced a new version, 3.0.0, with major changes. This release brings significant changes to the library to ease advanced data visualization and create appealing aesthetics. ggplot2 is an open source R library for creating visual representations of data. It works by breaking graphs up into semantic components such as scales and layers. ggplot2 has grown considerably in use within the R community, becoming one of the most popular R packages today.

Some of the noteworthy changes in the library are:

Tidy evaluation

ggplot2 now supports tidy evaluation. This allows you to build plots programmatically in the same way you build data manipulation pipelines with dplyr. You can now use quasiquotation in aes(), facet_wrap(), and facet_grid(), making ggplot2 more easily programmable and consistent with the rest of the tidyverse packages.

New features added to the library

ggplot2 now supports simple features via sf, with geom_sf() and coord_sf(); it can automatically align CRS across layers, draw a graticule, and set up the correct aspect ratio. The new stat() function offers a cleaner and better-documented syntax for calculated aesthetic variables: you can write aes(y = stat(count)), replacing the old approach of surrounding the variable name with dots (e.g. aes(y = ..count..)). A new tag label has also been added for identifying plots, in addition to title, subtitle, and caption.

Layers: geoms, stats, and position adjustments

You can now arrange the horizontal position of plots with variable widths, for bars and rectangles in addition to box plots, using the new function position_dodge2(). Many other functions and parameters have been added to enhance the layers of the graphics; to know more, you can refer to the GitHub page.

Scales and guides

There is improved support for ordered factors and for mapping date/time variables to the alpha, size, colour, and fill aesthetics, including date_breaks and date_labels arguments. Several new functions make it easy to use viridis colour scales: scale_colour_viridis_c() and scale_fill_viridis_c() for continuous data, and scale_colour_viridis_d() and scale_fill_viridis_d() for discrete data. To know more about the enhanced support, you can refer to the GitHub page.

Nonstandard aesthetics

There is improved support for nonstandard aesthetics; they can now be specified independently of the scale name.

There is a long list of bug fixes and improvements to the library; for the details, refer to the minor bug fixes and improvements page. You can find the complete list of updates, along with how to handle common errors and ways to work around them, in the breaking changes section of the ggplot2 GitHub page.

R interface to Python via the Reticulate Package
Why choose R for your data mining project
15 Useful Python Libraries to make your Data Science tasks Easier


Meet CIMON, the first AI robot to join the astronauts aboard ISS

Natasha Mathur
03 Jul 2018
3 min read
A ball-shaped robot named CIMON (Crew Interactive Mobile Companion) joined the crew aboard the International Space Station (ISS) two days ago, launched on SpaceX's 15th resupply flight of Falcon 9. The robot was created to see whether a bot with AI capabilities can boost the efficiency and morale of a crew on longer missions. It was built by IBM in conjunction with the German Aerospace Center (DLR), and the project is led by DLR astronaut Alexander Gerst.

Source: SciNews

Let's have a look at CIMON's functionalities and features. The AI robot comes with a language user interface, which enables it to respond to spoken commands. It displays repair instructions on screen at a voice command, keeping the astronauts' hands free. It is also capable of displaying procedures for experiments, thereby serving as the space station's voice-controlled database. The ISS involves tasks and activities that are quite complicated in nature, and the AI robot can help with those. Astronauts can easily call upon the bot for assistance: for instance, they can ask it to display certain documents and media in their field of view, or ask CIMON to record or play back experiments with its onboard camera. CIMON is capable of sensing the tone of conversation among the crew; in fact, its behavior is quite similar to R2D2's, and it can quote dialogue from famous movies like E.T. the Extra-Terrestrial. It can move freely and perform rotational movements, like shaking its head back and forth to indicate disapproval.

Key features

The AI bot is ball-shaped with a flattened surface. It has no sharp edges, which makes it safe equipment for the crew. It comes with 12 internal fans which allow it to move in all directions. It is programmed with an ISTJ personality, i.e. introverted, sensing, thinking, and judging, and it comes equipped with a kill switch. IBM's Watson technology powers CIMON's AI language and comprehension system. It cost less than 6 million dollars and took less than 2 years to develop.

The Falcon 9 launch that took place on 29th June was successful, and CIMON is now undergoing astronaut assistant training. To know more, check out the official post by NASA.

Adobe to spot fake images using Artificial Intelligence
Microsoft starts AI School to teach Machine Learning and Artificial Intelligence
IBM unveils world's fastest supercomputer with AI capabilities, Summit

MongoDB 4.0 now generally available with support for multi-platform, mobile, ACID transactions and more

Amey Varangaonkar
02 Jul 2018
2 min read
At the annual MongoDB World event last week, MongoDB announced the general availability of the new MongoDB 4.0 release, along with several significant beta features related to the cloud. With 4.0, MongoDB aims to attack two domains at once: traditional database workloads as well as the ease of database development on the cloud.

Some major features of MongoDB 4.0

There had been talk earlier in the year of MongoDB 4.0 including support for ACID transactions extending across multiple documents. That support has landed, but it is restricted to a single replica set for now; distributed support across sharded clusters is to be incorporated in the 4.2 release, expected later in the year. The 4.0 release also includes a public beta of MongoDB Charts, a new in-database visualization feature which can be used to visualize JSON documents.

Some of the other important features include:

MongoDB Stitch, a serverless compute environment designed for developers, is now generally available.
A mobile-embedded version of MongoDB that runs on smartphones, tablets, and IoT devices is released as a private beta in 4.0.
A new multi-region capability for MongoDB Atlas lets you deploy a cloud instance that is distributed across different regions throughout the world.
A beta version of the Kubernetes Operator, which coordinates orchestration between Kubernetes and MongoDB Ops Manager, is released.

A key milestone for MongoDB with the 4.0 release

MongoDB 4.0 is quite a significant release. It addresses the needs of modern businesses who base their processes around cloud and serverless operations. Going multi-platform with support for mobile devices will also be seen as a welcome addition by many users, and incorporating relational capabilities puts MongoDB on par with many popular relational databases. The ability to combine the power of business-critical transactional processes with the speed and ease of a non-relational database could prove to be a real game-changer for MongoDB.

Read more

Why MongoDB is the most popular NoSQL database today
4 must-know levels in MongoDB security
Connecting your data to MongoDB using PyMongo and PHP
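What multi-document ACID transactions buy you is all-or-nothing semantics across several documents. As a loose illustration of the atomicity property only (a toy in-memory store in plain Python; this is nothing like MongoDB's storage engine or the pymongo transaction API):

```python
class ToyStore:
    """In-memory key/value store with an all-or-nothing transaction."""

    def __init__(self):
        self.docs = {}

    def transaction(self, ops):
        # Stage every write against a copy; commit only if all succeed.
        staged = dict(self.docs)
        try:
            for op in ops:
                op(staged)
        except Exception:
            return False          # abort: self.docs is untouched
        self.docs = staged        # commit all writes atomically
        return True

store = ToyStore()
store.docs = {"alice": 100, "bob": 50}

def debit(d): d["alice"] -= 70
def credit(d): d["bob"] += 70
def check(d):
    if d["alice"] < 0:
        raise ValueError("insufficient funds")

# A transfer that fits commits both writes...
store.transaction([debit, credit, check])
# ...while a second transfer that would overdraw rolls back entirely:
# neither the debit nor the credit survives.
ok = store.transaction([debit, credit, check])
print(store.docs, ok)
```

Before 4.0, a failure between the debit and the credit in MongoDB could leave the two documents inconsistent unless the application compensated by hand; the transaction guarantee moves that burden into the database.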


plotly.py 3.0 releases

Pravin Dhandre
29 Jun 2018
2 min read
The team at Plotly has announced the biggest release yet of its Python interface: plotly.py 3.0. The new release comes with strong support for the Jupyter and JupyterLab environments, imperative manipulation techniques, animated transitions, lots of performance improvements, and bug fixes.

Plotly is an interactive data analysis and live graphing library. The Python API gives you access to all of Plotly's functionality from Python. The advantage of Plotly is its collaborative features, through which one can share, track, and edit graphs in real time over the web. The library is built on the renowned JavaScript library plotly.js, equipped with numerous charts and plots such as line plots, heatmaps, histograms, bubble charts, etc.

What's new in plotly.py 3.0

New widget support for Jupyter and JupyterLab: a new widget called FigureWidget creates the final object to be plotted from a dictionary-like object containing both data and layout. It is compatible with the widget frameworks, and one can hover over the plot and zoom in on regions.

Manipulation attributes: specific, dedicated attributes have been added, making it easier to edit graphs in the Jupyter environment. With this set of attributes, figures can be manipulated and graphs explored in more detail.

Docstring support: informative docstrings improve the documentation of classes and functions. These docstrings are fetched directly from the plotly.js schema and automatically propagated to the Python interface.

Performance improvements

Figure specs are now serialized and transferred to plotly.js over the Jupyter comm protocol. Plotting speed for large data is now much faster, reducing the plotting time for 1 million data points from 35 seconds to as low as 3 seconds. Direct support for Typed Arrays has been added for faster access to raw data.

plotly.py is considered a high-performance charting library through which one can plot data across different chart types such as 3D graphs, statistical charts, financial charts, scientific charts, and more. To know more about its different chart styles and custom control options, read the official documentation.

15 Useful Python Libraries to make your Data Science tasks Easier
Visualizing 3D plots in Matplotlib 2.0
10 reasons why data scientists love Jupyter notebooks
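The Typed Array change matters because JSON-encoding a large numeric array is bulkier, and slower to parse, than shipping the raw binary buffer. A rough stdlib-only illustration of the size difference (this is not Plotly's actual wire format, just the general trade-off):

```python
import json
import math
import struct

# Stand-in for 10,000 points of plot data.
points = [math.sin(i) for i in range(10_000)]

json_bytes = json.dumps(points).encode()          # text encoding
binary = struct.pack(f"{len(points)}d", *points)  # raw float64 buffer

print(len(json_bytes), len(binary))
# The binary buffer is a fixed 8 bytes per point; the JSON text spells
# each float out as ~18 characters and must be re-parsed on the
# JavaScript side, which is where the plotting-time savings come from.
```

The same logic is why the release moved figure specs onto the Jupyter comm protocol with typed-array support rather than round-tripping everything through JSON text.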


AI beats human again - this time in a team-based strategy game

Amey Varangaonkar
29 Jun 2018
3 min read
To date, there has been a general perception that AI algorithms operate independently, and question marks have been raised over their ability to collaborate on complex tasks. Researchers at OpenAI have been working on this problem for some time now, and they seem to have found an answer: a team of AI algorithms called OpenAI Five has managed to beat a team of human video game players at Dota 2, the popular battle arena game.

OpenAI had previously developed an algorithm capable of competing against human players in Dota 2's single-player mode. This latest achievement, using a team of similar algorithms modified to factor in both individual and team success, is a significant step forward. Notably, the algorithms do not communicate directly, but only through gameplay.

How OpenAI Five beat the human Dota experts

OpenAI Five mastered Dota 2 by initially playing against different versions of itself. Over time, the agents learned the strategies human players generally use, figuring out how to attack, defend, and perform a variety of other tasks. Most importantly, they learned the art of collaboration and working as a team, something that eventually let them beat some of the world's top Dota 2 players.

One of the founders of OpenAI, Greg Brockman, thinks this is a milestone achievement for AI, with implications that could help humanity in a positive way. "What we've seen implies that coordination and collaboration can emerge very naturally out of the incentives", he says. He added that substituting an algorithm for a human player in Dota 2's team mode worked out very well.

What is Dota 2?

Dota 2 is one of the world's most popular strategy games, played by millions across the world. In the team mode, five players collaborate to destroy the opposing team's base structure, planning attacks and engaging in real-time combat. Each player has different strengths, weaknesses, and roles within the team, and they have to optimize their capabilities to work with the team in the best possible way.

Games continue to be the perfect test-bed for AI

Pitting AI algorithms against expert game players is an ongoing tradition. Last year DeepMind developed AlphaGo, an AI that beat the world's best human Go player, while another program, AlphaGo Zero, perfected its Go and chess skills simply by playing against itself iteratively.

Collaborative AI algorithms could be the future

Beating humans in a Dota 2 team game is an important achievement for AI. With the commercial applications of AI on the rise, this collaborative approach can prove invaluable: such algorithms could, for example, collaborate to outperform humans in a bidding war, or give faster, more accurate predictions of certain events. One cannot rule out the possibility of them collaborating with humans and helping with day-to-day activities in the near future.

However, could there be a downside? Could human effort be replaced by a combination of AI algorithms working together? We will find out in due course, but there seems to be no evidence to suggest this...just yet.

Read more

Unity Machine Learning Agents: Transforming Games with Artificial Intelligence
Developing Games Using AI
5 Ways Artificial Intelligence is Transforming the Gaming Industry


Google introduces Machine Learning courses for AI beginners

Amey Varangaonkar
27 Jun 2018
3 min read
Machine learning and Artificial Intelligence are two of the most popular buzzwords today. Everyone wants to use them to their advantage, but not many know how to do it right. In a bid to promote awareness and help more developers get proficient in machine learning and AI, Google introduced a Machine Learning Crash Course earlier this February. With the tremendous success of that program, they have now added an interactive course on image classification: the process of extracting information from images.

Computer vision is a very popular use case of machine learning and AI. Neural networks are trained on lots of image data and are then asked to classify a new image based on its characteristics; data scientists and machine learning developers strive to increase the accuracy of this prediction.

What is this machine learning course about?

Dubbed Machine Learning Practica, the newly added interactive course walks students through the basics of machine learning and its application to image classification, one of the most important use cases of computer vision. Students start with the basics of image classification and go on to learn about convolutional neural networks (CNNs), the neural network architecture best suited to image classification. The course also teaches readers how to build a CNN from scratch and demonstrates best practices for training a highly effective and accurate classification model. Topics such as preventing overfitting and using pre-trained models are covered as well.

The course is primarily aimed at developers with a basic knowledge of machine learning. The examples and exercises are written in Keras, a highly popular Python library for training neural networks. While prior experience with Keras is not required, some exposure to Python programming will make it easier to get the best out of this course.

Google's data scientists and researchers have collaborated with image model experts to develop this course. It contains videos, interactive programming exercises, and relevant documentation for reference. The techniques highlighted in the course are already being used to power search in Google Photos, and to date more than 10,000 developers have benefited from it. So, what are you waiting for? Get started with image classification in machine learning!

In case you're looking for some hands-on resources to master image classification using machine learning and deep learning, we've got you covered as well! Check out our books Deep Learning for Computer Vision, TensorFlow Machine Learning Cookbook, and Deep Learning with TensorFlow, Second Edition to get started!

Read more

How machine learning as a service is transforming cloud
Google announces Cloud TPUs on the Cloud Machine Learning Engine (ML Engine)
Microsoft starts AI School to teach Machine Learning and Artificial Intelligence
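The "CNN from scratch" portion of the course centers on the convolution operation. A minimal NumPy sketch of a single-channel 2D convolution (valid padding, no strides; an illustration of the core idea, not the course's own code):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D cross-correlation, the core operation of a CNN layer."""
    ih, iw = image.shape
    kh, kw = kernel.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            # Element-wise multiply the window by the kernel and sum.
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

# A vertical-edge detector applied to a tiny image with one edge.
image = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1]], dtype=float)
kernel = np.array([[1, -1],
                   [1, -1]], dtype=float)
response = conv2d(image, kernel)
print(response)  # the response is nonzero only along the 0-to-1 edge
```

A trained CNN learns the kernel values rather than hand-coding them, and libraries like Keras implement the same operation as a fast vectorized layer (e.g. a 2D convolution layer) rather than Python loops.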

Akon is planning to create a cryptocurrency city in Senegal

Richard Gall
27 Jun 2018
2 min read
In a world where tech innovators  are gradually revealing themselves as cynical corporate masterminds, we're lucky to have Akon. The rapper, entrepreneur, and philanthropist announced last week that he is planning to build a city in Senegal where you can only trade using Akoin, Akon's branded cryptocurrency. The plan is to build what's being described as a 'real life Wakanda', the fictional city in Marvel's Black Panther movie. [caption id="attachment_20338" align="alignright" width="200"] image via commons.wikimedia.org[/caption] Akon believes cryptocurrency can play a huge role in boosting economic development in Africa. During a panel discussion at this year's Cannes Lions festival earlier this month, he said that "blockchain and crypto could be the savior for Africa." He explained that cryptocurrencies would offer many Africans a level of independence and security for ordinary Africans. This has the potential to be hugely empowering, he argues. Akon Crypto City: Akon's cryptocurrency city Plans for Akon's cryptocurrency city - called Akon Crypto City - aren't fully fleshed out. At the Cannes Lions panel discussion, he's reported as saying "I come with the concepts and let the geeks figure it out." However, there are some details which you can find on the ICO Impact website. Akon Crypto City will, if everything goes to plan, be "built on 2,000 acres of land gifted to Akon from the President of Senegal" and located near Senegal's new international airport. You certainly can't accuse Akon of lacking ambition. According to ICO Impact's information the goal is "inventing a radical new way of existence." The Verge draws parallels between Akon Crypto City and a similar project in the Marshall Islands. In February, the small Pacific republic made its own cryptocurrency - called 'Sovereign' - its national legal tender. Thanks to the likes of Akon, expect to see cryptocurrency enter the international language of finance much more over the next few years. 
Perhaps don't expect to see Akon Crypto City too soon, though. It's going to take time and energy just to put everything in place for the project to begin. That said, don't bet against Akon either. After the success of Akon Lighting Africa, the initiative he launched in 2014 that has brought lighting to 6 million people on the continent, he would seem to be a man unfazed by limitations.

Adobe to spot fake images using Artificial Intelligence

Natasha Mathur
26 Jun 2018
3 min read
Adobe has already ventured into the AI domain with products such as Adobe Sensei. Now, the company is developing a technique that uses Artificial Intelligence to detect images that have been heavily edited or tampered with. Adobe is aiming to build more products in the AI space in order to strengthen people's trust in digital media.

Adobe's tools are widely used to edit images as an expression of artistic creativity. However, some people use them to unfair advantage by manipulating images to deceive. With AI in the game, the image deception problem may finally have a fix.

Vlad Morariu, a senior research scientist at Adobe, has been working on computer vision technologies to detect manipulated images for a while now. Vlad notes that there are existing tools which help trace digitally altered photos. For instance, different file formats carry metadata that stores information about how the image was captured and manipulated, and forensic tools can detect altered images by analyzing a photo's strong edges, lighting, noise distribution, and pixel values. But these tools are not especially efficient at detecting fake images.

Vlad's continuing research led him to three common types of image manipulation that Artificial Intelligence can be trained to detect:

- Splicing: combining parts of two different images.
- Copy-move: cloning or moving objects within a photograph from one place to another.
- Removal: removing an object from a photograph and filling in the space it leaves behind.

This has greatly cut down on the time it would take forensic experts to detect fraudulent images. Vlad also describes how the team trained a deep learning neural network on thousands of known, manipulated images. It combines two different methods in one network to enhance detection further: the first method uses an RGB stream to detect tampering, while the second uses a noise stream filter.

Although these techniques are not foolproof, they provide more options for controlling digital manipulation. Adobe may venture even deeper into the AI world in the future by including tools for detecting other kinds of manipulation in photographs. To know more about Adobe's efforts in controlling digital manipulation, check out Adobe's official blog post.

Read more:
Adobe glides into Augmented Reality with Adobe Aero
Adobe is going to acquire Magento for $1.68 Billion
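The noise-stream idea can be illustrated with a toy example. This is a sketch only, not Adobe's model: the high-pass kernel and block size here are illustrative assumptions. The intuition is that a high-pass filter suppresses image content and keeps noise residuals, and a spliced region tends to show different residual statistics than the rest of the photo.

```python
import numpy as np

# A small high-pass (SRM-style) kernel: zero-sum, so it cancels smooth
# image content and responds mainly to noise.
SRM_KERNEL = np.array([[-1,  2, -1],
                       [ 2, -4,  2],
                       [-1,  2, -1]], dtype=float) / 4.0

def noise_residual(gray):
    """Convolve a grayscale image with the high-pass kernel ('valid' mode)."""
    h, w = gray.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(gray[i:i+3, j:j+3] * SRM_KERNEL)
    return out

def block_noise_variance(gray, block=8):
    """Per-block variance of the residual; a spliced region often stands out."""
    res = noise_residual(gray)
    h, w = res.shape
    scores = {}
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            scores[(i, j)] = float(res[i:i+block, j:j+block].var())
    return scores
```

Running `block_noise_variance` on an image where one patch carries a different noise level flags that patch with a much higher score than its neighbours; a real detector would feed such residuals into the noise stream of the network rather than threshold them directly.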

Microsoft starts AI School to teach Machine Learning and Artificial Intelligence

Amey Varangaonkar
25 Jun 2018
3 min read
The race for cloud supremacy is getting more interesting with every passing day. The three major competitors - Amazon, Google and Microsoft - keep coming up with fresh and innovative ideas to attract customers and get them to try and adopt their cloud offerings. Google rolled the dice most recently when they announced free Big Data and Machine Learning training courses for the Google Cloud Platform, allowing students to build intelligent models using cloud-powered resources. Microsoft have now followed suit with their own AI School, whose promise is quite similar: allowing professionals to build smart solutions for their businesses using the Microsoft AI platform on Azure.

AI School: Offering custom learning paths to master Artificial Intelligence

Everyone has a different style and pace of learning. Keeping this in mind, Microsoft have segregated their learning material into three levels - beginner, intermediate and advanced. This lets intermediate and advanced learners pick up the topics they want to skill up in without having to go through the basics first, while still giving them the option to do so if they're interested.

The topic coverage of the AI School is interesting as well, ranging from an introduction to deep learning and Artificial Intelligence to building custom conversational AI. Along the way, students will use a myriad of tools: Azure Cognitive Services and the Microsoft Bot Framework for pre-trained AI models, Azure Machine Learning for deep learning and machine learning capabilities, as well as Visual Studio and the Cognitive Toolkit. Students can also work in their favourite programming language, from Java, C# and Node.js to Python and JavaScript.

The end goal of the program, as Microsoft puts it, is to empower developers to use trending Artificial Intelligence capabilities within their existing applications to make them smarter and more intuitive - all while leveraging the power of the Microsoft cloud.

Google and Microsoft have stepped up - time for Amazon now?

Although Amazon does provide training and certifications for Machine Learning and AI, they are yet to launch courses of their own that encourage learners to pick up these trending technologies from scratch and adopt AWS to build their own intelligent models. Considering they dominate the cloud market, this is quite surprising.

Another interesting point to note is that Microsoft and Google have both taken significant steps to contribute to open source and free learning. Google-acquired Kaggle is a great platform for hosting machine learning competitions and thereby learning new, interesting things in the AI space, while Microsoft's recent acquisition of GitHub takes them in a similar direction of promoting open source culture and sharing knowledge freely. Is Amazon waiting for a similar acquisition before taking this step? We will have to wait and see.
Microsoft announces general availability of Azure SQL Data Sync

Pravin Dhandre
22 Jun 2018
2 min read
The Azure team at Microsoft has announced the general availability of the Azure SQL Data Sync tool for synchronizing Azure SQL databases with on-premises databases. The new tool allows database administrators to synchronize data between Azure SQL Database and any other hosted or local SQL Server, both unidirectionally and bidirectionally.

Data Sync lets you distribute your data apps globally with a local replica available in each region, keeping data continuously synchronized across all regions. This helps to significantly reduce connection failures and eliminate issues related to network latency. It should also improve application response times and enhance the reliability of the application runtime.

Features and capabilities of Azure SQL Data Sync:

- Easy to configure: simpler configuration of the database workflow with an improved user experience
- Speedy and reliable database schema refresh: faster loading of database schemas with the new Server Management Objects (SMO) library
- Security for Data Sync: end-to-end encryption for both unidirectional and bidirectional data flows, with GDPR compliance

However, the tool will not be a true friend to DBAs in every scenario, as it does not support disaster recovery tasks. Microsoft has also made it very clear that the technology supports neither scaling Azure workloads nor Azure's Database Migration Service.

Check out the Azure SQL Data Sync setup documentation to get started. For more details, refer to the official announcement on Microsoft's web page.

Read more:
Get SQL Server user management right
Top 10 MySQL 8 performance benchmarking aspects to know
Data Exploration using Spark SQL
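Conceptually, Data Sync pairs a hub database with member databases in a sync group. The following is a minimal, hypothetical sketch of bidirectional sync with a "last writer wins" conflict rule; the data model (rows as `id -> (value, version)` pairs) and the conflict rule are illustrative assumptions, not Microsoft's implementation.

```python
def sync(hub, member):
    """Bidirectionally merge two {row_id: (value, version)} stores in place.

    New rows are pushed in whichever direction they are missing; when both
    sides changed the same row, the higher version number ("last writer") wins.
    """
    for key in set(hub) | set(member):
        h, m = hub.get(key), member.get(key)
        if h is None:
            hub[key] = m                          # new row on the member: push to hub
        elif m is None:
            member[key] = h                       # new row on the hub: push to member
        elif h != m:
            winner = h if h[1] >= m[1] else m     # higher version wins the conflict
            hub[key] = member[key] = winner
```

After a call to `sync`, both stores hold the same rows, which is the invariant a sync group maintains across its hub and members. A real service would additionally track change batches and tombstones for deletes.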

AutoAugment: Google's research initiative to improve deep learning performance

Sunith Shetty
21 Jun 2018
5 min read
Deep learning and artificial intelligence bring cognitive abilities to specialized solutions for a wide range of problems. With growing innovation, the artificial intelligence field is practically exploding. Deep learning has already shown its mettle in handling data of all shapes and forms: text, images, video, audio, social interactions and more. Many vendors, such as Google, Microsoft, Amazon, and IBM, are constantly working to bring AI into organizations through a range of services. Google, no doubt, is doubling down on research into its existing deep learning techniques. The company's latest research, AutoAugment: Learning Augmentation Policies from Data, uses a reinforcement learning algorithm to increase both the amount and the variety of data in an existing training dataset.

What is AutoAugment?

AutoAugment is the latest research paper from the Google team to tackle one of the biggest hurdles in deep learning: the huge amount of quality data needed to train models. The technique finds ways to automatically augment existing data using machine learning principles. The paper applies a procedure called data augmentation, used specifically for images, to find improved augmentation policies. The idea is to create a search space of data augmentation policies and evaluate the quality of each policy directly on the dataset of interest. In the researchers' search space, each policy consists of many sub-policies, and one sub-policy is randomly chosen for each image in each mini-batch. A sub-policy in turn consists of two operations, each an image processing function. A search algorithm then finds the best policy, i.e. the one for which the neural network model yields the highest validation accuracy on the dataset.

Why AutoAugment?

One of the core reasons deep learning does exceptionally well in computer vision is the availability of large amounts of labeled training data: a model's performance improves as you increase the quality and the amount of its training data. However, collecting quality data to train a model to an optimized result is difficult. One way to deal with this is to hardcode image symmetries into the neural network architecture. Alternatively, researchers and developers manually design data augmentation techniques, such as rotation and flipping, which are used extensively to train computer vision models. Both approaches can be time-consuming and tedious.

Now, imagine a technique which automatically augments existing data using machine learning. The Google team took inspiration from its AutoML research, which built neural network architectures and optimizers to replace components of systems traditionally designed by humans, and thought of doing the same to automate the procedure of data augmentation. Data augmentation improves performance by teaching the model about invariances in the data domain: images have many symmetries that don't change the information present in them, and augmentation makes a neural network invariant to these important symmetries. Traditional deep learning models use human-designed data augmentation policies, whereas AutoAugment uses a reinforcement learning algorithm to find optimal image transformation policies from the data itself, improving the performance of computer vision models to a great extent.

Advantages of using AutoAugment

AutoAugment automatically designs custom data augmentation policies for computer vision datasets. It selects basic image transformation operations, such as flipping an image horizontally or vertically or changing its color, and automatically predicts which transformations to combine. It also predicts the per-image probability and magnitude of each transformation, so that an image is not always manipulated in the same way, and it learns different transformations depending on the dataset used.

The AutoAugment algorithm has found better augmentation policies for some of the most widely used computer vision datasets, leading to better accuracy when incorporated into the training of the neural network. AutoAugment achieves a new state-of-the-art accuracy of 83.54% when augmenting ImageNet data. On CIFAR10, it achieves an error rate of 1.48%, a 0.83% improvement over traditional data augmentation. Further, it improved the state-of-the-art error rate on the Street View House Numbers (SVHN) dataset from 1.30% to 1.02%. Most importantly, AutoAugment policies are transferable: the policy found for the ImageNet dataset can be applied to other datasets as well, ultimately improving neural network performance.

AutoAugment has shown good signs of achieving strong performance on popular computer vision datasets, and the approach could extend to more computer vision tasks and even to other domains, such as audio processing or language models. You can refer to the research paper to apply these policies to improve your model's performance on relevant computer vision tasks. For complete details, visit the official Google blog.

Read more:
Google announces Cloud TPUs on the Cloud Machine Learning Engine (ML Engine)
NASA's Kepler discovers a new exoplanet using Google's Machine Learning
Google's translation tool is now offline – and more powerful than ever thanks to AI
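The policy/sub-policy structure described in this piece can be sketched in a few lines. The operations, probabilities, and magnitudes below are made-up placeholders standing in for real image transformations; the paper's actual policies are found by a reinforcement learning search, not hand-written.

```python
import random

# Stand-ins for real image operations (rotation, brightness, etc.).
def rotate(img, deg):
    return f"rotate({img},{deg})"

def brightness(img, lvl):
    return f"brightness({img},{lvl})"

# A policy is a list of sub-policies; each sub-policy is two
# (operation, probability, magnitude) triples.
POLICY = [
    [(rotate, 0.8, 30), (brightness, 0.6, 2)],   # sub-policy 1
    [(brightness, 0.9, 4), (rotate, 0.3, 10)],   # sub-policy 2
]

def augment(img, policy, rng=random):
    """Apply one randomly chosen sub-policy to a single image."""
    sub_policy = rng.choice(policy)              # one sub-policy per image per mini-batch
    for op, prob, magnitude in sub_policy:
        if rng.random() < prob:                  # each operation fires with its probability
            img = op(img, magnitude)
    return img
```

Calling `augment` once per image in a mini-batch reproduces the random per-image application described above: different images receive different sub-policies, and each operation in a sub-policy is applied only with its learned probability.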