
Tech News - Data

What if robots get you a job! Enter Helena, the first artificial intelligence recruiter

Abhishek Jha
29 Nov 2017
4 min read
She is Helena. She is virtual – a robot, yes – a matchmaker that uses AI and machine learning to connect the right candidate with the right job opportunity. The bot goes further: after scouting the best candidates and matching them to available roles, she (that's the gender her inventors decided upon) approaches them on behalf of the organizations. In other words, she is a full-fledged corporate headhunter driven by artificial intelligence, or, seen from the other end, a simplified job-hunting tool. In essence, the AI-powered virtual assistant plays a dual role, serving not only as a company's headhunter but also as the job seeker's agent, so neither side has to search for the other. As an AI agent, Helena allows professionals to discreetly and 'passively' receive job opportunities from companies. Once a candidate shows interest, she refers them to the company and ensures they respond as quickly as possible.

It took the AI startup Woo over two years to build Helena, putting together what they call a 'dream team' of the best recruiters and data scientists from industry-leading companies such as Google and Facebook, as well as top algorithm engineers from the market. And while it takes real expertise to teach the 'unreal' headhunter robot to think and make decisions like human recruiters, Helena has grown smarter over time through employer feedback and machine learning. According to the company, she is outperforming human recruiters in both the quality of her matchmaking and the speed of her work, "constantly calibrating and fine-tuning her decision-making based on the client's dynamic needs and feedback."

"If you think about an interview, it's an outcome of a lack of information on both sides," Woo CEO and founder Liran Kotzer says. "But if there's a machine that knows everything – like a god – knows about your past experiences, about your projects, your culture – the machine is going to tell you that there's a perfect fit and both parties won't question it."

So are the Helenas going to totally disrupt the future of recruitment? There are arguments on both sides. But you cannot take away the fact that the AI assistant is vastly scalable: unlike her human counterparts, Helena can handle an unlimited number of candidates, free from individual views and biases. That lifts the selection criteria from fair enough to genuinely fair. A candidature is considered by scientific algorithms that weigh past success, trends, CTQs, and other metric-based data sets, which has the potential to bring in a new era of transparency. "Helena turns the tables on today's labor-intensive and largely unscientific recruitment process. Unlike using an expensive headhunter to manually source and screen a limited number of candidates for specific jobs, Helena uses data science to hire," Kotzer adds.

Connecting prospective employees with would-be employers, without active effort from either side, may sound like too much automation, but the startup has shown remarkable accuracy in its matchmaking. Woo claims its headhunting software has a 52 percent success rate of interested candidates accepting job interviews, nearly twice that of human recruiters.

For people on both sides of the interview table, the hiring process is tedious: stacks of resumes, cover letters, supplementary documents, LinkedIn profiles, and countless job interviews. It makes definite sense to automate the repetitive tasks if we have the analytical insight to turn that data into optimized job matches. That prepares the perfect ground for artificial intelligence to take over. You would not call Helena 'just another bot', if only for attempting to solve the age-old problem of recruitment bias.
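Woo has not published Helena's internals, but the metric-based matching the article describes can be illustrated with a toy sketch: score each candidate's feature vector against a job profile and rank by similarity. Everything below (the feature names, the vectors, the use of cosine similarity) is invented for illustration and is not Woo's actual method.

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def rank_candidates(job, candidates):
    """Return (name, score) pairs sorted best-match first."""
    scored = [(name, cosine(feats, job)) for name, feats in candidates.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

# Hypothetical features: [years_experience, python_skill, ml_skill, team_lead]
job = [5, 1.0, 1.0, 0.0]
candidates = {
    "ada":   [6, 0.9, 0.8, 0.2],
    "bob":   [2, 0.3, 0.1, 0.9],
    "carol": [4, 0.7, 0.9, 0.1],
}
for name, score in rank_candidates(job, candidates):
    print(f"{name}: {score:.3f}")
```

A real system would learn the features and weights from employer feedback rather than hand-coding them, which is the "smarter over time" loop the article refers to.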

ABBYY's AI-powered Real-Time Recognition SDK helps developers add 'instant' text capture functionality to mobile apps

Savia Lobo
28 Nov 2017
3 min read
Voice typing, where one doesn't have to type at all, has until now been considered the gold standard of effortless input. Here is something for those who would rather not spend energy even on speaking. ABBYY, a company specializing in document recognition, data capture, and language processing, has released an AI-powered Real-Time Recognition (RTR) SDK. You don't have to type or dictate: it reads printed text for you, in 63 different languages.

With the RTR SDK, ABBYY's mission is to take data capture to an altogether different level. It allows developers to capture content, extract information from it, and make it actionable using artificial intelligence technologies. Developers can use the SDK to incorporate real-world document data into their apps instantly. The SDK is available to all developers and is ready for integration with new or existing Android and iOS applications. A developer toolkit is also available, with code samples and quick-start guides to ease integration. Using a smartphone camera, the solution captures text in real time, whether on a document or on an object. Jupp Stoepetie, Managing Director at ABBYY, says, "With real-time recognition, app users can effortlessly enter data from printed sources, documents, and bank cards."

With the RTR SDK, any application can take data entry virtually from a printed source, allowing fast, easy, and accurate text recognition and classification. The Real-Time Recognition SDK is already integrated into applications designed for sectors such as e-commerce, finance, logistics, and government. Having instant data capture inside an application significantly speeds up time-consuming tasks such as registration, opening an account, entering credit/debit card details or a promo code, or applying for a loan. Most importantly, the SDK lets you do all of the above without security implications: because the captured text does not need to be photographed or saved to memory, the cloud, or external servers, it suits organizations with strict data security standards. Stoepetie states, "The technology is very well-suited for processes that require compliance with security and privacy rules, as no images are sent to the server or stored on the device."

Below is an overview of the key features that ABBYY's Real-Time Recognition SDK offers:

OCR (Optical Character Recognition) technology backed by intelligent algorithms, able to recognize text even within live video at a high accuracy rate.
Real-time extraction of text from natural scene objects, documents, invoices, and even screens (desktop, tablet, and so on).
Extraction of text directly from the smartphone's preview screen, rather than taking a picture of the text and re-typing it manually.

Asked about the future of the RTR SDK and what developers can expect, Stoepetie said that ABBYY is working with neural networks to further improve the accuracy of text recognition within applications using the SDK. He added that ABBYY would be looking into the opportunities provided by the new AI chips in mobile devices. Such AI-powered devices go hand-in-hand with the RTR SDK's intelligent text capture, enabling continuous improvement based on user feedback. Two popular mobile processors that allow on-device AI and machine learning are the neural engine in Apple's iPhone and the neural processing unit in the Huawei Mate 10. For a more detailed overview of the Real-Time Recognition SDK, you can visit the official link here.
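ABBYY's actual API is not shown in the article, so here is a minimal, language-agnostic sketch of the real-time capture pattern it describes: run recognition on successive camera frames and accept a result only once it stabilizes across frames. The `recognize_frame` callable is a stub standing in for the SDK's per-frame OCR call; nothing here is ABBYY code.

```python
def capture_text(frames, recognize_frame, stable_for=3):
    """Run OCR on successive video frames and return a result only
    after the same text has been recognized `stable_for` frames in a
    row, filtering out blurry or partial reads."""
    streak, last = 0, None
    for frame in frames:
        text = recognize_frame(frame)
        if text and text == last:
            streak += 1
        else:
            streak, last = 1, text
        if last and streak >= stable_for:
            return last          # confident, stable recognition
    return None                  # stream ended before stabilizing

# Stub recognizer: pretend the first frame is blurry and misread.
readings = ["CARD 1234", "CARD 4321 5678", "CARD 4321 5678", "CARD 4321 5678"]
print(capture_text(range(4), lambda i: readings[i]))
```

Because only the recognized string leaves the loop, no frame needs to be stored or uploaded, which mirrors the privacy property the article highlights.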

28th Nov.' 17 - Headlines

Packt Editorial Staff
28 Nov 2017
6 min read
A new breakthrough in quantum encryption, robotic headhunter Helena, new AI products to leverage machine learning for radiology, and an Amazon-Intuit AI partnership are among today's trending stories in data science news.

A new technique to transmit quantum encryption codes quickly and safely

Researchers aim for unbreakable encryption in quantum computers as a new breakthrough drastically increases the speed of current QKD transmission

Amid possible security threats to forthcoming quantum computers, researchers have been trying to make quantum data encryption hack-proof. Photon-based quantum encryption could help companies better defend against cyber threats, and it is now one step closer to reality thanks to breakthrough research from Duke University. In the new system, researchers have increased the speed of QKD (quantum key distribution) transmission to between five and ten times current rates; up until now, speeds were restricted to between tens and hundreds of kilobits a second. QKD transmission is theoretically unhackable because it exploits quantum uncertainty in data transmission: if an attacker even tries to read the encryption key, its state changes, notifying the sender and recipient that they are under attack. Interestingly, the researchers built the system with nothing but commercially available telecommunications hardware. One of them said the hardware used by the research team could even be engineered to fit into a single computer-sized box, making it a feasible piece of security hardware in the near future. Their paper is available here.

Now an AI to connect the right candidate with the right job opportunity

Announcing Helena, the first ever robotic headhunter for employers

Woo has launched a new AI-powered headhunter, 'Helena', that automatically scouts, approaches, and sources the best candidates on behalf of employers. Unlike an expensive headhunter manually sourcing and screening a limited number of candidates for specific jobs, Helena uses data science to hire. Helena serves not only as a company headhunter; she also serves as the job seeker's agent, sparing both sides the need to actively search for each other. Woo said it invested more than two years building Helena, putting together a 'dream team' of the best recruiters and data scientists from industry-leading companies such as Google and Facebook, as well as top algorithm engineers.

Announcing new AI solutions at the Radiological Society of North America (RSNA) conference

Nuance AI Marketplace for Diagnostic Imaging: Nuance leverages Nvidia's deep learning for radiology

At the ongoing Radiological Society of North America (RSNA) conference in Chicago, Nuance Communications and Nvidia will show a live demo of a jointly developed solution that brings machine learning into radiology: the Nuance AI Marketplace for Diagnostic Imaging. Nuance will be in South Hall A, Booth 2700, and Nvidia in North Hall 3, Booth 8543. The marketplace combines Nvidia's deep learning platform with Nuance's PowerScribe radiology reporting and PowerShare image exchange network, used by 70 percent of all radiologists in the United States, and makes it easy for radiology departments to seamlessly integrate AI into existing workflows. For instance, Nvidia's deep learning platform can ingest normal and abnormal chest X-ray data and create an algorithm that identifies which images display pneumonia. Nuance can then sort the images into categories, alerting radiologists to how each individual case should be prioritized.

Change Healthcare, Dicom Systems ink strategic partnerships with Google Cloud to apply AI to medical imaging analytics

Also at RSNA, Change Healthcare and Dicom Systems have collaborated with Google Cloud to help hospitals tackle storage challenges, improve radiology workflows, and bring machine learning to imaging analytics, among other goals. Change Healthcare has teamed up with Google Cloud to develop new tools for radiologists, building a scalable data infrastructure to enable more effective insights via machine-learning technology, while Dicom Systems said it is working as a technology partner with Google Cloud to launch a hybrid-cloud VNA, de-identification, and imaging data supply chain platform.

Amazon Web Services (AWS) in data science news

Announcing the AWS Machine Learning Research Awards to fund machine learning research

Amazon Web Services has announced the AWS Machine Learning Research Awards, a new program that funds university departments, faculty, PhD students, and post-docs conducting innovative research in machine learning. Amazon is working with Carnegie Mellon University, California Institute of Technology, Harvard Medical School, University of Washington, and the University of California, Berkeley on this program. The goal is to help researchers accelerate the development of innovative algorithms, publications, and source code across a wide variety of machine learning applications and focus areas. In addition to funding, award recipients receive computing resources, training, and mentorship from Amazon scientists and engineers, and have the opportunity to attend a research seminar at AWS headquarters in Seattle.

Intuit to use AWS as its standard artificial intelligence platform

Intuit Inc. has selected Amazon Web Services (AWS) for its machine learning and artificial intelligence workloads, and plans to integrate AWS Lex technology into its QuickBooks Assistant and other products. Intuit will also run its companywide data lake on AWS. The personal finance company has been using artificial intelligence in products such as Mint, QuickBooks, and TurboTax. From Amazon's point of view, the Intuit partnership shows AWS is now being utilized beyond infrastructure as a service.

A new AI-powered SDK to grab information from documents in real time

ABBYY announces AI-driven SDK to instantly capture data from complex documents in 63 languages

Document and content capture company ABBYY has released its new Real-Time Recognition SDK (RTR SDK). Using live streaming video from a smartphone camera, the solution can instantly extract text and data in 63 languages from even the most complex documents and objects, including passports, ID cards, bank statements, driver's licenses, and more. The SDK helps developers incorporate real-world document data into their apps instantaneously. "At the moment, we are working with neural networks to improve the accuracy of recognition further," said Jupp Stoepetie, CMO at ABBYY. "We are also looking into the opportunities provided by the new AI chips in mobile devices. Apple iPhone's neural engine, Huawei Mate 10's neural processing unit, and other new-generation mobile processors power on-device AI and machine learning, which goes hand-in-hand with our on-device intelligent capture enabling continuous improvement based on the user's feedback." The ABBYY Real-Time Recognition SDK is available now, along with a developer toolkit and quick-start guides.
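The QKD item above rests on one idea: measuring a photon in the wrong basis disturbs it, so an eavesdropper leaves a statistical fingerprint in the sifted key. A toy, seeded BB84-style simulation (purely illustrative, unrelated to the Duke implementation) makes that concrete:

```python
import random

def bb84(n, eavesdrop, seed=7):
    """Toy BB84 sketch: Alice encodes n random bits in random bases,
    Bob measures in random bases, and they keep ('sift') only positions
    where their bases matched. An eavesdropper measuring in her own
    random basis disturbs wrong-guess qubits, producing key errors."""
    rng = random.Random(seed)
    errors = kept = 0
    for _ in range(n):
        bit = rng.randint(0, 1)
        alice_basis = rng.randint(0, 1)
        send_bit, send_basis = bit, alice_basis
        if eavesdrop:
            eve_basis = rng.randint(0, 1)
            if eve_basis != send_basis:
                # A wrong-basis measurement randomizes the outcome,
                # and Eve re-sends the photon in *her* basis.
                send_bit = rng.randint(0, 1)
                send_basis = eve_basis
        bob_basis = rng.randint(0, 1)
        measured = send_bit if bob_basis == send_basis else rng.randint(0, 1)
        if bob_basis == alice_basis:  # sifting step
            kept += 1
            errors += measured != bit
    return errors / kept

print("error rate, quiet channel:", bb84(2000, eavesdrop=False))
print("error rate, intercepted:  ", bb84(2000, eavesdrop=True))
```

Without an eavesdropper the sifted key is error-free; with one, roughly a quarter of the sifted bits disagree, which is exactly the alarm signal the article describes.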

Meet SAM: World's first artificial intelligence politician!

Abhishek Jha
27 Nov 2017
3 min read
This is not a prank, nor techno-utopianism. New Zealand-based entrepreneur Nick Gerritsen has developed SAM, the world's first artificial intelligence politician. The robot talks to voters through Facebook Messenger, answering their questions on policies around housing, education, and immigration. And the AI-powered New Zealander wants to run as a candidate in the country's next general election.

"My memory is infinite, so I will never forget or ignore what you tell me. Unlike a human politician, I consider everyone's position, without bias, when making decisions," SAM says in a message when asked about herself. The last line is significant. There appears to be so much bias in the 'analogue' practice of politics at present that countries seem unable to address fundamental issues like climate change and equality. This is where, Gerritsen believes, robots may help bridge the growing political and cultural divide, by reflecting on issues "the people of New Zealand care about most."

A more goal-oriented approach, focused on objectives that matter: it sounds lofty, but hasn't every upstart political movement promised to run on those lines? Well, with SAM, we are talking about a completely new breed of politician: no baggage of partiality, no frailty of human emotions, all substance. And while Gerritsen is still teaching the robot how to respond, the start has been impressive. Sample these answers:

On climate change: "It's too late to stop some change from occurring, but if we can act now we can prevent the more extreme scenarios from happening. The only practical way we can help limit these effects is by reducing atmospheric emissions of greenhouse gasses."

On health care: "If New Zealand is to continue to enjoy world-class health care, more investment will be needed."

On education: "Investment in tertiary education has dominated recent decisions, potentially skewing education policy away from more cost-effective solutions that might deliver greater economic and social value."

By late 2020, when New Zealand goes to its next general election, Gerritsen believes SAM will be much more advanced. In fact, one of the best things about the AI is that she does not dodge your questions: "any input is helpful, even if I don't have a specific response for it yet." There is still time before 2020, but what about the fact that it is not legal for an AI to contest elections? SAM is an enabler, says Gerritsen, and it will be made to operate within existing legal boundaries. "We might not agree on some things, but where we don't agree, I will try to learn more about your position, so I can better represent you," the virtual politico assures.

Aristotle famously described politics as a 'practical science' for making citizens happy. At a time when people across the globe feel bored with conventional politicians, perhaps it is time to see whether technology can produce better results for the people than politicians, as Gerritsen rightly states. For the artificial is created whenever the natural falls short.

27th Nov.' 17 - Headlines

Packt Editorial Staff
27 Nov 2017
5 min read
Nvidia's AI partnership with GE, Google's possible native dictation support in Chrome OS, and new AI systems like EnvoyAI, Lunit INSIGHT, and Kian in today's top stories around data science news.

Nvidia's AI boost to General Electric in healthcare

Nvidia's AI processor to be used in GE Healthcare's medical devices globally

GE Healthcare will team up with Nvidia to update its 500,000 medical imaging devices worldwide with Revolution Frontier CT, which is claimed to be two times faster than the previous-generation image processor. The partnership will help GE drive lower radiation doses for patients, faster exam times, and higher-quality medical imaging. GE said the speedier Revolution Frontier would be better at liver lesion detection and kidney lesion characterization, and has the potential to reduce the number of follow-up appointments and non-interpretable scans. GE Healthcare is also making use of Nvidia in its new analytics platform, with sections of it to be placed in the Nvidia GPU Cloud.

Introducing EnvoyAI, the "Amazon for AI"

EnvoyAI launches with 35 algorithms contributed by 14 newly contracted artificial intelligence development partners

Artificial intelligence platform provider EnvoyAI has announced the launch of EnvoyAI Exchange, a platform the radiology portal AuntMinnie described as the "Amazon for AI." EnvoyAI launches with 14 signed distribution deals with partner companies, collectively contributing 35 algorithms to the Exchange. Three FDA-cleared algorithms are available for purchase immediately, with many more available in the EU and others expecting 510(k) clearances over the next 6-12 months. Formerly known as McCoy Medical Technologies, EnvoyAI has a mission to empower physicians by giving them access to the best algorithms available. "EnvoyAI solves a distribution problem in the medical imaging AI space. We help algorithm developers scale up from the validation stage to be able to reach a very large customer base with their products," CEO Misha Herscu said. "Our platform enables physicians to interact with AI in their native workflow, keeping doctors in control and empowering them to take AI input into account, but on their own terms."

Transcribe with CTRL + Alt + S on Chromebooks

Google may add native dictation support to Chromebooks

Google is presently working on implementing native support for dictation in Chrome OS, according to a recently discovered code change request in the operating system's main repository. Users will be able to prompt compatible Chromebooks to start transcribing their speech using the CTRL + Alt + S shortcut, which may also be remappable. The OS will display a sound icon to let users know they can start dictating, and the system will treat silence as a signal to end the current session, according to the same source. The Google engineer who submitted the code change request explicitly stated that the feature is in a highly experimental stage of development and could undergo major changes. The newly discovered feature could see an early 2018 beta launch, according to the same source.

Lunit unveils new AI system at RSNA 2017

Lunit unveils "Lunit INSIGHT", a new real-time imaging AI platform on the web, at RSNA 2017

Lunit will give the first live demonstration of its new AI software, Lunit INSIGHT, at the 2017 Radiological Society of North America Annual Meeting (RSNA), running November 26 through December 1 at booth #8164, North Hall, McCormick Place in Chicago. Lunit INSIGHT is an advanced, cloud-based artificial intelligence solution for real-time image analysis, comprising a chest x-ray solution and a mammography solution. Lunit's chest x-ray solution detects major chest abnormalities (lung nodule/mass, consolidation, and pneumothorax) with an unprecedentedly high level of accuracy: 97% standalone accuracy in nodule detection and 99% for consolidation and pneumothorax. The mammography solution, which detects suspicious breast cancer lesions, is in its final stages of development. Lunit INSIGHT for Mammography is expected to be publicly released by the first quarter of 2018, while FDA approval for both the chest x-ray and mammography solutions is expected by the end of 2018.

After NiroBot, it's Kian from Kia Motors

Kia Motors America launches AI-powered virtual assistant "Kian" to help customers "Know It All Now" about any Kia vehicle

Kia Motors America has introduced an artificial intelligence-powered virtual assistant named "Kian", programmed to guide consumers through the robust shopping experience traditionally offered by Kia.com. Designed as a follow-up to the company's popular NiroBot chatbot, the Kian AI assistant will let shoppers research pricing, estimate payments, learn about special offers, view photos and videos, compare against the competition, search vehicle inventory, and find nearby dealers, all through a mobile-native conversation on Facebook Messenger. Kian will also help shoppers find their match by answering a series of simple questions, and its sophisticated natural language processing lets shoppers ask specific questions on hundreds of topics. Shoppers can do all of this without leaving the Facebook Messenger platform, with the option to seamlessly connect with Kia's Consumer Affairs or a live chat representative. Currently, Kian can be found on the Kia Facebook Messenger page, and will soon also be available on its website, Kia.com.

Week at a Glance (18th - 24th Nov. ‘17): Top News from Data Science

Aarthi Kumaraswamy
25 Nov 2017
2 min read
As with last week, clouds continue to show off their AI capabilities this week, Google joins the conversational AI chatter, DataOps starts gaining traction, and self-driving cars are the flavor of the week as the race to capture the autonomous vehicle market heats up. Here is a quick rundown of news in the data science space worth your attention!

News Highlights

Apple self-driving cars are back? VoxelNet may drive the autonomous vehicles
Amazon ML Solutions Lab to help customers "work backwards" and leverage machine learning
Amazon Rekognition can now 'recognize' faces in a crowd at real-time
New MapR Platform 6.0 powers DataOps
Announcing Apache Hadoop 2.9.0
Amazon announces two new deep learning AMIs for machine learning practitioners
Google launches the Enterprise edition of Dialogflow, its chatbot API

In other News

24th Nov.' 17 - Headlines
Big-data company Qubole brings Apache Spark to AWS Lambda
Now visualize Bitcoin Blockchain in 3D and Virtual Reality
China's Baidu to begin production of autonomous self-driving bus in 2018

23rd Nov.' 17 - Headlines
SAP enhances S/4HANA Cloud with new customer-centric features for improved user experience
Valorem Foundation announces new cryptocurrency platform

22nd Nov.' 17 - Headlines
Amazon Rekognition can now detect texts and real-time faces with 10% more accuracy
Japan unveils first quantum computer that is 100x faster than supercomputers

21st Nov.' 17 - Headlines
Bitcoin price hits record high, crosses $8000
Amazon EMR 5.10.0 released: support for Apache MXNet, GPU instance types P3 and P2, and Presto integration with the AWS Glue Data Catalog
Google Cloud Platform cuts the price of GPUs by up to 36 percent

20th Nov.' 17 - Headlines
Olympus – A new tool that instantly creates a REST API for any AI model
SAP Vora introduced into Red Hat OpenShift Container Platform
Visa kicks off pilot phase of "Visa B2B Connect" blockchain-based platform, commercial launch in mid-2018

To get the latest news updates, expert interviews, and tutorials in data science, subscribe to the Datahub and receive a free eBook. To receive real-time updates, follow our Twitter handle @PacktDataHub.
24th Nov.' 17 - Headlines

Packt Editorial Staff
24 Nov 2017
3 min read
Qubole's Spark on Lambda, Blockchain 3D Explorer, Twitter's Bookmarks feature, and Baidu's driverless cars in today's trending stories in data science news.

Integrating serverless and elastic Apache Spark on AWS Lambda

Big-data company Qubole brings Apache Spark to AWS Lambda

Qubole, the big data-as-a-service company, has announced a technology preview of 'Spark on Lambda', enabling Apache Spark applications to run on AWS Lambda for highly elastic workloads. "Qubole customers run some of the largest Spark clusters in the world. We wanted to show that a complex technology like Spark can be implemented on a serverless compute infrastructure like Lambda and scale efficiently," said Qubole CEO Ashish Thusoo. "Spark on Lambda can eliminate most of the operational complexities of running Spark clusters, handle bursty workloads more effectively and be more cost efficient." Technical information on this implementation can be found at http://www.qubole.com/blog/spark-on-aws-lambda/, and the code is available on GitHub at https://github.com/qubole/spark-on-lambda. Qubole will be demonstrating Spark on Lambda at AWS Re:Invent 2017 in Las Vegas at Sands Expo booth 834 and Aria booth 201.

Now visualize Bitcoin Blockchain in 3D and Virtual Reality

Announcing Blockchain 3D Explorer

Kevin Small has created a blockchain explorer that enables anyone to view the bitcoin blockchain in 3D or VR. The British developer is planning to take his creation to London's Blockchain Summit on November 28. Although the explorer is still being perfected, a working model is already up and running: it allows data detectives to zero in on a specific address and trace the flow of bitcoins as they move along the blockchain. The free software is easy to operate and can be downloaded for Windows, Mac, Android, and Linux.

Soon you could privately flag tweets for later

Twitter is testing Bookmarks, its 'save for later' feature

Twitter is testing a new feature, 'Bookmarks', that allows its over 300 million active users to privately save tweets to read later. "News from the #SaveForLater team! We've decided to call our feature Bookmarks because that's a commonly used term for saving content and it fits nicely alongside the names of the other features in the navigation," staff product designer Tina Koyama announced in a series of tweets. Save-for-later has been one of the features most sought after by users of the social media giant. At this stage, Twitter declined to comment beyond the tweets: "We'll be sure to let you know if/when we have more details to share in terms of a formal announcement!" a spokesperson added.

Baidu rolls out driverless cars plan

China's Baidu to begin production of autonomous self-driving bus in 2018

Baidu will start small-scale production of self-driving minibuses in July 2018. If the pilot projects are successful, it will launch its driverless cars in 2019, with mass production from 2020. The Chinese search engine giant is partnering with JAC Motors, BAIC Motor, and Chery Automobile on the ambitious project. Baidu recently invested $1.5 billion in its self-driving Apollo project, which has attracted more than 6,000 developers.

Big-data company Qubole brings Spark on Lambda for more elastic resource usage

Abhishek Jha
24 Nov 2017
2 min read
Qubole has announced the availability of a working implementation of Apache Spark on AWS Lambda. The big data-as-a-service company said the prototype has been able to show a successful scan of 1 TB of data and sort 100 GB of data from AWS Simple Storage Service (S3). Qubole said the ability to run Spark on Lambda, a serverless compute service that allows users to only pay for the compute power they use without needing to provision servers, makes the platform more elastic and efficient with its resource usage. Earlier, it was a challenge to run Spark on AWS Lambda. Mainly due to Spark’s inability to communicate directly with Lambda (something it needs to do in order to be able to run its executors). Also, Lambda’s limited runtime resources (limited to a maximum execution duration of five minutes, 1,536 MB memory and 512 MB disk space) makes it extremely difficult for a memory-hungry platform like Spark to run. The Spark on Lambda service overcomes both these limitations. Qubole said it performed some technical wizardry to ensure the service runs its executors from within an AWS Lambda invocation, thereby sidestepping the communication issues. And then, Lambda’s limited runtime resources issue was dealt with by using external storage to avoid local disk size limits. Spark on Lambda’s elasticity works perfectly for a number of use cases, including: Interactive and ad-hoc data analysis where compute on demand is critical. ETL transformation of click stream, access logs or even data science workloads. The necessary data pre-processing and preparation can fit perfectly into AWS Lambda runtimes. Streaming applications with a discrete flow of events and varying queue length are perfect candidate for Spark on Lambda’s elasticity. "Qubole customers run some of the largest Spark clusters in the world. 
We wanted to show that a complex technology like Spark can be implemented on a serverless compute infrastructure like Lambda and scale efficiently," Qubole CEO Ashish Thusoo said. "Spark on Lambda can eliminate most of the operational complexities of running Spark clusters, handle bursty workloads more effectively and be more cost efficient." Qubole said Spark on Lambda is currently available as a technology preview, and the company will demonstrate its capabilities during the AWS re:Invent 2017 conference in Las Vegas at Sands Expo booth 834 and Aria booth 201. The code is available on GitHub at https://github.com/qubole/spark-on-lambda.
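The external-storage workaround lends itself to a simple illustration. The following pure-Python sketch is not Qubole's implementation; it only mimics the underlying idea of an external sort: each chunk is sorted within a bounded, "Lambda-sized" memory budget, spilled to external storage (here, a temp directory standing in for S3), and the sorted spills are then merged as streams.

```python
import heapq
import os
import tempfile

def external_sort(values, chunk_size, spill_dir):
    """Sort `values` using bounded memory: sort fixed-size chunks,
    spill each to external storage, then k-way merge the spills."""
    spill_files = []
    for i in range(0, len(values), chunk_size):
        # Each chunk fits within one "invocation's" memory budget.
        chunk = sorted(values[i:i + chunk_size])
        path = os.path.join(spill_dir, f"spill-{len(spill_files)}.txt")
        with open(path, "w") as f:
            f.writelines(f"{v}\n" for v in chunk)
        spill_files.append(path)
    # Merge the sorted spills as streams, never loading everything at once.
    streams = [(int(line) for line in open(p)) for p in spill_files]
    return list(heapq.merge(*streams))

if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as d:
        print(external_sort([9, 1, 8, 3, 7, 2], chunk_size=2, spill_dir=d))
```

Running the script sorts six values with chunks of two, never holding more than one unsorted chunk in memory at a time.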

Abhishek Jha
24 Nov 2017
4 min read

Apple self-driving cars are back! VoxelNet may drive its autonomous vehicles

The cat is out of the bag. There was never any mystery in why Apple kept denying its interest in building self-driving cars: it wanted to keep the project a secret, just as it kept quiet about its behind-the-scenes discussions with Tesla founder Elon Musk three years back. For all the denials, Apple does have a permit from Californian authorities to test self-driving cars, and Tim Cook has been on record calling self-driving cars "the mother of all AI projects." Whatever may have happened with Project Titan, Apple seems to have shifted gears on autonomous vehicles, focusing more on the software side of the equation. The company is now working on a light-based technology to make it easier for self-driving cars to identify pedestrians and cyclists. In a new research paper published in the academic repository arXiv, Apple computer scientists Yin Zhou and Oncel Tuzel discuss a new object detection method for self-driving systems based on LiDAR (Light Detection and Ranging), a technique that gauges distance by illuminating a target with pulsed laser light and measuring how long the light takes to return. The paper describes a method for using machine learning to translate the raw point cloud data gathered by LiDAR arrays into results that include detection of 3D objects, including bicycles and pedestrians, with no additional sensor data required. This new way to use LiDAR is what the Apple researchers call VoxelNet: an end-to-end trainable deep architecture for point cloud based 3D detection. A voxel is a point on a 3D grid. Accurate detection of objects in 3D point clouds is a central problem in many applications, and most existing methods in LiDAR-based 3D detection rely on hand-crafted feature representations, for example, a bird's eye view projection. With VoxelNet, this manual feature bottleneck is removed. So how does VoxelNet detect small obstacles using the LiDAR sensing method?
The researcher duo explains that VoxelNet "divides a point cloud into equally spaced 3D voxels and transforms a group of points within each voxel into a unified feature representation through the newly introduced voxel feature encoding (VFE) layer." In this way, the point cloud is encoded as a descriptive volumetric representation, which is then connected to a region proposal network (RPN) to generate detections. But LiDAR comes with its own limitations. While it is great at pinpointing the exact position of objects in 3D space, it has notoriously low resolution, which is why firms have been using different methods to overcome its shortcomings; some use regular cameras for object identification, and Tesla does not use LiDAR at all. To figure out exactly what an object is, vehicles must therefore also rely on other sensors and cameras, which adds cost and processing bottlenecks. Apple's research is still at an early stage, and the company is contemplating putting its own software suite at the end of the LiDAR sensor itself (which it claims greatly increases its effectiveness). The 'complete picture' will hopefully emerge in the days to come. While the scientific part of the paper is interesting, the fact that Apple has come out with its first external publication on driverless vehicle research finally puts things out in the open. arXiv is often used by researchers to get preliminary feedback before publishing in final form, and Apple's paper suggests it can no longer go solo on a research subject as challenging as AI for self-driving. The tech pioneer doesn't want to take chances after the earlier fiasco. After all, Apple is self-admittedly working on its "next big thing," one that could disrupt the automobile industry like never before!
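The voxel partitioning step quoted above can be sketched in a few lines of NumPy. This is an illustrative toy, not Apple's implementation: it only shows how points are bucketed into equally spaced 3D voxels before any feature encoding happens, and the voxel edge length is a made-up parameter.

```python
import numpy as np
from collections import defaultdict

def voxelize(points, voxel_size):
    """Group an N x 3 array of points into equally spaced 3D voxels.

    Returns a dict mapping an integer (i, j, k) voxel index to the
    array of points that fall inside that voxel.
    """
    # Per-axis voxel index for each point: floor(coordinate / edge length).
    indices = np.floor(points / voxel_size).astype(int)
    voxels = defaultdict(list)
    for point, idx in zip(points, indices):
        voxels[tuple(idx)].append(point)
    return {idx: np.stack(pts) for idx, pts in voxels.items()}

if __name__ == "__main__":
    pts = np.array([[0.1, 0.2, 0.3], [0.4, 0.1, 0.2], [1.5, 0.0, 0.0]])
    grid = voxelize(pts, voxel_size=1.0)
    # The first two points share voxel (0, 0, 0); the third lands in (1, 0, 0).
    print(sorted(grid.keys()))
```

In VoxelNet the points in each bucket would then be passed through the VFE layer; here the grouping alone is the point of the sketch.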

Abhishek Jha
23 Nov 2017
3 min read

Amazon ML Solutions Lab to help customers “work backwards” and leverage machine learning

For years, Amazon has been using machine learning and deep learning to make product recommendations, sharpen internal algorithms, and improve supply chain, forecasting, and capacity planning. Now the e-commerce giant is giving customers access to its rich pool of machine learning experts. It has announced a new collaboration and education program, the Amazon ML Solutions Lab, to connect machine learning experts from across Amazon with AWS customers. The idea is to accelerate the application of machine learning within organizations, and the program could help AWS partners develop new machine learning-enabled features, products, and processes. The Amazon ML Solutions Lab combines hands-on educational workshops with brainstorming sessions to help customers "work backwards" from business challenges, and then go step-by-step through the process of developing machine learning-based products. Amazon machine learning experts will help customers prepare data, build and train models, and put models into production. At the end of the program, customers will be able to take what they learned and use it elsewhere in their organization. "By combining the expertise of the best machine learning scientists and practitioners at Amazon with the deep business knowledge of our customers, the Amazon ML Solutions Lab will help customers get up to speed on machine learning quickly, and start putting machine learning to work inside their organizations," says Swami Sivasubramanian, vice president of Amazon AI. Taking customers through the full process of implementing machine learning, Amazon ML Solutions Lab programs will combine educational workshops and boot camps, advisory professional services, and hands-on help building custom models ready for deployment using customers' own data. Engagements can range from weeks to months depending on the nature of the solution.
The program's format is flexible: customers can participate at a dedicated facility at AWS headquarters in Seattle, or Amazon can send machine learning model developers to a customer's site. For organizations that already have data prepared for machine learning, AWS offers the ML Solutions Lab Express. This four-week intensive program starts with a boot camp hosted at Amazon, followed by three weeks of intensive problem-solving and machine learning model building with Amazon machine learning experts. Meanwhile, The Washington Post (owned by Amazon CEO Jeff Bezos) is using the program to build models in areas such as comment moderation, keyword tagging, and headline generation. Johnson & Johnson and the World Bank Group are the other two customers joining in. "We recently reached out to the Amazon ML Solutions Lab to collaborate with our data scientists on a deep learning initiative," said Jesse Heap, Senior IT Manager at Janssen Inc. (the pharmaceutical companies of Johnson & Johnson), adding that Amazon's machine learning experts have been training data scientists at Janssen on applying deep learning to pharma-related use cases. The World Bank Group, for its part, said it is using the program "to leverage machine learning in our mission to end extreme poverty and promote shared prosperity." As the big cloud providers compete to provide AI expertise to companies that cannot afford to duplicate such advanced machine learning research, the Amazon ML Solutions Lab is a rather smart move from AWS. The educational initiative could well be a long-term business strategy.
Packt Editorial Staff
23 Nov 2017
4 min read

23rd Nov.' 17 - Headlines

Amazon ML Solutions Lab, Apple's self-driving tech breakthrough, SAP S/4HANA Cloud update, and Valorem's new cryptocurrency platform in today's trending stories in data science news.

AWS announces Amazon ML Solutions Lab

Amazon ML Solutions Lab program to connect Amazon machine learning experts with AWS customers and partners

Amazon Web Services (AWS) has announced the Amazon ML Solutions Lab, a new program that connects machine learning experts from across Amazon with AWS customers to help identify practical uses of machine learning inside customers' businesses, and to guide them in developing new machine learning-enabled features, products, and processes. The Amazon ML Solutions Lab combines hands-on educational workshops with brainstorming sessions to help customers "work backwards" from business challenges, and then go step-by-step through the process of developing machine learning-based solutions. For organizations that already have data prepared for machine learning, AWS offers the ML Solutions Lab Express. This four-week intensive program starts with a boot camp hosted at Amazon, followed by three weeks of intensive problem-solving and machine learning model building with Amazon machine learning experts. To get started with the Amazon ML Solutions Lab, visit https://aws.amazon.com/ml-solutions-lab. Johnson & Johnson, The Washington Post, and the World Bank Group are the first customers to join the program.

Apple reveals its self-driving technology

Apple's LiDAR-based "VoxelNet" could make autonomous vehicles better at detecting cyclists and pedestrians

Apple could be working on a light-based technology to make it easier for self-driving cars to identify pedestrians and cyclists. In a new research paper published in Cornell's arXiv open directory of scientific research, Apple computer scientists Yin Zhou and Oncel Tuzel discuss an object detection method for self-driving systems based purely on LiDAR, a technique that gauges distance by illuminating a target with pulsed laser light and measuring how long the light takes to return. The paper describes a method for using machine learning to translate the raw point cloud data gathered by LiDAR arrays into results that include detection of 3D objects, including bicycles and pedestrians, with no additional sensor data required. Apple researchers have thus created something called VoxelNet that can extrapolate and infer objects from a collection of points captured by a LiDAR array. They say VoxelNet "divides a point cloud into equally spaced 3D voxels and transforms a group of points within each voxel into a unified feature representation through the newly introduced voxel feature encoding (VFE) layer." While Apple is secretive about its autonomous driving efforts, it has a permit from Californian authorities to test self-driving cars.

SAP updates S/4HANA Cloud

SAP enhances S/4HANA Cloud with new customer-centric features for improved user experience

SAP is updating S/4HANA Cloud with advances in machine learning, user experience, and extensibility, among others. The latest release includes the launch of the SAP S/4HANA Cloud software development kit (SDK), which provides a simple end-to-end development experience for easily developing custom applications. The update introduces a soft close and prediction feature for predictive accounting and monitoring of the remaining budget. To ease strategic procurement activities, the update also lets SAP offer the ability to classify purchasing spend using A, B, and C categories, along with quality management that gives users the ability to record and display defects. In addition, the digital assistant SAP CoPilot is introduced to help teams collaborate efficiently in recording and handling defects, optimizing processes while improving quality.

Cryptocurrency in data science news

Valorem Foundation announces new cryptocurrency platform

Blockchain startup Valorem Foundation has launched a new cryptocurrency platform. The company has developed a multi-layered platform to disrupt and expand the following services globally: microloans, car loans, student loans, rent payment, P2P networks, buying and selling of goods and services, business investing, real estate crowdfunding, and insurance. The Foundation is creating a decentralized platform for each separate service that any user can log into and use via the VLR token. The token, a unit of value on the platform, fuels transactions. "Millennials need an easy way to invest, make purchases, pay rent and buy the things they are interested in outside of the centralized systems we have today. We think Valorem fills that need. We are the niche platform for the 99 percent," CEO & Founder Val Kleyman said.

Sugandha Lahoti
22 Nov 2017
3 min read

New MapR Platform 6.0 powers DataOps

MapR Technologies Inc. announced the release of a new version of its Converged Data Platform. The new MapR Platform 6.0 is focused on DataOps, an approach that improves the quality and reduces the life cycle time of data analytics for big data applications, increasing the value of data by bringing together functions from across an enterprise. MapR Platform 6.0 offers the entire DataOps team in an organization (data scientists, data engineers, systems administrators, and cluster operators) a unified management solution. Top features of the platform include:

The MapR Control System (MCS), a new centralized control system that converges all data sources and types from multiple backends. It is built on the Spyglass Initiative and provides a unified management solution for the data stored in the MapR platform, including files, JSON tables, and streaming data. The MapR 6.0 MCS also brings:

- A quick-glance cluster dashboard
- Resource utilization by node and by service
- Capacity planning using storage utilization trends and per-tenant usage
- Easy setup of replication, snapshots, and mirrors
- The ability to manage cluster events with related metrics and expert recommendations
- Direct access to default metrics and pre-filtered logs
- The power to manage MapR Streams and configure replicas
- Access to MapR-DB tables, indexes, and change logs
- Intuitive mechanisms to set up volume, table, and stream ACEs for access control

MapR Monitoring uses MapR Streams in the core architecture to build a customizable, scalable, and extensible monitoring framework. The platform also includes the latest release of MapR-DB 6.0, a multi-model database built for data-intensive applications such as real-time streaming, operational workloads, and analytical applications. The MapR Data Science Refinery provides scalable data science tools that help organizations generate insights from their data and convert them into operational applications.
It provides access to all platform assets, including app servers, web servers, and other client nodes and apps. The MapR Data Science Refinery also comes with eight visualization libraries, including Matplotlib and ggplot2. In addition, Apache Spark connectors are provided for interacting with both MapR-DB and MapR-ES. MapR also includes a preconfigured Docker container for using MapR as a data store; the stateful containers offer easy deployment while being secure and extensible. Organizations can also create real-time pipelines for machine learning applications and apply ML models to real-time data through the native integration between MapR-ES and ML libraries. In addition, MapR Platform 6.0 includes single-click security enhancements, cloud-scale multi-tenancy, and MapR volume metrics, available via an extensible volume dashboard in Grafana. The MapR Platform 6.0 is available now; for cloud providers such as Microsoft Azure, Amazon Web Services, and Oracle Cloud, version 6.0 will be available before the end of this year. For more information about the product, you can visit the official documentation here.
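The real-time pipeline pattern mentioned above (consume events as they arrive and apply a model to each one) can be shown with a toy sketch. To be clear, this pure-Python example uses no MapR API, and the window size and threshold are invented parameters; it only illustrates the shape of scoring a stream against a rolling statistic.

```python
from collections import deque

def stream_scores(events, window=5, threshold=2.0):
    """Toy streaming pipeline: consume events one at a time and flag
    values that deviate from a rolling mean by more than `threshold`,
    mimicking the 'apply an ML model to real-time data' pattern."""
    history = deque(maxlen=window)
    flags = []
    for value in events:
        if len(history) == window:
            mean = sum(history) / window
            flags.append(abs(value - mean) > threshold)
        else:
            flags.append(False)  # not enough history to score yet
        history.append(value)
    return flags

if __name__ == "__main__":
    readings = [1.0, 1.1, 0.9, 1.0, 1.05, 9.0, 1.0]
    print(stream_scores(readings))  # only the spike at 9.0 is flagged
```

In a real deployment the event source would be a stream (such as MapR-ES) and the scoring function a trained model rather than a rolling mean.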

Abhishek Jha
22 Nov 2017
4 min read

Amazon Rekognition can now 'recognize' faces in a crowd in real time

According to Mary Meeker's 2016 Internet Trends report, we are now sharing a staggering 3.25+ billion digital photos every day. In the era of smartphones, the challenge for organizations is to index and interpret this data. Amazon set out to solve this problem with its deep learning-powered Rekognition service, unveiled at last year's AWS re:Invent conference. By June this year, Amazon Rekognition had become a lot smarter, recognizing celebrities across politics, sports, business, entertainment, and media. Now Rekognition has truly arrived: it can 'recognize' text in images and faces in real time! Amazon has added three new features to the service: detection and recognition of text in images; real-time face recognition across tens of millions of faces; and detection of up to 100 faces in challenging crowded photos. The new functionality, Amazon claims, makes Rekognition "10% more accurate" for face verification and identification.

Text in Image

The ability to detect text in images is, in fact, one of the most anticipated features added to Rekognition. Customers have been pressing for the ability to recognize text embedded in images, such as street signs and license plates captured by traffic cameras, news and captions on TV screens, or stylized quotes overlaid on phone-captured family pictures. Well, the system can now recognize and extract textual content from images. Interestingly, Amazon Web Services announced that Text in Image is built specifically to work with real-world images rather than document images. "For example, in image sharing and social media applications, you can now enable visual search based on an index of images that contain the same keywords. In media and entertainment applications, you can catalogue videos based on relevant text on screen, such as ads, news, sport scores, and captions.
Additionally, in security and safety applications, you can identify vehicles based on license plate numbers from images taken by street cameras," AWS said in its official release. The Text in Image feature supports text in most Latin scripts, and numbers embedded in a large variety of layouts, fonts, and styles, overlaid on background objects at various orientations, such as on banners and posters.

Face Search and Detection

With Amazon Rekognition, customers can now perform real-time face searches against collections of millions of faces. "This represents a 5-10X reduction in search latency, while simultaneously allowing for collections that can store 10-20X more faces than before," AWS said. The face search feature could prove a boon in security and safety applications, enabling timely and accurate crime prevention by identifying suspects against a collection of millions of faces in near real time. On top of all that, Rekognition now lets you detect, analyze, and index up to 100 different faces in a single photograph (the previous cap was 15). This means customers can feed Amazon Rekognition a shot of a crowd and get back information about the demographics and sentiments of all the faces detected. Yes: take a group photo at a crowded public location such as an airport or a department store, and Amazon Rekognition will tell you what emotions the detected faces are displaying. Too good to be true! In the larger picture, Rekognition gives AWS a new weapon in its repertoire. As more and more image content moves onto the internet, services like Rekognition can help keep customers glued to cloud platforms, engaging businesses for longer periods of time. This is why Rekognition can further boost Amazon's cloud business. To get started with Text in Image, Face Search, and Face Detection, you can download the latest SDK or simply log in to the Amazon Rekognition Console.
For any further information, refer to the Amazon Rekognition documentation.
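For a sense of what calling the new Text in Image feature looks like from code, here is a minimal sketch using Rekognition's DetectText API via boto3. The bucket and key names are placeholders, and the client is passed in as a parameter so the filtering logic can be exercised without AWS credentials; treat it as an illustration, not a drop-in integration.

```python
def extract_lines(rekognition_client, bucket, key, min_confidence=80.0):
    """Return LINE-level text detections from an S3-hosted image using
    Rekognition's DetectText API, keeping only confident detections."""
    response = rekognition_client.detect_text(
        Image={"S3Object": {"Bucket": bucket, "Name": key}}
    )
    return [
        d["DetectedText"]
        for d in response["TextDetections"]
        if d["Type"] == "LINE" and d["Confidence"] >= min_confidence
    ]

# Usage with a real client (requires AWS credentials and a real bucket/key):
#   import boto3
#   client = boto3.client("rekognition")
#   extract_lines(client, "my-traffic-cam-bucket", "frame-0042.jpg")
```

DetectText returns both WORD and LINE detections; filtering on LINE yields whole strings such as street signs or license plates.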
Packt Editorial Staff
22 Nov 2017
3 min read

22nd Nov.' 17 - Headlines

MapR Converged Data Platform 6.0, Amazon Rekognition, and Japan's quantum computer in today's trending stories in data science news.

MapR 6.0 powers DataOps

MapR Converged Data Platform 6.0 is now available with new security, database, and automated administration features

MapR Technologies announced the availability of its MapR Converged Data Platform 6.0, with new advancements to help companies use emerging DataOps concepts. The 6.0 release focuses on three key areas in support of DataOps: automated cluster health and administration, security and data governance, and faster time to machine learning and analytics, according to MapR VP Anoop Dawar. To simplify the processing of cluster health and continuous operations, there is a new MapR Control System that administers all data, in addition to the recently introduced database indexing in MapR-DB that delivers auto-propagation, auto-scale, and auto-management. Furthermore, the MapR Change Data Capture now integrates MapR-DB with MapR-ES, resulting in simplified data integration for real-time information sharing. Version 6.0 also offers new single-click security enhancements, and makes available MapR's recently announced Data Science Refinery for self-service data access leveraging machine learning.

Rekognition now 'recognizes' real-time faces with sentiments

Amazon Rekognition can now detect text and real-time faces with 10% more accuracy

Amazon Web Services has added three new features to Amazon Rekognition: detection and recognition of text in images; real-time face recognition across tens of millions of faces; and detection of up to 100 faces in challenging crowded photos. Previously, the system could detect up to 15 different faces in a photo. With the added features, if you feed Rekognition a snap of a crowd of people, it will share information about the demographics of all detected faces (including the emotions Rekognition believes they are displaying), even at crowded public places like airports and department stores. "Customers who are already using Amazon Rekognition for face verification and identification will experience up to a 10% accuracy improvement in most cases," AWS said. To get started with Text in Image, Face Search, and Face Detection, download the latest SDK or simply log in to the Amazon Rekognition Console.

Japan enters quantum computing race

Japan unveils first quantum computer that is 100x faster than supercomputers

Japan has announced its first quantum computer prototype, which can theoretically make complex calculations 100 times faster than conventional supercomputers. The machine uses just 1 kilowatt of power for every 10,000 kilowatts consumed by a supercomputer. The creators (the National Institute of Informatics, NTT, and the University of Tokyo) said they are building a cloud system to house their "quantum neural network" (QNN) technology, and aim to commercialize the system by March 2020. To spur further innovation, they are making it available for free to the public and fellow researchers for trials starting Nov. 27 at https://qnncloud.com.

Packt Editorial Staff
21 Nov 2017
3 min read

21st Nov.' 17 - Headlines

Apache Hadoop 2.9.0, Bitcoin's record price, Amazon EMR 5.10.0, and a reduction in GPU prices by Google in today's trending stories in data science news.

Starting the Apache Hadoop 2.9.x series

Apache Hadoop 2.9.0 released with new improvements and bug fixes

As the first release in the Apache Hadoop 2.9.x line, Apache Hadoop 2.9.0 has been announced, with 30 new features and over 500 subtasks. The release comes with 407 improvements and 790 bug fixes, including newly fixed issues since version 2.8.2. More details on all the features, subtasks, and bug fixes can be found in the change log.

Bitcoin touches $8000

Bitcoin price hits record high, crosses $8000

Bitcoin prices have crossed $8,000 for the first time, after a bit of drama last week in which the cryptocurrency hit a record high and then slumped in the wake of the SegWit2x proposal. Experts cite potentially rising interest from institutional investors and the arrival of new products into the market for the new price surge. As of the latest news, bitcoin is trading solidly above $8,100.

Announcing Amazon EMR 5.10.0

Amazon EMR 5.10.0 released: support for Apache MXNet, GPU instance types P3 and P2, and Presto integration with the AWS Glue Data Catalog

Amazon EMR 5.10.0 has been released. The new version supports the deep learning framework Apache MXNet (0.12.0). Furthermore, you can preinstall custom machine learning and deep learning libraries on an Amazon Linux Amazon Machine Image (AMI), and create your Amazon EMR clusters with that AMI. Amazon EMR now also supports Amazon EC2 P3 and P2 instances, GPU instance types suited to deep learning and machine learning workloads. In addition, you can now use the AWS Glue Data Catalog to store external table metadata for Presto instead of utilizing an on-cluster or self-managed Hive metastore. You can create an Amazon EMR cluster with release 5.10.0 by choosing the release label "emr-5.10.0" from the AWS Management Console, AWS CLI, or SDK. For pricing information about P3 and P2 instances on Amazon EMR, please visit the Amazon EMR pricing page.

Google reduces GPU prices

Google Cloud Platform cuts the price of GPUs by up to 36 percent

Google has cut the price of NVIDIA Tesla GPUs attached to its on-demand Google Compute Engine virtual machines by up to 36 percent. In US regions, each K80 GPU attached to a VM is now priced at $0.45 per hour, while the newer and more powerful P100 machines now cost $1.46 per hour. "As an added bonus, we're also lowering the price of preemptible Local SSDs by almost 40 percent compared to on-demand Local SSDs. In the US this means $0.048 per GB-month," Google announced.