Tech News - Data

1208 Articles

Applitools introduces AI-based automated root cause analysis to pinpoint bugs quickly

Amrata Joshi
06 Dec 2018
2 min read
Yesterday, Applitools, the creator of Application Visual Management (AVM), introduced an AI-based automated root cause analysis tool that helps developers and QA teams quickly pinpoint bugs within web applications.

(Image: Root cause analysis saves time. Source: Applitools)

Adam Carmi, CTO at Applitools, told PR Newswire, “Root cause analysis for visual testing will significantly shorten the mean time to resolution of application bugs from hours down to minutes.”

Root cause analysis lets front-end developers and QA teams locate the cause of a bug in application code in less time than traditional bug diagnosis practices. Instead of combing through thousands of lines of DOM and CSS to find the root cause, developers only need to review a handful of lines. This speeds up bug fixing, keeps development and QA teams productive, and helps them deliver products on time. It also reduces the time spent on shift-left testing, an approach that moves system and software testing earlier in the development cycle.

Root cause analysis shows the visual difference between a baseline screenshot and a test screenshot. Applitools’ user interface version control now includes the DOM and CSS associated with each screenshot, so development teams can see both the visual appearance of a web application and the changes made to its DOM and CSS. The tool correlates the visual differences with DOM and CSS, and its major advantage is that it surfaces only those DOM differences that explain the visual bug.

Root Cause Analysis currently works with the Applitools SDKs for Selenium WebDriver in Java, JavaScript, C#, Python, and Ruby, as well as WebdriverIO, Cypress, and Storybook. Read more about this news on Applitools.

Related coverage:
Microsoft fixing and testing the Windows 10 October update after file deletion bug
SapFix and Sapienz: Facebook’s hybrid AI tools to automatically find and fix software bugs
Microsoft fixes 62 security flaws on Patch Tuesday and re-releases Windows 10 version 1809 and Windows Server 2019
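The core idea of correlating a visual change with the few DOM/CSS lines responsible can be illustrated with a plain text diff. The sketch below is a conceptual illustration only, not the Applitools SDK or API; the snapshot strings and helper function are made up.

```python
# Conceptual sketch: report only the lines that differ between a baseline
# and a test CSS snapshot, mimicking the "a few lines instead of thousands"
# idea described above. This is NOT the Applitools API.
import difflib

def changed_lines(baseline, current):
    """Return only the added/removed lines between two text snapshots."""
    diff = difflib.unified_diff(
        baseline.splitlines(), current.splitlines(), lineterm="", n=0
    )
    return [l for l in diff if l[:1] in "+-" and not l.startswith(("+++", "---"))]

baseline_css = """\
.btn { padding: 8px 16px; }
.btn { background: #0a7; }
"""
test_css = """\
.btn { padding: 8px 16px; }
.btn { background: #d00; }
"""

for line in changed_lines(baseline_css, test_css):
    print(line)   # only the rule that changed (the background colour) is reported
```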

ArangoDB 3.4 releases with a native search engine, full GeoJSON support, and more

Sugandha Lahoti
06 Dec 2018
2 min read
ArangoDB 3.4 was released today. The major new enhancements include ArangoSearch, a feature which, when combined with traversals or joins in AQL, transforms ArangoDB from a data retrieval solution into an information retrieval solution. The release also brings full GeoJSON support, enabled by an integration of Google’s S2 geometry library.

ArangoSearch
This new feature provides a rich set of information retrieval capabilities. It consists of two components: a search engine, which manages the index, querying, and scoring, and an integration layer, which exposes search capabilities to the end user. ArangoSearch can be combined with all three data models in ArangoDB and uses materialized views to enable full-text search across multiple collections at once. Users can now perform relevance-based matching, phrase and prefix matching, and searches with complex Boolean expressions, tune relevance at query time, and combine complex traversals, geo-queries, and other access patterns with information retrieval techniques.

GeoJSON support
GeoJSON is an open standard format for representing simple geographical features along with their non-spatial attributes. ArangoDB now supports all geo primitives, including multi-polygons and multi-line strings. The Google S2 geometry library integration complements ArangoDB’s RocksDB storage engine. Users can also visualize results directly in OpenStreetMap, which is integrated into the query editor of ArangoDB’s web UI.

Other features
Query profiler: developers can now execute a query with special instrumentation code, producing a printed query plan with detailed execution statistics.
Cluster management: enhancements include faster cluster startup, synchronization, and query execution.
Streaming cursors: integrated streaming cursors deliver the first results as they become available on the server.
RocksDB is now the default storage engine; previous versions of ArangoDB used MMFiles as the default.

The full list of features is available in the ArangoDB release notes.

Related coverage:
Introducing TigerGraph Cloud: A database as a service in the Cloud with AI and Machine Learning support
RedisGraph v1.0 released, benchmarking proves its 6-600 times faster than existing graph databases
Introducing EuclidesDB, a multi-model machine learning feature database
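For a feel of how an ArangoSearch query might be run from Python, here is a hedged sketch that sends an AQL query through ArangoDB’s HTTP cursor API using the requests library. The view name (articles_view), field names, credentials, and the exact SEARCH/TOKENS/BM25 usage are assumptions and should be checked against the 3.4 documentation.

```python
# Sketch only: run an ArangoSearch AQL query via the HTTP cursor API.
# View name, fields, and credentials below are assumed for illustration.
import requests

AQL = """
FOR d IN articles_view
  SEARCH d.body IN TOKENS('graph database', 'text_en')
  SORT BM25(d) DESC
  LIMIT 5
  RETURN d.title
"""

resp = requests.post(
    "http://localhost:8529/_db/_system/_api/cursor",
    json={"query": AQL},
    auth=("root", ""),          # assumed default development credentials
)
resp.raise_for_status()
for title in resp.json()["result"]:
    print(title)
```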

NeurIPS 2018: Deep learning experts discuss how to build adversarially robust machine learning models

Melisha Dsouza
04 Dec 2018
5 min read
The NeurIPS 2018 conference, held in Montreal, Canada this week from 2nd December to 8th December, features a series of tutorials, releases, and announcements. The conference, previously known as NIPS, underwent a re-branding of its name (after much debate) because some members of the community found the acronym “sexist”, pointing out that it is offensive towards women.

“Adversarial Robustness: Theory and Practice” is a tutorial that was presented at NeurIPS 2018 yesterday. It was delivered by J. Zico Kolter, a professor at Carnegie Mellon and Chief Scientist of the Bosch AI center, and Aleksander Madry from MIT. In the tutorial, they explored the importance of building adversarially robust machine learning models as well as the challenges one will encounter while deploying them. Adversarial Robustness is also a library dedicated to adversarial machine learning; it allows rapid crafting and analysis of attacks and defense methods for machine learning models.

Aleksander opened the talk by highlighting some of the challenges faced while deploying machine learning in the real world. Even though machine learning has had a success story so far, is ML truly ready for real-world deployment? And can we truly rely on machine learning? These questions arise because developers don’t fully understand how machine learning interacts with other parts of a system, and this can open the door to plenty of adversaries. Safety is still very much an issue while deploying ML. The tutorial tackles questions related to adversarial robustness and gives plenty of examples to help developers understand the concept and deploy ML models that are more adversarially robust.

The usual measure of machine learning performance is the fraction of mistakes made during the testing phase of the algorithm. However, Aleksander explained that in reality the distributions we use machine learning on are NOT the ones we train it on, and these assumptions can be misleading. The key implication is that machine learning predictions are accurate most of the time, but they can also turn out to be brittle. For example, the slightest noise can alter an output and produce a wrong prediction, which accounts for the brittleness of ML algorithms. Even rotation and translation can fool state-of-the-art vision models.

Brittleness and other issues in machine learning
Brittleness hampers the following areas of machine learning:
Security: when a machine learning system has loopholes, a hacker can manipulate it, leading to a system or data breach. An example would be adding external entities to manipulate an object recognition system.
Safety and reliability: Aleksander gave the example of Tesla’s self-driving cars, where the AI sometimes drives the car over a divider and the driver has to take over. In addition, the system does not report this as an error.
ML alignment: developers need to understand the “failure modes” of machine learning to understand how models work and succeed.

Adversarial issues occur at inference time, but the training phase also carries a risk called data poisoning. The goal of data poisoning is to maintain training accuracy while hampering generalization. Machine learning always needs a huge amount of data to function and train on, and to fulfill this need the system sometimes works on data that cannot be trusted. This occurs mostly in classic machine learning scenarios and less in deep learning; in deep learning, data poisoning preserves training accuracy but hampers the classification of specific inputs. It can also plant an undetectable backdoor in the system that can give an attacker almost total control over the model. The final issue arises in deployment: even restricted access, for example access only to a model’s inputs and outputs, can lead to black-box attacks.

Aleksander’s commandments of secure/safe ML
1. Do not train on data you do not trust.
2. Do not let anyone use the model or observe its outputs unless you completely trust them.
3. Do not fully trust the predictions of your model (because of adversarial examples).

Developers need to re-think the tools they use in machine learning to understand whether they are robust enough to stress test a system. For Aleksander, training should be treated as an optimization problem: the aim is to find parameters that minimize loss on the training sample. Zico then built on the principles put forward by Aleksander, showing a number of different adversarial examples in action, including something called convex relaxations, which help to train and find the most optimal models for a given training set.

Takeaways from the tutorial
After understanding how to implement adversarially robust ML models, developers can now ask themselves how adversarially robust ML differs from standard ML. That said, adversarial robustness comes at a cost: optimization during training is difficult, models need to be larger, more training data might be required, and some performance on standard measures may have to be sacrificed. However, adversarial robustness helps machine learning models become semantically meaningful.

Head over to the NeurIPS Facebook page for the entire tutorial and other sessions happening at the conference this week.

Related coverage:
Day 1 at the Amazon re: Invent conference – AWS RoboMaker, Fully Managed SFTP Service for Amazon S3, and much more!
Amazon re:Invent Day 3: Lambda Layers, Lambda Runtime API and other exciting announcements!
Amazon re:Invent announces Amazon DynamoDB Transactions, CloudWatch Logs Insights and cloud security conference, Amazon re:Inforce 2019
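The brittleness discussed above is usually demonstrated with adversarial examples: tiny, targeted perturbations that flip a model’s prediction. The sketch below is not from the tutorial; it is a minimal illustration of the fast gradient sign method (FGSM) in PyTorch, with a placeholder model and fake data.

```python
# Minimal FGSM sketch in PyTorch: nudge an input in the direction of the
# loss gradient's sign to try to flip the model's prediction.
import torch
import torch.nn as nn

def fgsm_attack(model, x, y, epsilon=0.03):
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # One step in the direction that increases the loss the most.
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0, 1).detach()

# Toy usage with a placeholder linear "image classifier".
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(1, 1, 28, 28)   # fake image in [0, 1]
y = torch.tensor([3])          # fake label
x_adv = fgsm_attack(model, x, y)
print(model(x).argmax(1).item(), model(x_adv).argmax(1).item())
```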

IPython 7.2.0 is out!

Savia Lobo
04 Dec 2018
2 min read
Last week, the IPython community announced its latest release, IPython 7.2, which is available on PyPI and will soon be available on Conda. This version includes some improvements, minor bug fixes, and new configuration options. Users can update to IPython 7.2 with the following command:

pip install ipython --upgrade

Improvements in IPython 7.2

Ability to show subclasses when using pinfo and other utilities
IPython will now list the first 10 subclasses whenever ? or ?? is used on a class.

OSMagics.cd_force_quiet configuration option
Users can now set the OSMagics.cd_force_quiet option to force the %cd magic to behave as if -q was passed:

In [1]: cd /
/
In [2]: %config OSMagics.cd_force_quiet = True
In [3]: cd /tmp
In [4]:

Current vi mode can now be configured
To control this feature, set TerminalInteractiveShell.prompt_includes_vi_mode to a boolean value (default: True).

Other improvements and bug fixes
Fixed a bug preventing PySide2 GUI integration from working.
CI can now be run on macOS.
The IPython “demo” mode has been fixed.
Fixed the %run magic with a path in the name.
The current working directory is now added to sys.path after the stdlib.
Signatures (especially long ones) now render better.
Jedi can be re-enabled by default if it is installed.
A new minimal exception reporting mode has been added, mostly useful for educational purposes.

There are still some outstanding bugs that will be fixed in the next release, which the community plans to ship before the end of the year. To know more about this release in detail, head over to IPython’s documentation.

Related coverage:
IPython 7.0 releases with AsyncIO Integration and new Async libraries
Make Your Presentation with IPython
How to connect your Vim editor to IPython
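The new options can also be set persistently in a profile configuration file. Below is a minimal sketch assuming the default profile location; the option names come from the release notes above.

```python
# Sketch: ~/.ipython/profile_default/ipython_config.py (assumed default profile).
c = get_config()  # provided by IPython when it loads this file

# Make %cd behave as if -q was always passed.
c.OSMagics.cd_force_quiet = True

# Hide the current vi editing mode from the prompt.
c.TerminalInteractiveShell.prompt_includes_vi_mode = False
```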

DeepMind’s AlphaFold is successful in predicting the 3D structure of a protein, making major inroads for AI use in healthcare

Sugandha Lahoti
04 Dec 2018
3 min read
Google’s DeepMind is turning its attention to using AI for science and healthcare. Last month, Google made major inroads into healthcare tech by absorbing DeepMind Health, and in August its AI was successful in spotting over 50 sight-threatening eye diseases. Now it has solved another tough science problem.

At an international conference in Cancun on Sunday, DeepMind’s latest AI system, AlphaFold, won the Critical Assessment of Structure Prediction (CASP) competition. CASP is held every two years and invites participants to submit models that predict the 3D structure of a protein from its amino acid sequence. The ability to predict a protein’s shape is useful to scientists because it is fundamental to understanding its role within the body, and to diagnosing and treating diseases such as Alzheimer’s, Parkinson’s, Huntington’s, and cystic fibrosis.

AlphaFold’s SUMZ score was 127.9 (the previous winner’s SUMZ score was 80.46), achieving what CASP called “unprecedented progress in the ability of computational methods to predict protein structure.” The second-placed team, named Zhang, scored 107.6.

How DeepMind’s AlphaFold works
AlphaFold’s team trained a neural network to predict a separate distribution of distances between every pair of residues in a protein. These probabilities were then combined into a score that estimates how accurate a proposed protein structure is. They also trained a separate neural network that uses all the distances in aggregate to estimate how close the proposed structure is to the right answer. The scoring functions were used to search the protein landscape for structures that matched the predictions.

DeepMind used two distinct methods to construct predictions of full protein structures. The first method repeatedly replaced pieces of a protein structure with new protein fragments; a generative neural network was trained to invent new fragments that improve the score of the proposed protein structure. The second method optimized the score through gradient descent to build highly accurate structures. This technique was applied to entire protein chains rather than to pieces that must be folded separately before being assembled, reducing the complexity of the prediction process.

DeepMind founder and CEO Demis Hassabis celebrated the victory in a tweet.
https://twitter.com/demishassabis/status/1069411081603481600
Google CEO Sundar Pichai was also excited about this development and about how AI can be used for scientific discovery.
https://twitter.com/sundarpichai/status/1069450462284267520

Related coverage:
NeurIPS 2018 paper: DeepMind researchers explore autoregressive discrete autoencoders (ADAs) to model music in raw audio at scale
Google makes major inroads into healthcare tech by absorbing DeepMind Health
A new episodic memory-based curiosity model to solve procrastination in RL agents by Google Brain, DeepMind and ETH Zurich
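The second method, gradient descent on a learned score, can be illustrated with a toy sketch. The code below is purely conceptual and is not DeepMind’s implementation: it treats a structure as a tensor of 3D residue coordinates and minimises its disagreement with a placeholder matrix of predicted pairwise distances.

```python
# Conceptual sketch: optimise a candidate structure by gradient descent on a
# differentiable score. The "predicted distances" here are random placeholders.
import torch

n_residues = 64
predicted_dist = torch.rand(n_residues, n_residues) * 20.0    # placeholder predictions
coords = torch.randn(n_residues, 3, requires_grad=True)       # candidate structure

optimizer = torch.optim.Adam([coords], lr=0.05)
for step in range(500):
    optimizer.zero_grad()
    pairwise = torch.cdist(coords, coords)                    # distances in the candidate
    loss = ((pairwise - predicted_dist) ** 2).mean()          # disagreement with predictions
    loss.backward()
    optimizer.step()

print(f"final disagreement with predicted distances: {loss.item():.3f}")
```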

Unity introduces guiding principles for ethical AI to promote responsible use of AI

Natasha Mathur
03 Dec 2018
3 min read
Last week, the Unity team announced its guide to ethical AI to promote more responsible use of artificial intelligence by its developers, its community, and the company itself. Unity’s guide to ethical AI comprises six guiding AI principles.

Unity’s six guiding AI principles

Be unbiased
This principle focuses on designing AI tools in a way that complements the human experience in a positive way. To achieve this, it is important to take into consideration all types of diverse human experiences, which in turn leads to AI complementing experiences for everybody.

Be accountable
This principle emphasizes keeping in mind the potential negative consequences, risks, and dangers of AI tools while building them. It focuses on assessing the factors that might cause “direct or indirect harm” so that they can be avoided, ensuring accountability.

Be fair
This principle focuses on ensuring that AI tools do not interfere with “normal, functioning democratic systems of government”. The development of AI tools that could lead to the suppression of human rights (such as free expression), as defined by the Universal Declaration, should be avoided.

Be responsible
This principle stresses the importance of developing products responsibly, ensuring that AI developers don’t take undue advantage of the vast capabilities of AI while building a product.

Be honest
This principle focuses on building trust among the users of a technology by being clear and transparent about the product so that they can better understand its purpose. This, in turn, leads to users making better and more informed decisions regarding the product.

Be trustworthy
This principle emphasizes the importance of protecting AI-derived user data. “Guard the AI derived data as if it were handed to you by your customer directly in trust to only be used as directed under the other principles found in this guide,” reads the Unity blog.

“We expect to develop these principles more fully and to add to them over time as our community of developers, regulators, and partners continue to debate best practices in advancing this new technology. With this guide, we are committed to implementing the ethical use of AI across all aspects of our company’s interactions, development, and creation,” says the Unity team.

For more information, check out the official Unity blog post.

Related coverage:
EPIC’s Public Voice Coalition announces Universal Guidelines for Artificial Intelligence (UGAI) at ICDPPC 2018
Teaching AI ethics – Trick or Treat?
SAP creates AI ethics guidelines and forms an advisory panel

UC Davis students bag $500k award and the 2018 Amazon Alexa prize for creating a social conversational system, Gunrock

Amrata Joshi
03 Dec 2018
3 min read
Last week, a team of students from the University of California, Davis, won the global 2018 Amazon Alexa Prize and $500,000 at the AWS re:Invent 2018 conference. They created Gunrock, a chatbot that can converse with humans on topics such as entertainment, sports, politics, technology, and fashion. The chatbot was named after the university’s mascot.

Gunrock maintained an average of 9 minutes and 59 seconds of conversation in the final round and scored 3.1 out of 5. The second prize went to Team Alquist from the Czech Technical University in Prague, which scored 2.6 out of 5 and won $100,000 in prize money. This year, the finalists were announced live on Twitch.

The teams used the Conversational Bot (CoBot) toolkit, the Alexa Skills Kit, and the AWS cloud to create socialbots for Alexa. The top teams were chosen on the basis of their potential scientific contribution to AI research, the technical merit of their approaches, the novelty of their ideas, and their ability to execute against their plan. The finals were held at Amazon’s Seattle headquarters over two days in early November and involved three interactors who held conversations with the socialbots, along with academic experts and industry professionals who served as judges.

Amazon launched the Alexa Prize in 2016 to tackle the challenge of building agents that can carry multi-turn, open-domain conversations. The objective of the competition is to build agents that can converse coherently and engage with humans for 20 minutes. Last year, nearly 3 million Alexa U.S. customers logged more than 162,000 hours of conversation with the 2017 Alexa Prize bots.

The Gunrock team programmed the chatbot using conversational data from millions of Amazon Alexa users. The team of 11 students was led by Zhou Yu, an assistant professor in the Computer Science department. The most striking feature of the Gunrock bot is that it uses language disfluencies, or pauses, such as “hm” or “ah”. This makes Gunrock more human-like and different from traditional bots. The team built a natural language understanding model that breaks dialogue down into self-contained semantic units and analyzes the language to determine context. They further integrated structured knowledge bases such as Google’s knowledge graph into Gunrock, which helped the bot handle a wide variety of user behaviors, including topic switching and question answering.

Some people on Hacker News are skeptical of this win. A user pointed out, “Only the Gunrock team had an industry professional, Chun-Yen C, maybe the team won because of this reason.” People are also confused as to what benefit Amazon gets out of such competitions. A user said, “Since each of the teams was given around US$250,000, is it sort of a paid work?” Questions have also been raised about the idea of language disfluency. A user said, “Google also recently faced a backlash over its feature in the Virtual Assistant.” Sentiments are something very natural, and they might not sound convincing coming from a bot.

To know more about this news, check out Amazon’s blog post.

Related coverage:
Amazon re:Invent Day 3: Lambda Layers, Lambda Runtime API and other exciting announcements!
Amazon announces the public preview of AWS App Mesh, a service mesh for microservices on AWS
Introducing AWS DeepRacer, a self-driving race car, and Amazon’s autonomous racing league to help developers learn reinforcement learning in a fun way

NeurIPS 2018 paper: DeepMind researchers explore autoregressive discrete autoencoders (ADAs) to model music in raw audio at scale

Melisha Dsouza
03 Dec 2018
5 min read
In the paper ‘The challenge of realistic music generation: modelling raw audio at scale’, researchers from DeepMind tackle modelling music in the raw audio domain. They explore autoregressive discrete autoencoders (ADAs) as a way to enable autoregressive models to capture long-range correlations in waveforms. Autoregressive models are the best available models for generating raw audio waveforms of speech, but when applied to music they are biased towards capturing local signal structure at the expense of modelling long-range correlations. Since music exhibits structure at many different timescales, this makes realistic music generation a challenging task. The paper will be presented at the 32nd Conference on Neural Information Processing Systems (NeurIPS 2018), held in Montréal, Canada this week.

Challenges when music is symbolically represented
Music has a complex structure by nature and is made up of waveforms that span different time periods and magnitudes, so modelling all of the temporal correlations that arise from this structure is challenging. Most of the work on music generation has focused on symbolic representations, but this method has several limitations. Symbolic representations abstract away the idiosyncrasies of a particular performance, and these nuances are often musically quite important, impacting a listener’s enjoyment of the music; for example, the precise timing, timbre, and volume of the notes played by a musician do not correspond exactly to those written in a score. Symbolic representations are also often tailored to particular instruments, which reduces their generality and means significant work is required to apply existing modelling techniques to new instruments.

Digital representations of audio waveforms, by contrast, retain all the musically relevant information, and models of them can be applied to recordings of any set of instruments. However, the task is more challenging than modelling symbolic representations: generative models of waveforms that capture musical structure at many timescales require high representational capacity, distributed effectively over the various musically relevant timescales.

Steps taken to address music generation in the raw audio domain
The researchers use autoregressive models to model structure across roughly 400,000 timesteps, or about 25 seconds of audio sampled at 16 kHz, and demonstrate a computationally efficient method to enlarge the models’ receptive fields using autoregressive discrete autoencoders (ADAs). They explore the argmax autoencoder (AMAE) as an alternative to vector quantisation variational autoencoders (VQ-VAE); this autoencoder converges more reliably when trained on a challenging dataset.

To model long-range structure in musical audio signals, the receptive fields (RFs) of AR models have to be enlarged. One way to do this is to provide a rich conditioning signal. The paper concentrates on this notion, turning an AR model into an autoencoder by attaching an encoder that learns a high-level conditioning signal directly from the data. Temporal downsampling operations can be inserted into the encoder to make this signal more coarse-grained than the original waveform. The resulting autoencoder uses its AR decoder to model any local structure that this compressed signal cannot capture.

The researchers compare two techniques for quantising the learned conditioning signal. Vector quantisation variational autoencoders use vector quantisation (VQ): the queries are vectors in a d-dimensional space, and a codebook of k such vectors is learnt on the fly, together with the rest of the model parameters. The loss function is

L_{VQ-VAE} = -\log p(x \mid q_j) + (q_j - [q])^2 + \beta \cdot ([q_j] - q)^2,

where q is the query, q_j is the nearest codebook vector, and [·] denotes the stop-gradient operator. However, when trained on challenging (i.e. high-entropy) datasets, VQ-VAEs often suffer from codebook collapse: at some point during training, part of the codebook falls out of use and the model no longer uses the full capacity of the discrete bottleneck, leading to worse results and poor reconstructions.

As an alternative to the VQ-VAE method, the researchers propose the argmax autoencoder (AMAE). It produces k-dimensional queries and features a nonlinearity that ensures all outputs lie on the (k-1)-simplex. The quantisation operation is then simply an argmax, which is equivalent to taking the nearest k-dimensional one-hot vector in the Euclidean sense. This projection onto the simplex limits the maximal quantisation error, making the gradients that pass through it more accurate. To make sure the full capacity is used, an additional diversity loss term encourages the model to use all outputs in equal measure. This loss can be computed from batch statistics by averaging all queries q (before quantisation) across the batch and time axes and encouraging the resulting mean query vector to resemble a uniform distribution.

Results of the experiment
The researchers:
Addressed the challenge of music generation in the raw audio domain with autoregressive models, extending their receptive fields in a computationally efficient manner.
Introduced the argmax autoencoder (AMAE), an alternative to VQ-VAE that shows improved stability for this task.
Showed that separately trained autoregressive models at different levels of abstraction capture long-range correlations in audio signals across tens of seconds, corresponding to hundreds of thousands of timesteps, at the cost of some signal fidelity.

You can refer to the paper for a comparison of results across the various autoencoders and for more insight on this topic.

Related coverage:
Exploring Deep Learning Architectures [Tutorial]
Implementing Autoencoders using H2O
What are generative adversarial networks (GANs) and how do they work? [Video]
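To make the loss above concrete, here is a hedged PyTorch sketch of how its three terms can be assembled, using .detach() as the stop-gradient operator. The encoder, decoder, and reconstruction term are placeholders; this is not the paper’s code.

```python
# Sketch of a VQ-VAE-style loss with stop-gradient terms, as described above.
import torch
import torch.nn.functional as F

def vq_loss(q, codebook, x, decoder, beta=0.25):
    # Nearest codebook vector q_j for each query q (Euclidean distance).
    dists = torch.cdist(q, codebook)                  # (batch, k)
    q_j = codebook[dists.argmin(dim=1)]               # quantised queries

    # Straight-through estimator: decode q_j but route gradients through q.
    recon = decoder(q + (q_j - q).detach())
    nll = F.mse_loss(recon, x)                        # stand-in for -log p(x | q_j)

    codebook_term = F.mse_loss(q_j, q.detach())       # (q_j - [q])^2
    commit_term = F.mse_loss(q, q_j.detach())         # ([q_j] - q)^2
    return nll + codebook_term + beta * commit_term

# Toy usage with random tensors and an identity "decoder".
q = torch.randn(8, 16, requires_grad=True)
codebook = torch.randn(32, 16, requires_grad=True)
x = torch.randn(8, 16)
loss = vq_loss(q, codebook, x, decoder=lambda z: z)
loss.backward()
print(loss.item())
```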

Amazon Rekognition faces more scrutiny from Democrats and German antitrust probe

Melisha Dsouza
30 Nov 2018
4 min read
On Thursday, a group of seven House Democrats sent a letter to Amazon CEO Jeff Bezos demanding further details about AWS Rekognition, Amazon’s controversial facial recognition technology. This is the third letter and, like those before it, it raises concerns and questions about Rekognition’s accuracy and the possible effects it might have on citizens.

The very first letter was sent in late July. Amazon responded in August with a diplomatic yet unsatisfactory letter of its own that failed to provide much detail. A second letter was then sent in November; according to the congressmen, it was sent because “Amazon didn’t give sufficient answers” in their initial response. The initial inquiry was timed around an ACLU report that found Rekognition—software the company has sold to law enforcement and pitched for use by Immigration and Customs Enforcement—had incorrectly matched the faces of 28 members of Congress with individuals included in a database of mugshots. Amazon employees had also signed a June letter to senior management demanding that they cancel ongoing Rekognition contracts with law enforcement agencies.

Rep. Jimmy Gomez told BuzzFeed News in an interview, “If there’s a problem with this technology, it could have a profound impact on livelihoods and lives. There are no checks or balances on the tech that’s coming out- and this is in the hands of law enforcement.”

Written by Sen. Edward Markey and Reps. Gomez, Luis Gutiérrez, and Ro Khanna, among others, the letter reprimands Amazon for “[failing] to provide sufficient answers” to the previous two letters sent by the House Democrats. It also raises additional concerns based on “newly available information”: specifically, BuzzFeed News’s investigation into how the Orlando Police Department uses the tech, as well as a report that Amazon had actively marketed the tech to US Immigration and Customs Enforcement.

The House Democrats also wrote in today’s letter about their concern regarding Rekognition’s “accuracy issues.” They write that they “have serious concerns that this type of product has significant accuracy issues, places disproportionate burdens on communities of color”. There are further questions about whether Amazon will build privacy protections into its facial recognition system and how it will ensure the system is not abused for secret government surveillance.

As first reported by Gizmodo, AWS CEO Andy Jassy first addressed employee concerns at an all-hands meeting earlier this month. At that meeting, he cited the software’s Terms of Service as the core roadblock to potential abuses. At Amazon’s re:Invent conference, Jassy said that “Even though we haven’t had a reported abuse case, we’re very aware people will be able to do things with these services that could do harm.” Amazon continues to sell this potentially harmful software regardless. Lawmakers closed today’s letter with specific questions about the operation and bias of Rekognition, and they are giving Amazon a strict reply deadline of December 13. You can head over to BuzzFeed News to read the entire letter.

Germany adds an antitrust probe in addition to the EU’s scrutiny of Amazon
On Thursday, Germany’s antitrust agency said that it has begun an investigation of Amazon over complaints that it is abusing its position to the detriment of sellers who use its “marketplace” platform. “Because of the many complaints we have received we will examine whether Amazon is abusing its market position to the detriment of sellers active on its marketplace,” said agency head Andreas Mundt. “We will scrutinize its terms of business and practices toward sellers.”

This investigation adds to the EU’s scrutiny of the company’s information-gathering practices. Amazon’s “double role as the largest retailer and largest marketplace has the potential to hinder other sellers on its platform,” said the Federal Cartel Office. Amazon said it could not comment on ongoing proceedings but added that “We will cooperate fully with the Bundeskartellamt and continue working hard to support small and medium-sized businesses and help them grow”.

Related coverage:
AWS updates the face detection, analysis and recognition capabilities in Amazon Rekognition
AWS makes Amazon Rekognition, its image recognition AI, available for Asia-Pacific developers

FoundationDB open sources FoundationDB Document Layer with easy scaling, no sharding, consistency and more

Amrata Joshi
30 Nov 2018
6 min read
Yesterday, the team at FoundationDB announced that they are open sourcing the FoundationDB Document Layer, a document-oriented database that extends the core functionality of the FoundationDB key-value store, which stores all the persistent data. FoundationDB is a distributed database designed to handle large volumes of structured data across clusters of commodity servers; it organizes data in an ordered key-value store format.

The FoundationDB Document Layer is a stateless microserver backed by the scalable and transactional features of FoundationDB. It is released under an Apache v2 license. The Document Layer exposes a document database through the MongoDB API, allowing the use of existing MongoDB® client bindings. It implements a subset of the MongoDB API (v 3.0.0), focusing mainly on CRUD (Create, Read, Update, Delete) operations, indexes, and transactions, and it works with all official MongoDB drivers.

Key features
No sharding: the Document Layer does not rely on a fixed shard key for distributing data. Data partitioning and rebalancing are managed automatically by the key-value store. This feature is inherited from FoundationDB, which provides robust horizontal scalability and avoids client-level complexity.
Easy scaling: Document Layer instances are stateless and are configured only with the FoundationDB cluster where the data is stored. This stateless design means instances can sit behind a load balancer, so queries from any client, for any document, can be handled easily.
Safe defaults: write operations on the Document Layer execute with full isolation and atomicity by default. This consistency makes it easier to correctly implement applications that handle more than one simultaneous request.

Improvements and differences
No locking for conflicting writes: the Document Layer uses the concurrency model of the key-value store, so write operations do not take locks on the database. If two operations concurrently attempt to modify the same field of the same document, one of them will fail, and the client will retry the failed operation; most operations are automatically retried a configurable number of times.
Irrelevant commands removed: many database commands, including those related to sharding and replication, have been removed as they are not applicable to the Document Layer.
Multikey compound indexes: the Document Layer supports multikey compound indexes, which allow a document to have array values for more than one of the indexed fields.
MongoDB API compatible: because the Document Layer is compatible with the MongoDB protocol, simple applications built on MongoDB can do a lift-and-shift migration to the Document Layer. To connect an application to the Document Layer, one can use any existing MongoDB client.
Saves time: instead of logging all operations that take more than a certain amount of time, the Document Layer logs all operations that perform full collection scans on non-system collections.
Custom projections: the Document Layer supports custom projections of query results but does not support the projection operators; literal projection documents are used instead. It also does not support the $text or $where query operators.
Non-multikey indexes: if the indexed field on a document contains an array, then all indexes allow multiple entries for that document.
Auth/auth: the FoundationDB Document Layer does not support MongoDB authentication, auditing, role-based access control, or transport encryption.
sort parameter: the sort parameter has been disabled in the Document Layer.
$push and $pull operators: the $position modifier to the $push operator is not supported, and the $sort modifier to the $push operator is only available if the $each modifier and the $slice modifier are both used.
listDatabases command: listDatabases will always return a size of 1000000 bytes for a database that contains any data.
Nested $elemMatch predicates: a query document with two or more nested $elemMatch predicates may behave differently in the Document Layer.

Future scope
Currently, the Document Layer doesn’t allow the insertion or update of a document that generates more than 1,000 updates to a single index; in a future release this limit might become configurable. The format of the information returned by the $explain operation is not final and may change without prior warning in a future version. The Document Layer doesn’t support sessions yet, so they can be expected in a future release. Future releases may emulate the Oplog for migrating applications that directly examine it. The Document Layer does not support the deprecated BSON binary format or the MongoDB internal timestamp type, and it does not implement any geospatial query operators yet; they might appear in a future release. Support for tailable cursors and capped collections is also expected. The Document Layer does not yet support indexes that contain entries only for documents having the indexed field; this feature might be implemented in the future.

Users are excited about this update, but many are confused about how they would move off MongoDB, as licensing could be an issue. The Document Layer also still doesn’t support features such as the aggregation framework, and some note the lack of data typing and of a richer query engine, which could cause trouble for users. Another concern is its incompatibility with other APIs, for example DynamoDB. A further drawback is the layered approach, which consumes more bandwidth unless the transaction is read-only. A few users still have bitter feelings about 2015, when FoundationDB was acquired by Apple, and haven’t trusted the company since. It will be interesting to see what happens in the next release.

To know more about this news, check out the official post by FoundationDB.

Related coverage:
Introducing TigerGraph Cloud: A database as a service in the Cloud with AI and Machine Learning support
Introducing EuclidesDB, a multi-model machine learning feature database
ScyllaDB announces Scylla 3.0, a NoSQL database surpassing Apache Cassandra in features
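Because the Document Layer speaks the MongoDB wire protocol, trying it from Python looks just like talking to MongoDB. Below is a minimal sketch using PyMongo; the host and port are assumptions and should be pointed at your own Document Layer instance.

```python
# Sketch: connect to a FoundationDB Document Layer instance with an existing
# MongoDB client. Host/port below are assumed for illustration.
from pymongo import MongoClient

client = MongoClient("localhost", 27016)         # assumed Document Layer endpoint
db = client.app
db.users.insert_one({"name": "ada", "role": "admin"})
db.users.create_index("name")                     # secondary index
print(db.users.find_one({"name": "ada"}))
```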

Twitter adopts Apache Kafka as their Pub/Sub System

Bhagyashree R
30 Nov 2018
3 min read
Yesterday, Twitter shared that it is migrating from EventBus, an in-house pub/sub system built on top of Apache DistributedLog, to Apache Kafka. The main reasons behind the move were Kafka’s lower latency, better resource savings, and strong community support.

Apache Kafka is an open-source distributed stream-processing platform that provides a unified, high-throughput, low-latency way of handling real-time data feeds. It has seen broad adoption at big companies such as LinkedIn, Netflix, and Uber, making it the de facto real-time messaging system of choice in the industry.

Why did Twitter decide to move to Apache Kafka?
The Twitter team evaluated Kafka for several months under workloads similar to those run on EventBus, such as durable writes, tailing reads, catch-up reads, and high-fanout reads. They cited these reasons for moving to Kafka:

Lower latency
The evaluation showed that Kafka provides significantly lower latency regardless of the amount of throughput. Latency was measured as the timestamp difference between when a message was created and when the consumer read it. Kafka’s lower latency can be attributed to several factors:
In EventBus the serving and storage layers are decoupled, which introduces an additional hop; Kafka eliminates this because a single process handles both storage and request serving.
EventBus writes explicitly block on fsync() calls, while in Kafka the OS performs fsync() in the background.
Kafka supports zero-copy reads, which greatly improves performance by reducing the number of context switches between kernel and user mode.

Better resource savings
EventBus separates the serving and storage layers, which calls for additional hardware, while Kafka uses a single host to provide both. During the evaluation, the team saw a 68% resource saving for single-consumer use cases and 75% for multiple-consumer use cases. In addition, the latest versions of Kafka support data replication, providing durability guarantees.

Strong community support
Hundreds of developers contribute to the Kafka project, ensuring regular bug fixes, improvements, and new features, as opposed to only eight or so engineers working on EventBus/DistributedLog. Kafka already ships with features Twitter needs, such as a streaming library, an at-least-once HDFS pipeline, and exactly-once processing, which are not yet implemented in EventBus. The team will also be able to get help with any problems they encounter on either the client or server side and will have access to better documentation.

Check out the complete announcement on Twitter’s website.

Related coverage:
Apache Kafka 2.0.0 has just been released
Getting started with the Confluent Platform: Apache Kafka for enterprise
Working with Kafka Streams
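To illustrate the pub/sub model Twitter is adopting, here is a minimal producer/consumer sketch using the kafka-python client (Twitter’s internal tooling is not public); the broker address and topic name are made up.

```python
# Minimal Kafka pub/sub sketch with the kafka-python client.
from kafka import KafkaProducer, KafkaConsumer

producer = KafkaProducer(bootstrap_servers="localhost:9092")
producer.send("tweets", b'{"id": 1, "text": "hello kafka"}')
producer.flush()

consumer = KafkaConsumer(
    "tweets",
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",
    consumer_timeout_ms=5000,          # stop iterating if no new messages arrive
)
for message in consumer:
    print(message.offset, message.value)
```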

Introducing AWS DeepRacer, a self-driving race car, and Amazon’s autonomous racing league to help developers learn reinforcement learning in a fun way

Amrata Joshi
29 Nov 2018
4 min read
Yesterday, at the AWS re:Invent conference, Andy Jassy, CEO of Amazon Web Services, introduced AWS DeepRacer and announced a global autonomous AWS DeepRacer racing league.

AWS DeepRacer
AWS DeepRacer is a 1/18th-scale, radio-controlled, self-driving four-wheel race car designed to help developers learn about reinforcement learning. It features a 4-megapixel camera with 1080p resolution, an Intel Atom processor, multiple USB ports, and a 2-hour battery, and it comes with 4 GB of RAM and 32 GB of expandable storage. The compute battery is a 13600 mAh USB-C PD unit, and the car is equipped with an accelerometer and a gyroscope.

The console, simulator, and car together make a great combination for experimenting with RL algorithms and generalization methods. DeepRacer includes a fully configured cloud environment that users can use to train reinforcement learning models. The car uses its camera to view the track and a reinforcement learning model to control throttle and steering.

AWS DeepRacer is integrated with Amazon SageMaker to take advantage of its new reinforcement learning support and with AWS RoboMaker to provide a 3D simulation environment. It is also integrated with Amazon Kinesis Video Streams for streaming video of the virtual simulation footage and with Amazon S3 for model storage, and it supports Amazon CloudWatch for log capture.

AWS DeepRacer League
The AWS DeepRacer League gives users the opportunity to compete in a global racing championship, advance to the AWS DeepRacer Championship Cup at re:Invent 2019, and possibly win the AWS DeepRacer Cup. The league has two categories: live events and virtual events.

Live events
Developers can compete by submitting already built or new reinforcement learning models to the virtual leaderboard for a Summit. The top ten champions will then compete in live races on the track using AWS DeepRacer, and the Summit winners and top performers across the races will qualify for the AWS DeepRacer Championship Cup. The AWS DeepRacer League will launch at AWS Summit locations around the world, including Tokyo, London, Sydney, Singapore, and New York, in early 2019.

Virtual events
Developers can build RL models and compete online using the AWS DeepRacer console. The virtual races will take place on challenging tracks in the 3D racing simulator.

What is in store for developers?
Learn reinforcement learning in a new way: AWS DeepRacer helps developers get started with reinforcement learning through hands-on tutorials for training RL models and testing them in a fun way, with the car-racing experience.
Get started quickly, anywhere: with the AWS DeepRacer console and the 3D racing simulator, one can start training a model on the virtual track in minutes, regardless of place or time.
Idea sharing: the DeepRacer League gives developers a platform to meet fellow machine learning enthusiasts, online and in person, to share ideas and insights, along with a chance to compete, win prizes, and learn about reinforcement learning via workshops.
No need to manually set up a software environment: the 3D racing simulator and car provide an ideal environment to test the latest reinforcement learning algorithms. With DeepRacer, developers don’t have to manually set up a software environment or simulator, or configure a training environment.

Public reaction to AWS DeepRacer is mostly positive; however, a few people have doubts. Concerns range from CPU time and the SageMaker requirement to shipping-related queries.
https://twitter.com/emurmur77/status/1067955546089607168
https://twitter.com/heri/status/1067927044418203648
https://twitter.com/mnbernstein/status/1067846826571706368

To know more about this news, check out Amazon’s official blog.

Related coverage:
Amazon launches AWS Ground Station, a fully managed service for controlling satellite communications, downlink and processing satellite data
Amazon re:Invent announces Amazon DynamoDB Transactions, CloudWatch Logs Insights and cloud security conference, Amazon re:Inforce 2019
Learning To Dress: Synthesizing Human Dressing Motion via Deep Reinforcement Learning
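Training in the DeepRacer console revolves around writing a reward function in Python that scores the car’s behaviour at each step. The sketch below is illustrative only; the parameter names follow commonly cited examples and should be checked against the current DeepRacer documentation.

```python
# Sketch of a DeepRacer-style reward function: reward staying close to the
# centre line while keeping all wheels on the track. Parameter names are
# assumed for illustration.
def reward_function(params):
    if not params["all_wheels_on_track"]:
        return 1e-3                                  # near-zero reward off track

    track_width = params["track_width"]
    distance_from_center = params["distance_from_center"]

    # Full reward at the centre, decaying towards the track edges.
    margin = 0.5 * track_width
    reward = max(1e-3, 1.0 - distance_from_center / margin)
    return float(reward)
```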

Amazon launches AWS Ground Station, a fully managed service for controlling satellite communications, downlink and processing satellite data

Amrata Joshi
28 Nov 2018
4 min read
Yesterday, Amazon introduced AWS Ground Station, a fully managed service for controlling satellite communications, downlinking and processing satellite data, and scaling satellite operations quickly and cost-effectively.

Ground stations are at the core of satellite networks; they provide communication between the ground and the satellite. Amazon currently has a pair of ground stations and will have 12 in operation by mid-2019. Each ground station is associated with a particular AWS Region. Instead of building a ground station or entering into a long-term contract, one can use AWS Ground Station on an as-needed, pay-as-you-go basis, gaining regular access to a ground station for capturing Earth observations or distributing content worldwide at low cost, without building or maintaining antennas.

How does AWS Ground Station work?
The raw analog data from the satellite is processed by Amazon’s modem digitizer into a data stream, which is then routed to an EC2 instance responsible for processing the signal and turning it into a byte stream. Once the data is in digital form, streaming, processing, analytics, and storage options become available.

Streaming
Amazon Kinesis Data Streams (KDS), a massively scalable and durable real-time data streaming service, is used for capturing, processing, and storing data streams. KDS continuously captures gigabytes of data per second from thousands of sources such as website clickstreams, financial transactions, database event streams, social media feeds, location-tracking events, and IT logs. The collected data becomes available in milliseconds, enabling real-time analytics use cases such as real-time anomaly detection, real-time dashboards, and dynamic pricing.

Processing
Amazon Rekognition, based on highly scalable deep learning technology developed by Amazon’s computer vision scientists, is used to analyze billions of images and videos daily. No machine learning expertise is required to use it: it is a simple, easy-to-use API that can quickly analyze any image or video file stored in Amazon S3, and it provides highly accurate facial analysis and facial recognition on images and videos.

Build, train, and deploy ML models
Amazon SageMaker, a fully managed platform, helps developers and data scientists build, train, and deploy machine learning models at any scale, removing much of the complexity for developers.

Analytics and reporting
Amazon Redshift, a fast and scalable data warehouse, makes it simple and cost-effective to analyze all the data across the data warehouse and data lake. It delivers ten times faster performance than other data warehouses by using machine learning, and it is easy to set up and deploy a new data warehouse in minutes and run queries across petabytes of data in the Redshift data warehouse.

Storage
Amazon Simple Storage Service (Amazon S3), an object storage service, offers industry-leading scalability, security, data availability, and performance. Industries can use it to store and protect any amount of data for a range of use cases, such as mobile applications, websites, archiving, backup and restore, and enterprise applications. Amazon S3 Glacier, a secure, durable, and extremely low-cost cloud storage service, is also useful for data archiving and long-term backup.

Though the idea of AWS Ground Station sounds very interesting, the cost is still a question: users pay per minute of downlink time, which is expensive, so the low-cost claim does not entirely hold up. Observations might also not be that accurate, as orbit determination requires control of the antenna. It will be difficult to convince those who would still rather build their own ground station than rely on a third party.

To know more about this news, check out Amazon’s official blog.

Related coverage:
Amazon re:Invent announces Amazon DynamoDB Transactions, CloudWatch Logs Insights and cloud security conference, Amazon re:Inforce 2019
3 announcements about Amazon S3 from re:Invent 2018: Intelligent-Tiering, Object Lock, and Batch Operations
Amazon FreeRTOS adds a new ‘Bluetooth low energy support’ feature
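Once the byte stream exists, the streaming step described above is ordinary Kinesis usage. Here is a hedged boto3 sketch; the stream name, region, shard ID, and payload are assumptions for illustration, and AWS credentials are required.

```python
# Sketch: push decoded downlink frames into Kinesis Data Streams and read them back.
import boto3

kinesis = boto3.client("kinesis", region_name="us-east-2")

# Producer: write one decoded downlink frame to the stream.
kinesis.put_record(
    StreamName="satellite-downlink",
    Data=b"\x01\x02\x03 decoded frame bytes",
    PartitionKey="satellite-42",
)

# Consumer: read whatever is available from the first shard.
shard_it = kinesis.get_shard_iterator(
    StreamName="satellite-downlink",
    ShardId="shardId-000000000000",
    ShardIteratorType="TRIM_HORIZON",
)["ShardIterator"]
for record in kinesis.get_records(ShardIterator=shard_it)["Records"]:
    print(record["SequenceNumber"], len(record["Data"]), "bytes")
```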

Amazon confirms plan to sell a HIPAA eligible software, Amazon Comprehend Medical, which will mine medical records of the patients

Amrata Joshi
28 Nov 2018
5 min read
Yesterday, at the AWS re:Invent conference, Amazon announced the HIPAA (Health Insurance Portability and Accountability Act) eligible software, Amazon Comprehend Medical. Amazon Comprehend Medical is a natural language processing service that uses machine learning to extract relevant medical records of patients from unstructured text. With this software, one can gather information, such as medical condition, medication, strength, dosage, and frequency from a variety of sources like clinical trial reports, doctors’ notes, and patient health records. This extracted medical information can be used to build applications for clinical decision support, revenue cycle management, and clinical trial management. Comprehend Medical follows the ‘pay for how much you use strategy’ and doesn’t charge any minimum fees. Also, the developers need to only provide unstructured medical text to Comprehend Medical and they don’t have to deal with servers to manage them. It also identifies protected health information (PHI), such as name, age, and medical record number. One can use this information to create applications that securely process, maintain, and transmit PHI. Benefits of Amazon Comprehend Medical Machine learning helps in accurately identifying the medical information Amazon Comprehend Medical uses advanced machine learning models for accurately identifying the medical information, such as medical conditions and medications. It also identifies their relationship to each other, for instance, prescribes the medicine dosage and strength for a better cure. One can access Amazon Comprehend Medical easily through a simple API call, without the need for machine learning expertise, complicated rules, and training models. Secures patient’s data Amazon Comprehend Medical identifies protected health information (PHI) stored in medical record systems while keeping up to the standards for General Data Protection Regulation (GDPR). Comprehend Medical helps the developers to implement data privacy and security solutions by extracting relevant patient identifiers as per HIPAA’s Safe Harbor method of de-identification. Also, it does not store or save any customer data,so users need not worry. Lowers medical document processing costs Comprehend Medical automates and lowers the cost for coding the unstructured medical text from patient records, billing, and clinical indexing. It also offers two APIs that developers can integrate into the existing workflows and applications with only a few lines of code. This would cost less than a penny for every 100 characters of analyzed text. A reinvention of cancer care The team at Amazon is also working with Seattle’s own Fred Hutchinson Cancer Research Center to support their goals to eradicate cancer. Amazon Comprehend Medical helps in identifying patients for clinical trials who may benefit from specific cancer therapies. “Amazon Comprehend Medical will reduce this time burden from hours per record to seconds. This is a vital step toward getting researchers rapid access to the information they need when they need it so they can find actionable insights to advance lifesaving therapies for patients,” said Matthew Trunnell, Chief Information Officer, Fred Hutchinson Cancer Research Center. What about Privacy then? This looks like a good move by Amazon but people are questioning the technology used by Amazon. Why is only ML/NLP used to analyze data? There is a lot of unstructured data available in EMRs including pharmacy, lab, eMAR, etc, well what about them? 
What about Privacy then?

This looks like a good move by Amazon, but people are questioning the approach. Why rely only on ML/NLP text analysis? Electronic medical records already hold a wealth of discrete data (pharmacy, lab, eMAR, and so on), so what about that?

Amazon's assurances have not been enough to convince everyone. A user commented on Hacker News, "I work in the tech healthcare industry. I wonder why they only went with (or focused on?) ML/NLP text analysis to analyze data. There is a wealth of discrete data available in EMRs (pharmacy, lab, eMAR, etc.). Yes, there is plenty of diagnosis text but that is almost always associated with ICD10 codes. The only area where I believe text analysis would be useful is documentation and microbiology data, and in many cases micro is discrete as well."

Although Amazon Comprehend Medical meets GDPR standards, people remain skeptical about the possibility of patient data being misused. Just five months ago, Amazon acquired PillPack, an online pharmacy, for $1 billion. Are hospitals next? If so, patient data could end up being traded away for a few more billions, and Amazon could also use patients' medical data for its own advertising. Users also upload their health records to Amazon's cloud service and run the analysis there, with the results returned in a spreadsheet format, so any successful security attack carries the risk of a data breach.

It is also worth noting that Amazon Comprehend Medical is HIPAA eligible, not HIPAA compliant: its output can be inaccurate and might not always meet HIPAA's requirements for de-identification of protected health information.

https://twitter.com/DrSafeSpineCare/status/1067523439152562177

To know more about this news, check Amazon's official blog post.

Amazon rolls out AWS Amplify Console, a deployment and hosting service for mobile web apps, at re:Invent 2018
3 announcements about Amazon S3 from re:Invent 2018: Intelligent-Tiering, Object Lock, and Batch Operations
Amazon introduces Firecracker: Lightweight Virtualization for Running Multi-Tenant Container Workloads
Read more
  • 0
  • 0
  • 3133

article-image-twitter-blocks-predictim-an-online-babysitter-rating-service-for-violating-its-user-privacy-policies-facebook-may-soon-follow-suit
Natasha Mathur
28 Nov 2018
3 min read
Save for later

Twitter blocks Predictim, an online babysitter-rating service, for violating its user privacy policies; Facebook may soon follow suit

Natasha Mathur
28 Nov 2018
3 min read
Yesterday, Twitter and Facebook accused Predictim of violating their policies on user surveillance and privacy. Predictim is an online service that produces an overall risk score for a babysitter by scanning their social media profiles (Facebook, Twitter, Instagram, etc.) with language-processing algorithms.

Predictim's algorithms analyze "billions" of data points going back years in a person's online profile. Within minutes, the service delivers an evaluation of a babysitter's predicted traits, behaviors, and areas of compatibility based on their digital history, using language-processing algorithms and computer vision to mine Facebook, Twitter, and Instagram posts for clues about their personal life.

Facebook discovered Predictim's activities on its platform earlier this month and revoked most of Predictim's access to users, as first reported by the BBC. Facebook is now considering blocking the firm entirely after finding that Predictim was still scraping public Facebook data to power its algorithms. "Scraping people's information on Facebook is against our terms of service. We will be investigating Predictim for violations of our terms, including to see if they are engaging in scraping," a Facebook spokeswoman told the BBC.

Twitter, for its part, told the BBC that it "recently" decided to block Predictim's access to its users, adding that it strictly prohibits companies from using its data and APIs for surveillance or background checks.

Predictim responded that Twitter and Facebook already mine the same data themselves and have "ganged up" on it because they gain nothing from its service, and that it is simply trying to "take advantage of that data to help parents pick a better babysitter and make a little money in the process".

Moreover, Drew Harwell, a reporter at the Washington Post, pointed out that Predictim appears to run afoul of bans on employers demanding that job applicants verify or hand over access to their personal social media profiles; such demands seem to violate the law in 26 US states, according to data from the National Conference of State Legislatures.

https://twitter.com/drewharwell/status/1067557804381298688

However, Predictim's CEO, Sal Parsa, maintains that the service is "perfectly legal" because it does not "demand" access to babysitters' social media; it merely requires it for the most complete and accurate results.

Predictim has already drawn heavy criticism, both because it is prone to biases about how an ideal babysitter should behave, look, or post online, and because its personality-scan results are often inaccurate, leading the software to misread a person's character from their social media use (a toy illustration of this pitfall appears at the end of this piece). The firm, for its part, insists that Predictim is not designed to be used to make hiring decisions.

"Kids have inside jokes. They're notoriously sarcastic. Something that could sound like a 'bad attitude' to the algorithm could sound to someone else like a political statement or valid criticism," Jamie Williams of the Electronic Frontier Foundation told the BBC.

For more information, check out the official story by the BBC.
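To make that criticism concrete, here is a deliberately naive, hypothetical sketch of keyword-based text scoring. It is not Predictim's algorithm, which is proprietary and has not been published; the word list and sample posts are invented purely to show how a sarcastic remark can be flagged exactly like a literal one.

```python
# NOT Predictim's method; a toy illustration of why naive text scoring
# misreads sarcasm and in-jokes.
RISKY_WORDS = {"hate", "kill", "drunk", "fight"}  # assumed word list

def naive_risk_score(posts):
    """Return the fraction of posts containing at least one 'risky' word."""
    if not posts:
        return 0.0
    flagged = sum(1 for post in posts if set(post.lower().split()) & RISKY_WORDS)
    return flagged / len(posts)

posts = [
    "I could kill for a nap right now",         # sarcasm, flagged anyway
    "Watching cartoons with the kids all day",  # harmless, not flagged
]
print(naive_risk_score(posts))  # 0.5, half the posts counted as "risky"
```

A real system would use far more sophisticated models, but the underlying concern raised by the EFF stands: without context, text that reads as a joke to a person can still move an automated score.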
Facebook AI researchers investigate how AI agents can develop their own conceptual shared language
Twitter CEO, Jack Dorsey slammed by users after a photo of him holding 'smash Brahminical patriarchy' poster went viral
Twitter plans to disable the 'like' button to promote healthy conversations; should retweet be removed instead?
Read more
  • 0
  • 0
  • 1776