
Tech News - Data

1208 Articles

SQLite 3.25.0 is out with better query optimizer and support for window functions

Savia Lobo
17 Sep 2018
3 min read
After the SQLite community released a draft last month of what users could expect in the next release, it has now announced the release of SQLite 3.25.0. The headline changes are support for window functions and improvements to the query optimizer. Let's take a closer look at the new features in this version.

Features in SQLite 3.25.0

- Support for window functions has been added in this release.
- The ALTER TABLE command has been enhanced:
  - Columns within a table can now be renamed using ALTER TABLE table RENAME COLUMN oldname TO newname.
  - The table rename feature has been fixed so that it also updates references to the renamed table in triggers and views.
- Improvements to the query optimizer include:
  - Avoiding unnecessary loads of columns in an aggregate query that are not within an aggregate function and are not part of the GROUP BY clause.
  - The IN-early-out optimization: when doing a look-up on a multi-column index and an IN operator is used on a column other than the left-most column, then if no rows match the first IN value, check that rows matching the columns to the right exist before continuing with the next IN value.
  - Using the transitive property to propagate constant values within the WHERE clause; for example, "a=99 AND b=a" is converted into "a=99 AND b=99".
- The unix VFS now uses a separate mutex on every inode, rather than a single mutex shared among them all, for slightly better concurrency in multi-threaded environments.
- The PRAGMA integrity_check command has been enhanced to better detect problems on the page freelist.
- The ".dump" command of the command-line shell now outputs infinity as 1e999.
- A SQLITE_FCNTL_DATA_VERSION file-control has been added.
- A Geopoly module has been added.

Bug fixes in SQLite 3.25.0

The August draft had listed fixes for two tickets; the final release fixes four:

- ORDER BY LIMIT optimization: under very obscure circumstances, a confluence of minor defects in the query optimizer could cause the ORDER BY LIMIT optimization to produce an infinite loop in the byte code of a prepared statement.
- Rearranged constraint checks: on an UPSERT, when the order of constraint checks is rearranged, the affinity transformations on the inserted content now occur before any of the constraint checks.
- ".stats on" command: the CLI no longer uses a prepared statement for the ".stats on" command after it has been closed by the ".eqp full" logic.
- LIKE optimization: the LIKE optimization was generating incorrect byte code, and hence the wrong answer, if the left-hand operand has numeric affinity and the right-hand-side pattern is '/%' or begins with the ESCAPE character.

For more details, visit the SQLite 3.25.0 release log. A short example of the new window-function and ALTER TABLE features follows after the related reads below.

Related reads:
- How to use SQLite with Ionic to store data?
- Introduction to SQL and SQLite
- Introducing Watermelon DB: A new relational database to make your React and React Native apps highly scalable
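The snippet below is a minimal sketch of the two user-facing additions mentioned above: renaming a column with ALTER TABLE ... RENAME COLUMN and running a simple window function. It uses Python's built-in sqlite3 module and assumes the underlying SQLite library is version 3.25.0 or newer; the table and data are invented for the example.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
print("SQLite version:", sqlite3.sqlite_version)  # needs >= 3.25.0 for the features below

conn.execute("CREATE TABLE sales (region TEXT, amt INTEGER)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("north", 10), ("north", 30), ("south", 20), ("south", 5)])

# New in 3.25.0: rename a column in place.
conn.execute("ALTER TABLE sales RENAME COLUMN amt TO amount")

# New in 3.25.0: window functions, here a running total per region.
rows = conn.execute("""
    SELECT region, amount,
           SUM(amount) OVER (PARTITION BY region ORDER BY amount) AS running_total
    FROM sales
""").fetchall()
for row in rows:
    print(row)
```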


Mary Meeker, one of the premier Silicon Valley investors, quits Kleiner Perkins to start her own firm

Sugandha Lahoti
17 Sep 2018
2 min read
Mary Meeker is leaving Kleiner Perkins and starting a new investment fund along with three other partners. Joining Meeker in her new fund are Kleiner Perkins' investors Mood Rowghani and Noah Knauf, as well as partner Juliet de Baubigny.

In her latest stint, as a venture capitalist at Kleiner Perkins since 2010, Meeker led the firm's investments in more mature start-ups and yielded several successful bets by putting money into Facebook, Twitter, Spotify, and Snap when the companies were further along. She is popularly known as the Queen of the Internet for her coverage of internet stocks, and she delivers an annual internet trends report considered one of the most prestigious and thorough in the technology industry. She also remains one of the few women to earn a general partner title at Kleiner Perkins; the venture capital industry is generally known for not awarding women its highest roles. In 2014, she was listed as the 77th most powerful woman in the world by Forbes.

Her exit from Kleiner Perkins will undoubtedly be a huge blow to the venture capital firm, given that Meeker is by far the most senior woman in venture capital and has a high stature in the business community.

It seems Kleiner Perkins is changing its tactics. In 2016, legendary investor John Doerr stepped down as managing partner, replaced by Ted Schlein. The departures of Meeker, Rowghani, and Knauf add further support to that claim. Talking about Meeker at the time of her hiring, Schlein said, "There is only one Mary Meeker." He now says he is "not naive" about what it will be like to lose her. Kleiner Perkins will have to move on without its most famous name.

Related reads:
- Python founder resigns – Guido van Rossum goes ‘on a permanent vacation from being BDFL’
- Anima Anandkumar, the machine learning guru behind AWS bids adieu to AWS
- Dr. Fei Fei Li, Google’s AI Cloud head steps down amidst speculations; Dr. Andrew Moore to take her place


Facebook’s Glow, a machine learning compiler, to be supported by Intel, Qualcomm and others

Bhagyashree R
14 Sep 2018
3 min read
Yesterday, Facebook announced that Cadence, Esperanto, Intel, Marvell, and Qualcomm Technologies Inc. have committed to supporting its Glow compiler in future silicon products. With this partnership, Facebook aims to build a hardware ecosystem for machine learning. With Glow, its partners will be able to rapidly design and optimize new silicon products for AI and ML and help Facebook scale its platform. Facebook is also planning to expand this ecosystem by adding more partners in 2018.

What is Glow?

Glow is a machine learning compiler used to speed up the performance of deep learning frameworks on different hardware platforms. The name "Glow" comes from Graph-Lowering, the main technique the compiler uses to generate efficient code. The compiler is designed to allow state-of-the-art compiler optimizations and code generation for neural network graphs. With Glow, hardware developers and researchers can focus on building next-generation hardware accelerators that can be supported by deep learning frameworks like PyTorch. Hardware accelerators for ML solve a range of distinct problems: some focus on inference, while others focus on training.

How does it work?

Glow accepts a computation graph from deep learning frameworks such as PyTorch and TensorFlow and generates highly optimized code for machine learning accelerators. To do so, it lowers the traditional neural network dataflow graph into a two-phase, strongly typed intermediate representation (diagram source: Facebook):

- The high-level intermediate representation allows the optimizer to perform domain-specific optimizations.
- The lower-level intermediate representation, an instruction-based, address-only representation, allows the compiler to perform memory-related optimizations such as instruction scheduling, static memory allocation, and copy elimination.

The optimizer then performs machine-specific code generation to take advantage of specialized hardware features. Glow supports a large number of input operators as well as a large number of hardware targets with the help of its lowering phase, which eliminates the need to implement all operators on all targets. The lowering phase reduces the input space and allows new hardware backends to focus on a small number of linear algebra primitives; a toy sketch of this idea follows after the related reads below.

You can read more about Facebook's goals for Glow in its official announcement. If you are interested in knowing how it works in more detail, check out the research paper and the GitHub repository.

Related reads:
- Facebook launches LogDevice: An open source distributed data store designed for logs
- Google’s new What-if tool to analyze Machine Learning models and assess fairness without any coding
- Facebook introduces Rosetta, a scalable OCR system that understands text on images using Faster-RCNN and CNN
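Glow itself is a C++ project and the description above is conceptual, but the core idea of "graph lowering" can be illustrated with a toy sketch: a high-level node (a fully connected layer) is rewritten into a small set of linear-algebra primitives that a minimal backend knows how to execute. This is an invented illustration of the technique, not Glow's actual API.

```python
import numpy as np

# A toy "high-level IR": one node type a framework might emit.
def fully_connected(x, w, b):
    return {"op": "FullyConnected", "inputs": [x, w, b]}

# "Lowering": rewrite the high-level op into primitives a minimal backend supports.
def lower(node):
    x, w, b = node["inputs"]
    matmul = {"op": "MatMul", "inputs": [x, w]}
    return {"op": "Add", "inputs": [matmul, b]}

# A minimal backend that only knows the low-level primitives.
def execute(node):
    if isinstance(node, np.ndarray):
        return node
    args = [execute(i) for i in node["inputs"]]
    if node["op"] == "MatMul":
        return args[0] @ args[1]
    if node["op"] == "Add":
        return args[0] + args[1]
    raise NotImplementedError(node["op"])

x = np.ones((2, 3)); w = np.ones((3, 4)); b = np.zeros(4)
graph = fully_connected(x, w, b)
print(execute(lower(graph)))   # the backend never needs a FullyConnected kernel
```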


YouTube starts testing AV1 video codec format, launches AV1 Beta Playlist

Melisha Dsouza
14 Sep 2018
2 min read
Yesterday, YouTube began transcoding videos into the AV1 video codec and created an AV1 Beta Launch Playlist to test it.

Why does the AV1 format matter?

The transcoding aims to significantly reduce video stream bandwidth without loss in quality, exceeding the compression standards set even by HEVC. Google was opposed to using HEVC due to its high royalty costs; to combat this, the company developed its own VP9 format in 2012 for HD and 4K HDR video, which saw limited uptake outside of Google's own properties. AV1 is intended to replace both HEVC and VP9.

The AV1 initiative was announced in 2015, when internet giants like Amazon, Apple, Google, Facebook, Microsoft, Mozilla, Netflix, and several others joined forces to develop a 'next gen' video format. Besides better compression compared to VP9 (and HEVC), AV1 has a royalty-free license. This could lead to operating-cost savings for YouTube and other video streaming services. Since video streaming contributes a massive chunk of total internet traffic, even a small improvement in compression can have massive effects on the network as well as on user experience. AV1 also provides an architecture for both moving and still images. More widespread support and adoption of AV1 is projected for 2020. (Image source: Flatpanelshd)

YouTube users of the new AV1 format will not notice a reduction in their data consumption just yet, because the first batch of videos has been encoded at a very high bitrate to test performance. Future playlists could, however, test the codec's other, more important aspect: data savings. To watch the videos in AV1, users will have to use Chrome 70 or Firefox 63, both recently updated to support AV1. YouTube also mentions that AV1 videos are currently available in 480p SD only, switching to VP9 for higher resolutions.

Head over to YouTube's official site for more coverage on the news.

Related reads:
- YouTube’s Polymer redesign doesn’t like Firefox and Edge browsers
- YouTube’s CBO speaks out against Article 13 of EU’s controversial copyright law
- YouTube has a $25 million plan to counter fake news and misinformation


Swarm AI that enables swarms of radiologists, outperforms specialists or AI alone in predicting Pneumonia

Natasha Mathur
14 Sep 2018
4 min read
According to a study conducted by researchers at Stanford University School of Medicine and Unanimous AI, small groups of radiologists moderated by AI algorithms achieve higher diagnostic accuracy than individual radiologists or machine learning algorithms alone. The technology used for the research is called Swarm AI, a swarm intelligence technology by Unanimous AI that empowers networked groups of humans by combining their individual insights in real time with the help of AI algorithms to converge on optimal solutions. The research paper was presented earlier this week at the SIIM Conference on Machine Intelligence in Medical Imaging.

How does Swarm AI work?

The researchers performed the study with a group of eight radiologists at different locations, connected by Swarm AI algorithms. The radiologists reviewed a set of 50 chest X-rays and, for each X-ray, predicted the likelihood that the patient has pneumonia. After a few seconds of individually assessing each chest X-ray, the group worked together as a "swarm", converging on a probabilistic diagnosis of the likelihood of a patient having pneumonia. This generated a set of 50 probabilities for the 50 test cases.

Separately, the same set of 50 chest X-rays was run through the CheXNet software algorithm, a state-of-the-art 121-layer convolutional neural network that last year beat human radiologists at predicting which patients were suffering from pneumonia. CheXNet has been shown to outperform individual human radiologists in pneumonia screening tasks in prior studies. These two sets of probabilities were then compared using different statistical techniques.

Results

The performance of the Swarm AI system involving a small group of human radiologists was evaluated against the software-only CheXNet system across three performance metrics: binary classification accuracy, Mean Absolute Error (MAE), and ROC analysis (a toy illustration of these metrics follows below).

- Binary classification: Fifty percent was set as the cutoff probability for classifying a positive diagnosis. The CheXNet system achieved 60% diagnostic accuracy across the 50 test cases, while the Swarm AI system achieved 82% accuracy across the same 50 cases. The Swarm AI system was significantly more accurate in binary classification than the ML system (p<0.01, μdifference = 21.9%).
- Mean Absolute Error: MAE is the absolute value of the ground truth (the classifications the machine learning algorithm makes, checked against what is known in reality) minus the predicted probability. A bootstrap analysis of the MAE revealed that the Swarm AI system had significantly higher probabilistic accuracy than the ML system (p<0.001, μdifference = 21.6%).
- ROC analysis: Because the Swarm AI system and the CheXNet system take different approaches to probabilistic forecasting, a ROC (receiver operating characteristic) analysis was performed, comparing the true positive rate to the false positive rate across different cut-off points; the higher the curve, the better the classification. The Area Under the ROC Curve (AUROC) was measured for both methods: the Swarm AI system achieved an AUROC of 0.906, while the ML system achieved 0.708.

Overall, the Swarm AI system produced far more accurate results in the diagnosis of pneumonia than a state-of-the-art ML system like CheXNet.
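As a rough illustration of the three metrics used in the comparison, the sketch below computes binary accuracy, mean absolute error, and AUROC for a small set of made-up probabilistic predictions. The numbers are invented for demonstration and this is not the study's evaluation code; it assumes scikit-learn is available.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical data: 1 = pneumonia present, 0 = absent, plus predicted probabilities.
ground_truth = np.array([1, 0, 1, 1, 0, 0, 1, 0])
predicted_prob = np.array([0.9, 0.2, 0.6, 0.4, 0.1, 0.55, 0.8, 0.3])

# Binary classification accuracy with a 50% cutoff, as in the study.
predicted_label = (predicted_prob >= 0.5).astype(int)
accuracy = (predicted_label == ground_truth).mean()

# Mean Absolute Error between ground truth and predicted probability.
mae = np.abs(ground_truth - predicted_prob).mean()

# Area under the ROC curve (true positive rate vs. false positive rate).
auroc = roc_auc_score(ground_truth, predicted_prob)

print(f"accuracy={accuracy:.2f}  MAE={mae:.2f}  AUROC={auroc:.3f}")
```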
“Diagnosing pathologies like pneumonia from chest X-rays is extremely difficult, making it an ideal target for AI technologies. The results of this study are very exciting as they point towards a future where doctors and AI algorithms can work together in real-time, rather than human practitioners being replaced by automated algorithms,” says Dr. Matthew Lungren, Assistant Professor of Radiology at Stanford University, on the Unanimous AI blog.

This suggests that swarm algorithms are a powerful tool for establishing ground truth for training as well as for validating machine learning systems. “It is likely that the Swarm AI system excels in certain types of cases, while the ML system excels in others. We believe future research should identify these differences, so each method can be applied to those cases which are most appropriate. Additional research is warranted using more definitive Ground Truth and a wider range of cases,” write the researchers in the paper.

For more information, check out the official research paper.

Related reads:
- MIT’s Transparency by Design Network: A high performant model that uses visual reasoning for machine interpretability
- A new Video-to-Video Synthesis model uses Artificial Intelligence to create photorealistic videos
- Survey reveals how artificial intelligence is impacting developers across the tech landscape


Microsoft acquires AI startup Lobe, a no code visual interface tool to build deep learning models easily

Natasha Mathur
14 Sep 2018
4 min read
Microsoft announced yesterday that it has acquired Lobe, a small San Francisco-based AI startup. Lobe is a visual interface tool that allows people to easily create intelligent apps capable of understanding hand gestures, hearing music, reading handwriting, and more, without any coding involved. Lobe is aimed at making deep learning simple, understandable, and accessible to everyone: with its simple visual interface, anyone can develop deep learning and AI models quickly, without having to write any code.

A look at Lobe's features

Drag, drop, learn

Lobe lets you build custom deep learning models, train them, and ship them directly in your app without any coding required. You start by dragging in a folder of training examples from your desktop, which builds a custom deep learning model and begins its training. Once you're done, you can export the trained model and ship it directly in your app.

Connect together smart lobes

Lobe is built from smart building blocks called lobes, which can be connected together to quickly create custom deep learning models. For instance, you can connect the Hand & Face lobe to find the most prominent hand in an image, then connect the Detect Features lobe to find the important features in the hand, and finally connect the Generate Labels lobe to predict the emoji in the image. You can also refine your model by adjusting each lobe's unique settings or by editing any lobe's sub-layers.

Explore your dataset visually

Lobe displays your entire dataset visually, which helps you browse and sort through all your examples. Select any icon to see how that example performs in your model. Your dataset is automatically split into a Lesson, which teaches your model during training, and a Test set, used to evaluate how your model will perform in the real world on examples it has never seen before.

Real-time training results

Lobe comes with fast cloud training that provides real-time results without slowing down your computer. Interactive charts help you monitor the accuracy of your model and understand how it improves over time. The best accuracy is then automatically selected and saved.

Advanced control over every layer

Lobe is built on top of the deep learning frameworks TensorFlow and Keras, which lets you control every layer of your model. With Lobe, you can tune hyperparameters, add layers, and design new architectures with the help of hundreds of advanced building-block lobes.

Ship it in your application

After training, your model can be exported to TensorFlow or CoreML and run directly in your app (a hedged loading sketch follows after the related reads below). There's also an easy-to-use Lobe Developer API, which lets you host your model in the cloud and integrate it into your app.

What could Microsoft's plans be with this acquisition?

This is not the first AI startup acquired by Microsoft. Microsoft also acquired Bonsai.ai, a deep reinforcement learning platform, in July to build machine learning models for autonomous systems of all kinds. Similarly, Microsoft acquired Semantic Machines this May to build a conversational AI center of excellence in Berkeley to advance the state of conversational AI. “Over the last few months, we’ve made multiple investments in companies to further this (expanding its growth in AI) goal. These are just two recent examples of investments we have made to help us accelerate the current state of AI development,” says Kevin Scott, EVP and CTO at Microsoft, in yesterday’s announcement on the official Microsoft blog.

It looks like Microsoft is set on bringing more AI capabilities to its users. In fact, major tech firms around the world are walking the same path and acquiring as many AI companies as they can: Amazon acquired AI cybersecurity startup Sqrrl, Facebook acquired Bloomsbury AI, and Intel acquired Vertex.ai earlier this year. “In many ways though, we’re only just beginning to tap into the full potential AI can provide. This in large part is because AI development and building deep learning models are slow and complex processes even for experienced data scientists and developers. To date, many people have been at a disadvantage when it comes to accessing AI, and we’re committed to changing that,” writes Kevin Scott.

For more information, check out the official Microsoft announcement.

Related reads:
- Say hello to IBM RXN, a free AI Tool in IBM Cloud for predicting chemical reactions
- Google’s new What-if tool to analyze Machine Learning models and assess fairness without any coding
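To make the "ship it in your application" step above concrete: assuming the exported model is a standard TensorFlow SavedModel (an assumption on my part; Lobe's exact export format may differ), loading and running it from Python looks roughly like the sketch below. The directory path and input shape are hypothetical.

```python
import numpy as np
import tensorflow as tf

# Hypothetical path to a model exported as a standard TensorFlow SavedModel.
model = tf.saved_model.load("exported_lobe_model/")
infer = model.signatures["serving_default"]

# Look up the signature's input name rather than guessing it.
input_name = list(infer.structured_input_signature[1].keys())[0]

# Hypothetical input: one 224x224 RGB image with values in [0, 1].
image = np.random.rand(1, 224, 224, 3).astype(np.float32)
outputs = infer(**{input_name: tf.constant(image)})

# Print whatever named tensors the signature returns (names depend on the export).
for name, tensor in outputs.items():
    print(name, tensor.numpy().shape)
```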

SapFix and Sapienz: Facebook’s hybrid AI tools to automatically find and fix software bugs

Melisha Dsouza
14 Sep 2018
2 min read
“Debugging code is drudgery,” say Facebook engineers Yue Jia, Ke Mao, and Mark Harman. To significantly reduce the amount of time developers spend on debugging code and rolling out new software, Facebook engineers have come up with an ingenious tool called SapFix. SapFix, which is still under development, can automatically generate fixes for specific bugs identified by Sapienz, and then propose these fixes to engineers for approval and deployment to production.

SapFix will eventually be able to operate independently from Sapienz, Facebook's intelligent automated software testing tool. For now, it is a proof of concept that relies on the latter tool to pinpoint bugs.

How does SapFix work?

This hybrid AI tool generates bug fixes depending on the type of bug encountered:

- For simpler bugs, SapFix creates patches that revert the code submission that introduced them.
- For complicated bugs, the tool uses a collection of "templated fixes" created by human engineers based on previous bug fixes.
- If human-designed template fixes aren't up to the job, the tool attempts a "mutation-based fix," which works by continuously making small modifications to the code that caused the software to crash, until a solution is found (a toy illustration follows after the related reads below).

SapFix generates multiple potential fixes for every bug and submits them to engineers for evaluation. The fixes are tested in advance so engineers can check whether they might cause problems like compilation errors or other crashes. (Image source: Facebook)

With automated end-to-end testing and repair, SapFix is an important milestone in the deployment of hybrid AI tools. Facebook intends to open source both SapFix and Sapienz once additional engineering work has been completed. You can read more about the tool on Facebook's blog.

Related reads:
- Facebook introduces Rosetta, a scalable OCR system that understands text on images using Faster-RCNN and CNN
- How AI is going to transform the Data Center
- Facebook Reality Labs launch SUMO Challenge to improve 3D scene understanding and modeling algorithms
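SapFix itself is not open source yet, so the following is only a toy illustration of the "mutation-based fix" idea described above: keep applying small mutations to a failing snippet until the test suite passes. The buggy function, the mutations, and the tests are all invented for the example.

```python
import itertools

# A deliberately buggy clamp(): the second comparison should be "x > hi", not "x < hi".
TEMPLATE = "def clamp(x, lo, hi):\n    return lo if x {op1} lo else (hi if x {op2} hi else x)\n"
BUGGY = TEMPLATE.format(op1="<", op2="<")

def passes_tests(source):
    """A tiny 'test suite' run against each candidate patch."""
    namespace = {}
    try:
        exec(source, namespace)
        clamp = namespace["clamp"]
        return clamp(5, 0, 10) == 5 and clamp(-1, 0, 10) == 0 and clamp(99, 0, 10) == 10
    except Exception:
        return False

def mutation_based_fix(template, operators=("<", "<=", ">", ">=")):
    """Keep trying small operator substitutions until the tests pass."""
    for op1, op2 in itertools.product(operators, repeat=2):
        candidate = template.format(op1=op1, op2=op2)
        if passes_tests(candidate):
            return candidate
    return None

print("buggy version passes tests:", passes_tests(BUGGY))   # False
print(mutation_based_fix(TEMPLATE))                          # a candidate patch that passes
```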


MEPs pass a resolution to ban “Killer robots”

Bhagyashree R
14 Sep 2018
2 min read
On Wednesday, Members of the European Parliament (MEPs) passed a resolution on banning autonomous weapon systems. They emphasized that weapons like these, which lack proper human control over selecting and attacking targets, should be banned before it is too late.

Reportedly, some countries and industries are developing lethal autonomous weapon systems, also known as killer robots, ranging from missiles capable of selective targeting to learning machines with the cognitive skills to decide whom, when, and where to fight. These might also include armed quadcopters that can search for and eliminate people meeting certain predefined criteria. According to the MEPs, giving machines so much power raises fundamental ethical and legal questions of human control, in particular with regard to critical functions such as target selection and engagement. They want EU policy chief Federica Mogherini, the member states, and the Council to agree on a common position on lethal autonomous weapon systems, one which ensures meaningful human control over their critical functions, and to speak with one voice in international forums.

Mogherini said during the debate at the European Parliament: "I know this might look like a debate about some distant future or about science fiction. It's not." A further discussion is scheduled at the United Nations in November, where it is hoped an agreement on an international ban can be reached.

AI is growing, and it is growing fast. It has reached a stage where building such systems could be feasible within a few years, putting enormous power in the wrong hands. If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable.

Related reads:
- UN meetings ended with US & Russia avoiding formal talks to ban AI enabled killer robots
- 15 millions jobs in Britain at stake with Artificial Intelligence robots set to replace humans at workforce
- 6 powerful microbots developed by researchers around the world


Uber’s Marmaray, an Open Source Data Ingestion and Dispersal Framework for Apache Hadoop

Natasha Mathur
14 Sep 2018
3 min read
Uber yesterday released Marmaray, an open source data ingestion and dispersal framework for Apache Hadoop. Marmaray is a plug-in based framework built and designed on top of the Hadoop ecosystem by Uber's Hadoop Platform team. It connects a collection of systems and services in a cohesive manner to perform certain functions. Let's have a look at these functions.

Major functions

- Marmaray can produce quality, schematized data via Uber's schema management library and services.
- It ingests data from multiple data stores into Uber's Hadoop data lake.
- It can build pipelines using Uber's internal workflow orchestration service, allowing it to crunch and process the ingested data as well as store and calculate business metrics based on that data in Hive.
- Marmaray serves the processed results from Hive to an online data store, allowing internal customers to query the data and get near-instant results.

A majority of the fundamental building blocks and abstractions for Marmaray's design were inspired by Gobblin, a similar project developed at LinkedIn.

Marmaray architecture

Generic components such as DataConverters, the WorkUnitCalculator, the Metadata Manager, and ISource and ISink facilitate Marmaray's overall job flow (a toy sketch of this pipeline shape follows after the related reads below). Let's discuss these components. (Figure: Marmaray architecture)

DataConverters: DataConverters produce the error records that come with every transformation. All raw data must conform to a schema before it is ingested into Uber's Hadoop data lake, and this is where DataConverters come into the picture: they filter out any data that is malformed, missing required fields, or has other issues.

WorkUnitCalculator: Uber introduced the WorkUnitCalculator to measure the amount of data to process. At a high level, the WorkUnitCalculator analyzes the type of input source and the previously stored checkpoint, then calculates the next work unit, or batch of work. It also considers throttling information when measuring the next batch of data that needs processing.

Metadata Manager: The Metadata Manager caches job-level metadata information. The metadata store can hold any relevant metrics useful for tracking, describing, or collecting status on jobs.

ISource and ISink: The ISource contains the necessary information from the source data required for the appropriate work units, and the ISink contains all the necessary information on writing to the sink.

Marmaray's support for any-source to any-sink data pipelines can be applied to a wide range of use cases, both in the Hadoop ecosystem and for data migration. "We hope that Marmaray will serve the data needs of other organizations, and that open source developers will broaden its functionalities," reads the Uber blog. For more information, check out the official Uber blog post.

Related reads:
- Uber open sources its large scale metrics platform, M3 for Prometheus
- Uber introduces Fusion.js, a plugin-based web development framework for high performance apps
- Uber’s kepler.gl, an open source toolbox for GeoSpatial Analysis
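Marmaray itself is written in Java, so the following is only a language-agnostic sketch (in Python) of the plug-in shape described above: a source, a converter that routes malformed records into an error bucket, a work-unit calculator that batches the input, and a sink. All class and method names here are invented for illustration and are not Marmaray's actual API.

```python
from typing import Iterable, List, Optional

class ErrorBucket:
    """Collects records that fail conversion instead of silently dropping them."""
    def __init__(self):
        self.records: List[dict] = []

class ListSource:
    """Stand-in for an ISource: yields raw records from some upstream store."""
    def __init__(self, rows: List[dict]):
        self.rows = rows
    def read(self) -> Iterable[dict]:
        return iter(self.rows)

class SchemaConverter:
    """Stand-in for a DataConverter: drop records missing required fields."""
    def __init__(self, required, errors: ErrorBucket):
        self.required = required
        self.errors = errors
    def convert(self, record: dict) -> Optional[dict]:
        if all(key in record for key in self.required):
            return record
        self.errors.records.append(record)   # error records are kept, not lost
        return None

def work_units(records: Iterable[dict], batch_size: int = 2):
    """Stand-in for the WorkUnitCalculator: split the input into batches."""
    batch = []
    for record in records:
        batch.append(record)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:
        yield batch

class PrintSink:
    """Stand-in for an ISink: 'write' each batch to the downstream store."""
    def write(self, batch: List[dict]):
        print("writing batch:", batch)

errors = ErrorBucket()
source = ListSource([{"id": 1, "ts": 10}, {"id": 2}, {"id": 3, "ts": 30}])
converter = SchemaConverter(required=("id", "ts"), errors=errors)
sink = PrintSink()

clean = (r for r in (converter.convert(row) for row in source.read()) if r is not None)
for unit in work_units(clean):
    sink.write(unit)
print("error records:", errors.records)
```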


Facebook launches LogDevice: An open source distributed data store designed for logs

Bhagyashree R
13 Sep 2018
3 min read
Yesterday, Facebook open-sourced LogDevice, a distributed data store designed specifically for logs. This initial release includes the toolset that forms the core of LogDevice's operational infrastructure; in the future, Facebook plans to open-source the automation tools it has built to manage LogDevice clusters. LogDevice is currently supported only on Ubuntu 18.04 (Bionic Beaver), but it should be possible to build it on other Linux distributions without significant challenges.

What is LogDevice?

LogDevice, as the name suggests, is a log system that promises to be scalable and fault tolerant. Unlike filesystems, which store and serve data as organized files, LogDevice stores and delivers data as logs. A log is a record-oriented, append-only, trimmable file.

How it works

LogDevice uses a placement and delivery scheme that is great for write availability and handling spiky write workloads (a toy sketch of the sequencing idea follows after the related reads below):

1. Separating sequencing and storage: The ordering of records is decoupled from the actual storage of record copies. For every log in a LogDevice cluster, a sequencer object runs to issue monotonically increasing sequence numbers as records are appended to that log. (Diagram source: Facebook)
2. Placement: After a record is stamped with a sequence number, its copies can potentially be stored on any storage node in the cluster.
3. Reading: A client that wants to read a particular log contacts all the storage nodes permitted to store records of that log. These nodes, collectively called the node set of the log, are usually kept smaller than the total number of nodes in the cluster. The contacted nodes deliver record copies to the client by pushing them into TCP connections as fast as they can.
4. Metadata history: The node set is part of the replication policy of the log. It can be changed at any time, with an appropriate note in the log's metadata history, which readers can consult to find the storage nodes to connect to.
5. Reordering and de-duplication: The reordering and occasional de-duplication of records is done by the LogDevice client library, to ensure that records are delivered to the reader application in the order of their log sequence numbers (LSNs).

What are its common use cases?

Facebook uses LogDevice for various use cases, including:

- Write-ahead logging for durability
- Transaction logging in a distributed database
- Event logging
- Journals of deferred work items
- Distribution of index updates in large distributed databases
- Machine learning pipelines
- Replication pipelines
- Durable, reliable task queues
- Stream processing pipelines

If you want to explore LogDevice further and contribute to this open-source project, check out its official website and GitHub repository.

Related reads:
- Why Neo4j is the most popular graph database
- Why MongoDB is the most popular NoSQL database today
- Introducing Watermelon DB: A new relational database to make your React and React Native apps highly scalable
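As a toy illustration of the "separating sequencing and storage" idea above, the sketch below has a sequencer that only stamps records with monotonically increasing sequence numbers, while copies are placed on any R storage nodes; a reader merges what the nodes return, de-duplicates, and re-orders by sequence number. This is an invented, in-memory model, not LogDevice's actual protocol or API.

```python
import itertools
import random

class Sequencer:
    """Per-log object that only issues monotonically increasing sequence numbers."""
    def __init__(self):
        self._counter = itertools.count(1)
    def next_lsn(self):
        return next(self._counter)

class StorageNode:
    def __init__(self, name):
        self.name, self.records = name, {}
    def store(self, lsn, payload):
        self.records[lsn] = payload

def append(sequencer, nodes, payload, replication=2):
    """Ordering is decoupled from placement: any R nodes may hold the copies."""
    lsn = sequencer.next_lsn()
    for node in random.sample(nodes, replication):
        node.store(lsn, payload)
    return lsn

def read_log(nodes):
    """Reader: gather copies from every node, de-duplicate, deliver in LSN order."""
    merged = {}
    for node in nodes:
        merged.update(node.records)
    return [merged[lsn] for lsn in sorted(merged)]

sequencer = Sequencer()
nodes = [StorageNode(f"n{i}") for i in range(4)]
for payload in ["a", "b", "c", "d"]:
    append(sequencer, nodes, payload)
print(read_log(nodes))   # ['a', 'b', 'c', 'd'] regardless of where copies landed
```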

Survey reveals how artificial intelligence is impacting developers across the tech landscape

Richard Gall
13 Sep 2018
2 min read
The hype around artificial intelligence has reached fever pitch. It has captured the imagination, and stoked the fears, of the wider public, reaching beyond computer science departments and research institutions. But when artificial intelligence dominates the international conversation, it's easy to forget that it's not simply a thing that exists and develops itself. However intelligent machines are, and however adept they are at 'learning', it's essential to remember they are things that are engineered, things that are built by developers.

That's the thinking behind this year's AI Now survey: to capture the experiences and perspectives of developers and to better understand the impact of artificial intelligence on their work and lives.

Key findings from Packt's artificial intelligence survey

Launched in August, and receiving 2,869 responses from developers working in every area from cloud to cybersecurity, the survey had some interesting findings. These include:

- 69% of developers aren't currently using AI-enabling tools in their day-to-day role, but 75% of respondents said they were planning on learning AI-enabling software in the next 12 months.
- TensorFlow is the tool defining AI development: 27% of respondents listed it as the top tool on their to-learn list.
- 75% of developers believe automation will have either a positive or significantly positive impact on their career.
- 47% of respondents believe AGI will be a reality within the next 30 years.
- The biggest challenges for developers in terms of AI are having the time to learn new skills and knowing which frameworks and tools to learn.
- Internal data literacy is the biggest challenge for AI implementation.

As well as quantitative results, the survey also produced qualitative insights from developers, providing some useful and unique perspectives on artificial intelligence. One developer, talking about bias in AI, said: “As a CompSci/IT professional I understand this is a more subtle manifestation of ‘Garbage In/Garbage Out’. As an African American, I have significant concerns about say, well documented bias in say criminal sentencing being legitimized because ‘the algorithm said so’.”

To read the report click here. To coincide with the release of the survey results, Packt is also running a $10 sale on all eBooks and videos across its website throughout September. Visit the Packt eCommerce store to start exploring.


Akida NSoC hits market: First commercial AI chip to perform neuromorphic computing using Spiking Neural Network Architecture

Pavan Ramchandani
13 Sep 2018
2 min read
BrainChip is making a major push in the AI chip market by unveiling the Akida Neuromorphic System-on-Chip (NSoC), which implements an advanced neural network architecture called Spiking Neural Networks (SNNs). The Akida NSoC is considered to be the first commercial AI chip that can perform neuromorphic computation, which holds significance for accelerating AI workloads. Neuromorphic computation is inspired by the operation of neurons in human biology.

Spiking Neural Networks are a third-generation neural network architecture that is promising for effectively processing edge applications. Edge computing finds application in the latest technological advancements such as autonomous vehicle systems, drones, and machine vision, which otherwise require a significantly complex backend to deal with huge volumes of data and perform heavy-lifting tasks. SNNs are claimed to be more powerful than traditional convolutional neural networks (CNNs), since they replace computation-heavy algorithms like backpropagation with biologically inspired neuron operations; SNNs use a feedforward mechanism for training through a reinforcement style of learning (a toy sketch of a spiking neuron follows after the related reads below).

Features of Akida NSoC

- Designed to be used as a stand-alone processor or as an accelerator, managing both the logic tasks and the interface with other working parts.
- Includes sensor interfaces for pixel-based imaging, Lidar, audio, dynamic vision, and analog signals.
- Includes a high-speed data interface.
- Embedded data-to-spike converters for converting raw data into spikes for training SNNs.
- Uses CMOS logic for lower power consumption.
- Scalable architecture that enables users to perform complex neural network training and interfacing.

Both advances, neuromorphic computation and SNNs, are seeing a rise in business use cases, but hardware platforms that can harness the capabilities of the SNN architecture remain scarce. As such, the release of the Akida NSoC narrows the gap between scientific research in AI acceleration and commercial reality. The industry has long been missing AI implementations for the edge, and this might just be the beginning of the road to more hardware that brings AI to the edge. To push the niche, BrainChip is collaborating with major global manufacturers to drive early adoption of the Akida NSoC.

Related reads:
- Baidu releases EZDL
- Intelligent Edge Analytics
- Introducing Intel’s OpenVINO computer vision toolkit for edge computing
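Akida's silicon and toolchain are proprietary, so the snippet below is only a generic illustration of what "spiking" computation means: a leaky integrate-and-fire neuron accumulates weighted input spikes over discrete time steps and emits a spike of its own when its membrane potential crosses a threshold. The parameters are arbitrary and this is not BrainChip's model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Leaky integrate-and-fire neuron: the basic unit of a spiking neural network.
def lif_neuron(input_spikes, weights, leak=0.9, threshold=1.0):
    potential = 0.0
    output_spikes = []
    for spikes_t in input_spikes:               # one row of 0/1 spikes per time step
        potential = leak * potential + weights @ spikes_t
        if potential >= threshold:
            output_spikes.append(1)             # fire...
            potential = 0.0                     # ...and reset the membrane potential
        else:
            output_spikes.append(0)
    return output_spikes

# Three presynaptic inputs firing randomly over 20 time steps.
input_spikes = rng.integers(0, 2, size=(20, 3))
weights = np.array([0.5, 0.3, 0.4])

print("output spike train:", lif_neuron(input_spikes, weights))
```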


Introducing TimescaleDB 1.0, the first OS time-series database with full SQL support

Natasha Mathur
13 Sep 2018
3 min read
Yesterday, the Timescale team announced a release candidate of TimescaleDB 1.0. TimescaleDB is the first open-source time-series database that scales for fast ingest and complex queries while natively supporting full SQL. TimescaleDB 1.0 comes with native Grafana integration, first-class Prometheus support, and dozens of other new features. Timescale is the company behind the database, which is powered by PostgreSQL. TimescaleDB has helped businesses across the world with mission-critical applications such as industrial data analysis, complex monitoring systems, operational data warehousing, financial risk management, geospatial asset tracking, and more.

TimescaleDB 1.0 key features

TimescaleDB 1.0 offers first-class Prometheus support for long-term storage along with native Grafana integration.

First-class Prometheus support: TimescaleDB can now natively act as a remote storage backend for Prometheus (a monitoring system and time-series database). This brings many benefits to Prometheus, such as a full SQL interface, long-term replicated storage, support for late data, data updates, and the ability to JOIN monitoring data against other business data.

Native Grafana integration: TimescaleDB 1.0 now comes with a graphical SQL query builder for Grafana and additional support.

In addition to these two major features:

- TimescaleDB 1.0 is fast, flexible, and built to scale. It supports full SQL, i.e. it looks like PostgreSQL on the outside but is architected for time series internally.
- It provides the largest ecosystem of any time-series database, with support for Tableau, Grafana, Apache Kafka, Apache Spark, Prometheus, and Zabbix.
- It is now enterprise ready, offering the reliability and tooling of PostgreSQL, enterprise-grade security, and production-ready SLAs.
- It manages time-series data with automatic space-time partitioning, a hypertable abstraction layer, adaptive chunk sizing, and new functions for easier time-series analytics in SQL (a short usage sketch follows after the related reads below). It also offers geospatial analysis, JSON support, and easy schema management.

TimescaleDB has hit some significant milestones since its launch in April 2017: it has surpassed 1 million downloads and 5,000 GitHub stars, and counts Bloomberg, Comcast, Cray, Cree, and LAIKA among its production users. “Based on all the adoption we’re seeing, it’s becoming clear to us that all data is essentially a time-series data. We’re building TimescaleDB to accommodate this growing need for a performant, easy-to-use, SQL-centric, and enterprise-ready time-series database,” says Ajay Kulkarni, Timescale founder, on the Timescale announcement page.

To get started, download TimescaleDB (installation instructions). You can explore the first release candidate for TimescaleDB 1.0 on GitHub or on Docker. For more information, check out the official release notes.

Related reads:
- Introducing Watermelon DB: A new relational database to make your React and React Native apps highly scalable
- Say hello to FASTER: a new key-value store for large state management by Microsoft
- IBM Files Patent for “Managing a Database Management System using a Blockchain Database”
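Here is a minimal sketch of what "full SQL with a hypertable abstraction" looks like in practice, assuming a running PostgreSQL instance with the TimescaleDB extension installed and the connection details shown (both assumptions): a regular table is created, converted to a hypertable with create_hypertable(), and then queried with ordinary SQL.

```python
import psycopg2

# Hypothetical connection details for a Postgres server with TimescaleDB installed.
conn = psycopg2.connect("dbname=tsdb user=postgres password=postgres host=localhost")
cur = conn.cursor()

cur.execute("CREATE EXTENSION IF NOT EXISTS timescaledb;")
cur.execute("""
    CREATE TABLE IF NOT EXISTS conditions (
        time        TIMESTAMPTZ NOT NULL,
        device_id   TEXT,
        temperature DOUBLE PRECISION
    );
""")
# Turn the plain table into a hypertable partitioned on the time column.
cur.execute("SELECT create_hypertable('conditions', 'time', if_not_exists => TRUE);")

cur.execute("INSERT INTO conditions VALUES (now(), 'dev-1', 21.5), (now(), 'dev-2', 19.8);")

# Ordinary SQL works as usual, including time-based aggregation.
cur.execute("""
    SELECT time_bucket('1 hour', time) AS bucket, device_id, avg(temperature)
    FROM conditions GROUP BY bucket, device_id ORDER BY bucket;
""")
for row in cur.fetchall():
    print(row)

conn.commit()
cur.close()
conn.close()
```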

eBay Open-Sources ‘HeadGaze’ that uses Head Motion to Navigate User Interface on iPhone X

Melisha Dsouza
12 Sep 2018
3 min read
eBay's new technology, HeadGaze, allows physically impaired users to interact with their iPhone X screen through head movements. HeadGaze was developed by eBay's computer vision team and a team of eBay interns led by Muratcan Cicek, a software engineer and PhD student who uses assistive technology as an aid for his motor impairment disability. Leveraging Apple's ARKit platform and the iPhone X's TrueDepth front-facing camera, HeadGaze allows applications to track the user's head motions to guide an on-screen cursor. The technology isn't available in the eBay app yet; however, the underlying code is available on GitHub for general use.

How does HeadGaze work?

This augmented reality software includes a virtual stylus that follows the user's head motions to move the cursor toward scrollbars and other interactive buttons. Head motions are tracked using the 3D sensors in the TrueDepth camera, the same hardware that enables Apple's Face ID unlock feature. This lets users issue input commands through subtle head movements. For example, to register a click, the technology detects how long the cursor has stayed in one spot and then triggers the desired action (a toy sketch of this dwell-to-click logic follows after the related reads below). Head-based controls and the new UI widgets implemented for the device make hands-free interaction easy.

The team put the technology to the test by developing the HeadSwipe app (also open-sourced via HeadGaze). The app allows users to swipe deals on eBay with different head movements: users can scroll through pages in the app, browse deals, buy items, and so on, without any finger movements.

Much like Google's "wheelchair accessible" transit routes in Google Maps, HeadGaze aims to impact the lives of millions of people worldwide, and open sourcing its code helps. The idea can be broadened to many scenarios: imagine having to take an urgent call while your hands are occupied, or scrolling through a map while driving. The possibilities of this tech appear to be limitless, and it wouldn't be a surprise if it takes the market by storm.

For in-depth coverage of how the software works, head over to eBay's official blog.

Related reads:
- Apple announces a Special Event to reportedly launch new products including “iPhone XS” and OS updates
- Magic Leap teams with Andy Serkis’ Imaginarium Studios to enhance Augmented Reality
- Magic Leap’s first augmented reality headset, powered by Nvidia Tegra X2, is coming this Summer
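HeadGaze itself is Swift code built on ARKit, but the dwell-to-click behaviour described above ("detect how long the cursor has been in one spot, then trigger the action") is easy to sketch in isolation. The sketch below is an invented illustration of that logic, not eBay's implementation; the thresholds and sample data are arbitrary.

```python
import math

DWELL_SECONDS = 1.0     # how long the cursor must stay put to count as a "click"
RADIUS_PIXELS = 20.0    # how far it may drift while dwelling

class DwellClicker:
    """Turn a stream of (timestamp, x, y) cursor samples into click events."""
    def __init__(self):
        self.anchor = None      # (t, x, y) where the current dwell started

    def update(self, t, x, y):
        if self.anchor is None:
            self.anchor = (t, x, y)
            return None
        t0, x0, y0 = self.anchor
        if math.hypot(x - x0, y - y0) > RADIUS_PIXELS:
            self.anchor = (t, x, y)             # cursor moved: restart the dwell timer
            return None
        if t - t0 >= DWELL_SECONDS:
            self.anchor = None                  # fire once, then require a fresh dwell
            return (x0, y0)                     # "click" at the dwell position
        return None

clicker = DwellClicker()
samples = [(0.0, 100, 100), (0.4, 104, 98), (0.9, 101, 103), (1.2, 102, 100)]
for t, x, y in samples:
    click = clicker.update(t, x, y)
    if click:
        print(f"click at {click} after dwelling")
```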


MIT’s Transparency by Design Network: A high performance model that uses visual reasoning for machine interpretability

Natasha Mathur
12 Sep 2018
3 min read
A team of researchers from MIT Lincoln Laboratory's Intelligence and Decision Technologies Group has created a neural network, named the Transparency by Design Network (TbD-net), capable of performing human-like reasoning to answer questions about the contents of images. The Transparency by Design model visually renders its thought process as it solves problems, helping human analysts understand its decision-making.

The developers built TbD-net with the aim of making the inner workings of the neural network transparent, that is, exposing how the network arrives at what it thinks. One example is answering questions like "What do the neural networks used in self-driving cars think the difference is between a pedestrian and a stop sign?" and "When was the neural network able to come up with that difference?". Finding these answers helps researchers teach the neural network to correct incorrect assumptions. Beyond that, TbD-net closes the gap between performance and interpretability, a common trade-off in today's neural networks. "Progress on improving performance in visual reasoning has come at the cost of interpretability," says Ryan Soklaski, a TbD-net developer, as mentioned in the MIT blog post.

TbD-net comprises a collection of "modules", small neural networks specialized to perform specific subtasks. Whenever a visual-reasoning question about an image is posed to TbD-net, it first breaks the question down into subtasks, then assigns the appropriate module to handle each part. According to Majumdar, another TbD-net developer, "Breaking a complex chain of reasoning into a series of smaller subproblems, each of which can be solved independently and composed, is a powerful and intuitive means for reasoning." Each module then builds on the output of the module before it, eventually producing the final, correct answer. Each module's output is visually presented in an "attention mask", which shows heat-map blobs over the objects within an image that the module considers an answer. Overall, TbD-net uses AI techniques such as Adam optimization to interpret the human-language questions and break them into subtasks, and computer vision techniques such as convolutional neural networks to interpret the imagery, sharing its decision-making process through visual reasoning. A toy sketch of this module-composition idea follows below.

When TbD-net was put to the test, it achieved results surpassing the best-performing visual reasoning models. The model was evaluated on a visual question-answering dataset consisting of 70,000 training images and 700,000 questions, along with test and validation sets of 15,000 images and 150,000 questions, and achieved 98.7 percent test accuracy. With the help of regularization and increased spatial resolution, the developers then improved the result to 99.1 percent accuracy on the CLEVR dataset; the attention masks produced by the modules helped the researchers figure out what went wrong and refine the model to reach that figure. "Our model provides straightforward, interpretable outputs at every stage of the visual reasoning process," says Mascharka.

For more information, be sure to check out the official research paper.
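The TbD-net code and attention machinery are described in the paper; purely as an illustration of the compositional idea ("break the question into subtasks, run one specialized module per subtask, chain their outputs"), here is a toy sketch in which each "module" is an ordinary function passing an attention-like set of object ids to the next. The scene, the modules, and the question decomposition are all invented.

```python
# Toy "scene": each object has attributes a module can filter on.
scene = [
    {"id": 0, "color": "red",  "shape": "cube"},
    {"id": 1, "color": "blue", "shape": "sphere"},
    {"id": 2, "color": "red",  "shape": "sphere"},
]

# Each module takes an "attention" set of object ids and returns a refined one,
# mimicking how TbD-net chains specialized sub-networks and their attention masks.
def attend_color(attention, color):
    return {i for i in attention if scene[i]["color"] == color}

def attend_shape(attention, shape):
    return {i for i in attention if scene[i]["shape"] == shape}

def count(attention):
    return len(attention)

# A question like "How many red spheres are there?" decomposed into a module chain.
program = [
    (attend_color, {"color": "red"}),
    (attend_shape, {"shape": "sphere"}),
]

attention = set(range(len(scene)))      # start by attending to every object
for module, kwargs in program:
    attention = module(attention, **kwargs)
    print(f"{module.__name__}: attending to objects {sorted(attention)}")

print("answer:", count(attention))      # -> 1
```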
Related reads:
- Optical training of Neural networks is making AI more efficient
- Diffractive Deep Neural Network (D2NN): UCLA-developed AI device can identify objects at the speed of light
- MIT’s Duckietown Kickstarter project aims to make learning how to program self-driving cars affordable