
Tech News - Data


Samsung opens its AI based Bixby voice assistant to third-party developers

Melisha Dsouza
08 Nov 2018
3 min read
“Our goal is to offer developers a robust, scalable and open AI platform that makes it easy for them to launch and evolve the amazing experiences they create for our users.” - Kyunghak Hyun, Product Manager of the AI Product Management Group at Samsung

At the Samsung Developer Conference 2018, the company announced that it is opening Bixby Developer Studio, an Integrated Development Environment (IDE), to third-party developers, allowing them to build functionality for the Artificial Intelligence (AI) assistant. Viv Labs CEO and Siri co-founder Dag Kittlaus told the crowd that their dev tools are “way ahead of the other guys”.

The company will also introduce Bixby Marketplace, where users can discover the new functionality of their voice assistant. The marketplace will also help developers make money from the capabilities they build for this intelligent companion.

Bixby, which started as a practical way to use voice to interact with the phone, will now evolve into a scalable, open AI platform supporting watches, refrigerators, tablets, washing machines, and many more devices. Developers will gain access to the same development tools that Samsung's internal developers use to create Bixby Capsules, the units used to add features to Bixby. Just like Skills on Amazon Alexa, developers can create custom Bixby interactions that can be added to various devices in the future. Samsung said the move supports the company's goal of building a scalable, open AI platform where developers and service providers can access tools to bring Bixby to more people and devices around the world.

As another initiative to scale Bixby services, Samsung is planning to expand support to five new languages - British English, French, German, Italian and Spanish - in the coming months. This will be especially crucial for bringing Bixby-enabled devices like the Galaxy Home and smart fridges to users all around the globe.

Samsung also demonstrated Bixby's capabilities at the conference. The demo showed Bixby helping a user book a hotel by opening the various portals used in the process. This move - along with expanding Bixby to more devices and more languages - is seen as Samsung's effort to increase Bixby's recognition around the globe. You can read more about this news at TechCrunch.

Cisco Spark Assistant: World’s first AI voice assistant for meetings
Voice, natural language, and conversations: Are they the next web UI?
12 ubiquitous artificial intelligence powered apps that are changing lives

Harvard Law School launches its Caselaw Access Project API and bulk data service making almost 6.5 million cases available

Savia Lobo
05 Nov 2018
3 min read
On October 31st, the Library Innovation Lab at the Harvard Law School Library announced the launch of its Caselaw Access Project (CAP) API and bulk data service. The service makes available almost 6.5 million cases, from the 1600s to the present, putting the full corpus of published U.S. case law online for anyone to access for free.

According to Harvard Law Today, “Between 2013 and 2018, the Library digitized over 40 million pages of U.S. court decisions, transforming them into a dataset covering almost 6.5 million individual cases.” The Caselaw Access Project API and bulk data service puts this important dataset within easy reach of researchers, members of the legal community and the general public.

Adam Ziegler, director of the Library Innovation Lab, said in an article in Fortune Magazine that the Caselaw Access Project will be a treasure trove for legal scholars, especially those who employ big data techniques to parse the corpus. It is an opportunity to reconstruct the law as a data source, and to write computer programs that peruse millions of cases.

The CAP API and the bulk data service

The CAP API is available at api.case.law and offers open access to descriptive metadata for the entire corpus. The API documentation is written to be easy for both experts and beginners to understand.

Jonathan Zittrain, the George Bemis Professor of International Law at Harvard Law School and Vice Dean for Library and Information Resources, said, “Libraries were founded as an engine for the democratization of knowledge, and the digitization of Harvard Law School’s collection of U.S. case law is a tremendous step forward in making legal information open and easily accessible to the public.”

A real-world application of the CAP API and the bulk data service

John Bowers, a research associate at the Harvard Library Innovation Lab, used the Caselaw Access Project API and bulk data service to uncover the story of Justice James H. Cartwright, the most prolific opinion writer on the Illinois Supreme Court, according to Bowers's recent blog post. Bowers said, “In the hands of an interested researcher with questions to ask, a few gigabytes of digitized caselaw can speak volumes to the progress of American legal history and its millions of little stories.”

By digitizing these materials, the Harvard Law School Library aims to provide open, wide-ranging access to American case law, making its collection broadly accessible to nonprofits, academics, practitioners, researchers, and law students. Anyone with a smartphone or an Internet connection can access this data. Read more about this project in detail on the Caselaw Access Project website.

Data Theorem launches two automated API security analysis solutions – API Discover and API Inspect
Michelangelo PyML: Introducing Uber’s platform for rapid machine learning development
Twilio acquires SendGrid, a leading Email API Platform, to bring email services to its customers
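As a rough illustration of how a researcher might start exploring the corpus, here is a minimal sketch of querying the CAP API at api.case.law. The endpoint and parameter names follow the public v1 API as documented at the time; treat them as assumptions and check the current API documentation before relying on them.

```python
import json
import urllib.parse
import urllib.request

API_ROOT = "https://api.case.law/v1"

def build_case_url(query, jurisdiction=None, page_size=5):
    """Build the URL for a full-text case search against the CAP API."""
    params = {"search": query, "page_size": str(page_size)}
    if jurisdiction:
        params["jurisdiction"] = jurisdiction
    return f"{API_ROOT}/cases/?" + urllib.parse.urlencode(params)

def search_cases(query, **kwargs):
    """Fetch matching case metadata (requires network access)."""
    with urllib.request.urlopen(build_case_url(query, **kwargs), timeout=30) as resp:
        return json.load(resp)["results"]

# Example (requires network):
# for case in search_cases("habeas corpus", jurisdiction="ill"):
#     print(case["name_abbreviation"], case["decision_date"])
```

Each result carries descriptive metadata (case name, court, decision date), which is what the open-access tier of the API exposes.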

EFF asks California Supreme Court to hear a case on government data accessibility and anonymization under CPRA

Bhagyashree R
05 Nov 2018
3 min read
Last week, the Electronic Frontier Foundation (EFF) filed a letter supporting the petition for review submitted by Richard Sander and the First Amendment Coalition in the Sander v. State Bar of California case. The opinion issued by the First District Court of Appeal in August effectively changes the California Public Records Act (CPRA) in a way that could prevent California citizens from accessing public data that state and local agencies are generating. The court ruled that in order to de-identify personal information, the State Bar of California would have to create “new records” and “recode its original data into new values.”

EFF has raised a question that the California Supreme Court must address: does anonymization of public data amount to the creation of new records under the CPRA? If the court's opinion becomes the standard across California, it will defeat the purpose of the CPRA.

The CPRA was signed in 1968, the result of a 15-year-long effort to create a general records law for California. Under the CPRA, governmental records must be shared with the public on request, unless there is a legitimate reason not to do so. The act enables people to understand what the government is doing and helps expose government inefficiencies. It is especially important today, as governments produce and consume vast amounts of digital data.

In a previous hearing, the California Supreme Court acknowledged that sharing this data with the public would prove useful: “It seems beyond dispute that the public has a legitimate interest in whether different groups of applicants, based on race, sex or ethnicity, perform differently on the bar examination and whether any disparities in performance are the result of the admissions process or of other factors.” However, when the case proceeded to trial, the petitioners were asked to show how it was possible to de-identify this data. Yet under the CPRA, when the government refuses to share records requested by the public, it is the government that must show the court that it is not possible to release the data while protecting private information.

EFF further pointed out that in another case, Exide Technologies v. California Department of Public Health, a different California superior court ruled the opposite way: the government agency must share its investigations of blood lead levels, but in a format that serves the public interest in government transparency while protecting the privacy interests of individual lead-poisoning patients. This split requires the California Supreme Court to settle how agencies should handle sensitive digital information under the CPRA.

With the increase in data collected by the state from and about the public, it is important that agencies give access to this data in order to maintain transparency. Read the full announcement on EFF's official website.

Senator Ron Wyden’s data privacy law draft can punish tech companies that misuse user data
Privacy experts urge the Senate Commerce Committee for a strong federal privacy bill “that sets a floor, not a ceiling”
Is AT&T trying to twist data privacy legislation to its own favor?
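To make the technical question concrete, the kind of "recoding" the court described is routine in data publishing: replacing exact values with generalized bands and suppressing groups too small to release safely. The sketch below is a toy illustration of that idea, not the State Bar's actual process; the field names and the k-anonymity-style threshold are hypothetical.

```python
from collections import Counter

def recode_scores(records, band=10, k=5):
    """De-identify exam records by generalizing exact scores into bands,
    then dropping any (group, band) combination with fewer than k members,
    a simple k-anonymity-style suppression rule."""
    banded = [
        {"group": r["group"], "score_band": (r["score"] // band) * band}
        for r in records
    ]
    counts = Counter((r["group"], r["score_band"]) for r in banded)
    return [r for r in banded if counts[(r["group"], r["score_band"])] >= k]
```

Whether producing such recoded output counts as creating a "new record" under the CPRA is exactly the question EFF wants the court to resolve.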

A new Facebook data breach caused by malicious browser extensions put almost 81,000 users’ private data up for sale, reports BBC News

Bhagyashree R
05 Nov 2018
4 min read
Throughout this year, we have seen many data breaches and security issues involving Facebook. Adding to the list, last week hackers claimed to have gained access to 120 million accounts and posted private messages of Facebook users. As reported by BBC News, the hackers also put up an advert selling access to these compromised accounts for 10 cents per account.

What was this Facebook hack about?

This data breach appears to be different from the ones we saw previously. While the earlier attacks took advantage of vulnerabilities in Facebook's code, this breach happened through malicious browser extensions. It was first spotted in September, when a user nicknamed “FBSaler” appeared on an English-language internet forum selling the personal information of Facebook users: “We sell personal information of Facebook users. Our database includes 120 million accounts.”

The BBC contacted Digital Shadows, a cyber-security company, to investigate the case. The company confirmed that more than 81,000 of the profiles posted online contained private messages. Data from a further 176,000 accounts was also made available online, although the BBC noted that some of this data may have been scraped from members who had not hidden it. To confirm that the private posts and messages belonged to real users, the BBC also contacted five Russian Facebook users, who confirmed that the posts were theirs.

Who exactly is responsible for this hack?

Going by Facebook's statement to the BBC, the hack happened because of malicious browser extensions that tracked victims' activity on Facebook and shared their personal details and private conversations with the hackers. Facebook has not yet disclosed any information about the extensions involved. One of Facebook's executives, Guy Rosen, told the BBC: “We have contacted browser-makers to ensure that known malicious extensions are no longer available to download in their stores. We have also contacted law enforcement and have worked with local authorities to remove the website that displayed information from Facebook accounts.”

On deeper investigation by BBC News, one of the websites where the data was published appeared to have been set up in St Petersburg. In addition to the website being taken down, its IP address has been flagged by the Cybercrime Tracker service, according to which the address was also used to spread the LokiBot Trojan, malware that allows attackers to capture user passwords.

Cyber experts told the BBC that if malicious extensions were the root cause of this data breach, the browser vendors share some of the responsibility: “Independent cyber-experts have told the BBC that if rogue extensions were indeed the cause, the browsers' developers might share some responsibility for failing to vet the programs, assuming they were distributed via their marketplaces.”

The news has led to a big discussion on Hacker News. One user described how this kind of attack could be mitigated by browser policies: “Maybe it's time for the browsers to put more effort into extension network security. 1) Every extension has to declare up front what urls it needs to communicate to. 2) Every extension has to provide schema of any data it intends to send out of browser. 3) Browser locally logs all this comms. 4) Browser blocks anything which doesn't match strict key values & value values and doesn't leave browser in plain text.”

We will have to wait and see how browsers stop the use of malicious extensions, and how Facebook hardens itself against further data breaches. Read the full report on this Facebook hack on BBC News.

Facebook’s CEO, Mark Zuckerberg summoned for hearing by UK and Canadian Houses of Commons
Facebook’s Machine Learning system helped remove 8.7 million abusive images of children
Facebook says only 29 million and not 50 million users were affected by last month’s security breach

Google open sources BERT, an NLP pre-training technique

Prasad Ramesh
05 Nov 2018
2 min read
Google open-sourced Bidirectional Encoder Representations from Transformers (BERT) last Friday for NLP pre-training. Natural language processing (NLP) covers tasks like sentiment analysis, language translation, and question answering. Large annotated datasets for NLP, containing millions or billions of training examples, are scarce. Google says that with BERT, you can train your own state-of-the-art question answering system in 30 minutes on a single Cloud TPU, or in a few hours using a single GPU. The source code is built on top of TensorFlow, and a number of pre-trained language representation models are also included.

BERT features

BERT improves on recent work in pre-training contextual representations, including semi-supervised sequence learning, generative pre-training, ELMo, and ULMFiT. BERT differs from these models in that it is the first deeply bidirectional, unsupervised language representation, pre-trained using only a plain text corpus - Wikipedia. Context-free models like word2vec generate a single word embedding representation for every word. Contextual models, on the other hand, generate a representation of each word based on the other words in the sentence. BERT is deeply bidirectional because it considers both the previous and the next words.

Bidirectionality

It is not possible to train bidirectional models by simply conditioning each word on the words before and after it: doing so would allow the word being predicted to indirectly see itself in a multi-layer model. To solve this, Google researchers used a straightforward technique of masking out some of the words in the input and conditioning each word bidirectionally in order to predict the masked words. The idea is not new, but BERT is the first technique to use it successfully to pre-train a deep neural network.
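The masking idea can be sketched in a few lines: hide a fraction of the input tokens and train the model to recover them from context on both sides. This toy illustrates only the data-preparation step; BERT's real implementation operates on WordPiece tokens and applies extra replacement rules (sometimes substituting a random word or keeping the original).

```python
import random

def mask_tokens(tokens, mask_rate=0.15, mask_token="[MASK]", seed=0):
    """Return (masked_tokens, targets), where targets maps each masked
    position back to the original token the model must predict."""
    rng = random.Random(seed)
    masked, targets = list(tokens), {}
    for i, tok in enumerate(tokens):
        if rng.random() < mask_rate:
            masked[i] = mask_token
            targets[i] = tok
    return masked, targets

sentence = "the man went to the store to buy milk".split()
masked, targets = mask_tokens(sentence)
# The model is trained to recover `targets` from `masked`, using the
# words on both sides of each [MASK] - this is what makes it bidirectional.
```

Because the masked word never appears in the input, the model cannot "see itself", which is exactly the problem the masking trick solves.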
Results

On the Stanford Question Answering Dataset (SQuAD) v1.1, BERT achieved a 93.2% F1 score, surpassing the previous state-of-the-art score of 91.6% and the human-level score of 91.2%. BERT also improves the state of the art by 7.6% absolute on the very challenging GLUE benchmark, a set of 9 diverse Natural Language Understanding (NLU) tasks. For more details, visit the Google Blog.

Intel AI Lab introduces NLP Architect Library
FAT Conference 2018 Session 3: Fairness in Computer Vision and NLP
Implement Named Entity Recognition (NER) using OpenNLP and Java

Neo4j raises $80M Series E, plans to expand company

Savia Lobo
02 Nov 2018
2 min read
On 1st November, Neo4j raised an $80 million Series E round to bring its products to a wider market, possibly the company's last private fundraise. In 2016, the company got a $36 million Series D investment. Neo4j has been successful, with around 200 enterprise customers to its credit, including Walmart, UBS, IBM and NASA, and with customers from 20 of the top 25 banks and 7 of the top 10 retailers.

The Series E round was led by One Peak Partners and Morgan Stanley Expansion Capital, with participation from existing investors Creandum, Eight Roads and Greenbridge Partners. As reported in TechCrunch, Neo4j CEO Emil Eifrem said: “If your mental framework is around building a great company, you’re going to have all kinds of options along the way. So that’s what I’m completely focused on.”

This year, the company has focused on expanding into artificial intelligence. Graph databases help companies understand connections in large datasets, and AI models learn from large amounts of data, so the two used hand-in-hand can benefit an organization. Eifrem has expressed intentions to use the money to expand the company internationally, and plans to provide localized service, in terms of language and culture, wherever Neo4j's customers happen to be.

The news seems to have gone down well with Neo4j users on Hacker News. Head over to TechCrunch to know more about this news.

Why Neo4j is the most popular graph database
Neo4j 3.4 aims to make connected data even more accessible
From Graph Database to Graph Company: Neo4j’s Native Graph Platform addresses evolving needs of customers

This AI generated animation can dress like humans using deep reinforcement learning

Prasad Ramesh
02 Nov 2018
4 min read
In a paper published this month, the human motion of putting on clothes is synthesized in animation with reinforcement learning. The paper, Learning to Dress: Synthesizing Human Dressing Motion via Deep Reinforcement Learning, comes from a team of two Ph.D. students from the Georgia Institute of Technology, two of its professors, and a researcher from Google Brain.

Understanding the dressing problem

Dressing, putting on a t-shirt or a jacket, is something we do every day, yet it is a computationally costly and complex task for a machine to perform or for computers to simulate. The paper combines techniques from physics simulation and machine learning to produce the animation: a physics engine simulates character motion and cloth motion, while deep reinforcement learning on a neural network produces the character motion.

Physics engine and reinforcement learning on a neural network

The authors introduce a salient representation of haptic information to guide the dressing process. The haptic information is then used in the reward function to provide learning signals when training the network. As the task is too complex to perform in one go, the dressing task is separated into several subtasks for better control. A policy sequencing algorithm is introduced to match the distribution of output states from one subtask to the input distribution of the next. The same approach is used to produce character controllers for various dressing tasks, such as wearing a t-shirt, wearing a jacket, and robot-assisted dressing of a sleeve.

Dressing is complex, so it is split into several subtasks

The authors split the dressing task into a sequence of subtasks, with a state machine guiding the transitions between them. Dressing a jacket, for example, consists of four subtasks:

Pulling the sleeve over the first arm.
Moving the second arm behind the back to get in position for the second sleeve.
Putting the hand in the second sleeve.
Finally, returning the body to a rest position.

A separate reinforcement learning problem is formulated for each subtask in order to learn a control policy. The policy sequencing algorithm ensures that executing these individual control policies sequentially leads to a successful dressing sequence, by matching the initial state of each subtask with the final state of the previous subtask. A variety of successful dressing motions can be produced by applying the resulting control policies.

Each subtask is formulated as a partially observable Markov Decision Process (POMDP). Character dynamics are simulated with the Dynamic Animation and Robotics Toolkit (DART) and cloth dynamics with NVIDIA PhysX.

Conclusion and room for improvement

Using deep reinforcement learning and physics simulation, the authors successfully created a system that learns to animate a character putting on clothing. The system learns each subtask individually, then connects them with a state machine. The authors found that carefully selecting the cloth observations and the reward functions was important to the success of the approach.

The system currently performs only upper-body dressing; for the lower body, balance control would have to be added to the controller. The number of subtasks might be reduced by using a control policy architecture with memory, which would allow greater generalization of the skills learned. You can read the research paper at the Georgia Institute of Technology website.

Facebook launches Horizon, its first open source reinforcement learning platform for large-scale products and services
Deep reinforcement learning – trick or treat?
Google open sources Active Question Answering (ActiveQA), a Reinforcement Learning based Q&A system
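The subtask-sequencing scheme described above can be sketched as a simple state machine: each subtask has its own learned policy, and the final state of one subtask becomes the start state of the next. The subtask names and the `run` interface below are illustrative, not the authors' actual code.

```python
def run_dressing_sequence(policies, state):
    """Run subtask policies in order. Each policy.run(state) returns
    (success, final_state); the final state of one subtask is handed to
    the next as its start state. Abort if any subtask fails."""
    for name, policy in policies:
        ok, state = policy.run(state)
        if not ok:
            return False, state, name  # report which subtask failed
    return True, state, None
```

The paper's policy sequencing algorithm does the harder part this sketch glosses over: training each policy so that the distribution of its output states matches the input distribution the next policy was trained on.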

Ethereum’s 1000x Scalability Upgrade ‘Serenity’ is coming with better speed and security: Vitalik Buterin at Devcon

Melisha Dsouza
02 Nov 2018
3 min read
Ethereum 2.0 is coming soon, and it could increase the Ethereum network’s capacity to process transactions a thousandfold. At Devcon, the annual Ethereum developer conference, Vitalik Buterin, the creator of the second-largest blockchain, announced that the update formerly known as Ethereum 2.0 is now called ‘Serenity’. Buterin also addressed the massive efforts that have gone into upgrading the network in the past, especially around issues like the DAO hack and the “super-quadratic sharding” work that bogged the team down.

What can we expect in Serenity?

“We have been actively researching, building, and now, finally getting them all together.” - Vitalik Buterin

In September, Darren Langley, senior blockchain developer at Rocket Pool, revealed the roadmap for Ethereum 2.0. Serenity will encompass multiple projects that Ethereum developers have been working on since 2014. It will see Ethereum finally switch from proof-of-work to proof-of-stake, a model in which people and organizations holding ether “stake” their own coins in order to maintain the network, earning block rewards for doing so. This will also help achieve a sharded blockchain for verifying data on the network, increasing overall efficiency. The upgrade should make the network much faster, more secure, less energy-intensive and capable of handling thousands of transactions per second.

Serenity will include eWASM, a replacement for the existing Ethereum Virtual Machine (EVM) used to run smart contracts; eWASM is expected to double the transaction throughput rate compared to the EVM. Buterin added that before the official launch of Serenity, developers will make some final tweaks, including stabilizing the protocol specifications and cross-client testnets. Buterin believes Ethereum will soar with the Serenity upgrade.

During the conference, Buterin said that Serenity will be introduced in four phases:

Phase one will be an initial version with a proof-of-stake beacon chain, which will co-exist alongside Ethereum itself and allow Casper validators to participate.
Phase two will be a simplified version of Serenity with limited features, excluding smart contracts and money transfers from one shard to another.
Phase three will be an amplified version of Serenity with cross-shard communication, where users can send funds and messages across different shards.
Phase four will bring the final tweaks and optimized features.

Is Vitalik Buterin taking a backseat?

In a conversation with MIT Technology Review, Buterin said that it is time for him to start fading into the background as “a necessary part of the growth of the community.” Mirroring Ethereum's decentralized design, in which no single component failure can bring down the whole system, Buterin is “out of the decision-making in a lot of ways,” said Hudson Jameson of the Ethereum Foundation. This should pave the way for the community to thrive and become more decentralized. Buterin says that his involvement in the project now amounts to “a significantly smaller share of the work than I had two or three years ago,” adding that downsizing his influence is “something we are definitely making a lot of progress on.”

Ethereum’s development will not end with Serenity; important issues such as transaction fees and governance are still to be addressed, and Buterin and his team have already begun planning future tweaks along with more tech improvements. To know more about this news, head over to OracleTimes.

Aragon 0.6 released on Mainnet allowing Aragon organizations to run on Ethereum Mainnet
Vitalik Buterin’s new consensus algorithm to make Ethereum 99% fault tolerant
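The core proof-of-stake idea described above, that block proposers are chosen in proportion to the ether they have staked rather than by raw hashing power, can be illustrated with a toy sketch. This is only the selection principle; real Casper/Serenity validator selection is far more involved (committees, a randomness beacon, slashing conditions), and the names below are hypothetical.

```python
import random

def pick_proposer(stakes, seed=None):
    """Pick a validator with probability proportional to its stake."""
    rng = random.Random(seed)
    validators = list(stakes)
    weights = [stakes[v] for v in validators]
    return rng.choices(validators, weights=weights, k=1)[0]

stakes = {"alice": 64, "bob": 32, "carol": 4}
# Over many draws, alice is picked roughly twice as often as bob,
# and validators earn block rewards in proportion to their stake.
```

The energy saving comes from replacing brute-force hashing with this weighted lottery: securing the network costs capital at risk, not electricity.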

Facebook launches Horizon, its first open source reinforcement learning platform for large-scale products and services

Natasha Mathur
02 Nov 2018
3 min read
Facebook launched Horizon, its first open source reinforcement learning platform for large-scale products and services, yesterday. The workflows and algorithms in Horizon are built on open source frameworks such as PyTorch 1.0, Caffe2, and Spark, which makes Horizon accessible to anyone who uses RL at scale.

“We developed this platform to bridge the gap between RL’s growing impact in research and its traditionally narrow range of uses in production. We deployed Horizon at Facebook over the past year, improving the platform’s ability to adapt RL’s decision-based approach to large-scale applications”, reads the Facebook blog.

Facebook has already used the new platform to gain performance benefits such as delivering more relevant notifications, optimizing streaming video bit rates, and improving personalized suggestions in Messenger. However, given Horizon’s open design and toolset, it will also benefit other organizations using RL.

Harnessing reinforcement learning for large-scale production

Horizon uses reinforcement learning to make decisions at scale by taking into account issues specific to production environments: feature normalization, distributed training, large-scale deployment, and data sets with thousands of varying feature types. According to Facebook, applied RL models are more sensitive to noisy and unnormalized data than traditional deep networks, which is why Horizon preprocesses state and action features in parallel with the help of Apache Spark. Once the training data is preprocessed, PyTorch-based algorithms are used for normalization and training on the graphics processing unit. Horizon's design focuses mainly on large clusters, where distributed training across many GPUs at once allows engineers to solve problems with millions of examples. Horizon supports algorithms such as Deep Q-Network (DQN), parametric DQN, and deep deterministic policy gradient (DDPG) models.

During training, Horizon runs counterfactual policy evaluation (CPE), a set of methods used to predict the performance of a newly learned policy without deploying it. Once the evaluation is done, its results are logged to TensorBoard. After training, Horizon exports the models using ONNX so that they can be served efficiently at scale.

In many RL domains, the performance of a model is usually measured by trying it out. Since Horizon operates at production scale, however, it is important that models are tested thoroughly before being deployed. To achieve this, Horizon's training workflow automatically runs state-of-the-art policy evaluation techniques, including sequential doubly robust policy evaluation and MAGIC. The evaluation is combined with anomaly detection, which automatically alerts engineers if a new iteration of the model performs radically differently from the previous one, before the policy is deployed to the public.

Facebook plans to add new models and model improvements, along with CPE integrated with real metrics, to Horizon in the future. “We are leveraging the Horizon platform to discover new techniques in model-based RL and reward shaping, and using the platform to explore a wide range of additional applications at Facebook, such as data center resource allocation and video recommendations. Horizon could transform the way engineers and ML models work together”, says Facebook. For more information, check out the official Facebook blog.

Facebook open sources a set of Linux kernel products including BPF, Btrfs, Cgroup2, and others to address production issues
Facebook open sources QNNPACK, a library for optimized mobile deep learning
Facebook’s Child Grooming Machine Learning system helped remove 8.7 million abusive images of children
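The idea behind counterfactual policy evaluation can be sketched with its simplest form, inverse propensity scoring: estimate how a new policy would have performed using only logs collected under the old policy, by reweighting each logged reward with an importance ratio. Horizon's actual estimators (sequential doubly robust, MAGIC) build on this basic form; the function below is an illustrative sketch, not Horizon's API.

```python
def ips_estimate(logs, new_policy):
    """Inverse-propensity-scoring estimate of a new policy's average reward.

    logs: list of (state, action, logged_action_prob, reward) tuples
          collected under the old (logging) policy.
    new_policy(state, action): probability the new policy takes `action`
          in `state`.
    """
    total = 0.0
    for state, action, logged_prob, reward in logs:
        # Reweight each logged reward by how much more (or less) likely
        # the new policy is to take the logged action.
        total += (new_policy(state, action) / logged_prob) * reward
    return total / len(logs)
```

The appeal for production systems is exactly what the article describes: a candidate policy can be scored on historical logs before it is ever deployed to users.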

GNU Bison 3.2 rolled out

On Monday, the team at Bison announced the release of GNU Bison 3.2, a general-purpose parser generator. It converts an annotated context-free grammar into a deterministic LR or generalized LR (GLR) parser, employing LALR(1) parser tables. This release is bootstrapped with the following tools: Gettext 0.19.8.1, Autoconf 2.69, Automake 1.16.1, Flex 2.6.4, and Gnulib v0.1-2176-ga79f2a287.

GNU Bison, commonly known as Bison, is a parser generator that is part of the GNU Project. It is used to develop a wide range of language parsers, from those used in simple desk calculators to complex programming languages. One has to be fluent in C or C++ programming in order to use Bison.

Bison 3.2 comes with massive improvements to the deterministic C++ skeleton, lalr1.cc, while maintaining compatibility with C++98. Move-only types can now be used as semantic values when working with Bison's variants. In modern C++ (C++11 and later), one should always use 'std::move' with the values of the right-hand side symbols ($1, $2, etc.), as they will be popped from the stack anyway. Using 'std::move' is mandatory for move-only types such as unique_ptr, and it provides a significant speedup for large types such as std::string or std::vector. A warning is issued when automove is enabled and a value is used several times.

Major Changes in Bison 3.2

Support for DJGPP (DJ's GNU Programming Platform), which has been unmaintained and untested for years, is now deemed obsolete. Unless there is activity to revive it, it will be removed.
To denote the output stream, printers should now use 'yyo' instead of 'yyoutput'.
The variant-based symbols in C++ should now use emplace() instead of build().
In C++ parsers, parser::operator() is now a synonym for parser::parse.
A comment in the generated code now emphasizes that users should not depend on non-documented implementation details, such as macros starting with YY_.
A new section, "A Simple C++ Example", is now a tutorial for parsers in C++.

Bug Fixes in Bison 3.2

Major bug fixes in this release include portability issues with MinGW, VS2015, the test suite, and Flex.

To know more about this release, check out the official mailing list.

Mio, a header-only C++11 memory mapping library, released!
Google releases Oboe, a C++ library to build high-performance Android audio apps
The 5 most popular programming languages in 2018
Amrata Joshi
01 Nov 2018
3 min read

TimescaleDB 1.0 officially released

On Tuesday, the team at Timescale announced the official production release of TimescaleDB 1.0, two months after releasing its initial release candidate. With the official release, TimescaleDB 1.0 is now the first enterprise-ready time-series database that supports full SQL at scale. The release has crossed over 1M downloads, with production deployments at Comcast, Cray, Cree, and more.

Mike Freedman, Co-founder/CTO at TimescaleDB, says, “Since announcing our first release candidate in September, Timescale’s engineering team has merged over 50 PRs to harden the database, improving stability and ease-of-use.”

Major updates in TimescaleDB 1.0

TimescaleDB 1.0 comes with cleaner management of multiple tablespaces, allowing hypertables to elastically grow across many disks. Information about the state of hypertables, including their dimensions and chunks, is now easily available.
Robust cross-operating-system availability is important for usability, and this release brings improved support for Windows, FreeBSD, and NetBSD.
TimescaleDB 1.0 lays the foundation for a database scheduling framework that manages background jobs. Since TimescaleDB is implemented as an extension, a single PostgreSQL instance can run multiple, different versions of TimescaleDB.
TimescaleDB 1.0 handles edge cases related to schema and tablespace modifications. It also provides cleaner permissions for backup/recovery in templated databases and includes additional test coverage.

TimescaleDB 1.0 supports Prometheus

Prometheus, a leading open source monitoring and alerting tool, is not arbitrarily scalable or durable in the face of disk or node outages. TimescaleDB 1.0, by contrast, can efficiently handle terabytes of data and supports high availability and replication, which makes it suitable for long-term data storage.
It also provides advanced capabilities, such as full SQL, joins, and replication, which are not available in Prometheus. Metrics recorded by Prometheus are first written to the local node and then to TimescaleDB, so they are immediately backed up and remain safe even if a disk fails on a Prometheus node.

What’s the future like?

The team at Timescale says that upcoming releases of TimescaleDB will include more automation around capabilities like automatic data aggregation, retention, and archiving. They will also include automated data management techniques for improving query performance, such as non-blocking reclustering and reindexing of older data.

Read more about this release on Timescale’s official website.

Introducing TimescaleDB 1.0, the first OS time-series database with full SQL support
Cockroach Labs announced managed CockroachDB-as-a-Service
PipelineDB 1.0.0, the high performance time-series aggregation for PostgreSQL, released!
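To give a feel for the hypertable model described above, here is a minimal sketch. The SQL uses TimescaleDB's real create_hypertable() call, but the schema is hypothetical, and the chunk_start() helper only mimics, in spirit, how a hypertable buckets rows into time chunks:

```python
from datetime import datetime, timedelta

# Hypothetical schema: one extra call turns a plain PostgreSQL table
# into a TimescaleDB hypertable partitioned on the time column.
DDL = """
CREATE TABLE conditions (
    time        TIMESTAMPTZ       NOT NULL,
    location    TEXT              NOT NULL,
    temperature DOUBLE PRECISION
);
SELECT create_hypertable('conditions', 'time');
"""

def chunk_start(ts, interval=timedelta(days=7)):
    """Map a timestamp to the start of its time chunk (illustration only)."""
    epoch = datetime(1970, 1, 1)
    return epoch + ((ts - epoch) // interval) * interval
```

Rows that fall in the same interval land in the same chunk, which is what lets TimescaleDB drop or archive old data a chunk at a time.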
Prasad Ramesh
01 Nov 2018
3 min read

Neuron: An all-inclusive data science extension for Visual Studio

A team of students from Imperial College London has developed a new Visual Studio extension called neuron, which aims to be an all-inclusive add-on for data science tasks in Visual Studio.

Using neuron is simple. You begin with a regular Python or R code file in a window. Beside the code is neuron's window, as shown in the following screenshot. It takes up half of the screen but is a blank page at the start. When you run code snippets, the output shows up as interactive cards. Neuron can display outputs that are plain text, tables, images, graphs, or maps.

Source: Microsoft Blog

You can find neuron in the Visual Studio Marketplace. On installation, a button becomes visible when you have a supported file open. Neuron uses the Jupyter Notebook in the background; given its popularity, Jupyter Notebook is probably already installed on your computer, and if not, you will be prompted. Neuron supports more output types than Jupyter Notebook: you can also generate 3D graphs, maps, LaTeX formulas, markdown, HTML, and static images. The output is displayed in a card on the right-hand side, which can be resized, moved around, or expanded into a separate window. Neuron also keeps track of the code snippets associated with each card.

Why was neuron created?

Data scientists come from various backgrounds and use a set of standard tools like Python, its libraries, and the Jupyter Notebook. Microsoft approached the students from Imperial College London to integrate these tools into a single workspace: a Visual Studio extension that lets users run data analysis operations without breaking their current workflow. Neuron combines the advantages of an intelligent IDE, Visual Studio, with the rapid execution and visualization of the Jupyter Notebook, all in a single window.

It is not a new idea

Neuron is, however, not an entirely new idea.
https://twitter.com/jordi_aranda/status/1057712899542654976

Comments on Reddit also suggest that such tools already exist in other IDEs. Reddit user kazi1 stated: “Seems more or less the same as Microsoft's current Jupyter extension (which is pretty meh). This seems like it's trying to reproduce the work already done by Atom's Hydrogen extension, why not contribute there instead."

Another Redditor, procedural_ape, said: “This looks like an awesome extension but shame on Microsoft for acting like this is their own fresh, new idea. Spyder has had this functionality for a while.”

For more details, visit the Microsoft Blog; a demo is available on GitHub.

Visual Studio code July 2018 release, version 1.26 is out!
MIT plans to invest $1 billion in a new College of computing that will serve as an interdisciplinary hub for computer science, AI, data science
Microsoft releases the Python Language Server in Visual Studio
Natasha Mathur
31 Oct 2018
2 min read

Richard DeVaul, Alphabet executive, resigns after being accused of sexual harassment

It was only last week that the New York Times reported shocking allegations of sexual misconduct at Google against Andy Rubin, the creator of Android. Now Richard DeVaul, a director at X, a unit of Alphabet (Google's parent company), resigned from the company yesterday after being accused of sexually harassing Star Simpson, a hardware engineer. DeVaul received no exit package on his resignation.

As per the NY Times report, Richard DeVaul interviewed Star Simpson for a job reporting to him, then invited her to Burning Man, an annual festival in the Nevada desert, the following week. At his encampment at Burning Man, DeVaul made inappropriate requests and sexually harassed Simpson. When Simpson reported DeVaul's misconduct to Google two years later, a company official shrugged her off, saying the story was "more likely than not" true and that appropriate corrective actions had been taken.

DeVaul apologized in a statement to the New York Times, calling the incident "an error in judgment". Sundar Pichai, Google's CEO, also apologized yesterday, saying that the "apology at TGIF didn't come through, and it wasn't enough", in an e-mail obtained by Axios. Pichai will also be supporting the women engineers at Google who are organizing a "women's walk" walkout tomorrow in protest.

"I am taking in all your feedback so we can turn these ideas into action. We will have more to share soon. In the meantime, Eileen will make sure managers are aware of the activities planned for Thursday and that you have the support you need", wrote Pichai.

Ex-googler who quit Google on moral grounds writes to Senate about company's "Unethical" China censorship plan
OK Google, why are you ok with mut(at)ing your ethos for Project DragonFly?
Google takes steps towards better security, introduces new API policies for 3rd parties and a Titan Security system for mobile devices
Amrata Joshi
31 Oct 2018
3 min read

Aragon 0.6 released on Mainnet allowing Aragon organizations to run on Ethereum Mainnet

Yesterday, the team at Aragon announced the release of Aragon 0.6, named Alba, on the Ethereum Mainnet. It is now possible to create Aragon organizations on the Ethereum Mainnet; earlier, organizations ran only on Ethereum testnets, without real-world consequences.

Aragon 0.5 was released seven months ago, and since then more than 2,500 organizations have been created with it; the total number of Aragon organizations has now crossed 15,000. Aragon 0.5 was the first release to be powered by AragonOS, and it was only deployed on the Rinkeby Ethereum Testnet.

Major updates in Aragon 0.6

1. Permissions

Permissions are a dynamic and powerful way to customize an organization. They manage who can access the organization's resources, and how. For example, one can create an organization in which funds can be withdrawn only after a vote passes, votes can be created only by a board of experts while anyone in the organization can cast them, and peers can vote to create tokens that add new members. The possibilities are endless, as virtually any governance process can now be implemented.

Source: Aragon

2. Voting gets easier

Voting enables participation and collaborative decision-making. The team at Aragon has rebuilt the card-based voting interface from the ground up; the new interface lets one take in the votes at a glance.

Source: Aragon

3. AragonOS 4

Aragon 0.6 features AragonOS 4, a smart contract framework for building DAOs, dapps, and protocols. AragonOS 4 is yet to be released but has already created some buzz. Its architecture is based on the idea of a decentralized organization as an aggregate of multiple applications, with a Kernel that governs how these applications talk to each other and how other entities can interact with them. AragonOS 4 makes interacting with Aragon organizations even more secure and stable.

It is easy to create your own decentralized organization now.
You can start by choosing the network for your organization and follow the next steps on Aragon's official website.

Note: The official blog post advises against placing large amounts of funds in Aragon 0.6 organizations at this point, as there might be unforeseen situations in which user funds could be at risk.

Read more about Aragon 0.6 on Aragon's official blog post.

Stable version of OpenZeppelin 2.0, a framework for smart blockchain contracts, released!
IBM launches blockchain-backed Food Trust network which aims to provide greater transparency on food supply chains
9 recommended blockchain online courses
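The Permissions model described earlier, where entities hold roles on apps and every action is checked against those grants, can be caricatured in a few lines. This is purely illustrative: Aragon's real ACL lives in Ethereum smart contracts, and all names below are hypothetical.

```python
class ACL:
    """Toy access-control list: a set of (entity, app, role) grants."""

    def __init__(self):
        self._grants = set()

    def grant(self, entity, app, role):
        self._grants.add((entity, app, role))

    def can(self, entity, app, role):
        return (entity, app, role) in self._grants

# Example wiring: only the voting app may move funds, and only the
# board may open new votes.
acl = ACL()
acl.grant("voting-app", "finance", "TRANSFER")
acl.grant("board", "voting-app", "CREATE_VOTES")
```

Composing grants like these is what lets a withdrawal be gated behind a successful vote: the finance app only accepts transfers initiated by the voting app.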
Amrata Joshi
31 Oct 2018
3 min read

Cockroach Labs announced managed CockroachDB-as-a-Service

This week, Cockroach Labs announced the availability of Managed CockroachDB: a fully hosted and managed service, created and run by Cockroach Labs, that makes deploying, scaling, and managing CockroachDB, its open source geo-distributed database, effortless. Last year, the company announced version 1.0 of CockroachDB and $27 million in Series B financing, led by Redpoint with participation from Benchmark, GV, Index Ventures, and FirstMark.

Managed CockroachDB is cloud agnostic and available on AWS and GCP. The goal is to let development teams focus on building highly scalable applications without worrying about infrastructure operations. CockroachDB's design provides an industry-leading model for horizontal scalability and resilience to accommodate fast-growing businesses, and it improves the ability to move data closer to customers depending on their geo-location.

Fun fact: Why the name 'Cockroach'? In a post published by Cockroach Labs three years ago, Spencer Kimball, CEO at Cockroach Labs, said, “You’ve heard the theory that cockroaches will be the only survivors post-apocalypse? Turns out modern database systems have a lot to gain by emulating one of nature’s oldest and most successful designs. Survive, replicate, proliferate. That’s been the cockroach model for geological ages, and it’s ours too.”

Features of Managed CockroachDB

Always-on service: Managed CockroachDB is an always-on service for critical applications, as it automatically replicates data across three availability zones for single-region deployments. As a globally scalable distributed SQL database, CockroachDB also supports geo-partitioned clusters at whatever scale the business demands. Cockroach Labs manages the hardware provisioning, setup, and configuration for the managed clusters so that they are optimized for performance.
Since CockroachDB is cloud agnostic, one can migrate from one cloud service provider to another at peak load with zero downtime. Automatic upgrades to the latest releases and hourly incremental backups make operations even easier. The Cockroach Labs team provides 24x7 monitoring and enterprise-grade security for all customers.

CockroachDB provides the capabilities for building ultra-resilient, high-scale, global applications. It features distributed SQL with ACID (Atomicity, Consistency, Isolation, Durability) transactions, and capabilities like cluster visualization, priority support, native JSON support, and automated scaling make it even more distinctive.

Read more about this announcement on the Cockroach Labs official website.

SQLite adopts the rule of St. Benedict as its Code of Conduct, drops it to adopt Mozilla's community participation guidelines, in a week
MariaDB acquires Clustrix to give database customers 'freedom from Oracle lock-in'
Why Neo4j is the most popular graph database
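The resilience claim above, surviving the loss of one of three availability zones, comes down to majority quorums: CockroachDB replicates each range of data via Raft-style consensus, so operations only need a majority of replicas to be reachable. A toy sketch of that rule (illustrative only, not CockroachDB's implementation):

```python
def has_quorum(replicas_up, replication_factor=3):
    """A Raft-style replication group stays available while a
    majority of its replicas are reachable."""
    return replicas_up > replication_factor // 2

# With the default of three replicas spread over three availability
# zones, one zone can fail without losing availability; losing two
# makes the range unavailable until a replica recovers.
```

This is also why three zones (not two) is the minimum sensible layout: with only two replicas, losing either one already breaks the majority.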