
Tech News - Data


Google announces new Artificial Intelligence features for Google Search on its 20th birthday

Sugandha Lahoti
25 Sep 2018
5 min read
At the "Future of Search" event held in San Francisco yesterday, Google celebrated its 20th anniversary by announcing a variety of new features for its Search engine. The proprietary search engine uses sophisticated machine learning, computer vision, and data science. The focus of this event was Artificial Intelligence and making new features available on smartphones. Let's look at what was announced.

Activity Cards on Google Discover

Perhaps the most significant announcement is Google Discover, a completely revamped version of Google Feed. Google Feed is the content discovery news feed available in the dedicated Google app and on Google's homepage. It now has a new look, brand, and feel, and features more granular controls over the content that appears. A new feature called activity cards will show up in a user's search results if they have searched a topic repeatedly. The activity card helps users pick up where they left off in Google Search: they can retrace their steps to find useful information they found earlier without having to remember which sites contained it. Google Discover starts with English and Spanish in the U.S. and will expand to more languages and countries soon.

Collections in Google Search

Collections in Google Search help users keep track of content they have visited, such as a website, article, or image, and quickly get back to it later. Users can now add content from an activity card directly to Collections, making it easy to keep track of and organize the content they want to revisit.

Dynamic organization with the Knowledge Graph

Users will see more videos and fresh visual content, as well as evergreen content—articles and videos that aren't new to the web, but are new to them. This feature uses the Topic Layer in the Knowledge Graph to predict a user's level of expertise on a topic and help them develop those interests further. The Knowledge Graph can intelligently show relevant content rather than prioritizing chronological order; content appears based on user engagement and browsing history. The Topic Layer is built by analyzing all the content that exists on the web for a given topic and developing hundreds and thousands of subtopics. It then looks for patterns to understand how these subtopics relate to each other, in order to surface the next content a user may want to view.

AMP Stories

Google will now use Artificial Intelligence to create AMP Stories, which will appear in both Google Search and image search results. AMP Stories is Google's open source library that enables publishers to build web-based flipbooks with smooth graphics, animations, videos, and streaming audio.

Featured Videos

The next enhancement is Featured Videos, which will semantically link to subtopics of searches in addition to top-level content. Google will automatically generate preview clips for relevant videos, using AI to find the most relevant parts of each clip.

Google Lens

Google has also improved its image search algorithm: images will now be sorted by the relevance of the web results they correspond to, and image search results will contain more information about the pages they come from. Google also announced that Google Lens, its visual search tool, is coming to the web. Lens in Google Images will analyze and detect objects in snapshots and show relevant images.

Better SOS Alerts

Google is also updating its SOS Alerts on Google Search and Maps with AI. It will use AI and significant computational power to create better forecasting models that predict when and where floods will occur. This information is also intelligently incorporated into Google Public Alerts.

Improve Job Search with Pathways

Google is also improving its job search with AI by introducing a new feature called Pathways. When someone searches for jobs on Google, they will be shown jobs available right now in their area, and will also be provided with information about effective local training and education programs. To learn in detail about where Google is headed next, read their blog Google at 20.

Policy changes in Google Chrome sign-in, an unexpected surprise from Google

The team also announced a policy change in Google's popular Chrome browser which was not well received. Following this change, the browser automatically logs users into Chrome when they sign in to other Google services. This has people worried about their privacy, as it could let Google track their browsing history and collect data to target them with ads. Prior to this unexpected change, it was possible to sign in to a Google service, such as Gmail, via the Chrome browser without actually logging in to the browser itself. Adrienne Porter Felt, an engineer and manager on Google Chrome, has however clarified the issue. She said that the Chrome browser sign-in does not mean that Chrome is automatically sending your browsing history to your Google account. She further added, "My teammates made this change to prevent surprises in a shared device scenario. In the past, people would sometimes sign out of the content area and think that meant they were no longer signed into Chrome, which could cause problems on a shared device." Read the Google clarification report on the Reader app. The AMPed up web by Google. Google announces Flutter Release Preview 2 with extended support for Cupertino themed controls and more! Pay your respects to Inbox, Google's email innovation is getting discontinued.


Microsoft announces the first public preview of SQL Server 2019 at Ignite 2018

Amey Varangaonkar
25 Sep 2018
2 min read
Microsoft made several key announcements at its Ignite 2018 event, which began yesterday in Orlando, Florida. The biggest announcement of them all was the public preview availability of SQL Server 2019. With this new release of SQL Server, businesses will be able to manage their relational and non-relational data workloads in a single database management system.

What we can expect in SQL Server 2019

- Microsoft SQL Server 2019 will run either on-premises or on the Microsoft Azure stack
- Microsoft announced the Azure SQL Database Managed Instance, which will allow businesses to port their databases to the cloud without any code changes
- Microsoft announced new database connectors that will allow organizations to integrate SQL Server with other databases such as Oracle, Cosmos DB, MongoDB and Teradata
- SQL Server 2019 will get built-in support for popular open source Big Data processing frameworks such as Apache Spark and Apache Hadoop
- SQL Server 2019 will have smart machine learning capabilities with support for SQL Server Machine Learning Services and Spark machine learning
- Microsoft also announced support for Big Data clusters managed through Kubernetes, the Google-incubated container orchestration system

With organizations slowly moving their operations to the cloud, Microsoft seems to have hit the jackpot with the integration of SQL Server and Azure services. Microsoft has claimed businesses can save up to 80% of their operational costs by moving their SQL databases to Azure. Also, given the rising importance of handling Big Data workloads efficiently, SQL Server 2019 will now be able to ingest, process and analyze Big Data on its own with the built-in capabilities of Apache Spark and Hadoop, the world's leading Big Data processing frameworks.

Although Microsoft hasn't hinted at an official release date yet, SQL Server 2019 is expected to be generally available in the next 3-5 months. Of course, that timeline could be extended or accelerated depending on the feedback received from the tool's early adopters. You can try the public preview of SQL Server 2019 by downloading it from the official Microsoft website.

Read more: Microsoft announces the release of SSMS, SQL Server Management Studio 17.6. New updates to Microsoft Azure services for SQL Server, MySQL, and PostgreSQL. Troubleshooting in SQL Server.
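If you do download the preview, a minimal sketch of querying it from Python follows; it assumes pyodbc and ODBC Driver 17 for SQL Server are installed, and the host, database, and credentials are placeholders, not values from the announcement.

```python
# Minimal sketch: query a SQL Server 2019 preview instance from Python.
# Host, database, and credentials below are hypothetical placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=sql2019-preview.example.com;"   # placeholder host
    "DATABASE=master;"
    "UID=app_user;PWD=app_password"
)
cursor = conn.cursor()
cursor.execute("SELECT @@VERSION")          # returns the SQL Server build string
print(cursor.fetchone()[0])
conn.close()
```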


SafeMessage: An AI-based biometric authentication solution for messaging platforms

Savia Lobo
24 Sep 2018
2 min read
Today, ID R&D, the biometric solutions provider offering proprietary AI-based behavioral, voice, and anti-spoofing user authentication capabilities, released SafeMessage, the industry's first biometric authentication technology for messaging. ID R&D will be demoing SafeMessage, as well as its other award-winning voice and behavioral biometric products, today at FinovateFall. SafeMessage offers multi-layer continuous authentication of verified users when integrated across messaging platforms (WhatsApp, Telegram, Skype, Slack, and others) without impacting the user experience. It combines voice, behavioral, and facial recognition, along with voice and facial "liveness" analysis, to provide unmatched authentication and security. Requiring a mere 1-2 seconds of free speech, keystrokes, or facial scans, SafeMessage adds to ID R&D's suite of comprehensive, frictionless biometric solutions. SafeMessage provides frictionless yet continuous authentication, and can also differentiate between an authorized user and a voice recording or a computer-generated simulation (for voice input). Alexey Khitrov, CEO of ID R&D, says, "ID R&D is thrilled to be the first to introduce frictionless authentication to messaging apps and platforms. Now secure and seamless communication is a possibility for all end users, whether over text, mobile app, instant message, chat, or the internet." To know more about SafeMessage, visit the ID R&D website. Microsoft Edge introduces Web Authentication for passwordless web security. Machine learning APIs for Google Cloud Platform. Google updates biometric authentication for Android P, introduces BiometricPrompt API.


Facebook open sources LogDevice, a distributed data store for logs

Natasha Mathur
24 Sep 2018
3 min read
Facebook made its distributed log system, LogDevice, available as an open source project two weeks back. It is a highly scalable and fault-tolerant distributed data store for sequential data. LogDevice comes with features such as high write availability, consistency guarantees, non-deterministic record placement, and a local log store.

LogDevice is widely used at Facebook. Existing use cases include stream processing pipelines, distribution of index updates in large distributed databases, machine learning pipelines, replication pipelines, and durable reliable task queues. LogDevice is designed from the ground up to serve many different logs with high reliability and efficiency at scale. It is also highly tunable, which allows the use cases mentioned above to be optimized for the right set of trade-offs in the durability-efficiency and consistency-availability space. Let's have a look at the key features of LogDevice.

High write availability

LogDevice comes with high write availability, which is uncommon in most of Facebook's existing logging applications. LogDevice efficiently separates record sequencing from record storage. It uses non-deterministic placement of records, which improves write availability and better tolerates temporary load imbalances caused by spikes in the write load on individual logs.

Consistency Guarantees

The consistency guarantees provided by a LogDevice log are similar to those provided by a record-oriented file system. It comes with built-in data loss detection and reporting. If data loss occurs, the log sequence numbers (LSNs) of all records that were lost are reported to every reader attempting to read the affected log and range of LSNs. However, no ordering guarantees are provided across records of different logs.

Non-deterministic record placement

LogDevice takes a different approach to record placement. First, the ordering of records in a log is decoupled from the actual storage of record copies. For each log in a LogDevice cluster, a sequencer object runs whose only job is to issue monotonically increasing sequence numbers for records. The sequencer runs either on a storage node or on a node reserved for sequencing. After a record has been stamped with a sequence number, its copies can be stored on any storage nodes within the cluster. The LogDevice client library performs reordering and occasional de-duplication of records, making sure that records are delivered to the reader application in the order of their LSNs.

Local Log Store

The local log store of LogDevice is called LogsDB. LogsDB is a write-optimized data store designed to keep the number of disk seeks small and controlled, so that the write and read IO patterns on the storage device are mostly sequential. Write-optimized data stores offer great performance when writing data, even if it belongs to multiple files or logs. Apart from that, LogsDB is quite efficient for log tailing workloads, a common pattern of log access where records are delivered to readers soon after they are written and are never read again, except in rare cases such as massive backfills. LogsDB is built on top of RocksDB, an ordered durable key-value data store based on LSM trees. LogsDB acts as a time-ordered collection of RocksDB column families, where each RocksDB instance is called a LogsDB partition.
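To make the split between sequencing and storage concrete, here is a conceptual sketch only (not LogDevice's actual code or API): a per-log sequencer issues monotonically increasing LSNs, while the copies of each record are placed non-deterministically on storage nodes.

```python
# Conceptual sketch, not LogDevice's real implementation: ordering (LSNs)
# is decided by a per-log sequencer, while copies of each record can land
# on any storage nodes in the cluster.
import itertools
import random

class Sequencer:
    """One sequencer object per log; its only job is issuing
    monotonically increasing sequence numbers (LSNs)."""
    def __init__(self):
        self._counter = itertools.count(start=1)

    def next_lsn(self):
        return next(self._counter)

def append(payload, sequencer, storage_nodes, copies=3):
    lsn = sequencer.next_lsn()                        # ordering decided here
    placement = random.sample(storage_nodes, copies)  # non-deterministic placement
    return lsn, placement

seq = Sequencer()
nodes = [f"node-{i}" for i in range(10)]
print(append(b"record payload", seq, nodes))  # e.g. (1, ['node-7', 'node-2', 'node-4'])
```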
For more information, visit the official LogDevice website. Facebook researchers build a persona-based dialog dataset with 5M personas to train end-to-end dialogue systems Facebook Dating app to release as a test version in Colombia ACLU sues Facebook for enabling sex and age discrimination through targeted ads


Alibaba launches an AI chip company named ‘Ping-Tou-Ge’ to boost China’s semiconductor industry

Savia Lobo
24 Sep 2018
3 min read
Alibaba Group has entered the semiconductor industry by launching a new subsidiary named 'Ping-Tou-Ge' that will develop computer chips specifically designed for artificial intelligence. The company made this announcement last week at its Computing Conference in Hangzhou.

Why the name 'Ping-Tou-Ge'?

"Ping-Tou-Ge" is a Mandarin nickname for the honey badger, an animal native to Africa, Southwest Asia, and the Indian subcontinent. Alibaba chief technology officer Jeff Zhang says, "Many people know that the honey badger is a legendary animal: it's not afraid of anything and has skillful hunting techniques and great intelligence." He further added, "Alibaba's semiconductor company is new; we're just starting out. And so we hope to learn from the spirit [of the honey badger]. A chip is small [like the honey badger], and we hope that such a small thing will produce great power."

Ping-Tou-Ge is one of Alibaba's efforts to improve China's semiconductor industry

The main trigger for the creation of Ping-Tou-Ge was the US ban on Chinese telecom giant ZTE, which brought a realization of how heavily China's semiconductor industry depends on imported chipsets. Alibaba has been steadily increasing its footprint in the chip industry. DAMO Academy, established in 2017, focuses on areas such as machine intelligence and data computing. Alibaba also acquired the Chinese chipmaker Hangzhou C-SKY Microsystems in April to enhance its own chip production capacity; C-SKY Microsystems is a designer of domestically developed embedded chipsets. Zhang Jianfeng, head of Alibaba's DAMO Academy, said in a statement that the Hangzhou-based company will produce its first neural network chip in the second half of next year with an internally developed technology platform and a synergized ecosystem.

Ping-Tou-Ge will combine DAMO's chip business and C-SKY Microsystems. It will operate independently in the development of its embedded chip series CK902 and its neural network chip Ali-NPU. The Ali-NPU chip is designed for AI inferencing in fields such as image processing and machine learning. Some of its expected features include:

- around 40 times more cost-effectiveness than conventional chips
- 10 times better performance than AI chips based on mainstream CPU and GPU architectures in the current market
- power and manufacturing costs cut in half

Ping-Tou-Ge will also focus on customized AI chips and embedded processors to support Alibaba's developing cloud and Internet of Things (IoT) business. These chips could be used in various industries such as vehicles, home appliances, and manufacturing. To know more about Ping-Tou-Ge in detail, visit the MIT Technology Review blog. OpenSky is now a part of the Alibaba family. Alibaba Cloud partners with SAP to provide a versatile, one-stop cloud computing environment. Why Alibaba cloud could be the dark horse in the public cloud race.


Tencent creates two Artificial Intelligence agents TSTARBOT that defeat StarCraft II's cheater AI

Sugandha Lahoti
24 Sep 2018
4 min read
AI researchers have always been thrilled about building Artificial Intelligence bots that can play games as smartly as a human. Back in June, OpenAI Five, the artificial intelligence bot team, smashed amateur humans at the video game Dota 2. In August, the OpenAI Five bots beat a team of former pros at Dota 2. Now, researchers from Chinese tech giant Tencent have developed a pair of AI agents that successfully beat the cheating-level built-in AI in the full game of StarCraft II. StarCraft II is widely considered the most challenging real-time strategy game, due to its large observation space, huge action space, partial observability, multi-player simultaneous game model, and long-horizon decision making.

The two AI agents

The research paper describes two AI agents: TSTARBOT 1 and TSTARBOT 2. The first is a macro-level controller agent based on deep reinforcement learning over a flat action structure. It oversees several specific algorithms designed to handle lower-level functions.

TSTARBOT 1: Overview of macro actions and reinforcement learning

TSTARBOT 2, the more robust of the two, is a macro-micro controller consisting of several modules that handle entire facets of the gameplay independently.

TSTARBOT 2: Overview of macro-micro hierarchical actions

The gameplay

Tencent's AI played the game using methods similar to mouse clicks and macros, playing in much the same way as a human player would. The AI saw the game by interpreting video output on a frame-by-frame basis and translating the information into data it could work with. Tencent's AI played StarCraft II with the "fog of war" turned on, meaning the AI cannot see the enemy AI's units and base until it scouts the map. The TSTARBOTs were designed to imitate the human thought process. The agents were tested in a 1v1 Zerg-vs-Zerg full game. They played against built-in AIs ranging from level 1 (the easiest) to level 10 (the hardest). The training used the Abyssal Reef map, known to have thwarted neural network AIs from winning against StarCraft II's built-in AIs. Interestingly, Tencent trained the agents using only a single CPU for learning; generating the billions of frames of video needed to train the bots, however, took a large number of processors. The researchers used 1,920 parallel actors (with 3,840 CPUs across 80 machines) to generate the replay transitions, at a speed of about 16,000 frames per second.

Results

The paper reports the win-rate (in %) of the TSTARBOT 1 and TSTARBOT 2 agents against built-in AIs of various difficulty levels. Each reported win-rate is obtained by taking the mean of 200 games with different random seeds, where a tie is counted as 0.5 when calculating the win-rate (see the short sketch at the end of this article). TSTARBOT 1 and TSTARBOT 2 also played against several human players ranging from Platinum to Diamond level in the ranking system of the SCII Battle.net League; each entry in the reported TSTARBOTs vs. human players table shows how many games TStarBot1/TStarBot2 won and lost. The agents were able to consistently defeat built-in AIs at all levels, showing the effectiveness of the hierarchical action modeling. In another informal test, the researchers also let the two TSTARBOTs play against each other. TSTARBOT 1 always defeated TSTARBOT 2, because TSTARBOT 1 tends to use the Zergling rush strategy. In StarCraft, a Zerg rush is a strategy where a player using the Zerg race tries to overwhelm the opponent with large numbers of smaller units before the enemy is fully prepared for battle. TSTARBOT 2 lacks an anti-rush strategy and hence always loses.

In the future, the team plans to build a more carefully hand-tuned action hierarchy to enable the reinforcement learning algorithms to develop better strategies for full StarCraft II games. If you want to dive a little deeper into how the bots work, you can read the research paper. AI beats human again – this time in a team-based strategy game. OpenAI Five loses against humans in Dota 2 at The International 2018. OpenAI set their eyes to beat Professional Dota 2 team at The International.
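For reference, the win-rate metric described above (the mean over 200 games, with a tie counted as 0.5) works out as in this small sketch; the game outcomes below are made up for illustration and are not the paper's numbers.

```python
# Win-rate as reported in the paper: mean over games, a tie counts as 0.5.
# The outcomes below are hypothetical placeholders, not the paper's results.
def win_rate(results):
    """results: iterable of 'win', 'tie', or 'loss'."""
    score = {"win": 1.0, "tie": 0.5, "loss": 0.0}
    return sum(score[r] for r in results) / len(results)

games = ["win"] * 180 + ["tie"] * 10 + ["loss"] * 10   # 200 made-up games
print(f"{win_rate(games):.1%}")   # -> 92.5%
```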

PostGIS 2.5.0 is here!

Natasha Mathur
24 Sep 2018
2 min read
The PostGIS development team released version 2.5.0 of PostGIS, a spatial database extender for PostgreSQL, yesterday. PostGIS 2.5.0 brings new features, breaking changes, improvements, and fixes. PostGIS is an open source program that adds support for geographic objects to the PostgreSQL object-relational database and implements the Simple Features for SQL specification from the Open Geospatial Consortium (OGC).

New Features

ST_OrientedEnvelope (returns a minimum rotated rectangle enclosing a geometry) and ST_QuantizeCoordinates (sets the least significant bits of coordinates to zero) have been added in PostGIS 2.5.0. Apart from that, other new features such as ST_FilterByM (filters vertex points based on their m-value) and ST_ChaikinSmoothing (returns a "smoothed" version of the given geometry using the Chaikin algorithm) have also been added.

Breaking Changes

The version number has been removed from the address_standardize lib file in PostGIS 2.5.0. The raster support functions can now be loaded only in the same schema as the core PostGIS functions. The dummy pgis_abs type has been removed from the aggregate/collect routines. Support has been dropped for GEOS < 3.5 and PostgreSQL < 9.4.

Improvements and bug fixes

There has been a performance improvement for sorting POINT geometries in PostGIS 2.5.0. An external raster band index has been added to ST_BandMetaData. There is also a new Raster Tips section in the documentation with information about raster behavior (e.g. Out-DB performance, maximum open files). The use of GEOS in the topology implementation has been reduced. A bug that created MVTs with incorrect property values under parallel plans has been fixed. Geometry is now simplified using the map grid cell size before generating MVTs. A BTree sort order is now defined on collections of EMPTY and same-prefix geometries. Hashable geometry enables direct use in CTE signatures. PostGIS 2.5.0 will no longer accept EMPTY points as topology nodes. ST_GeometricMedian now supports point weights. Duplicated code in lwgeom_geos has been removed.

For more information on other updates and fixes in PostGIS 2.5.0, check out the official release notes. Writing PostGIS functions in Python language [Tutorial]. Adding PostGIS layers using QGIS [Tutorial]. PostGIS extension: pgRouting for calculating driving distance [Tutorial].
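To make a couple of the new functions listed above concrete, here is a minimal sketch; it assumes a PostgreSQL database with PostGIS 2.5.0 installed and Python's psycopg2 available, and the connection parameters are placeholders.

```python
# Minimal sketch: call two of the new PostGIS 2.5.0 functions from Python.
# Connection parameters are hypothetical placeholders.
import psycopg2

conn = psycopg2.connect(dbname="gisdb", user="gis", password="secret", host="localhost")
cur = conn.cursor()

# Minimum rotated rectangle enclosing a geometry
cur.execute("SELECT ST_AsText(ST_OrientedEnvelope("
            "'MULTIPOINT ((0 0), (1 1), (3 2))'::geometry));")
print(cur.fetchone()[0])

# "Smoothed" version of a line via the Chaikin algorithm
cur.execute("SELECT ST_AsText(ST_ChaikinSmoothing("
            "'LINESTRING (0 0, 8 8, 0 16)'::geometry));")
print(cur.fetchone()[0])

cur.close()
conn.close()
```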


MariaDB acquires Clustrix to give database customers ‘freedom from Oracle lock-in’

Savia Lobo
21 Sep 2018
2 min read
Today, MariaDB Corp. announced that it has acquired Clustrix, a San Francisco-based database company. With this acquisition, MariaDB plans to add a scale-out capability that runs on-premises with commodity hardware or in any cloud environment. This, in turn, will provide greater scalability and higher availability than traditional distributed options such as Oracle RAC. MariaDB is an adaptable solution because its architecture supports pluggable, purpose-built storage engines. Clustrix is a provider of a relational database engineered for the cloud and the datacenter. Known for its scalability, ClustrixDB is an ideal database solution for high-transaction, high-value applications.

How will the Clustrix acquisition benefit MariaDB?

Michael Howard, CEO of MariaDB Corporation, says, "With Clustrix, MariaDB can provide a better solution for our customers that have challenging scale-out enterprise environments. Our distributed solution will satisfy the most extreme requirements of our largest customers and gives them the freedom to break from Oracle's lock-in." The company believes that the acquisition of Clustrix will allow MariaDB Labs to tackle the most extreme challenges in the database field, spanning distributed computing, machine learning, next-generation chips, and memory and storage environments. Both Clustrix and MariaDB share the same vocabulary and work on similar problems, and both aim to be compatible with MySQL; this makes Clustrix a good fit for MariaDB. Howard says, "the database was always built to accommodate external database storage engines. MariaDB will have to make some changes to its APIs to be ready for the clustering features of Clustrix. It's not going to be a 1-2-3 effort, it's going to be a heavy-duty effort for us to do this right. But everyone on the team wants to do it because it's good for the company and our customers." This simply means integrating the Clustrix database technology into MariaDB won't be trivial.

To know more about this acquisition in detail, visit the MariaDB blog. MariaDB 10.3.7 releases. Building a Web Application with PHP and MariaDB – Introduction to caching. Installing MariaDB on Windows and Mac OS X.


Facebook researchers build a persona-based dialog dataset with 5M personas to train end-to-end dialogue systems

Natasha Mathur
21 Sep 2018
3 min read
Facebook researchers have collected and compiled a new dataset providing 5 million personas and 700 million persona-based dialogues. The aim of this dataset is to enhance the performance of end-to-end dialogue systems by training them with personas, resulting in increased and improved engagement between human beings and computer agents.

End-to-end dialogue systems are mostly based on neural architectures like bidirectional LSTMs or Memory Networks. These are trained directly by gradient descent on dialogue logs and have been showing promising performance in multiple contexts. One of their major advantages is that they rely on large sources of existing dialogues to learn various domains without any expert knowledge. However, these dialogue systems show limited engagement and a lack of consistency.

To solve this issue, a team of researchers at the Montreal Institute for Learning Algorithms (MILA) and Facebook AI introduced the PERSONA-CHAT dataset. This dataset comprises dialogues between pairs of agents with text profiles, or personas, attached to each of them. This paves the way for end-to-end personalized chatbots, since the personas of the bots are short texts that could easily be edited by most users. "However, the PERSONA-CHAT dataset was created using an artificial data collection mechanism based on Mechanical Turk. As a result, neither dialogs nor personas can be fully representative of real user-bot interactions and the dataset coverage remains limited, containing a bit more than 1k different personas," reads the research paper.

So, the researchers have built another large-scale persona-based dialogue dataset using conversations extracted from the REDDIT dataset. "With simple heuristics, we create a corpus of over 5 million personas spanning more than 700 million conversations. We train persona-based end-to-end dialogue models on this dataset," the researchers mention in the paper. Read also: Best Machine Learning Datasets for beginners.

The goal is to learn to predict responses based on a persona, for a wide range of personas. The researchers built the dataset from Reddit data, with examples of the following form:

Persona: ["I like sport", "I work a lot"]
Context: "I love running."
Response: "Me too! But only on weekends."

Persona-based Network Architecture

The persona consists of a set of sentences representing the personality of the responding agent, the context is the utterance it responds to, and the response is the answer to be predicted. The researchers then trained persona-based end-to-end dialogue systems on their newly developed dataset. Systems trained on this dataset outperformed other conversational agents (which were not trained using personas) and held far more engaging conversations. "As pretraining leads to a considerable improvement in performance, future work could be done fine-tuning this model for various dialog systems. Future work may also entail building more advanced strategies to select a limited number of personas for each user while maximizing the prediction performance," the researchers say in the paper.

For more details, check out the official research paper. Google launches a Dataset Search Engine for finding Datasets on the Internet. How to create and prepare your first dataset in Salesforce Einstein. 25 Datasets for Deep Learning in IoT.
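A minimal sketch of the (persona, context, response) record shape quoted above, using the article's own example; the field names and the simple persona-plus-context encoding are illustrative assumptions, not the dataset's actual schema.

```python
# Illustrative only: one record of the form described in the article.
# Field names and the concatenation scheme are assumptions for this sketch.
example = {
    "persona": ["I like sport", "I work a lot"],
    "context": "I love running.",
    "response": "Me too! But only on weekends.",
}

def build_model_input(record):
    """One simple way to condition a response model on a persona:
    prepend the persona sentences to the context utterance."""
    return " ".join(record["persona"]) + " [SEP] " + record["context"]

print(build_model_input(example))
# -> "I like sport I work a lot [SEP] I love running."
```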


OpenCV 4.0 alpha release out!

Pravin Dhandre
21 Sep 2018
2 min read
More than three years after the release of OpenCV 3.0, the team has announced the alpha release of the much-awaited OpenCV 4.0. The new version is set to include exclusive features such as a 3D dense reconstruction algorithm, along with the newest improvements and bug fixes from the recent OpenCV 3.4 maintenance releases.

Key features:

- OpenCV is now a C++11 library; default features include lambda functions, convenient iteration, and initialization of cv::Mat
- A new chessboard detector has been added
- An exclusive HPX parallel backend and basic FP16 support have been added
- Standard std::string and std::shared_ptr replace the hand-crafted cv::String and cv::Ptr
- parallel_for can now use a pool of std::threads as a backend

Major improvements and bug fixes:

- An ONNX parser has been added to the existing OpenCV DNN module
- Support for various classification networks: AlexNet, Inception v2, ResNet, VGG
- Partial support for the YOLO v2 object detection network
- Faster object detection using the Intel Inference Engine, part of Intel OpenVINO
- Stability improvements in the OpenCL backend
- A fast QR code detector, with support for a QR code decoder to be added soon
- SSE4-, AVX2- and NEON-optimized kernels have been expanded
- The legacy C API from OpenCV 1.x has been partially excluded

This alpha release appears to be a massive version with 85 patches, including 28 merge requests. The release is assumed to be quite stable, although a few changes to the OpenCV API and implementation are expected before the 4.0 final release. For more information on the detailed list of features and improvements, please read the official documentation. Image filtering techniques in OpenCV. 3 ways to deploy a QT and OpenCV application. OpenCV and Android: Making Your Apps See.
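As one small example of the QR code detector mentioned above, a minimal sketch assuming the OpenCV 4.0 Python bindings are installed and an image file "qr.png" exists on disk; since the notes say decoding support is still to come, only detection is shown.

```python
# Minimal sketch: try the new QR code detector via the Python bindings.
# "qr.png" is a hypothetical input image; decoding is slated to follow,
# per the release notes, so only detection is exercised here.
import cv2

img = cv2.imread("qr.png")
detector = cv2.QRCodeDetector()
found, points = detector.detect(img)   # points: corners of the detected code
print("QR code detected" if found else "no QR code found")
```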

Kotlin 1.3 RC1 is here with compiler and IDE improvements

Natasha Mathur
21 Sep 2018
2 min read
The Kotlin team has come out with release candidate 1.3 of the Kotlin language. Kotlin 1.3 RC1 brings improvements and changes to the compiler and to the IDE, IntelliJ IDEA. Let's discuss the key updates in Kotlin 1.3 RC1.

Compiler changes

Improvements: Support has been added for a main entry point without arguments in the frontend, IDE, and JVM. There is also added support for a suspend fun main function on the JVM. The boxing technique has changed: instead of calling valueOf, a new wrapper type will be allocated.

Bug fixes: The invoke function that kept getting called with a lambda parameter on a field named suspend has been fixed. With Kotlin 1.3 RC1, correct WhenMappings code is generated in the case of mixed enum classes in when conditions. The use of KSuspendFunctionN and SuspendFunctionN as supertypes has been forbidden, and suspend functions annotated with @kotlin.test.Test have been forbidden as well. Use of kotlin.Result as a return type and with special operators has been prohibited. Constructors containing inline classes as parameter types will now be generated as private with synthetic accessors. A missing unboxing of an inline class when indexing into an ArrayList has been fixed.

IDE changes

Support has been added for type parameters in the where clause (multiple type constraints).

Bug fixes: The issue where the @Language prefix and suffix were getting ignored for function arguments has been fixed. The coroutine migrator now renames buildSequence/buildIterator to their new names. A deadlock in databinding with AndroidX, which led to Android Studio hanging, has been fixed. The issue of an Android module in a multiplatform project not being recognized as a multiplatform module has been fixed. Multiplatform projects without an Android target were not being imported properly into Android Studio; this has been fixed in Kotlin 1.3 RC1. IDEA used to hang when the Kotlin bytecode tool window remained open while editing a class with a secondary constructor; this is fixed now.

IDE multi-platform: Old multiplatform module templates have been removed from the New Project/New Module wizard. An actual type alias for ConcurrentModificationException has been introduced in the JVM library.

There are more changes and improvements in Kotlin 1.3 RC1. Check out the Kotlin 1.3 RC official release notes for the complete list. Building RESTful web services with Kotlin. Kotlin 1.3 M1 arrives with coroutines and new experimental features like unsigned integer types. IntelliJ IDEA 2018.3 Early Access Program is now open!


Bitcoin Core escapes a collapse from a Denial-of-Service vulnerability

Savia Lobo
21 Sep 2018
2 min read
A few days back, Bitcoin Core developers discovered a vulnerability in the Bitcoin Core software that would have allowed a miner to insert a 'poisoned block' into its blockchain, crashing the nodes running the Bitcoin software around the world. The software patch notes state, "A denial-of-service vulnerability (CVE-2018-17144) exploitable by miners has been discovered in Bitcoin Core versions 0.14.0 up to 0.16.2." The developers further recommend that users upgrade any of the vulnerable versions to 0.16.3 as soon as possible.

CVE-2018-17144: the denial-of-service vulnerability

The vulnerability was introduced in Bitcoin Core version 0.14.0, first released in March 2017, but the issue wasn't found until just two days ago, prompting contributors to the codebase to take action and ultimately release a tested fix within 24 hours. In a report by The Next Web: "The bug relates to its consensus code. It meant that some miners had the option to send transaction data twice, causing the Bitcoin network to crash when attempting to validate them. As such invalid blocks need to be mined anyway, only those willing to disregard block reward of 12.5BTC ($80,000) could actually do any real damage." Moreover, the bug was not limited to Bitcoin's most popular software implementation: some cryptocurrencies built using Bitcoin Core's code were also affected. For example, Litecoin patched the same vulnerability on Tuesday. The Bitcoin network, however, is far too decentralized to be brought down by any single entity. TNW also states, "While never convenient, responding appropriately to such potential dangers is crucial to maintaining the integrity of blockchain tech – especially when reversing transactions is not an option." The timely discovery of this vulnerability was nevertheless a narrow escape from a potential Bitcoin collapse.

To read about this news in detail, head over to The Next Web's full coverage. A Guide to safe cryptocurrency trading. Apple changes app store guidelines on cryptocurrency mining. Crypto-ML, a machine learning powered cryptocurrency platform.
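For illustration only (this is a conceptual sketch, not Bitcoin Core's actual validation code): the class of bug described above amounts to a validation path that fails to reject a transaction referencing the same previous output more than once; a correct check rejects it before the block can crash a node.

```python
# Conceptual sketch, not Bitcoin Core's real code: reject a transaction
# that spends the same previous output ("outpoint") more than once.
def has_duplicate_inputs(tx_inputs):
    """tx_inputs: list of (previous_txid, output_index) tuples."""
    seen = set()
    for outpoint in tx_inputs:
        if outpoint in seen:
            return True          # same output referenced twice -> invalid
        seen.add(outpoint)
    return False

# Hypothetical transaction with a duplicated input reference
inputs = [("ab" * 32, 0), ("cd" * 32, 1), ("ab" * 32, 0)]
print("invalid" if has_duplicate_inputs(inputs) else "valid")   # -> invalid
```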


Pytorch.org revamps for Pytorch 1.0 with design changes and added Static graph support

Natasha Mathur
21 Sep 2018
2 min read
The PyTorch team updated its official website, pytorch.org, for PyTorch 1.0 yesterday. The update comprises minor changes to the overall look and feel of the website. In addition, more information has been added under the Tutorials section on converting PyTorch models to a static graph. PyTorch is a Python-based scientific computing package which uses the power of graphics processing units. It is also one of the preferred deep learning research platforms, built to offer maximum flexibility and speed.

Key updates

Design changes: The layout of the webpage is still the same, but the colors have changed and additional tabs have been included at the top of the page. Previously, there were only five tabs: Get Started, About, Support, Discuss, and Docs. Now there are eight: Get Started, Features, Ecosystem, Blog, Tutorials, Docs, Resources, and Github. (Screenshots: the revamped pytorch.org and the older pytorch.org.)

Updated tutorials: With the new Tutorials tab, additional information has been provided for users to convert their models into a static graph, a feature in the upcoming PyTorch 1.0 version.

Added static graph support: One of the main differences between TensorFlow and PyTorch is that TensorFlow uses static computational graphs while PyTorch uses dynamic computational graphs. In TensorFlow, we first set up the computational graph, then execute the same graph many times. An additional section on static graphs has been added under Tutorials; its example implementation uses basic TensorFlow operations to set up a computational graph, then executes the graph many times to actually train a fully-connected ReLU network.

For more details on the changes, visit the official PyTorch website. What is PyTorch and how does it work? Can a production ready PyTorch 1.0 give TensorFlow a tough time? PyTorch 0.3.0 releases, ending stochastic functions.
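To make the static-versus-dynamic distinction above concrete, here is a minimal sketch of PyTorch's dynamic style, assuming PyTorch is installed: the graph is rebuilt on every forward pass, so ordinary Python control flow can change it from one step to the next.

```python
# Minimal sketch of dynamic graphs in PyTorch: the computation recorded for
# autograd can differ on every iteration, driven by plain Python control flow.
import torch

x = torch.randn(3, requires_grad=True)
for step in range(4):
    # A different graph is built depending on the step number.
    y = (x * 2).sum() if step % 2 == 0 else (x ** 2).sum()
    y.backward()                 # gradients for whichever graph was built
    print(step, x.grad.tolist())
    x.grad.zero_()               # reset before the next, possibly different, graph
```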

Facebook Dating app to release as a test version in Colombia

Sugandha Lahoti
21 Sep 2018
3 min read
Facebook first announced at its F8 conference that it is testing a new dating feature within the main Facebook app. This dating and relationships feature will allow people to create a dating profile that is separate from their Facebook profile. Potential matches in Facebook Dating will be recommended based on dating preferences, your location, shared page likes, similarities between your Facebook profiles, and mutual friends.

Starting today, Facebook is rolling out a country-wide test of its new Dating feature in Colombia. Users can browse for matches within a 100 km radius. Users have to input their location, which Facebook will verify through their phone's GPS; they won't be able to change their virtual location to browse for matches elsewhere. For now, Facebook Dating is mobile-only. It will be available entirely within Facebook's Android and iOS apps to users 18 and older, free to use. Per Facebook's guidelines, the Dating feature will be available only if users opt in to it. They need to create a separate dating profile, which includes a more limited amount of personal information.

Once you start browsing, Facebook's recommendation algorithm will show you friends of friends or people with whom you have fewer mutual friends. Your existing friends and the people you have blocked are excluded from the dating pool; however, you can still find a potential soulmate in a person you've unfriended. When browsing, you'll see a selection of people you can "express interest" in by tapping a photo or an answer to a question prompt. You can then add a note and send it to the person you're interested in. The recipient can either send a message back or ignore it. Users can send as many messages as they want, but can visit only up to 100 profiles a day. Messages are limited to text and emoji, for which Facebook Dating uses its own messaging interface, not Facebook Messenger.

Facebook Dating also allows users to expand their dating pool through integration with Facebook Groups and Events. Users can opt in to showing their Dating profile to members of a group they expect to find good matches in. The same goes for Facebook Events, where the feature is enabled for both upcoming and past events. If the test goes well, Facebook may roll Dating out to more countries shortly, in its mission to create meaningful connections. According to Product Manager Nathan Sharp, "If Dating takes off in Colombia, it could be promoted to a more prominent place within the app, or even to an app of its own. Our goal is to make Facebook the single best place to start a relationship. It could get there eventually — but it will need to evolve along the way."

F8 AR Announcements. ACLU sues Facebook for enabling sex and age discrimination through targeted ads. Sex robots, artificial intelligence, and ethics: How desire shapes and is shaped by algorithms.


RxDB 8.0.0, a reactive, offline-first, multiplatform database for JavaScript released!

Bhagyashree R
20 Sep 2018
2 min read
After the release of RxDB 8.0.0-beta.1 earlier this month, the RxDB community released RxDB 8.0.0 yesterday. The focus of this release is better defaults and improved performance, with broadcast-channel used for communication. RxDB is a reactive, offline-first, multiplatform database for JavaScript.

What's new in RxDB 8.0.0?

Breaking changes

- RxDB has upgraded to pouchdb 7.0.0, its latest version
- As disableKeyCompression was not used by many users, key compression is now disabled by default and the option has been renamed to keyCompression
- RxDatabase.collection() now only takes the json-schema as the schema attribute
- To comply with the json-schema standard, it is no longer allowed to mark fields as required with required: true; instead, use required: ['myfield']
- Setters and save() are no longer allowed on non-temporary documents. To change document data, use RxDocument.atomicUpdate(), RxDocument.atomicSet(), or RxDocument.update()
- The document methods RxDocument.synced$ and RxDocument.resync() have been removed
- Middleware hooks now receive plain JSON as the first parameter and the RxDocument as the second
- QueryChangeDetection is now enabled by adding the boolean field queryChangeDetection: true when creating the database

Additional improvements

- RxDocument.atomicSet()
- RxCollection.awaitPersistence()
- Option for CORS in the server plugin
- All methods of RxDocument are bound to the instance
- Added RxReplicationState.denied$, which emits when a document fails to replicate
- Added RxReplicationState.alive$, which emits true or false depending on whether the replication is alive, that is, whether data is being transmitted properly between databases

Miscellaneous changes

- Performance is improved by enabling cross-instance communication with broadcast-channel
- Upgraded to eslint 5 and babel 7

To read the full list of changes, check out RxDB's GitHub repository. Introducing TimescaleDB 1.0, the first OS time-series database with full SQL support. Introducing Watermelon DB: A new relational database to make your React and React Native apps highly scalable. MongoDB 4.0 now generally available with support for multi-platform, mobile, ACID transactions and more.