Hands-On Graph Analytics with Neo4j

You're reading from Hands-On Graph Analytics with Neo4j: Perform graph processing and visualization techniques using connected data across your enterprise.

Product type: Paperback
Published: Aug 2020
Publisher: Packt
ISBN-13: 9781839212611
Length: 510 pages
Edition: 1st
Author: Scifo
Table of Contents (18 chapters)

Preface
1. Section 1: Graph Modeling with Neo4j
2. Graph Databases
3. The Cypher Query Language
4. Empowering Your Business with Pure Cypher
5. Section 2: Graph Algorithms
6. The Graph Data Science Library and Path Finding
7. Spatial Data
8. Node Importance
9. Community Detection and Similarity Measures
10. Section 3: Machine Learning on Graphs
11. Using Graph-based Features in Machine Learning
12. Predicting Relationships
13. Graph Embedding - from Graphs to Matrices
14. Section 4: Neo4j for Production
15. Using Neo4j in Your Web Application
16. Neo4j at Scale
17. Other Books You May Enjoy

Neurons, layers, and forward propagation

To understand how neural networks work, let's consider a simple classification problem with two input features, x1 and x2, and a single binary output, O, whose value can be 0 or 1.

A simple neural network that can be used to solve this problem is illustrated in the following diagram:

The neural network has a single hidden layer comprising three neurons (the middle layer in the diagram), followed by the output layer (the right-most layer). Each neuron of the hidden layer has a vector of weights, wi, whose size is equal to the size of the input layer. These weights are the model parameters that we will try to optimize so that the network's output is close to the truth.
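
As a rough illustration (not taken from the book), these parameters could be laid out as NumPy arrays. The shapes follow the diagram (two inputs, three hidden neurons, one output neuron), while the variable names and the random initialization are assumptions made here for the sake of the example:

import numpy as np

# Hypothetical parameters for the network in the diagram:
# 2 input features -> 3 hidden neurons -> 1 output neuron.
rng = np.random.default_rng(42)
W_hidden = rng.normal(size=(3, 2))  # one weight vector of length 2 per hidden neuron
b_hidden = np.zeros(3)              # one bias per hidden neuron
W_output = rng.normal(size=(1, 3))  # output neuron weights over the 3 hidden activations
b_output = np.zeros(1)              # output neuron bias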

When an observation is fed into the network through the input layer, the following happens:

  1. Each neuron of the hidden layer receives all the input features.
  2. Each neuron of the hidden layer computes the weighted sum of these features, using its weight vector and bias (see the sketch after this list). As an example, the output of the first...
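
A minimal sketch of this forward pass, assuming a sigmoid activation and randomly initialized parameters (none of these names or choices come from the book):

import numpy as np

def sigmoid(z):
    # Squashes any real value into the interval (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, W_hidden, b_hidden, W_output, b_output):
    # Steps 1-2: every hidden neuron sees the full input vector and
    # computes its weighted sum plus bias, then an activation.
    hidden = sigmoid(W_hidden @ x + b_hidden)
    # The output neuron repeats the same operation on the hidden
    # activations; the result can be read as P(O = 1).
    return sigmoid(W_output @ hidden + b_output)

# One observation with features x1 and x2.
x = np.array([0.5, -1.2])
rng = np.random.default_rng(42)
W_hidden, b_hidden = rng.normal(size=(3, 2)), np.zeros(3)
W_output, b_output = rng.normal(size=(1, 3)), np.zeros(1)
print(forward(x, W_hidden, b_hidden, W_output, b_output))  # a 1-element array in (0, 1)

The hidden layer's work is written here as a single matrix-vector product, which is simply the per-neuron weighted sum from step 2 carried out for all three neurons at once.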