TensorFlow Machine Learning Cookbook

You're reading from TensorFlow Machine Learning Cookbook: Over 60 recipes to build intelligent machine learning systems with the power of Python

Product type: Paperback
Published: Aug 2018
Publisher: Packt
ISBN-13: 9781789131680
Length: 422 pages
Edition: 2nd Edition
Authors (2): Nick McClure, Sujit Pal
Table of Contents (19)

Title Page
Copyright and Credits
Dedication
Packt Upsell
Contributors
Preface
1. Getting Started with TensorFlow
2. The TensorFlow Way
3. Linear Regression
4. Support Vector Machines
5. Nearest-Neighbor Methods
6. Neural Networks
7. Natural Language Processing
8. Convolutional Neural Networks
9. Recurrent Neural Networks
10. Taking TensorFlow to Production
11. More with TensorFlow
Other Books You May Enjoy
Index

Working with gates and activation functions


Now that we can link together operational gates, we want to run the computational graph output through an activation function. In this section, we will introduce common activation functions.
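For instance, an activation is just one more operation appended to the graph. As a minimal illustration (the input values here are our own, not the book's):

    import tensorflow as tf

    sess = tf.Session()
    x = tf.constant([-1.0, 0.0, 2.0])

    # Sigmoid squashes values into (0, 1); ReLU zeroes out negatives
    print(sess.run(tf.sigmoid(x)))  # approx. [0.269, 0.5, 0.881]
    print(sess.run(tf.nn.relu(x)))  # [0.0, 0.0, 2.0]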

Getting ready

In this section, we will compare and contrast two different activation functions: the sigmoid and the rectified linear unit (ReLU). Recall that the two functions are given by the following equations:

    sigmoid(x) = 1 / (1 + exp(-x))
    ReLU(x) = max(0, x)

In this example, we will create two one-layer neural networks with the same structure, except that one will feed through the sigmoid activation and one will feed through the ReLU activation. The loss function will be governed by the L2 distance from the value 0.75. We will randomly pull batch data from a normal distribution (Normal(mean=2, sd=0.1)) and then optimize the output towards 0.75.
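As a rough, self-contained sketch of this setup (assuming the TensorFlow 1.x graph API this edition targets; the variable names, seed, learning rate, and iteration count are illustrative choices, not the book's exact listing):

    import numpy as np
    import tensorflow as tf

    sess = tf.Session()
    np.random.seed(42)  # illustrative seed

    batch_size = 50
    x_vals = np.random.normal(2, 0.1, 500)  # batch data drawn from Normal(mean=2, sd=0.1)
    x_data = tf.placeholder(shape=[None, 1], dtype=tf.float32)

    # Two one-layer networks with identical structure: multiply, add, then activate
    a1 = tf.Variable(tf.random_normal(shape=[1, 1]))
    b1 = tf.Variable(tf.random_uniform(shape=[1, 1]))
    a2 = tf.Variable(tf.random_normal(shape=[1, 1]))
    b2 = tf.Variable(tf.random_uniform(shape=[1, 1]))

    sigmoid_out = tf.sigmoid(tf.add(tf.matmul(x_data, a1), b1))
    relu_out = tf.nn.relu(tf.add(tf.matmul(x_data, a2), b2))

    # L2 distance of each network's output from the target value 0.75
    loss_sigmoid = tf.reduce_mean(tf.square(sigmoid_out - 0.75))
    loss_relu = tf.reduce_mean(tf.square(relu_out - 0.75))

    opt = tf.train.GradientDescentOptimizer(0.01)
    train_sigmoid = opt.minimize(loss_sigmoid)
    train_relu = opt.minimize(loss_relu)

    sess.run(tf.global_variables_initializer())

    for i in range(750):
        # Randomly pull a batch and push both outputs toward 0.75
        idx = np.random.choice(len(x_vals), size=batch_size)
        x_batch = np.transpose([x_vals[idx]])
        sess.run(train_sigmoid, feed_dict={x_data: x_batch})
        sess.run(train_relu, feed_dict={x_data: x_batch})

Because the sigmoid saturates, its output tends to approach 0.75 more slowly than the ReLU network's output, which is the contrast this recipe sets out to demonstrate.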

How to do it...

We proceed with the recipe as follows:

  1. We will start by loading the necessary libraries and initializing a graph. This is also a good point at which we can...
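One plausible shape for this first step, under the same TF 1.x assumption as the sketch above (the matplotlib import is a guess, for plotting loss curves later in the recipe):

    import matplotlib.pyplot as plt
    import numpy as np
    import tensorflow as tf

    # Initialize a graph session
    sess = tf.Session()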