Hands-On Convolutional Neural Networks with TensorFlow

You're reading from Hands-On Convolutional Neural Networks with TensorFlow: Solve computer vision problems with modeling in TensorFlow and Python

Product type: Paperback
Published in: Aug 2018
Publisher: Packt
ISBN-13: 9781789130331
Length: 272 pages
Edition: 1st Edition
Authors (5): Araujo, Zafar, Tzanidou, Burton, Patel
Table of Contents (17)

Title Page
Copyright and Credits
Packt Upsell
Contributors
Preface
1. Setup and Introduction to TensorFlow
2. Deep Learning and Convolutional Neural Networks
3. Image Classification in TensorFlow
4. Object Detection and Segmentation
5. VGG, Inception Modules, Residuals, and MobileNets
6. Autoencoders, Variational Autoencoders, and Generative Adversarial Networks
7. Transfer Learning
8. Machine Learning Best Practices and Troubleshooting
9. Training at Scale
References
Other Books You May Enjoy
Index

Creating TensorFlow graphs


Now that our data is all set up, we can construct a model that will learn how to classify iris flowers. We'll build one of the simplest machine learning models: a linear classifier.

A linear classifier works by calculating the dot product between an input feature vector x and a weight vector w. After calculating the dot product, we add a value to the result called a bias term, b. In our case, there are three possible classes that any input feature vector could belong to, so we need to compute three different dot products, with w1, w2, and w3, to see which class the input belongs to. Rather than writing out three separate dot products, however, we can perform a single matrix multiplication between a weight matrix of shape [3, 4] and our input vector.
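As a quick sanity check of this idea (a NumPy sketch with illustrative values, not the book's TensorFlow code), stacking the three weight vectors as rows of a [3, 4] matrix reproduces the three separate dot products exactly:

```python
import numpy as np

# A single iris feature vector of length 4 (illustrative values).
x = np.array([5.1, 3.5, 1.4, 0.2])

# Three per-class weight vectors, each of length 4 (illustrative values).
w1 = np.array([0.1, 0.2, 0.3, 0.4])
w2 = np.array([0.5, 0.4, 0.3, 0.2])
w3 = np.array([0.2, 0.1, 0.4, 0.3])
b = np.array([0.1, 0.2, 0.3])  # one bias term per class

# Three separate dot products, one per class.
separate = np.array([w1 @ x, w2 @ x, w3 @ x]) + b

# The same computation as one matrix multiply with a [3, 4] weight matrix.
W = np.stack([w1, w2, w3])  # shape [3, 4]
combined = W @ x + b        # shape [3]

print(np.allclose(separate, combined))  # True
```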

We can simplify this down to the more compact form s = Wx + b, where our weight matrix is W, the bias is b, x is our input feature vector, and the resulting output is s.

Variables

How do we write all this out in TensorFlow code? Let's start by creating our weights and biases. In TensorFlow, if we want to create Tensors that can be manipulated by our code, we need to use TensorFlow variables. TensorFlow variables are instances of the tf.Variable class. A tf.Variable represents a tf.Tensor object whose value can be changed by running TensorFlow operations on it. Variables are Tensor-like objects, so they can be passed around in the same way as Tensors, and any operation that can be used with a Tensor can be used with a variable.

To create a variable, we can use tf.get_variable(). When you call this function, you must supply a name for your variable. This function will first check that there is no other variable with the same name already on the graph, and if there isn't, then it will create and add a new one to the TensorFlow graph.

You must also specify the shape that you want your variable to have; alternatively, you can initialize your variable from a tf.constant Tensor. The variable will take the value of the constant Tensor, and the shape will be inferred automatically. For example, the following will produce a Tensor of shape [2] containing the values 21 and 25:

my_variable = tf.get_variable(name="my_variable", initializer=tf.constant([21, 25]))

Operations

It's all well and good having variables in our graph, but we also want to do something with them. We can use TensorFlow ops to manipulate our variables.

As explained, our linear classifier is just a matrix multiplication, so the first op you will use is, fittingly, the matrix multiply op. Simply call tf.matmul() on the two Tensors you want to multiply, and the result will be their matrix product. Simple!

Throughout this book, you will learn about many different TensorFlow ops that you will need to use.

Now that you have some understanding of variables and ops, let's construct our linear model. We'll define the model within a function. The function takes as input a batch of N feature vectors; as each feature vector has length 4, the batch is a Tensor of shape [N, 4]. The function then returns the output of our linear model. The following code defines our linear model function; it should be self-explanatory, but keep reading if anything is unclear.

def linear_model(input):
    # Create variables for our weights and biases
    my_weights = tf.get_variable(name="weights", shape=[4, 3])
    my_bias = tf.get_variable(name="bias", shape=[3])

    # Create a linear classifier.
    linear_layer = tf.matmul(input, my_weights)
    linear_layer_out = tf.nn.bias_add(value=linear_layer, bias=my_bias)
    return linear_layer_out

In the code here, we create variables that will store our weights and biases. We give them names and supply the required shapes. Remember we are using variables as we want to manipulate their values using operations.

 

Next, we create a tf.matmul node that takes as argument our input feature matrix and our weight matrix. The result of this op can be accessed through our linear_layer Python variable. This result is then passed to another op, tf.nn.bias_add. This op comes from the NN (neural network) module and is used when we wish to add a bias vector to the result of a calculation. A bias has to be a one-dimensional Tensor.
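To make the shapes concrete, here is a NumPy sketch (not TensorFlow) of the same forward pass: a batch of N = 2 feature vectors of length 4 is multiplied by a [4, 3] weight matrix, and a one-dimensional bias of shape [3] is broadcast across every row of the result, mirroring what tf.matmul and tf.nn.bias_add do in the function above:

```python
import numpy as np

rng = np.random.default_rng(0)

# A batch of N = 2 feature vectors, each of length 4 -> shape [2, 4].
batch = rng.normal(size=(2, 4))

# Weights of shape [4, 3] and a one-dimensional bias of shape [3],
# matching the shapes passed to tf.get_variable above.
weights = rng.normal(size=(4, 3))
bias = rng.normal(size=(3,))

# tf.matmul equivalent: [2, 4] @ [4, 3] -> [2, 3].
linear_layer = batch @ weights

# tf.nn.bias_add equivalent: the [3] bias is added to each row of the batch.
linear_layer_out = linear_layer + bias

print(linear_layer_out.shape)  # (2, 3)
```

Each row of the output holds the three class scores for one input vector, which is exactly the shape our classifier needs.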
