Hands-On Convolutional Neural Networks with TensorFlow
Solve computer vision problems with modeling in TensorFlow and Python

Product type: Paperback
Published: Aug 2018
Publisher: Packt
ISBN-13: 9781789130331
Length: 272 pages
Edition: 1st

Authors: Araujo, Zafar, Tzanidou, Burton, Patel
Table of Contents (17 Chapters)

Title Page
Copyright and Credits
Packt Upsell
Contributors
Preface
1. Setup and Introduction to TensorFlow
2. Deep Learning and Convolutional Neural Networks
3. Image Classification in TensorFlow
4. Object Detection and Segmentation
5. VGG, Inception Modules, Residuals, and MobileNets
6. Autoencoders, Variational Autoencoders, and Generative Adversarial Networks
7. Transfer Learning
8. Machine Learning Best Practices and Troubleshooting
9. Training at Scale
References
Other Books You May Enjoy
Index

Residual Networks


In previous sections it was shown that the depth of a network is a crucial factor contributing to accuracy improvement (see VGG). It was also shown in Chapter 3, Image Classification in TensorFlow, that the problem of vanishing or exploding gradients in deep networks can be alleviated by correct weight initialization and batch normalization. Does this mean, however, that the more layers we add, the more accurate the system becomes? The authors of Deep Residual Learning for Image Recognition, from Microsoft Research Asia, found that accuracy saturates once the network reaches around 30 layers of depth. To solve this problem, they introduced a new block of layers called the residual block, which adds the output of the previous layer to the output of the next layer (refer to the figure below). The Residual Net, or ResNet, has shown excellent results with very deep networks (even greater than 100 layers!); for example, the 152-layer ResNet won the 2015 ILSVRC image recognition...
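The skip connection described above can be sketched in a few lines of TensorFlow. The following is a minimal illustration, not the exact architecture from the paper: it builds an identity residual block in which the block's input is added element-wise to the output of two convolutional layers before the final activation. The layer sizes (16 filters, 3x3 kernels, 32x32 inputs) are arbitrary choices for the example, and the code assumes the TensorFlow 2.x Keras API.

```python
import tensorflow as tf

def residual_block(x, filters):
    """Identity residual block: output = relu(F(x) + x).

    F(x) is two conv-BN stages; the skip connection adds the
    original input x back in before the final ReLU.
    """
    shortcut = x  # save the input for the skip connection
    y = tf.keras.layers.Conv2D(filters, 3, padding="same")(x)
    y = tf.keras.layers.BatchNormalization()(y)
    y = tf.keras.layers.ReLU()(y)
    y = tf.keras.layers.Conv2D(filters, 3, padding="same")(y)
    y = tf.keras.layers.BatchNormalization()(y)
    y = tf.keras.layers.Add()([y, shortcut])  # the residual addition
    return tf.keras.layers.ReLU()(y)

# Example: one residual block on a 32x32 feature map with 16 channels.
inputs = tf.keras.Input(shape=(32, 32, 16))
outputs = residual_block(inputs, filters=16)
model = tf.keras.Model(inputs, outputs)
```

Because the addition requires matching shapes, this identity form only works when the block preserves spatial size and channel count; when a block changes either (as the downsampling stages of ResNet do), the shortcut itself needs a projection (for example, a 1x1 convolution) before the add.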
