Mastering Hadoop

You're reading from Mastering Hadoop: Go beyond the basics and master the next generation of Hadoop data processing platforms

Product type: Paperback
Published: Dec 2014
Publisher: Packt
ISBN-13: 9781783983643
Length: 374 pages
Edition: 1st Edition
Author: Sandeep Karanth

Table of Contents (21)

Mastering Hadoop
Credits
About the Author
Acknowledgments
About the Reviewers
www.PacktPub.com
Preface
1. Hadoop 2.X
2. Advanced MapReduce
3. Advanced Pig
4. Advanced Hive
5. Serialization and Hadoop I/O
6. YARN – Bringing Other Paradigms to Hadoop
7. Storm on YARN – Low Latency Processing in Hadoop
8. Hadoop on the Cloud
9. HDFS Replacements
10. HDFS Federation
11. Hadoop Security
12. Analytics Using Hadoop
Hadoop for Microsoft Windows
Index

Handling data joins


Joins are commonplace in Big Data processing. A join matches records across the participating datasets on the value of a join key. In this book, we will refrain from explaining the different join semantics, such as inner joins, outer joins, and cross joins, and focus on inner join processing using MapReduce and the optimizations involved in it.

In MapReduce, joins can be done in either the Map task or the Reduce task. The former is called a Map-side join and the latter is called a Reduce-side join.

Reduce-side joins

Reduce-side joins are the more general-purpose of the two and impose few conditions on the datasets that participate in the join. However, the shuffle step that precedes the join is very resource-intensive.

The basic idea involves tagging each record with a data source tag and extracting the join key in the Map tasks. The Reduce task receives all the records with the same join key and does the actual join. If one of the datasets participating in the join is very...
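
As a concrete sketch of this tagging-and-joining approach (not code from the book), the following minimal Hadoop MapReduce job joins two hypothetical comma-separated inputs, customers.txt (id,name) and orders.txt (id,amount), on the id in the first field; the file names, field layout, and tag strings are all assumptions made for the example:

// Minimal reduce-side join sketch. Assumes two CSV inputs in the same
// input directory: customers.txt (id,name) and orders.txt (id,amount).
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class ReduceSideJoin {

    // Map task: extract the join key and tag each record with its source.
    public static class TaggingMapper extends Mapper<Object, Text, Text, Text> {
        @Override
        protected void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            // The input file name identifies the dataset the record came from.
            String file = ((FileSplit) context.getInputSplit()).getPath().getName();
            String tag = file.startsWith("customers") ? "C" : "O";
            // Assumed layout: the join key is the first comma-separated field.
            String[] fields = value.toString().split(",", 2);
            context.write(new Text(fields[0]), new Text(tag + "|" + fields[1]));
        }
    }

    // Reduce task: all records sharing a join key arrive at the same reducer;
    // separate them by tag and emit the inner-join cross product.
    public static class JoinReducer extends Reducer<Text, Text, Text, Text> {
        @Override
        protected void reduce(Text key, Iterable<Text> values, Context context)
                throws IOException, InterruptedException {
            List<String> customers = new ArrayList<>();
            List<String> orders = new ArrayList<>();
            for (Text v : values) {
                String s = v.toString();
                if (s.startsWith("C|")) customers.add(s.substring(2));
                else orders.add(s.substring(2));
            }
            // Inner join: only keys present in both datasets produce output.
            for (String c : customers)
                for (String o : orders)
                    context.write(key, new Text(c + "," + o));
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "reduce-side join");
        job.setJarByClass(ReduceSideJoin.class);
        job.setMapperClass(TaggingMapper.class);
        job.setReducerClass(JoinReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);
        FileInputFormat.addInputPath(job, new Path(args[0])); // directory with both files
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

Note that the reducer buffers every record for a given join key in memory, which is the main scalability concern of a plain reduce-side join when one side of the join is large.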
