This document provides an introduction to the Hadoop ecosystem. It explains why Hadoop is used and how Hadoop stores and processes large amounts of data through its core components, HDFS and MapReduce. It also introduces other popular Hadoop projects that build on this core, such as Pig for scripting MapReduce jobs, Hive for SQL-like queries, and HBase for column-oriented storage. Finally, it discusses how Hadoop can support applications, such as online ad serving, that require processing large volumes of user data.
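
For readers unfamiliar with the MapReduce model named above, the sketch below shows the canonical word-count example written against the standard org.apache.hadoop.mapreduce API: the map phase emits a count of one per word read from HDFS, and the reduce phase sums those counts per word. The class names are illustrative and not taken from the document.

    import java.io.IOException;
    import java.util.StringTokenizer;

    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;

    public class WordCount {

      // Map phase: emit (word, 1) for every word in an input line read from HDFS.
      public static class TokenizerMapper
          extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
          StringTokenizer tokens = new StringTokenizer(value.toString());
          while (tokens.hasMoreTokens()) {
            word.set(tokens.nextToken());
            context.write(word, ONE);
          }
        }
      }

      // Reduce phase: sum the counts emitted for each distinct word.
      public static class IntSumReducer
          extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
          int sum = 0;
          for (IntWritable v : values) {
            sum += v.get();
          }
          context.write(key, new IntWritable(sum));
        }
      }
    }

Higher-level projects such as Pig and Hive generate jobs of this shape automatically, so users can express the same computation as a script or a SQL-like query instead of hand-written Java.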