Dealing with Files in Hadoop

Use Case: We have 1 million files to process, and we must provide an option to download the originals.

Hadoop is meant to bring processing to the data. We can store processed file content or metadata in HBase to support easy search. After a successful search, the user wants to see the original document, which we can then download from NAS easily.
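The search-then-download flow can be sketched in plain Java. A HashMap stands in for the HBase metadata table here, and the document IDs and NAS paths are hypothetical:

```java
import java.util.*;

// Sketch of the use case: metadata is indexed for search (HBase in the real
// system; an in-memory map stands in here), while the original file stays on
// NAS and is fetched by its stored path. All IDs and paths are hypothetical.
public class DocLookupSketch {
    private final Map<String, String> metadataIndex = new HashMap<>();

    void index(String docId, String nasPath) {
        metadataIndex.put(docId, nasPath);
    }

    // A search hit resolves to the NAS path used for download.
    Optional<String> resolveDownloadPath(String docId) {
        return Optional.ofNullable(metadataIndex.get(docId));
    }

    public static void main(String[] args) {
        DocLookupSketch idx = new DocLookupSketch();
        idx.index("doc-42", "/nas/archive/2015/doc-42.pdf");
        System.out.println(idx.resolveDownloadPath("doc-42").orElse("not found"));
    }
}
```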

HDFS: This is not meant to store small files. The default block size is 128 MB (64 MB in older releases). We can configure it to hold small files, but it is not designed for that: every file consumes NameNode memory regardless of its size.
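The small-files cost can be estimated with simple arithmetic. Each file and each block is an in-memory object on the NameNode, each costing roughly 150 bytes (a commonly cited rule of thumb, not an exact figure):

```java
// Rough NameNode heap estimate for the small-files problem. The ~150 bytes
// per file/block object is an approximate, commonly cited rule of thumb.
public class NameNodeHeapEstimate {
    static final long BYTES_PER_OBJECT = 150;

    // One object per file plus one per block, ~150 bytes each.
    static long estimateHeapBytes(long files, long blocksPerFile) {
        return files * (1 + blocksPerFile) * BYTES_PER_OBJECT;
    }

    public static void main(String[] args) {
        // Our use case: 1 million small files, one block each.
        long bytes = estimateHeapBytes(1_000_000L, 1);
        System.out.println(bytes / (1024 * 1024) + " MB of NameNode heap, roughly");
    }
}
```

So 1 million small files cost hundreds of MB of NameNode heap for metadata alone, which is why packing small files into SequenceFiles or HAR archives (or keeping them on NAS, as above) is the usual mitigation.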

HBase: The default block size is 64 KB. We can tweak it, but HBase is not meant to store large binaries in proprietary file formats.

NAS: Network Attached Storage makes it easy to store and retrieve the original files when the jobs do not have a map-reduce nature.

HBase, HDFS and Hive

HBase with Java API:
HBase web site
HBase wiki
HBase Reference Guide
HBase: The Definitive Guide
Google Bigtable Paper
Hadoop web site
Hadoop: The Definitive Guide
Fallacies of Distributed Computing
HBase lightning talk slides
Sample code


Data warehouse implementation using Hadoop + HBase + Hive + Spring Batch – Part 1




Hive Manual:


What is Hive?: Hive is a data warehousing infrastructure based on Hadoop.
What is HBase?: It is a distributed, versioned, column-oriented NoSQL data store, modeled after Google's Bigtable, used to host very large tables: billions of rows by millions of columns.
What is Hadoop?: Hadoop provides massive scale-out and fault-tolerance capabilities for data storage and processing on commodity hardware, using the map-reduce programming paradigm.
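The map-reduce paradigm mentioned above can be illustrated in plain Java, with no cluster needed: the "map" phase turns each line into words, and the "reduce" phase sums occurrences per word. A stream pipeline stands in for the framework:

```java
import java.util.*;
import java.util.stream.*;

// Plain-Java illustration of the map-reduce idea behind Hadoop word count.
public class WordCountSketch {
    static Map<String, Long> wordCount(List<String> lines) {
        return lines.stream()
                .flatMap(line -> Arrays.stream(line.toLowerCase().split("\\s+"))) // map: line -> words
                .filter(word -> !word.isEmpty())
                .collect(Collectors.groupingBy(w -> w, Collectors.counting()));   // reduce: sum per word
    }

    public static void main(String[] args) {
        System.out.println(wordCount(List.of("hadoop stores data", "hadoop processes data")));
    }
}
```

In real Hadoop the same two phases run as Mapper and Reducer classes distributed across the cluster; the logic per key is the same.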

HBase, Hive and HDFS


Learn this to solve Big Problems
Hadoop Platform and Application Framework
by University of California, San Diego
M101J: MongoDB for Java Developers



Hortonworks Data Platform


Hive Modeling / Hive Queries

Spark Scala API (Scaladoc)
Spark Java API (Javadoc)
Spark Python API (Sphinx)
Spark R API (Roxygen2)
Apache Twill is an abstraction over Apache Hadoop® YARN that reduces the complexity of developing distributed applications, allowing developers to focus instead on their application logic. Apache Twill allows you to use YARN’s distributed capabilities with a programming model that is similar to running threads.
The Apache Tika™ toolkit detects and extracts metadata and text from over a thousand different file types (such as PPT, XLS, and PDF). All of these file types can be parsed through a single interface, making Tika useful for search engine indexing, content analysis, translation, and much more.
Apache NiFi: An easy-to-use, powerful, and reliable system to process and distribute data.

Docker is an open platform for developers and sysadmins to build, ship, and run distributed applications, whether on laptops, data center VMs, or the cloud.
Used to package Python/C++ applications into Docker images.

Domain Knowledge:
Digital Asset Management (DAM)


Apache Kafka: A Distributed Streaming Platform.
Flume is a distributed, reliable, and available service for efficiently collecting, aggregating, and moving large amounts of log data. It has a simple and flexible architecture based on streaming data flows. It is robust and fault tolerant with tunable reliability mechanisms and many failover and recovery mechanisms. It uses a simple extensible data model that allows for online analytic application.


Free Articles to Test a Big Data System

Problem Statement: We want to build knowledge graphs, search repos, data classification, etc., on a Big Data system.
How to get test data?

160,000+ peer-reviewed articles are free to access, reuse and redistribute.
Anytime, Anywhere.

We can use these as test data where required.

We need to create an account to browse and download articles.
This shows nicely how they organize documents. We can download them and use them.

Programmatic Access to Articles

PLOS articles can be accessed programmatically through the PLOS API, via PubMed Central, or using Europe PMC's RESTful and SOAP Web Services. Detailed information about the Search API, including examples, is available in the PLOS documentation.
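The PLOS Search API takes Solr-style query parameters. A minimal query builder is sketched below; the endpoint and parameter names (`q`, `rows`, `wt`) reflect the public PLOS Search API but should be verified against the current docs:

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

// Builds a Solr-style query URL for the PLOS Search API.
// Endpoint and parameters are assumptions; check the current PLOS docs.
public class PlosSearchQuery {
    static final String BASE = "http://api.plos.org/search";

    static String buildQuery(String term, int rows) {
        return BASE + "?q=" + URLEncoder.encode(term, StandardCharsets.UTF_8)
             + "&rows=" + rows + "&wt=json";
    }

    public static void main(String[] args) {
        System.out.println(buildQuery("knowledge graph", 10));
    }
}
```

The resulting URL can be fetched with any HTTP client; the `wt=json` parameter asks the Solr backend for JSON output.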
9,356 Journals
6,790 searchable at Article level
129 Countries
2,457,588 Articles

Huge collection available

These APIs are not working.



HDFS Notes

HDFS Architecture:

HDFS Command Guide:

HDFS is not POSIX compliant.
The Portable Operating System Interface (POSIX) is a family of standards specified by the IEEE Computer Society for maintaining compatibility between operating systems.

HDFS User Guide:




Common HDFS Commands: