Use Case: We have 1 million files to process, and we must provide an option to download them.
Hadoop is meant to bring processing to the data. We can store processed file content or metadata in HBase to support easy search. Upon a successful search, the user wants to see the original document; at that point we can download the file from NAS easily.
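The search-then-download flow above can be sketched as follows. This is a minimal single-JVM illustration, assuming a plain Java Map stands in for the HBase metadata table and a temp file stands in for a document on the NAS mount; a real implementation would use the HBase Java client (Get/Scan) and the actual NAS path.

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.HashMap;
import java.util.Map;

public class MetadataLookupSketch {

    // Stand-in for the HBase metadata lookup: docId -> path on the NAS mount.
    // In production this would be a Get against the metadata table.
    static String fetchOriginal(Map<String, Path> metadata, String docId) throws Exception {
        Path nasPath = metadata.get(docId);              // result of the metadata search
        return new String(Files.readAllBytes(nasPath));  // "download" from NAS
    }

    public static void main(String[] args) throws Exception {
        Map<String, Path> metadata = new HashMap<>();

        // Simulate one original document sitting on the NAS.
        Path nasFile = Files.createTempFile("doc-0001", ".txt");
        Files.write(nasFile, "original document content".getBytes());
        metadata.put("doc-0001", nasFile);

        System.out.println(fetchOriginal(metadata, "doc-0001"));
    }
}
```

The point of the split: HBase answers the search quickly from metadata, while the bulky original bytes stay on cheap NAS storage.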
HDFS: This is not meant to store small files; it is optimized for large ones. The default block size is 128 MB (64 MB in older versions) and is configurable, but every file consumes NameNode memory, so storing millions of small files is an anti-pattern.
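To see why small files hurt, a rough sizing sketch: a commonly cited rule of thumb is that each file and each block object costs on the order of 150 bytes of NameNode heap (the exact figure varies by version, so treat the numbers below as an estimate, not a measurement).

```java
public class SmallFilesEstimate {
    public static void main(String[] args) {
        long files = 1_000_000L;       // our use case: 1 million files
        long bytesPerObject = 150L;    // rough rule of thumb per namespace object

        // Each small file costs at least one file object plus one block object.
        long heapBytes = files * 2 * bytesPerObject;
        System.out.println(heapBytes / (1024 * 1024) + " MB of NameNode heap");  // 286 MB
    }
}
```

So a million small files consumes hundreds of megabytes of NameNode heap before a single byte is read, which is why packing them into larger containers (SequenceFiles, HAR files) or keeping the originals on NAS is preferred.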
HBase: The default HFile block size is 64 KB. We can tweak it, but HBase is not meant to store large binaries or proprietary document formats as cell values.
NAS: Network Attached Storage makes it easy to store and retrieve the original files when the jobs do not have a MapReduce nature.
HBase with Java API: https://dzone.com/articles/handling-big-data-hbase-part-4
HBase web site: http://hbase.apache.org/
HBase wiki: http://wiki.apache.org/hadoop/Hbase
HBase Reference Guide: http://hbase.apache.org/book/book.html
HBase: The Definitive Guide: http://bit.ly/hbase-definitive-guide
Google Bigtable paper: http://labs.google.com/papers/bigtable.html
Hadoop web site: http://hadoop.apache.org/
Hadoop: The Definitive Guide: http://bit.ly/hadoop-definitive-guide
Fallacies of Distributed Computing: http://en.wikipedia.org/wiki/Fallacies_of_Distributed_Computing
HBase lightning talk slides: http://www.slideshare.net/scottleber/hbase-lightningtalk
Sample code: https://github.com/sleberknight/basic-hbase-examples
Hive Language Manual: https://cwiki.apache.org/confluence/display/Hive/LanguageManual
What is Hive? Hive is a data warehousing infrastructure built on top of Hadoop.
What is HBase? It is a distributed, versioned, column-oriented NoSQL data store, modeled after Google's Bigtable, used to host very large tables: billions of rows times millions of columns.
What is Hadoop? Hadoop provides massive scale-out and fault-tolerance capabilities for data storage and processing on commodity hardware, using the MapReduce programming paradigm.
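The MapReduce paradigm mentioned above can be illustrated with word count, the canonical example. This is not Hadoop API code; it is a single-JVM sketch using Java streams, where flatMap plays the role of the map phase and groupingBy/counting plays the shuffle-and-reduce phase.

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class WordCountSketch {

    static Map<String, Long> wordCount(List<String> lines) {
        return lines.stream()
                // "map" phase: split each input line into words
                .flatMap(line -> Arrays.stream(line.split("\\s+")))
                // "shuffle + reduce" phase: group identical keys and sum counts
                .collect(Collectors.groupingBy(w -> w, Collectors.counting()));
    }

    public static void main(String[] args) {
        System.out.println(wordCount(List.of("big data", "big tables")));
    }
}
```

In real Hadoop, the same two phases run as Mapper and Reducer classes distributed across the cluster, with the framework handling partitioning and fault tolerance.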
Hadoop Platform and Application Framework
by University of California, San Diego
M101J: MongoDB for Java Developers
Hortonworks Data Platform
Hive Modeling / Hive Queries
Spark Scala API (Scaladoc)
Spark Java API (Javadoc)
Spark Python API (Sphinx)
Spark R API (Roxygen2)
Apache Twill is an abstraction over Apache Hadoop® YARN that reduces the complexity of developing distributed applications, allowing developers to focus instead on their application logic. Apache Twill allows you to use YARN’s distributed capabilities with a programming model that is similar to running threads.
The Apache Tika™ toolkit detects and extracts metadata and text from over a thousand different file types (such as PPT, XLS, and PDF). All of these file types can be parsed through a single interface, making Tika useful for search engine indexing, content analysis, translation, and much more.
An easy to use, powerful, and reliable system to process and distribute data.
Docker is an open platform for developers and sysadmins to build, ship, and run distributed applications, whether on laptops, data center VMs, or the cloud.
Docker can be used to package Python/C++ applications into containers.
Digital Asset Management (DAM)
PRISM – https://polimetlase.wordpress.com/2017/03/10/categorize-and-search-documents/
Apache Kafka: A Distributed Streaming Platform.
Flume is a distributed, reliable, and available service for efficiently collecting, aggregating, and moving large amounts of log data. It has a simple and flexible architecture based on streaming data flows. It is robust and fault tolerant with tunable reliability mechanisms and many failover and recovery mechanisms. It uses a simple extensible data model that allows for online analytic application.
Problem Statement: We want to build knowledge graphs, search repositories, data classification, etc., on top of Big Data.
How to get test data?
160,000+ peer-reviewed articles are free to access, reuse and redistribute.
We can use these articles as test data where required.
We need to create an account at plos.org to browse and download articles.
This also shows nicely how they organize documents. We can download these articles and use them.
Programmatic Access to Articles
PLOS articles can be accessed programmatically through our API, via PubMed Central, or using Europe PMC’s RESTful Web Service and SOAP Web Service. Detailed information about our Search API, including examples, is available at http://api.plos.org/solr/faq/. If you have any questions or require assistance with our API, please contact email@example.com.
6,790 searchable at Article level
Huge collection available
These APIs were not working when tried.
HDFS Command Guide:
HDFS is not POSIX compliant.
The Portable Operating System Interface (POSIX) is a family of standards specified by the IEEE Computer Society for maintaining compatibility between operating systems.
HDFS User Guide:
JAVA API: http://hadoop.apache.org/docs/current/api/org/apache/hadoop/fs/FileSystem.html
Common HDFS Commands:
WebHDFS REST API
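WebHDFS exposes HDFS over plain HTTP. The sketch below builds a read URL following the documented pattern http://&lt;namenode&gt;:&lt;port&gt;/webhdfs/v1/&lt;path&gt;?op=...; the host name is a placeholder, and the default NameNode HTTP port is 9870 on Hadoop 3 (50070 on Hadoop 2).

```java
public class WebHdfsUrl {

    // Builds a WebHDFS read URL for a file path (path must start with "/").
    static String openUrl(String host, int port, String path) {
        return "http://" + host + ":" + port + "/webhdfs/v1" + path + "?op=OPEN";
    }

    public static void main(String[] args) {
        // namenode.example.com is a placeholder for your NameNode host
        System.out.println(openUrl("namenode.example.com", 9870, "/user/test/part-0000"));
        // -> http://namenode.example.com:9870/webhdfs/v1/user/test/part-0000?op=OPEN
    }
}
```

Fetching that URL with any HTTP client (curl, a browser, HttpURLConnection) streams the file, which is handy when a client cannot link against the HDFS Java API.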
How to write group by and order by query in CDAP?
Working Hive Query:
SELECT it, count(result)
FROM (
  SELECT from_unixtime(insert_time, 'yyyy-MM-dd') it, result
  FROM default.dataset_table1
) t1
GROUP BY it
SORT BY it;
In Oracle we use ORDER BY. Hive also supports ORDER BY, but it forces a total ordering through a single reducer; SORT BY orders rows within each reducer and scales better.
from_unixtime() takes seconds, not milliseconds. While writing data into datasets, we need to store time values as epoch seconds; millisecond timestamps won't work with from_unixtime().
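The seconds-versus-milliseconds pitfall can be checked outside Hive. A small Java sketch, assuming timestamps arrive as epoch milliseconds (e.g. from System.currentTimeMillis()):

```java
import java.time.Instant;
import java.time.ZoneOffset;

public class EpochSeconds {

    // from_unixtime() expects seconds, so divide epoch milliseconds by 1000
    // before writing insert_time into the dataset.
    static long toEpochSeconds(long epochMillis) {
        return epochMillis / 1000L;
    }

    public static void main(String[] args) {
        long millis = 1_489_104_000_000L;        // a millisecond timestamp
        long seconds = toEpochSeconds(millis);   // safe to pass to from_unixtime
        System.out.println(seconds);             // 1489104000

        // Same conversion as from_unixtime(seconds, 'yyyy-MM-dd') in UTC:
        System.out.println(Instant.ofEpochSecond(seconds)
                .atZone(ZoneOffset.UTC)
                .toLocalDate());                 // 2017-03-10
    }
}
```

If the millisecond value were passed to from_unixtime() directly, it would be interpreted as a date tens of thousands of years in the future.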
Column values must be of a primitive type. A primitive type is one of boolean, int, long, float, double, bytes, or string.
Column names must be valid Hive column names. This means they cannot be reserved keywords such as drop. Please refer to the Hive language manual for more information about Hive.
Data types are from Avro.
Data is stored in Hive and must support Hive queries.
This imposes constraints on how to design datasets and how to write queries.
It also impacts query performance, because of the date and time conversions.
Following are references for some of the material covered:
Pig Latin basics:
Hive Language Manual:
HBase Reference Guide: