Table of Contents
1. Introduction
2. Installation
1. HBase depends on ZooKeeper, JDK, and Hadoop (HDFS). Please make sure you have completed the previous steps.
2. [node1 execution] Download the HBase installation package
3. [node1 execution] Modify the conf/hbase-env.sh configuration file
4. [node1 execution] Modify the conf/hbase-site.xml configuration file
5. [node1 execution] Modify the conf/regionservers configuration file
6. [node1 execution] Distribute HBase to the other machines
7. [node2, node3 execution] Configure soft links
8. [node1, node2, node3 execution] Configure environment variables
9. [node1 execution] Start HBase
10. Verify HBase
11. Simple test using HBase
1. Introduction
HBase is a distributed, scalable NoSQL database that supports massive data storage. Like Redis, HBase is a key-value store, but the two have different design goals: Redis is designed for ultra-fast retrieval over small amounts of data, while HBase is designed for fast retrieval over massive amounts of data. HBase is widely used in the big data field. We will now deploy an HBase cluster on node1, node2, and node3.
2. Installation
1. HBase depends on ZooKeeper, JDK, and Hadoop (HDFS). Please make sure you have completed the previous steps.
1) Cluster software pre-preparation (JDK)
2)
Zookeeper
3)
Hadoop
Jump links:
Preparation for clustered environment_Shiguang Chen’s Blog-CSDN Blog
Zookeeper cluster installation and deployment, Kafka cluster installation and deployment_Shiguangのchen’s blog-CSDN blog
Big Data Cluster (Hadoop Ecosystem) Installation and Deployment_Shiguangのchen’s Blog-CSDN Blog
2. [node1 execution] Download the HBase installation package
# Download
wget http://archive.apache.org/dist/hbase/2.1.0/hbase-2.1.0-bin.tar.gz
# Unzip
tar -zxvf hbase-2.1.0-bin.tar.gz -C /export/server
# Configure a soft link
ln -s /export/server/hbase-2.1.0 /export/server/hbase
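The download URL embeds the version number twice, so it can help to parameterize it; upgrading then means changing one line. A minimal sketch (the version and install root mirror this tutorial; the echo only prints the URL, it does not download):

```shell
# Hypothetical helper variables; values taken from this tutorial
HBASE_VERSION=2.1.0
INSTALL_DIR=/export/server
TARBALL="hbase-${HBASE_VERSION}-bin.tar.gz"
URL="http://archive.apache.org/dist/hbase/${HBASE_VERSION}/${TARBALL}"
echo "$URL"
# → http://archive.apache.org/dist/hbase/2.1.0/hbase-2.1.0-bin.tar.gz
```

With these variables set, the wget, tar, and ln commands above can reference `$URL`, `$TARBALL`, and `$INSTALL_DIR` instead of hard-coded paths.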
3. [node1 execution] Modify the conf/hbase-env.sh configuration file
# On line 28, configure JAVA_HOME
export JAVA_HOME=/export/server/jdk
# On line 126, configure:
# this means using an independent ZooKeeper instead of the one bundled with HBase
export HBASE_MANAGES_ZK=false
# On any line, such as line 26, add the following:
export HBASE_DISABLE_HADOOP_CLASSPATH_LOOKUP="true"
4. [node1 execution] Modify the conf/hbase-site.xml configuration file
# Replace the entire content of the file with the following:
<configuration>
  <!-- The path where HBase data is stored in HDFS -->
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://node1:8020/hbase</value>
  </property>
  <!-- HBase running mode: false means standalone mode, true means distributed mode.
       If false, HBase and ZooKeeper will run in the same JVM. -->
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <!-- ZooKeeper's address -->
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>node1,node2,node3</value>
  </property>
  <!-- Where ZooKeeper snapshots are stored -->
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/export/server/apache-zookeeper-3.6.0-bin/data</value>
  </property>
  <!-- In version 2.1, set to false in the distributed case -->
  <property>
    <name>hbase.unsafe.stream.capability.enforce</name>
    <value>false</value>
  </property>
</configuration>
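A single missing closing tag in this file (an unclosed `</name>` or `</value>`) is enough to stop HMaster from starting, so it can be worth checking that the XML is well-formed before distributing it. A minimal sketch, assuming python3 is available; it validates a two-property excerpt written to /tmp, but on node1 you would point it at /export/server/hbase/conf/hbase-site.xml instead:

```shell
# Write a small excerpt of the config for illustration
SITE=/tmp/hbase-site-excerpt.xml
cat > "$SITE" <<'EOF'
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://node1:8020/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
</configuration>
EOF

# Parse it; ElementTree raises an error on malformed XML
python3 - "$SITE" <<'PY'
import sys, xml.etree.ElementTree as ET
root = ET.parse(sys.argv[1]).getroot()
names = [p.findtext('name') for p in root.findall('property')]
print(','.join(names))
# prints: hbase.rootdir,hbase.cluster.distributed
PY
```

If the parse fails, the error message includes the line and column of the broken tag, which is much faster to act on than a failed HMaster startup log.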
5. [node1 execution] Modify the conf/regionservers configuration file
# Fill in the following content:
node1
node2
node3
6. [node1 execution] Distribute HBase to the other machines
scp -r /export/server/hbase-2.1.0 node2:/export/server/
scp -r /export/server/hbase-2.1.0 node3:/export/server/
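The two scp commands differ only in the target host, so they can be expressed as a loop, which scales cleanly if more region servers are added later. A sketch; the `echo` makes it runnable without a live cluster (drop the `echo` to actually copy):

```shell
# node2/node3 are the hostnames used throughout this tutorial
for host in node2 node3; do
  echo scp -r /export/server/hbase-2.1.0 "${host}:/export/server/"
done
```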
7. [node2, node3 execution], configure soft link
ln -s /export/server/hbase-2.1.0 /export/server/hbase
8. [node1, node2, node3 execution] Configure environment variables
# In /etc/profile, add the following two lines
export HBASE_HOME=/export/server/hbase
export PATH=$HBASE_HOME/bin:$PATH

source /etc/profile
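A quick way to confirm the edit took effect on each node is to check that the HBase bin directory actually landed on PATH. A minimal sketch; the two exports are repeated locally here so the snippet runs anywhere, whereas on the real nodes they come from sourcing /etc/profile:

```shell
# Set locally for illustration; on node1/2/3 this comes from /etc/profile
export HBASE_HOME=/export/server/hbase
export PATH="$HBASE_HOME/bin:$PATH"

# Check PATH membership with exact colon-delimited matching
case ":$PATH:" in
  *":$HBASE_HOME/bin:"*) echo "HBase bin is on PATH" ;;
  *) echo "HBase bin missing from PATH" >&2 ;;
esac
```

The `:$PATH:` wrapping avoids false matches against longer directory names that merely contain the same prefix.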
9. [node1 execution] Start HBase
Please ensure that Hadoop HDFS and ZooKeeper are already started.
start-hbase.sh
# If you need to stop, you can use stop-hbase.sh
Since we configured export PATH=$HBASE_HOME/bin:$PATH in the environment variables, start-hbase.sh is now in $HBASE_HOME/bin, so it can be executed directly no matter what the current directory is.
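After start-hbase.sh, node1 should show an HMaster JVM and every node an HRegionServer, which `jps` can confirm. A hedged sketch of the check, guarded so it is harmless on machines where the JDK (and therefore jps) is not installed:

```shell
# List HBase JVMs if jps is available; fall back gracefully otherwise
if command -v jps >/dev/null 2>&1; then
  msg=$(jps | grep -E 'HMaster|HRegionServer' || echo "no HBase JVMs found")
else
  msg="jps not available on this machine; skipping process check"
fi
echo "$msg"
```

On node1 you would expect the output to include HMaster (and HRegionServer, since node1 is also listed in conf/regionservers); on node2 and node3, HRegionServer only.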
10. Verify HBase
Open http://node1:16010 in a browser; you will see HBase's WEB UI page.
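The same check can be scripted with curl, which is handy on headless machines. A sketch, guarded with a timeout so it exits cleanly even where node1 is not resolvable or the cluster is down:

```shell
# Probe the HBase master web UI (port 16010, as above)
url=http://node1:16010
if curl -fs --max-time 5 "$url" >/dev/null 2>&1; then
  echo "HBase Web UI is reachable at $url"
else
  echo "Could not reach $url (cluster not running, or node1 not resolvable from here)"
fi
```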
11. Simple test using HBase
[node1 execution]
hbase shell

# Create a table
create 'test', 'cf'
# Insert data
put 'test', 'rk001', 'cf:info', 'itheima'
# Query data
get 'test', 'rk001'
# Scan table data
scan 'test'
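The interactive session above can also be run non-interactively: hbase shell accepts a script file as an argument, which is convenient for repeatable smoke tests. A sketch that writes the same commands (plus cleanup, so the test table does not linger) to a file; on node1 you would then run `hbase shell /tmp/hbase_smoke.txt`:

```shell
# Build a smoke-test script for hbase shell; /tmp path is illustrative
cat > /tmp/hbase_smoke.txt <<'EOF'
create 'test', 'cf'
put 'test', 'rk001', 'cf:info', 'itheima'
get 'test', 'rk001'
scan 'test'
disable 'test'
drop 'test'
exit
EOF
echo "wrote $(wc -l < /tmp/hbase_smoke.txt) shell commands"
```

Note that a table must be disabled before it can be dropped, which is why `disable 'test'` precedes `drop 'test'`.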