HBase advanced features: filters (1)

Level 1: Use filters to query data in specified rows Knowledge points 1. Steps to use filters: (1) Create a filter: RowFilter(CompareOperator op, ByteArrayComparable rowComparator). The first parameter receives the comparison operator; the second receives a comparator holding the match condition. The first parameter can take many values to cover various scenarios, listed in the following table: […]
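A minimal sketch of the steps above, assuming HBase 2.x (where CompareOperator replaced the older CompareFilter.CompareOp) and an already-open Table handle; the helper name and row key are illustrative assumptions:

```java
import java.io.IOException;
import org.apache.hadoop.hbase.CompareOperator;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.filter.BinaryComparator;
import org.apache.hadoop.hbase.filter.RowFilter;
import org.apache.hadoop.hbase.util.Bytes;

public class RowFilterSketch {
    // Hypothetical helper: scan only the rows whose key equals the given value.
    static void scanExactRow(Table table, String rowKey) throws IOException {
        // (1) Create the filter: comparison operator + comparator holding the condition.
        RowFilter filter = new RowFilter(CompareOperator.EQUAL,
                new BinaryComparator(Bytes.toBytes(rowKey)));
        // (2) Attach the filter to a Scan and iterate over the matches.
        Scan scan = new Scan();
        scan.setFilter(filter);
        try (ResultScanner scanner = table.getScanner(scan)) {
            for (Result result : scanner) {
                System.out.println(Bytes.toString(result.getRow()));
            }
        }
    }
}
```

Swapping BinaryComparator for SubstringComparator or a RegexStringComparator changes the matching rule without changing the rest of the scan.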

HBase Java API operations

1. Table exists package org.example; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.HBaseConfiguration; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.client.Admin; import org.apache.hadoop.hbase.client.Connection; import org.apache.hadoop.hbase.client.ConnectionFactory; import java.io.IOException; public class TestDemo { public static Connection connection = null; public static Admin admin = null; static { try { Configuration configuration = HBaseConfiguration.create(); configuration.set("hbase.rootdir", "hdfs://192.168.170.80:8020/hbase"); configuration.set("hbase.zookeeper.quorum", "hadooplyf316"); connection = ConnectionFactory.createConnection(configuration); admin = connection.getAdmin(); } catch (IOException e) { e.printStackTrace(); } } […]
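The excerpt cuts off before the existence check itself. Continuing the TestDemo class above, it would plausibly look like the fragment below (a sketch: admin.tableExists is the real HBase 2.x call, but the method names and the close() helper are assumptions added here):

```java
// Continues the TestDemo class above; uses its static connection/admin fields.
public static boolean isTableExist(String tableName) throws IOException {
    // Admin.tableExists checks the table's presence without opening it.
    return admin.tableExists(TableName.valueOf(tableName));
}

// Hypothetical cleanup helper: release the shared resources when done.
public static void close() {
    try {
        if (admin != null) admin.close();
        if (connection != null) connection.close();
    } catch (IOException e) {
        e.printStackTrace();
    }
}
```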

HBase database design RowKey

Level 1: Financial RowKey Design Knowledge points 1. RowKey design principles: the uniqueness principle, the sorting principle, the length principle (the shorter the better), and the hashing principle. Programming requirements Follow the prompts and add code in the editor on the right to complete the following requirements: design a RowKey and complete the query of a seller's transaction records within a certain period of […]
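As an illustration of these principles (not the exercise's reference answer), a common financial RowKey layout is salt + sellerId + reversed timestamp: the salt spreads writes across regions (hashing), the seller prefix keeps one seller's records adjacent, and the reversed timestamp sorts each seller's records newest-first (sorting). The field widths below are assumptions for the sketch:

```java
public class RowKeyDesign {
    // Build a RowKey: 2-digit salt + sellerId + 19-digit reversed timestamp.
    // Field widths are illustrative assumptions, not from the original exercise.
    static String buildRowKey(String sellerId, long timestampMillis) {
        int salt = Math.abs(sellerId.hashCode()) % 100;      // hashing principle
        long reversedTs = Long.MAX_VALUE - timestampMillis;  // sorting: newest first
        return String.format("%02d_%s_%019d", salt, sellerId, reversedTs);
    }

    public static void main(String[] args) {
        String earlier = buildRowKey("seller01", 1700000000000L);
        String later = buildRowKey("seller01", 1700000001000L);
        System.out.println(earlier);
        System.out.println(later);
        // The later transaction sorts lexicographically before the earlier one.
        System.out.println(later.compareTo(earlier) < 0);
    }
}
```

Fixed-width zero-padded fields matter: without them, lexicographic byte order in HBase would not match numeric order.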

HBase and Hadoop integration

1. Start the Hadoop service processes and the HBase service processes [Command 001]: start-all.sh start-hbase.sh 2. Create the directory /root/experiment/datas on HDFS [Command 002]: hadoop fs -mkdir -p /root/experiment/datas 3. Upload the local file /root/experiment/datas/hbase/file1.txt to the /root/experiment/datas directory on HDFS [Command 003]: hadoop fs -put /root/experiment/datas/hbase/file1.txt /root/experiment/datas 2) Experimental process 1. Double-click the "IDEA" […]

HBase Java API development: batch operations

Level 1: Obtain data in batches Knowledge points 1. table.get(gets) returns a Result[] array holding all the data from this query; you can traverse this array to pick out the data you need. 2. result.rawCells(): here result is a single Result that stores all the data of one row; the rawCells() method will […]
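The two knowledge points above can be sketched together like this, assuming an already-open Table handle (the helper name is an assumption):

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.CellUtil;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class BatchGetSketch {
    // Hypothetical sketch: fetch several rows in one batch and walk their cells.
    static void printRows(Table table, List<String> rowKeys) throws IOException {
        List<Get> gets = new ArrayList<>();
        for (String key : rowKeys) {
            gets.add(new Get(Bytes.toBytes(key)));
        }
        // table.get(gets) returns a Result[] with one entry per Get.
        Result[] results = table.get(gets);
        for (Result result : results) {
            // result.rawCells() holds every cell of that single row.
            for (Cell cell : result.rawCells()) {
                System.out.println(Bytes.toString(CellUtil.cloneRow(cell)) + " "
                        + Bytes.toString(CellUtil.cloneQualifier(cell)) + " "
                        + Bytes.toString(CellUtil.cloneValue(cell)));
            }
        }
    }
}
```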

Deploy a hadoop-3.3.6, hbase-2.5.6, apache-zookeeper-3.8.1 cluster on Linux

1. Introduction to Hadoop Hadoop is a distributed system infrastructure developed by the Apache Foundation. Users can develop distributed programs without understanding the underlying details of distribution, making full use of the cluster's power for high-speed computing and storage. Hadoop implements a distributed file system, one component of which is HDFS (Hadoop […]

HBase development: Using Java to operate HBase

Level 1: Create a table Related information: how to connect to an HBase database using Java (HBaseConfiguration, ConnectionFactory) and how to create a table in HBase 2.x; programming requirements and test instructions. Task Description Task for this level: create a table in HBase using Java code. Related knowledge In order to complete this task, you need to master: 1. […]
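In HBase 2.x the builder-style descriptors replaced the old HTableDescriptor API; a minimal sketch of table creation, assuming an open Admin handle (the table and family names are placeholders, not from the exercise):

```java
import java.io.IOException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class CreateTableSketch {
    // Hypothetical sketch of HBase 2.x table creation; names are placeholders.
    static void createTable(Admin admin, String name, String family) throws IOException {
        TableName tableName = TableName.valueOf(name);
        if (!admin.tableExists(tableName)) {
            // Build the table descriptor with one column family and create it.
            admin.createTable(TableDescriptorBuilder.newBuilder(tableName)
                    .setColumnFamily(ColumnFamilyDescriptorBuilder.of(Bytes.toBytes(family)))
                    .build());
        }
    }
}
```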

HBase-1.2.4 LruBlockCache implementation analysis (1)

1. Introduction BlockCache is an important feature in HBase. Just as MemStore caches data on the write path, BlockCache caches data on the read path. LruBlockCache is the default BlockCache implementation in HBase; it uses a strict LRU algorithm to evict blocks. 2. Cache levels There are currently three cache levels, defined in the BlockPriority enum, as follows: […]
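To see the strict-LRU eviction idea in isolation (a toy sketch only, not LruBlockCache's actual code, which adds the BlockPriority levels, size accounting, and a background eviction thread), Java's LinkedHashMap in access order makes the mechanism visible in a few lines:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class ToyLruCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    ToyLruCache(int capacity) {
        // accessOrder=true: iteration order becomes least-recently-used first.
        super(16, 0.75f, true);
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Evict the least-recently-used entry once capacity is exceeded.
        return size() > capacity;
    }
}
```

Every get() refreshes an entry's recency, so a block that is read again survives eviction longer, which is exactly the behavior LruBlockCache relies on for hot blocks.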

Implementing HBase data export in Java

1. HBase-client implementation 1.1 Dependencies <!-- HBase dependency coordinates --> <dependency> <groupId>org.apache.hbase</groupId> <artifactId>hbase-client</artifactId> <version>1.2.6</version> </dependency> <dependency> <groupId>org.apache.hbase</groupId> <artifactId>hbase-server</artifactId> <version>1.2.6</version> <exclusions><!-- Exclude transitive dependencies: without this, an error will be reported --> <exclusion> <groupId>*</groupId> <artifactId>*</artifactId> </exclusion> </exclusions> </dependency> 1.2 Configuration and code 1.2.1 get method public class HBaseService { private static final Logger logger = LoggerFactory.getLogger(HBaseService.class); […]
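For the export itself, a full-table scan written out to a local file is the simplest shape such a service can take; a sketch assuming an open Table handle (method name, output format, and caching value are assumptions):

```java
import java.io.IOException;
import java.io.PrintWriter;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.CellUtil;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class ExportSketch {
    // Hypothetical sketch: dump every cell of a table to a tab-separated file.
    static void exportToFile(Table table, String path) throws IOException {
        Scan scan = new Scan();
        scan.setCaching(500);  // fetch rows in batches to cut down on RPCs
        try (ResultScanner scanner = table.getScanner(scan);
             PrintWriter out = new PrintWriter(path)) {
            for (Result result : scanner) {
                for (Cell cell : result.rawCells()) {
                    out.println(Bytes.toString(CellUtil.cloneRow(cell)) + "\t"
                            + Bytes.toString(CellUtil.cloneFamily(cell)) + "\t"
                            + Bytes.toString(CellUtil.cloneQualifier(cell)) + "\t"
                            + Bytes.toString(CellUtil.cloneValue(cell)));
                }
            }
        }
    }
}
```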

Kafka To HBase To Hive

Table of Contents 1. Create a table in HBase 2. Write API 2.1 Writing to HBase in normal mode (one put at a time) 2.2 Writing to HBase in normal mode (buffered writing) 2.3 Writing to HBase with a design pattern (buffered writing) 3. Mapping the HBase table to Hive 1. Create a table in HBase hbase(main):003:0> create_namespace […]
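The buffered-writing mode in sections 2.2 and 2.3 is typically done with BufferedMutator, which batches Puts client-side and flushes them in one RPC instead of one per Put; a sketch assuming an open Connection (the namespace:table and column names are placeholders, not from the post):

```java
import java.io.IOException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.BufferedMutator;
import org.apache.hadoop.hbase.client.BufferedMutatorParams;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class BufferedWriteSketch {
    // Hypothetical sketch: buffered writes reach the RegionServer in batches
    // instead of one RPC per Put. Table and family names are placeholders.
    static void writeBuffered(Connection connection, String row, String value)
            throws IOException {
        BufferedMutatorParams params =
                new BufferedMutatorParams(TableName.valueOf("events:user_events"))
                        .writeBufferSize(4 * 1024 * 1024);  // flush roughly every 4 MB
        try (BufferedMutator mutator = connection.getBufferedMutator(params)) {
            Put put = new Put(Bytes.toBytes(row));
            put.addColumn(Bytes.toBytes("info"), Bytes.toBytes("v"), Bytes.toBytes(value));
            mutator.mutate(put);
        }  // close() flushes any mutations still sitting in the buffer
    }
}
```

In a Kafka consumer loop, keeping one long-lived BufferedMutator and calling mutate() per record (flushing on close or periodically) is what makes the buffered mode faster than one-by-one Puts.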