Hadoop cluster DataNode fails to start on Linux

1. After reformatting the NameNode and restarting the Hadoop cluster, the DataNode fails to start.

The NameNode web UI shows no DataNode data; the node list is empty (a healthy cluster would list its DataNodes here). After searching for the cause for a long time, I found that the DataNode process had not started.

Checking the DataNode log revealed the problem:

java.io.IOException: Incompatible clusterIDs in /local/bigdata/hadoop-3.3.6/data/datanode: namenode clusterID = CID-589c10c0-f245-44cd-8e82-728857dbab93; datanode clusterID = CID-b09fd669-d1d4-43e0-b230-5e36c89b192c
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:746)
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.loadStorageDirectory(DataStorage.java:296)
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.loadDataStorage(DataStorage.java:409)
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.addStorageLocations(DataStorage.java:389)
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:561)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:2059)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1995)
        at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:394)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:312)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:891)
        at java.lang.Thread.run(Thread.java:750)
2023-11-11 17:12:26,242 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for Block pool <registering> (Datanode Uuid 46d415da-00d9-459f-9991-dd1889651a5a) service to node1/192.168.42.139:9000. Exiting.
java.io.IOException: All specified directories have failed to load.
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:562)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:2059)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1995)
        at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:394)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:312)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:891)
        at java.lang.Thread.run(Thread.java:750)

Finally, I stopped the cluster, deleted the data directories of both the NameNode and the DataNode, reformatted the NameNode, and started the Hadoop cluster again; the problem was solved.
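The reset described above can be sketched as follows. The paths are taken from this cluster's layout; adjust them to your own dfs.namenode.name.dir and dfs.datanode.data.dir, and keep in mind this deletes all HDFS data, so it is only for test environments:

```shell
# Test environments only: this wipes all HDFS blocks and metadata.
stop-dfs.sh

# Clear both storage directories so the next format assigns a fresh,
# consistent clusterID to NameNode and DataNode alike.
rm -rf /usr/local/bigdata/hadoop-3.3.6/data/namenode/* \
       /usr/local/bigdata/hadoop-3.3.6/data/datanode/*

# Reformat the NameNode and bring the cluster back up.
hdfs namenode -format
start-dfs.sh
```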

Note: the server clocks need to be synchronized. Under normal circumstances, formatting more than once is not allowed; that is acceptable only in a test environment. In a production environment, instead copy the clusterID from the NameNode's VERSION file into the DataNode's VERSION file. After a normal startup, the log looks like this:

2023-11-11 17:16:54,918 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Total time to scan all replicas for block pool BP-738434729-192.168.42.139-1699694199117: 32ms
2023-11-11 17:16:54,919 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Adding replicas to map for block pool BP-738434729-192.168.42.139-1699694199117 on volume /usr/local/bigdata/hadoop-3.3.6/data/datanode...
2023-11-11 17:16:54,919 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice: Replica Cache file: /usr/local/bigdata/hadoop-3.3.6/data/datanode/current/BP-738434729-192.168.42.139-1699694199117/current/replicas doesn't exist
2023-11-11 17:16:54,955 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time to add replicas to map for block pool BP-738434729-192.168.42.139-1699694199117 on volume /usr/local/bigdata/hadoop-3.3.6/data/datanode: 36ms
2023-11-11 17:16:54,955 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Total time to add all replicas to map for block pool BP-738434729-192.168.42.139-1699694199117: 37ms
2023-11-11 17:16:54,956 INFO org.apache.hadoop.hdfs.server.datanode.checker.ThrottledAsyncChecker: Scheduling a check for /usr/local/bigdata/hadoop-3.3.6/data/datanode
2023-11-11 17:16:54,987 INFO org.apache.hadoop.hdfs.server.datanode.checker.DatasetVolumeChecker: Scheduled health check for volume /usr/local/bigdata/hadoop-3.3.6/data/datanode
2023-11-11 17:16:54,993 INFO org.apache.hadoop.hdfs.server.datanode.VolumeScanner: Now scanning bpid BP-738434729-192.168.42.139-1699694199117 on volume /usr/local/bigdata/hadoop-3.3.6/data/datanode
2023-11-11 17:16:54,994 WARN org.apache.hadoop.hdfs.server.datanode.DirectoryScanner: dfs.datanode.directoryscan.throttle.limit.ms.per.sec set to value above 1000 ms/sec. Assuming default value of -1
2023-11-11 17:16:54,994 INFO org.apache.hadoop.hdfs.server.datanode.DirectoryScanner: Periodic Directory Tree Verification scan starting in 19374231ms with interval of 21600000ms and throttle limit of -1ms/s
2023-11-11 17:16:54,994 INFO org.apache.hadoop.hdfs.server.datanode.VolumeScanner: VolumeScanner(/usr/local/bigdata/hadoop-3.3.6/data/datanode, DS-53d37fc1-536f-4732-8e16-2fcc032838af): finished scanning block pool BP-738434729-192.168.42.139-1699694199117
2023-11-11 17:16:55,001 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool BP-738434729-192.168.42.139-1699694199117 (Datanode Uuid 31203931-5c6b-4f85-b7ba-969b97a14854) service to node1/192.168.42.139:9000 beginning handshake with NN
2023-11-11 17:16:55,004 INFO org.apache.hadoop.hdfs.server.datanode.VolumeScanner: VolumeScanner(/usr/local/bigdata/hadoop-3.3.6/data/datanode, DS-53d37fc1-536f-4732-8e16-2fcc032838af): no suitable block pools found to scan. Waiting 1814399987 ms.
2023-11-11 17:16:55,051 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool BP-738434729-192.168.42.139-1699694199117 (Datanode Uuid 31203931-5c6b-4f85-b7ba-969b97a14854) service to node1/192.168.42.139:9000 successfully registered with NN
2023-11-11 17:16:55,052 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: For namenode node1/192.168.42.139:9000 using BLOCKREPORT_INTERVAL of 21600000msecs CACHEREPORT_INTERVAL of 10000msecs Initial delay: 0msecs; heartBeatInterval=3000
2023-11-11 17:16:55,054 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting IBR Task Handler.
2023-11-11 17:16:55,092 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: After receiving heartbeat response, updating state of namenode node1:9000 to active
2023-11-11 17:16:55,111 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Successfully sent block report 0x15efe4f510e6c2c2 with lease ID 0x943e06b7110c03c3 to namenode: node1/192.168.42.139:9000, containing 1 storage report(s), of which we sent 1. The reports had 0 total blocks and used 1 RPC(s). This took 5 msecs to generate and 14 msecs for RPC and NN processing. Got back one command: FinalizeCommand/5.
2023-11-11 17:16:55,112 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Got finalize command for block pool BP-738434729-192.168.42.139-1699694199117
[root@node1 current]# cat /usr/local/bigdata/hadoop-3.3.6/data/datanode/current/VERSION
#Sat Nov 11 17:16:54 CST 2023
storageID=DS-53d37fc1-536f-4732-8e16-2fcc032838af
clusterID=CID-4b675e4e-9cb8-460c-b4c0-09457a19aa68
cTime=0
datanodeUuid=31203931-5c6b-4f85-b7ba-969b97a14854
storageType=DATA_NODE
layoutVersion=-57
[root@node1 current]# cat /usr/local/bigdata/hadoop-3.3.6/data/namenode/current/VERSION
#Sat Nov 11 17:16:39 CST 2023
namespaceID=1932319601
clusterID=CID-4b675e4e-9cb8-460c-b4c0-09457a19aa68
cTime=1699694199117
storageType=NAME_NODE
blockpoolID=BP-738434729-192.168.42.139-1699694199117
layoutVersion=-66
[root@node1 current]# jps
35922 NameNode
37156 NodeManager
36168 DataNode
36924 ResourceManager
51885 Jps
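As the VERSION files above show, both files must carry the same clusterID. The production-side fix mentioned earlier (copying the NameNode's clusterID into the DataNode's VERSION file) can be sketched like this. The snippet uses stand-in files with hypothetical IDs so it is self-contained; on a real node, point NN_VERSION and DN_VERSION at the current/VERSION paths shown above and stop the DataNode before editing:

```shell
# Demo with stand-in VERSION files; on a real node, point these at
# .../data/namenode/current/VERSION and .../data/datanode/current/VERSION.
WORK=$(mktemp -d)
NN_VERSION="$WORK/nn_VERSION"
DN_VERSION="$WORK/dn_VERSION"

# Stand-in files reproducing a clusterID mismatch (hypothetical IDs).
printf 'namespaceID=1932319601\nclusterID=CID-namenode-id\nstorageType=NAME_NODE\n' > "$NN_VERSION"
printf 'storageID=DS-demo\nclusterID=CID-stale-id\nstorageType=DATA_NODE\n' > "$DN_VERSION"

# Read the clusterID the NameNode expects and write it into the DataNode's file.
CID=$(grep '^clusterID=' "$NN_VERSION" | cut -d= -f2)
sed -i "s/^clusterID=.*/clusterID=${CID}/" "$DN_VERSION"
```

After this edit, restarting the DataNode lets it pass the clusterID check without reformatting, so existing blocks are preserved.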