Installing Hadoop 3.1.0 on Windows

1. Prepare the Java environment

Enter java -version in cmd to check. Java 8 is recommended; newer versions may not be compatible.
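
Since the Hadoop scripts configured later rely on JAVA_HOME, it is worth confirming both at once. A quick check in cmd (if the second command echoes %JAVA_HOME% unexpanded, the variable is not set yet):

java -version
echo %JAVA_HOME%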

2. Download the files Hadoop requires

Hadoop 3.1.0 installation package: Apache Hadoop; download it and decompress.

bin folder required for installation on Windows: GitHub – s911415/apache-hadoop-3.1.0-winutils (HADOOP 3.1.0 winutils)

3. Decompress the downloaded files

4. Replace the bin folder in the hadoop-3.1.0 package

Open the apache-hadoop-3.1.0-winutils-master package; it contains only a bin folder.

Use this bin folder to replace the bin folder in the hadoop-3.1.0 package
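
If you prefer to do the replacement from cmd, robocopy can copy the winutils bin over the original (the paths are the ones assumed in this guide; adjust them to your own):

robocopy apache-hadoop-3.1.0-winutils-master\bin D:\software-pro\hadoop\hadoop-3.1.0\bin /E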

5. Configure hadoop environment variables

Create a new system variable named HADOOP_HOME whose value is the Hadoop installation path. Mine is D:\software-pro\hadoop\hadoop-3.1.0

Then edit the Path variable and add %HADOOP_HOME%\bin
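
The variable can also be set from an administrator cmd window with setx, a sketch assuming the path above (the /M switch writes a system-wide variable, and a newly opened cmd window is needed before it takes effect):

rem set HADOOP_HOME system-wide; adding %HADOOP_HOME%\bin to Path is still easiest through the GUI
setx /M HADOOP_HOME "D:\software-pro\hadoop\hadoop-3.1.0"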

6. Check whether the environment variables are configured successfully

Enter hadoop version
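
If the variables are set correctly, both commands below succeed in a newly opened cmd window; where winutils additionally confirms that the replaced bin folder is on the Path:

hadoop version
where winutils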

7. Edit the hadoop configuration files

Go to the D:\software-pro\hadoop\hadoop-3.1.0\etc\hadoop folder

1. Configure the core-site.xml file

Add the following at the end of the file (if an empty <configuration></configuration> element is already there, replace it):

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
  <property>
    <name>hadoop.http.staticuser.user</name>
    <value>hadoop</value>
  </property>
</configuration>
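
fs.defaultFS is the address HDFS clients connect to, and hadoop.http.staticuser.user is the static user the web UI acts as. After the installation is finished, you can check which value Hadoop actually picked up with:

hdfs getconf -confKey fs.defaultFS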

2. Configure mapred-site.xml

Add the following at the end of the file (again replacing any empty <configuration></configuration> element):

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>

3. Configure yarn-site.xml

<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
</configuration>

4. Create a new data directory to hold the namenode and datanode folders used later

Create a new data directory inside the installation directory.
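
For example, from cmd (using the installation path assumed throughout this guide; the namenode and datanode subfolders referenced in the next step can be created at the same time, although Hadoop creates them on first start if they are missing):

md D:\software-pro\hadoop\hadoop-3.1.0\data
md D:\software-pro\hadoop\hadoop-3.1.0\data\namenode
md D:\software-pro\hadoop\hadoop-3.1.0\data\datanode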

5. Configure hdfs-site.xml

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/D:/software-pro/hadoop/hadoop-3.1.0/data/namenode</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/D:/software-pro/hadoop/hadoop-3.1.0/data/datanode</value>
  </property>
  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>
</configuration>

  • The values of dfs.namenode.name.dir and dfs.datanode.data.dir must point into your own data directory; note that each value starts with file:
  • dfs.permissions controls whether HDFS permission checking is enabled. The default is true; it is set to false here so that creating folders and uploading files later does not fail with permission errors. (In Hadoop 3 this key is deprecated in favor of dfs.permissions.enabled, but the old name is still honored.)

6. Configure hadoop-env.sh

Configure the JDK path; note that here the path includes bin:

export JAVA_HOME=C:\PROGRA~1\Java\jdk1.8.0_141\bin

Note: If the JDK is installed under Program Files, the space in the path causes problems, so replace that segment with the 8.3 short name PROGRA~1.
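
If you are unsure what the short name is on your machine, dir /x lists each folder's 8.3 name:

dir /x C:\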

7. Configure hadoop-env.cmd

Configure the JDK path:

set JAVA_HOME=C:\PROGRA~1\Java\jdk1.8.0_141

8. Start the hadoop services

  • Enter the D:\software-pro\hadoop\hadoop-3.1.0\bin directory and execute in a cmd window:
hdfs namenode -format
  • Double-click start-all.cmd (it is in the sbin directory)

Four windows appear (namenode, datanode, resourcemanager, and nodemanager); be careful not to close them.
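
To confirm that all four daemons are running, the JDK's jps tool lists the Java processes; it should show NameNode, DataNode, ResourceManager, and NodeManager:

jps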

9. Access page

Open http://localhost:9870/
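
This is the NameNode web UI (Hadoop 3.x moved it from port 50070 to 9870). As a further smoke test, create a directory in HDFS from cmd and list it (the directory name is just an example):

hdfs dfs -mkdir /test
hdfs dfs -ls /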

10. Problem solving

If creating a folder or uploading a file through the web UI fails with insufficient permissions, open the browser developer tools (F12) and find the host name in the request URL; here it is windows10.microdone.cn.

Finally, configure the hosts file to map that host name, which solves the problem.
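
A minimal hosts entry, assuming the host name should resolve to the local machine (edit C:\Windows\System32\drivers\etc\hosts as administrator):

127.0.0.1 windows10.microdone.cn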