General usage

The `bin/hadoop fs` and `bin/hdfs dfs` commands
`dfs` is an implementation class of `fs`; both commands operate on HDFS.
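Before memorizing individual flags, note that the shell is self-documenting. A minimal sketch (the `setrep` subcommand here is just an example; any subcommand works):

```shell
hadoop fs -help           # list every fs subcommand with a description
hadoop fs -help setrep    # detailed help for a single subcommand
hadoop fs -usage setrep   # just the one-line usage string
```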
Common commands:
hadoop fs -ls /                                        # list the root directory
hadoop fs -mkdir -p /zx/file                           # create a directory, making parents as needed
hadoop fs -moveFromLocal ./tt.txt /zx/file             # move a local file into HDFS (deletes the local copy)
hadoop fs -copyFromLocal ./a.txt /zx/file              # copy a local file into HDFS
hadoop fs -put                                         # equivalent to -copyFromLocal
...
hadoop fs -cat /zx/file/tt.txt                         # print a file's contents
hadoop fs -appendToFile ./t.txt /zx/file/tt.txt        # append a local file to an HDFS file
hadoop fs -chmod g+w /zx/file/tt.txt                   # change permissions
hadoop fs -copyToLocal /zx/file/tt.txt ./aa/           # copy an HDFS file to the local filesystem
hadoop fs -get                                         # equivalent to -copyToLocal
...
hadoop fs -cp /sanguo/shuguo/kongming.txt /zhuge.txt   # copy within HDFS
hadoop fs -mv /zhuge.txt /sanguo/shuguo/               # move/rename within HDFS
hadoop fs -getmerge /user/atguigu/test/* ./zaiyiqi.txt # merge several HDFS files into one local file
[atguigu@hadoop102 hadoop-2.7.2]$ hadoop fs -setrep 10 /sanguo/shuguo/kongming.txt
The replication factor set here is only recorded in the NameNode's metadata; whether that many replicas actually exist depends on the number of DataNodes. With only 3 machines at present, there can be at most 3 replicas; the replication factor can only reach 10 once the cluster grows to 10 nodes.
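To see the difference between the requested and the actual replication, you can compare the factor stored in the NameNode metadata with the live block report. A sketch, reusing the file from the example above:

```shell
# Replication factor recorded in NameNode metadata (%r is the stat format
# specifier for replication)
hadoop fs -stat %r /sanguo/shuguo/kongming.txt

# Actual block placement: fsck reports how many live replicas each block has,
# which on a 3-node cluster stays at 3 even after -setrep 10
hdfs fsck /sanguo/shuguo/kongming.txt -files -blocks
```

On a 3-node cluster, the first command prints 10 while fsck reports the file as under-replicated with 3 live replicas per block.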
Problem encountered:
2020-06-19 19:52:45,732 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
2020-06-19 19:52:45,910 INFO hdfs.DataStreamer: Exception in createBlockOutputStream blk_1073741825_1001
java.io.IOException: Got error, status=ERROR, status message , ack with firstBadLink as 192.168.43.12:9866
	at org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.checkBlockOpStatus(DataTransferProtoUtil.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.checkBlockOpStatus(DataTransferProtoUtil.java:110)
	at org.apache.hadoop.hdfs.DataStreamer.createBlockOutputStream(DataStreamer.java:1778)
	at org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1679)
	at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:716)
The `firstBadLink` address is a DataNode's data-transfer port (9866), so the client can reach the NameNode but not the DataNode: the firewall on the DataNodes needs to be turned off.
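A minimal sketch of the fix, assuming the DataNodes run CentOS 7 with firewalld (adjust for your distribution):

```shell
# Run on each DataNode
systemctl stop firewalld       # stop the firewall immediately
systemctl disable firewalld    # keep it off across reboots

# Alternatively, keep the firewall and open only the DataNode
# data-transfer port seen in the error log:
firewall-cmd --permanent --add-port=9866/tcp
firewall-cmd --reload
```

Opening only the needed port is the safer choice on machines exposed to untrusted networks; disabling the firewall entirely is common on isolated lab clusters.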