[root@server01 /]# hdfs dfs -help ls
-ls [-C] [-d] [-h] [-q] [-R] [-t] [-S] [-r] [-u] [<path> ...] :
  List the contents that match the specified file pattern. If path is not
  specified, the contents of /user/<currentUser> will be listed. For a directory a
  list of its direct children is returned (unless -d option is specified).

  Directory entries are of the form:
        permissions - userId groupId sizeOfDirectory(in bytes) modificationDate(yyyy-MM-dd HH:mm) directoryName

  and file entries are of the form:
        permissions numberOfReplicas userId groupId sizeOfFile(in bytes) modificationDate(yyyy-MM-dd HH:mm) fileName

  -C  Display the paths of files and directories only.
  -d  Directories are listed as plain files.
  -h  Formats the sizes of files in a human-readable fashion rather than a number of bytes.
  -q  Print ? instead of non-printable characters.
  -R  Recursively list the contents of directories.
  -t  Sort files by modification time (most recent first).
  -S  Sort files by size.
  -r  Reverse the order of the sort.
  -u  Use time of last access instead of modification for display and sorting.
[root@server01 /]# hdfs dfs -ls /cloudDisk
Found 1 items
drwxr-xr-x   - cloudera-scm supergroup          0 2018-07-25 10:15 /cloudDisk/smallFile
Display only the paths of files and directories: -ls -C
[root@server01 /]# hdfs dfs -ls -C /cloudDisk
/cloudDisk/smallFile
Recursively list the contents of a directory: -ls -R
[root@server01 /]# hdfs dfs -ls -R /cloudDisk
drwxr-xr-x   - cloudera-scm supergroup          0 2018-07-25 10:15 /cloudDisk/smallFile
-rw-r--r--   3 cloudera-scm supergroup     530352 2018-07-24 09:15 /cloudDisk/smallFile/1021564417134678016
-rw-r--r--   3 cloudera-scm supergroup      88002 2018-07-24 11:19 /cloudDisk/smallFile/1021595649553838080
-rw-r--r--   3 cloudera-scm supergroup     107925 2018-07-24 16:18 /cloudDisk/smallFile/1021670959947255808
-rw-r--r--   3 cloudera-scm supergroup       8725 2018-07-25 10:15 /cloudDisk/smallFile/1021941940498526208
Note: the sort options -t, -S, -r and -u have no effect when combined with -ls -R.
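Used on a single directory (without -R), the sort flags behave as documented in the help text above; for example (commands only, output omitted):
[hdfs@server01 /]$ hdfs dfs -ls -t /cloudDisk/smallFile      # newest files first
[hdfs@server01 /]$ hdfs dfs -ls -t -r /cloudDisk/smallFile   # oldest files first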
[hdfs@server01 /]$ hdfs dfs -du -h /cloudDisk/smallFile
Size      Disk space used (all replicas)   Name (full path)
517.9 K   1.5 M     /cloudDisk/smallFile/1021564417134678016
85.9 K    257.8 K   /cloudDisk/smallFile/1021595649553838080
105.4 K   316.2 K   /cloudDisk/smallFile/1021670959947255808
8.5 K     25.6 K    /cloudDisk/smallFile/1021941940498526208
The -s option prints an aggregate summary (a single total) instead of per-file sizes.
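A minimal sketch of the summary form, reusing the directory above (only the command is shown; it prints one line with the total size, total disk space consumed, and the path, here roughly 717.8 K and 2.1 M):
[hdfs@server01 /]$ hdfs dfs -du -s -h /cloudDisk/smallFile    # -s aggregates everything under the path into one summary line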
[hdfs@server01 /]$ hdfs dfs -count -h /cloudDisk/smallFile
Directories   Files   Bytes     Path
          1       4   717.8 K   /cloudDisk/smallFile
[root@server01 /]# hdfs dfs -mkdir /home
mkdir: Permission denied: user=root, access=WRITE, inode="/":hdfs:supergroup:drwxr-xr-x
The mkdir fails here because it was run as the root user, which does not have write permission on /. Switch users with:
[root@server01 /]# su hdfs
to become the hdfs user, then re-run the command:
[hdfs@server01 /]$ hdfs dfs -mkdir /home
[hdfs@server01 /]$ hdfs dfs -ls /
Found 6 items
drwxr-xr-x   - cloudera-scm supergroup          0 2018-07-24 09:15 /cloudDisk
drwxr-xr-x   - hbase        hbase               0 2018-07-24 08:44 /hbase
drwxr-xr-x   - hdfs         supergroup          0 2018-08-22 15:10 /home
drwxrwxr-x   - solr         solr                0 2018-07-20 09:44 /solr
drwxrwxrwt   - hdfs         supergroup          0 2018-07-20 09:46 /tmp
drwxr-xr-x   - hdfs         supergroup          0 2018-08-22 15:09 /user
When parent directories along the path do not yet exist, use the -p option to create them as well:
[hdfs@server01 /]$ hdfs dfs -mkdir -p /home/t2/t3
[hdfs@server01 /]$ hdfs dfs -ls -R /home
drwxr-xr-x   - hdfs supergroup          0 2018-08-22 15:34 /home/t1
drwxr-xr-x   - hdfs supergroup          0 2018-08-22 15:15 /home/t2
drwxr-xr-x   - hdfs supergroup          0 2018-08-22 15:15 /home/t2/t3
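The next listing shows the contents of /cloudDisk copied into /home/t1. The copy command itself is missing from the transcript; it was presumably something like:
[hdfs@server01 /]$ hdfs dfs -cp /cloudDisk /home/t1    # assumed command: recursively copies /cloudDisk into the existing directory /home/t1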
[hdfs@server01 /]$ hdfs dfs -ls -R /home
drwxr-xr-x   - hdfs supergroup          0 2018-08-22 15:35 /home/t1
drwxr-xr-x   - hdfs supergroup          0 2018-08-22 15:35 /home/t1/cloudDisk
drwxr-xr-x   - hdfs supergroup          0 2018-08-22 15:35 /home/t1/cloudDisk/smallFile
-rw-r--r--   3 hdfs supergroup     530352 2018-08-22 15:35 /home/t1/cloudDisk/smallFile/1021564417134678016
-rw-r--r--   3 hdfs supergroup      88002 2018-08-22 15:35 /home/t1/cloudDisk/smallFile/1021595649553838080
-rw-r--r--   3 hdfs supergroup     107925 2018-08-22 15:35 /home/t1/cloudDisk/smallFile/1021670959947255808
-rw-r--r--   3 hdfs supergroup       8725 2018-08-22 15:35 /home/t1/cloudDisk/smallFile/1021941940498526208
drwxr-xr-x   - hdfs supergroup          0 2018-08-22 15:15 /home/t2
drwxr-xr-x   - hdfs supergroup          0 2018-08-22 15:15 /home/t2/t3
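Between this listing and the next one, /home/t1 has been moved under /home/t2/t3. The move command is not shown in the transcript; presumably something like:
[hdfs@server01 /]$ hdfs dfs -mv /home/t1 /home/t2/t3/    # assumed command: -mv renames/moves within HDFS without copying data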
[hdfs@server01 /]$ hdfs dfs -ls -R /home
drwxr-xr-x   - hdfs supergroup          0 2018-08-22 15:15 /home/t2
drwxr-xr-x   - hdfs supergroup          0 2018-08-22 15:43 /home/t2/t3
drwxr-xr-x   - hdfs supergroup          0 2018-08-22 15:35 /home/t2/t3/t1
drwxr-xr-x   - hdfs supergroup          0 2018-08-22 15:35 /home/t2/t3/t1/cloudDisk
drwxr-xr-x   - hdfs supergroup          0 2018-08-22 15:35 /home/t2/t3/t1/cloudDisk/smallFile
-rw-r--r--   3 hdfs supergroup     530352 2018-08-22 15:35 /home/t2/t3/t1/cloudDisk/smallFile/1021564417134678016
-rw-r--r--   3 hdfs supergroup      88002 2018-08-22 15:35 /home/t2/t3/t1/cloudDisk/smallFile/1021595649553838080
-rw-r--r--   3 hdfs supergroup     107925 2018-08-22 15:35 /home/t2/t3/t1/cloudDisk/smallFile/1021670959947255808
-rw-r--r--   3 hdfs supergroup       8725 2018-08-22 15:35 /home/t2/t3/t1/cloudDisk/smallFile/1021941940498526208
[hdfs@server01 /]$ hdfs dfs -rm /home/t2/t3/t1/cloudDisk/smallFile/1021941940498526208
18/08/22 15:46:06 INFO fs.TrashPolicyDefault: Moved: 'hdfs://server01:8020/home/t2/t3/t1/cloudDisk/smallFile/1021941940498526208' to trash at: hdfs://server01:8020/user/hdfs/.Trash/Current/home/t2/t3/t1/cloudDisk/smallFile/1021941940498526208
[hdfs@server01 /]$ hdfs dfs -ls -R /home
drwxr-xr-x   - hdfs supergroup          0 2018-08-22 15:15 /home/t2
drwxr-xr-x   - hdfs supergroup          0 2018-08-22 15:43 /home/t2/t3
drwxr-xr-x   - hdfs supergroup          0 2018-08-22 15:35 /home/t2/t3/t1
drwxr-xr-x   - hdfs supergroup          0 2018-08-22 15:35 /home/t2/t3/t1/cloudDisk
drwxr-xr-x   - hdfs supergroup          0 2018-08-22 15:46 /home/t2/t3/t1/cloudDisk/smallFile
-rw-r--r--   3 hdfs supergroup     530352 2018-08-22 15:35 /home/t2/t3/t1/cloudDisk/smallFile/1021564417134678016
-rw-r--r--   3 hdfs supergroup      88002 2018-08-22 15:35 /home/t2/t3/t1/cloudDisk/smallFile/1021595649553838080
-rw-r--r--   3 hdfs supergroup     107925 2018-08-22 15:35 /home/t2/t3/t1/cloudDisk/smallFile/1021670959947255808
You can use -rmr in place of -rm -r (the -rmr form is deprecated in newer releases).
[hdfs@server01 /]$ hdfs dfs -rm -r /home/t2/t3/t1
18/08/22 15:46:37 INFO fs.TrashPolicyDefault: Moved: 'hdfs://server01:8020/home/t2/t3/t1' to trash at: hdfs://server01:8020/user/hdfs/.Trash/Current/home/t2/t3/t11534923997974
[hdfs@server01 /]$ hdfs dfs -ls -R /home
drwxr-xr-x   - hdfs supergroup          0 2018-08-22 15:15 /home/t2
drwxr-xr-x   - hdfs supergroup          0 2018-08-22 15:46 /home/t2/t3
[hdfs@server01 ~]$ ls
test.txt
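In the next listing the local test.txt shows up in HDFS as /home/t2/t3/test.txt. The upload command is not in the transcript; it was presumably a -put (or the equivalent -copyFromLocal), e.g.:
[hdfs@server01 ~]$ hdfs dfs -put ~/test.txt /home/t2/t3/    # assumed command: uploads a local file into HDFS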
[hdfs@server01 ~]$ hdfs dfs -ls -R /home
drwxr-xr-x   - hdfs supergroup          0 2018-08-22 15:15 /home/t2
drwxr-xr-x   - hdfs supergroup          0 2018-08-22 16:01 /home/t2/t3
-rw-r--r--   3 hdfs supergroup          0 2018-08-22 16:01 /home/t2/t3/test.txt
[hdfs@server01 ~]$ hdfs dfs -ls -R /home
-rw-r--r--   3 hdfs supergroup         18 2018-08-22 16:08 /home/t1.txt
drwxr-xr-x   - hdfs supergroup          0 2018-08-22 15:15 /home/t2
drwxr-xr-x   - hdfs supergroup          0 2018-08-22 16:01 /home/t2/t3
-rw-r--r--   3 hdfs supergroup          0 2018-08-22 16:01 /home/t2/t3/test.txt
[hdfs@server01 ~]$ hdfs dfs -moveFromLocal ~/t1.txt /home/t2/t3/t1.txt
[hdfs@server01 ~]$ hdfs dfs -get /home/t2/t3/t1.txt .
[hdfs@server01 ~]$ ls
t1.txt
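The local data file read below is a merge of the files under /home/t2/t3. The merge command is missing from the transcript; presumably something like:
[hdfs@server01 ~]$ hdfs dfs -getmerge /home/t2/t3 ~/data    # assumed command: concatenates the source files into one local file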
[hdfs@server01 ~]$ cat data
test the hdfs....
this is the test hdfs 002
To add a newline after each merged file, use -getmerge -nl.
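For example (command assumed, mirroring the merge above):
[hdfs@server01 ~]$ hdfs dfs -getmerge -nl /home/t2/t3 ~/data    # -nl appends a newline after each merged source file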
[hdfs@server01 ~]$ cat data
test the hdfs....
this is the test hdfs 002
[hdfs@server01 ~]$ ls
data t1.txt t2.txt
[hdfs@server01 ~]$ ls
data t1.txt t2.txt t3.txt
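The empty /home/t2/t3/test.zz that appears in the next listing was most likely created with -touchz; the command is not in the transcript, but it would be:
[hdfs@server01 ~]$ hdfs dfs -touchz /home/t2/t3/test.zz    # assumed command: -touchz creates a zero-length file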
[hdfs@server01 ~]$ hdfs dfs -ls -R /home
-rw-r--r--   3 hdfs supergroup         18 2018-08-22 16:08 /home/t1.txt
drwxr-xr-x   - hdfs supergroup          0 2018-08-22 15:15 /home/t2
drwxr-xr-x   - hdfs supergroup          0 2018-08-22 17:00 /home/t2/t3
-rw-r--r--   3 hdfs supergroup         18 2018-08-22 16:31 /home/t2/t3/t1.txt
-rw-r--r--   3 hdfs supergroup         26 2018-08-22 16:49 /home/t2/t3/t2.txt
-rw-r--r--   3 hdfs supergroup          0 2018-08-22 16:01 /home/t2/t3/test.txt
-rw-r--r--   3 hdfs supergroup          0 2018-08-22 17:00 /home/t2/t3/test.zz
[hdfs@server01 ~]$ hdfs dfs -text /home/t2/t3/t1.txt
test the hdfs....
[hdfs@server01 ~]$ hdfs dfs -cat /home/t2/t3/t1.txt
test the hdfs....
[hdfs@server01 ~]$ hdfs dfs -cat /home/t2/t3/data
test the hdfs....
this is the test hdfs 002
[hdfs@server01 ~]$ hdfs dfs -cat /home/t2/t3/data|grep hdfs
test the hdfs....
this is the test hdfs 002
[hdfs@server01 ~]$ hdfs dfs -cat /home/t2/t3/data|grep this
this is the test hdfs 002
[hdfs@server01 ~]$ hdfs dfs -find /home -name t1.txt
/home/t1.txt
/home/t2/t3/t1.txt
[root@server01 ~]# hdfs dfs -ls -R /home
-rw-r--r--   3 hdfs supergroup         18 2018-08-22 16:08 /home/t1.txt
drwxr-xr-x   - hdfs supergroup          0 2018-08-22 15:15 /home/t2
drwxr-xr-x   - hdfs supergroup          0 2018-08-22 17:06 /home/t2/t3
-rw-r--r--   3 hdfs supergroup         47 2018-08-22 17:06 /home/t2/t3/data
-rw-r--r--   3 hdfs supergroup         18 2018-08-22 16:31 /home/t2/t3/t1.txt
-rw-r--r--   3 hdfs supergroup         26 2018-08-22 16:49 /home/t2/t3/t2.txt
-rw-r--r--   3 hdfs supergroup          0 2018-08-22 16:01 /home/t2/t3/test.txt
-rw-r--r--   3 hdfs supergroup          0 2018-08-22 17:00 /home/t2/t3/test.zz
[hdfs@server01 root]$ hdfs dfs -chmod -R 755 /home/t2/t3
[root@server01 ~]# hdfs dfs -ls -R /home
-rw-r--r--   3 hdfs supergroup         18 2018-08-22 16:08 /home/t1.txt
drwxr-xr-x   - hdfs supergroup          0 2018-08-22 15:15 /home/t2
drwxr-xr-x   - hdfs supergroup          0 2018-08-22 17:06 /home/t2/t3
-rwxr-xr-x   3 hdfs supergroup         47 2018-08-22 17:06 /home/t2/t3/data
-rwxr-xr-x   3 hdfs supergroup         18 2018-08-22 16:31 /home/t2/t3/t1.txt
-rwxr-xr-x   3 hdfs supergroup         26 2018-08-22 16:49 /home/t2/t3/t2.txt
-rwxr-xr-x   3 hdfs supergroup          0 2018-08-22 16:01 /home/t2/t3/test.txt
-rwxr-xr-x   3 hdfs supergroup          0 2018-08-22 17:00 /home/t2/t3/test.zz
[hdfs@server01 root]$ hdfs dfs -ls /home/t2/t3/t1.txt
-rwxr-xr-x 3 hdfs supergroup 18 2018-08-22 16:31 /home/t2/t3/t1.txt
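In the listing below, the owner of t1.txt has changed from hdfs to root. The intervening command is not shown in the transcript; presumably a -chown run as the HDFS superuser, e.g.:
[hdfs@server01 root]$ hdfs dfs -chown root /home/t2/t3/t1.txt    # assumed command: changes the file's owner (group left unchanged)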
[hdfs@server01 root]$ hdfs dfs -ls /home/t2/t3/t1.txt
-rwxr-xr-x 3 root supergroup 18 2018-08-22 16:31 /home/t2/t3/t1.txt
[hdfs@server01 ~]$ hdfs dfsadmin -report
Configured Capacity: 169982572956 (158.31 GB)
Present Capacity: 149058642916 (138.82 GB)
DFS Remaining: 147186349028 (137.08 GB)
DFS Used: 1872293888 (1.74 GB)
DFS Used%: 1.26%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0

-------------------------------------------------
Live datanodes (4):

Name: 192.168.242.52:50010 (server02)
Hostname: server02
Rack: /default
Decommission Status : Normal
Configured Capacity: 42495643239 (39.58 GB)
DFS Used: 490999808 (468.25 MB)
Non DFS Used: 4575563367 (4.26 GB)
DFS Remaining: 37026427129 (34.48 GB)
DFS Used%: 1.16%
DFS Remaining%: 87.13%
Configured Cache Capacity: 2364538880 (2.20 GB)
Cache Used: 0 (0 B)
Cache Remaining: 2364538880 (2.20 GB)
Cache Used%: 0.00%
Cache Remaining%: 100.00%
Xceivers: 8
Last contact: Wed Aug 22 17:20:11 CST 2018

Name: 192.168.242.53:50010 (server03)
Hostname: server03
Rack: /default
Decommission Status : Normal
Configured Capacity: 42495643239 (39.58 GB)
DFS Used: 440930304 (420.50 MB)
Non DFS Used: 4487175783 (4.18 GB)
DFS Remaining: 37030666572 (34.49 GB)
DFS Used%: 1.04%
DFS Remaining%: 87.14%
Configured Cache Capacity: 2364538880 (2.20 GB)
Cache Used: 0 (0 B)
Cache Remaining: 2364538880 (2.20 GB)
Cache Used%: 0.00%
Cache Remaining%: 100.00%
Xceivers: 10
Last contact: Wed Aug 22 17:20:12 CST 2018

Name: 192.168.242.54:50010 (server04)
Hostname: server04
Rack: /default
Decommission Status : Normal
Configured Capacity: 42495643239 (39.58 GB)
DFS Used: 439042048 (418.70 MB)
Non DFS Used: 5806202471 (5.41 GB)
DFS Remaining: 35981963430 (33.51 GB)
DFS Used%: 1.03%
DFS Remaining%: 84.67%
Configured Cache Capacity: 2364538880 (2.20 GB)
Cache Used: 0 (0 B)
Cache Remaining: 2364538880 (2.20 GB)
Cache Used%: 0.00%
Cache Remaining%: 100.00%
Xceivers: 6
Last contact: Wed Aug 22 17:20:12 CST 2018

Name: 192.168.242.55:50010 (server05)
Hostname: server05
Rack: /default
Decommission Status : Normal
Configured Capacity: 42495643239 (39.58 GB)
DFS Used: 501321728 (478.10 MB)
Non DFS Used: 4444376679 (4.14 GB)
DFS Remaining: 37147291897 (34.60 GB)
DFS Used%: 1.18%
DFS Remaining%: 87.41%
Configured Cache Capacity: 2364538880 (2.20 GB)
Cache Used: 0 (0 B)
Cache Remaining: 2364538880 (2.20 GB)
Cache Used%: 0.00%
Cache Remaining%: 100.00%
Xceivers: 8
Last contact: Wed Aug 22 17:20:11 CST 2018
hdfs dfsadmin -refreshNodes re-reads the hosts and exclude files so that newly added nodes, or nodes that should leave the cluster, are recognized again by the NameNode. It is used when commissioning new DataNodes or decommissioning existing ones.
Example: hdfs dfsadmin -refreshNodes
[hdfs@server01 root]$ hdfs dfs -setrep 2 /home/t2/t3/t1.txt
Even if the replication factor is set higher than the number of DataNodes (for example, 5 replicas on a 3-node cluster), only 3 replicas will actually be stored. As new nodes join the cluster, additional replicas are created until the number of nodes equals or exceeds the replication factor, at which point all 5 replicas exist.
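To verify the change, list the file again; the second column of the -ls output is the file's replication factor:
[hdfs@server01 root]$ hdfs dfs -ls /home/t2/t3/t1.txt    # the second column shows the replication factor set above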