Hadoop (1) -- HDFS API Operations

    Tech  2022-07-11  140

    HDFS File Upload

    1. Source code (the imports below are shared by all the test methods in this post)

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.*;
import org.junit.Test;

import java.io.IOException;
import java.net.URI;
import java.net.URISyntaxException;

@Test
public void testCopyFromLocalFile() throws IOException, InterruptedException, URISyntaxException {
    // 1. Get the file system
    Configuration configuration = new Configuration();
    configuration.set("dfs.replication", "2"); // replication factor
    // Use your own Hadoop host name and user name here
    FileSystem fs = FileSystem.get(new URI("hdfs://hadoop101:9000"), configuration, "root");

    // 2. Upload the file (local path, then HDFS path)
    fs.copyFromLocalFile(new Path("d:/test.txt"), new Path("/test.txt"));

    // 3. Close the file system
    fs.close();
    System.out.println("Upload finished");
}
```

    2. Copy hdfs-site.xml into the project's classpath root (e.g. src/main/resources) so the client picks it up.
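As a sketch of what that file might contain (the replication value 1 here is just an example): parameters set in code via `configuration.set()` take precedence over this classpath file, which in turn overrides the cluster's server-side defaults.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <!-- Client-side override of the default replication factor -->
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
</configuration>
```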

    HDFS File Download

```java
@Test
public void testCopyToLocalFile() throws IOException, InterruptedException, URISyntaxException {
    // 1. Get the file system
    Configuration configuration = new Configuration();
    // Use your own Hadoop host name and user name here
    FileSystem fs = FileSystem.get(new URI("hdfs://hadoop101:9000"), configuration, "root");

    // 2. Download the file:
    //    delSrc = false               -> keep the source file on HDFS
    //    useRawLocalFileSystem = true -> do not write a local .crc checksum file
    fs.copyToLocalFile(false, new Path("/test.txt"), new Path("d:/test.txt"), true);

    // 3. Close the file system
    fs.close();
    System.out.println("Download finished");
}
```

    HDFS File/Directory Deletion

```java
@Test
public void testDelete() throws IOException, InterruptedException, URISyntaxException {
    // 1. Get the file system
    Configuration configuration = new Configuration();
    // Use your own Hadoop host name and user name here
    FileSystem fs = FileSystem.get(new URI("hdfs://hadoop101:9000"), configuration, "root");

    // 2. Delete the path; the second argument enables recursive deletion,
    //    which is required when deleting a non-empty directory
    fs.delete(new Path("/honglou.txt"), true);

    // 3. Close the file system
    fs.close();
    System.out.println("Delete finished");
}
```

    HDFS File Rename

```java
@Test
public void rename() throws IOException, InterruptedException, URISyntaxException {
    // 1. Get the file system
    Configuration configuration = new Configuration();
    // Use your own Hadoop host name and user name here
    FileSystem fs = FileSystem.get(new URI("hdfs://hadoop101:9000"), configuration, "root");

    // 2. Rename the file (old path, new path)
    fs.rename(new Path("/1.txt"), new Path("/2.txt"));

    // 3. Close the file system
    fs.close();
}
```

    Viewing HDFS File Details

```java
@Test
public void testListFiles() throws IOException, InterruptedException, URISyntaxException {
    // 1. Get the file system
    Configuration configuration = new Configuration();
    FileSystem fs = FileSystem.get(new URI("hdfs://hadoop101:9000"), configuration, "root");

    // 2. List file details recursively from the root
    RemoteIterator<LocatedFileStatus> listFiles = fs.listFiles(new Path("/"), true);
    while (listFiles.hasNext()) {
        LocatedFileStatus status = listFiles.next();
        // File name
        System.out.println(status.getPath().getName());
        // Length in bytes
        System.out.println(status.getLen());
        // Permissions
        System.out.println(status.getPermission());
        // Group
        System.out.println(status.getGroup());
        // Block locations (note: directories have no blocks)
        BlockLocation[] blockLocations = status.getBlockLocations();
        for (BlockLocation blockLocation : blockLocations) {
            // Host nodes storing each block
            String[] hosts = blockLocation.getHosts();
            for (String host : hosts) {
                System.out.println(host);
            }
        }
        System.out.println("==== divider ====");
    }

    // 3. Close the file system
    fs.close();
}
```

    Distinguishing Files from Directories on HDFS

```java
@Test
public void testListStatus() throws IOException, InterruptedException, URISyntaxException {
    // 1. Get the file system
    Configuration configuration = new Configuration();
    FileSystem fs = FileSystem.get(new URI("hdfs://hadoop101:9000"), configuration, "root");

    // 2. Check whether each entry under the root is a file or a directory
    FileStatus[] fileStatuses = fs.listStatus(new Path("/"));
    for (FileStatus fileStatus : fileStatuses) {
        if (fileStatus.isFile()) {
            System.out.println("f:" + fileStatus.getPath().getName());
        } else {
            System.out.println("d:" + fileStatus.getPath().getName());
        }
    }

    // 3. Close the file system
    fs.close();
}
```