[Repost] Installing hdfs-over-ftp
Published: 2019-06-27


Original article: http://nubetech.co/accessing-hdfs-over-ftp
This program accesses HDFS through port 9000. I hear Hadoop also has its own extension package for this, which requires recompiling Hadoop; if I get the chance I will install it and compare the efficiency.
Download the tarball: hdfs-over-ftp-0.20.0.tar.gz (my Hadoop version is 0.20.2).
 
1. After extracting, run the following inside the directory:
./register-user.sh username password >> users.conf
This appends a new FTP account entry to users.conf.
A side note: in xxxx.homedirectory=/, the / is the root directory of your HDFS.
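If you want to sanity-check the generated entry, the userpassword value appears to be an uppercase MD5 hex digest of the password (an assumption based on the sample output further below); a minimal check:

# Assumption: users.conf stores passwords as uppercase MD5 hex digests.
# Compare the output with the userpassword line that register-user.sh appended:
echo -n "password" | md5sum | awk '{ print toupper($1) }'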
2. Edit hdfs-over-ftp.conf. Two settings deserve attention:
hdfs-uri = hdfs://localhost:9000    # confirm that localhost:9000 actually reaches your Hadoop instance
superuser = hadoop                  # must be the user that runs the Hadoop services
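Before starting the server, a quick sketch to confirm the URI is reachable (assuming you run it from the root of your Hadoop 0.20.2 installation):

# Should list the HDFS root rather than error out:
bin/hadoop fs -ls hdfs://localhost:9000/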
3. Edit log4j.conf and set:
log4j.appender.R.File=xxxxxxx
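For example (the path here is hypothetical; any location writable by the user who starts the server will do):

# Hypothetical log location; create the directory first if it does not exist:
log4j.appender.R.File=/var/log/hdfs-over-ftp/hdfs-over-ftp.log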
 
4. Start/stop:
sudo ./hdfs-over-ftp.sh start
# sudo is required. If your user lacks sudo rights, edit /etc/sudoers and add the following line below the root entry:
username    ALL=(ALL)       ALL
sudo ./hdfs-over-ftp.sh stop
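To confirm the server actually came up, check that something is listening on the FTP port (a sketch; port 21 assumes the default hdfs-over-ftp.conf):

# A listener on 0.0.0.0:21 (or similar) should appear:
netstat -tln | grep ':21 '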
 
 

The Hadoop Distributed File System provides different interfaces so that clients can interact with it. Besides the HDFS shell, the file system exposes itself through WebDAV, Thrift, FTP and FUSE. In this post, we access HDFS over FTP. We have used Hadoop 0.20.2.

1. Download the hdfs-over-ftp tar from https://issues.apache.org/jira/secure/attachment/12409518/hdfs-over-ftp-0.20.0.tar.gz

2. Untar hdfs-over-ftp-0.20.0.tar.gz.

3. We now need to create the configuration with ftp username and password.

./register-user.sh username password >> users.conf

# the username user

ftpserver.user.username.userpassword=0238775C7BD96E2EAB98038AFE0C4279
ftpserver.user.username.homedirectory=/
ftpserver.user.username.enableflag=true
ftpserver.user.username.writepermission=true
ftpserver.user.username.maxloginnumber=0
ftpserver.user.username.maxloginperip=0
ftpserver.user.username.idletime=0
ftpserver.user.username.uploadrate=0
ftpserver.user.username.downloadrate=0
ftpserver.user.username.groups=users

4. Configure log4j.conf so that you can diagnose what's happening.

5. Now adjust hdfs-over-ftp.conf according to your requirements:

#uncomment this to run ftp server
port = 21
data-ports = 20

#uncomment this to run ssl ftp server

#ssl-port = 990
#ssl-data-ports = 989

# hdfs uri

hdfs-uri = hdfs://localhost:9000

# max number of login

max-logins = 1000

# max number of anonymous login

max-anon-logins = 1000

# have to be a user which runs HDFS

# this allows you to start ftp server as a root to use 21 port
# and use hdfs as a superuser
superuser = hadoop

Set hdfs-uri according to your environment.

6. Now start the FTP server:

sudo ./hdfs-over-ftp.sh start
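If the server does not respond, the log file configured in step 4 is the first place to look (the path below is hypothetical, matching the earlier log4j example):

tail -f /var/log/hdfs-over-ftp/hdfs-over-ftp.log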
7. To log in to HDFS as an FTP client:

ftp {ip address of namenode machine}

(Note: use the username and password you registered in users.conf.)
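A typical session looks like this (the host address and file name are hypothetical):

ftp 192.168.1.100
(enter the registered username and password at the prompt)
ftp> ls
ftp> put local-report.txt
ftp> bye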

8. To upload files to or write in any folder of HDFS, you need to grant ownership to your user through Hadoop's 'chown' command (the owner comes first, then the group):

bin/hadoop fs -chown -R username:group {path}
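For example, to make the hypothetical FTP user 'alice' (group 'users') the owner of /uploads:

bin/hadoop fs -chown -R alice:users /uploads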
9. You can stop the FTP server with:
sudo ./hdfs-over-ftp.sh stop

Reposted from: http://qdacl.baihongyu.com/
