I have everything set up. I ssh into the master node, and I want to copy a file into HDFS. In my program, the line of code is:
os.system('/home/hadoop/bin/hdfs dfs -put %s PATH_to_HADOOP' % tmp_output)
I want to fill in the path to the HDFS file system (the PATH_to_HADOOP part).
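For illustration, if tmp_output pointed at some local file and I used a hypothetical HDFS directory such as /tmp/stockmarkets (the one I try to create below), the command that line runs would look roughly like:

/home/hadoop/bin/hdfs dfs -put /path/to/local_file /tmp/stockmarkets/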
So I did:
[ec2-user@ip-172-31-0-185 input]$ /home/hadoop/bin/hdfs dfs -ls /
Found 2 items
drwxr-xr-x - hadoop supergroup 0 2014-04-14 22:21 /hbase
drwxrwx--- - hadoop supergroup 0 2014-04-14 22:19 /tmp
So I tried:
[ec2-user@ip-172-31-0-185 input]$ /home/hadoop/bin/hdfs dfs -mkdir /tmp/stockmarkets
mkdir: Permission denied: user=ec2-user, access=EXECUTE, inode="/tmp":hadoop:supergroup:drwxrwx---
So, to add ec2-user so that it can use hadoop, I followed these instructions:
http://cloudcelebrity.wordpress.com/2013/06/05/handling-permission-denied-error-on-hdfs/
But after I typed (using ec2-user in place of ubuntu):
sudo adduser ec2-user hadoop
instead of a message confirming the user was added, I got:
Usage: useradd [options] LOGIN
Options:
-b, --base-dir BASE_DIR base directory for the home directory of the
new account
-c, --comment COMMENT GECOS field of the new account
-d, --home-dir HOME_DIR home directory of the new account
-D, --defaults print or change default useradd configuration
-e, --expiredate EXPIRE_DATE expiration date of the new account
-f, --inactive INACTIVE password inactivity period of the new account
-g, --gid GROUP name or ID of the primary group of the new
account
-G, --groups GROUPS list of supplementary groups of the new
account
-h, --help display this help message and exit
-k, --skel SKEL_DIR use this alternative skeleton directory
-K, --key KEY=VALUE override /etc/login.defs defaults
-l, --no-log-init do not add the user to the lastlog and
faillog databases
-m, --create-home create the user's home directory
-M, --no-create-home do not create the user's home directory
-N, --no-user-group do not create a group with the same name as
the user
-o, --non-unique allow to create users with duplicate
(non-unique) UID
-p, --password PASSWORD encrypted password of the new account
-r, --system create a system account
-s, --shell SHELL login shell of the new account
-u, --uid UID user ID of the new account
-U, --user-group create a group with the same name as the user
-Z, --selinux-user SEUSER use a specific SEUSER for the SELinux user mapping
So I am confused and stuck. Please help.
2 Answers
rdlzhqv91#
For Amazon EMR, ssh in as hadoop@(public IP).
From there you can do whatever you want with HDFS without having to "su". I just did a mkdir and ran distcp and streaming jobs. Everything I did was as hadoop@, following the EMR instructions.
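For example (the key file and public DNS below are placeholders; substitute the values for your own cluster):

ssh -i mykey.pem hadoop@ec2-xx-xx-xx-xx.compute-1.amazonaws.com
/home/hadoop/bin/hdfs dfs -mkdir /tmp/stockmarkets
/home/hadoop/bin/hdfs dfs -put localfile.txt /tmp/stockmarkets/

Because you are the hadoop superuser at that point, there is no permission problem.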
relj7zay2#
If you look at the permissions on the HDFS directory /tmp, you can see that /tmp is owned by the user hadoop and ec2-user has no permission to create files or directories inside it. Give the /tmp directory appropriate permissions using the following command.
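A plausible form of that command, run through sudo as the hadoop superuser (the recursive 777 is only an illustration; pick whatever permissions actually fit your setup):

sudo -u hadoop /home/hadoop/bin/hdfs dfs -chmod -R 777 /tmp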
Now try creating your directory inside the /tmp HDFS location again.