Redeploying HDFS as the hadoop User - 创新互联
Foreword:
In an earlier post, https://www.jianshu.com/p/eeae2f37a48c,
we deployed HDFS as the root user. In production, each component is usually started by its own dedicated user, so this post walks through redeploying pseudo-distributed HDFS as the hadoop user.
1. Preliminary preparation
Create the hadoop user and configure passwordless SSH login.
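The user-creation and SSH steps are covered in the referenced article; as a hedged sketch, the key-installation part looks like the following. `HOMEDIR` stands in for `/home/hadoop` so the commands can be tried anywhere; on the real host, run `useradd hadoop` as root first and execute these as the hadoop user against `~/.ssh`.

```shell
# Sketch of passwordless-SSH setup (HOMEDIR is a stand-in for /home/hadoop).
HOMEDIR="$(mktemp -d)"
mkdir -p "$HOMEDIR/.ssh"
# Generate a key pair with an empty passphrase.
ssh-keygen -t rsa -N "" -f "$HOMEDIR/.ssh/id_rsa" -q
# Authorize the public key for login to this same host.
cat "$HOMEDIR/.ssh/id_rsa.pub" >> "$HOMEDIR/.ssh/authorized_keys"
# sshd refuses keys in group/world-accessible files, so tighten permissions.
chmod 700 "$HOMEDIR/.ssh"
chmod 600 "$HOMEDIR/.ssh/authorized_keys"
echo "key installed in $HOMEDIR/.ssh"
```

After this, `ssh hadoop000` as the hadoop user should log in without a password prompt.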
Reference: https://www.jianshu.com/p/589bb43e0282
2. Stop the root-started HDFS processes and delete the storage files under /tmp
```
[root@hadoop000 hadoop-2.8.1]# pwd
/opt/software/hadoop-2.8.1
[root@hadoop000 hadoop-2.8.1]# jps
32244 NameNode
32350 DataNode
32558 SecondaryNameNode
1791 Jps
[root@hadoop000 hadoop-2.8.1]# sbin/stop-dfs.sh
Stopping namenodes on [hadoop000]
hadoop000: stopping namenode
localhost: stopping datanode
Stopping secondary namenodes [0.0.0.0]
0.0.0.0: stopping secondarynamenode
[root@hadoop000 hadoop-2.8.1]# jps
2288 Jps
[root@hadoop000 hadoop-2.8.1]# rm -rf /tmp/hadoop-* /tmp/hsperfdata_*
```
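The `rm` matters because, with default settings, `hadoop.tmp.dir` is `/tmp/hadoop-${user.name}`, so the metadata and blocks written by root live under `/tmp/hadoop-root` and would conflict with the fresh hadoop-owned deployment. The same glob cleanup can be tried safely against a scratch directory (`TMPROOT` is a stand-in for `/tmp`):

```shell
# Demo of the cleanup against a scratch directory instead of /tmp.
TMPROOT="$(mktemp -d)"
mkdir -p "$TMPROOT/hadoop-root/dfs/name" "$TMPROOT/hsperfdata_root"
# Remove the old NameNode/DataNode storage and JVM perf-data directories.
rm -rf "$TMPROOT"/hadoop-* "$TMPROOT"/hsperfdata_*
ls -A "$TMPROOT"   # prints nothing: the scratch directory is now empty
```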
3. Change file ownership
```
[root@hadoop000 software]# pwd
/opt/software
[root@hadoop000 software]# chown -R hadoop:hadoop hadoop-2.8.1
```
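A quick way to confirm the recursive `chown` took effect everywhere is to list any file in the tree *not* owned by the target user; empty output means success. Sketched here against a scratch tree owned by the current user (on the real host, the last commented command is the one to run):

```shell
# Build a small scratch tree (owned by the current user, like after chown).
TREE="$(mktemp -d)"
mkdir -p "$TREE/hadoop-2.8.1/bin"
touch "$TREE/hadoop-2.8.1/bin/hdfs"
# Print every file NOT owned by the expected user; no output means all good.
find "$TREE/hadoop-2.8.1" ! -user "$(id -un)" -print
# On the real host: find /opt/software/hadoop-2.8.1 ! -user hadoop -print
```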
4. Switch to the hadoop user and edit the configuration files
```
# Step 1:
[hadoop@hadoop000 hadoop]$ pwd
/opt/software/hadoop-2.8.1/etc/hadoop
[hadoop@hadoop000 hadoop]$ vi hdfs-site.xml
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>192.168.6.217:50090</value>
    </property>
    <property>
        <name>dfs.namenode.secondary.https-address</name>
        <value>192.168.6.217:50091</value>
    </property>
</configuration>

# Step 2:
[hadoop@hadoop000 hadoop]$ vi core-site.xml
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://192.168.6.217:9000</value>
    </property>
</configuration>

# Step 3:
[hadoop@hadoop000 hadoop]$ vi slaves
192.168.6.217
```
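If you prefer a non-interactive edit (for example in a setup script), the `core-site.xml` step above can be written with a heredoc and sanity-checked with `grep`. `CONF_DIR` is a stand-in for `/opt/software/hadoop-2.8.1/etc/hadoop`, and the IP is this tutorial's host:

```shell
# Write core-site.xml without opening an editor (CONF_DIR stands in for
# the real etc/hadoop directory).
CONF_DIR="$(mktemp -d)"
cat > "$CONF_DIR/core-site.xml" <<'EOF'
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://192.168.6.217:9000</value>
    </property>
</configuration>
EOF
# Sanity check: the NameNode URI made it into the file.
grep -q 'hdfs://192.168.6.217:9000' "$CONF_DIR/core-site.xml" && echo "core-site.xml OK"
```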
5. Format and start HDFS
```
[hadoop@hadoop000 hadoop-2.8.1]$ pwd
/opt/software/hadoop-2.8.1
[hadoop@hadoop000 hadoop-2.8.1]$ bin/hdfs namenode -format
[hadoop@hadoop000 hadoop-2.8.1]$ sbin/start-dfs.sh
Starting namenodes on [hadoop000]
hadoop000: starting namenode, logging to /opt/software/hadoop-2.8.1/logs/hadoop-hadoop-namenode-hadoop000.out
192.168.6.217: starting datanode, logging to /opt/software/hadoop-2.8.1/logs/hadoop-hadoop-datanode-hadoop000.out
Starting secondary namenodes [hadoop000]
hadoop000: starting secondarynamenode, logging to /opt/software/hadoop-2.8.1/logs/hadoop-hadoop-secondarynamenode-hadoop000.out
[hadoop@hadoop000 hadoop-2.8.1]$ jps
3141 Jps
2806 DataNode
2665 NameNode
2990 SecondaryNameNode
# At this point all three HDFS processes are up, started by the hadoop user on hadoop000.
```
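To script the final check instead of eyeballing `jps`, you can count the expected daemon names in its output. The sample below reuses the `jps` output shown above as a fixture; on the real host, replace `$JPS_OUT` with `$(jps)`:

```shell
# Fixture: the jps output captured after start-dfs.sh.
JPS_OUT='3141 Jps
2806 DataNode
2665 NameNode
2990 SecondaryNameNode'
MISSING=0
# grep -w keeps "NameNode" from matching inside "SecondaryNameNode".
for proc in NameNode DataNode SecondaryNameNode; do
    echo "$JPS_OUT" | grep -qw "$proc" || { echo "missing: $proc"; MISSING=1; }
done
[ "$MISSING" -eq 0 ] && echo "all HDFS daemons up"
```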