VMware 9 + Debian 6 + Hadoop 0.23.9: A Single-Node Installation

This article explains, step by step, how to perform a single-node installation of Hadoop 0.23.9 on Debian 6 running under VMware Workstation 9. It should be a useful reference for anyone setting up a test environment.


1. Environment Preparation

1.1 Debian 6; choose to install SSH when prompted during setup. (If you are running this on Windows, install VMware first; VMware Workstation 9 is used here.)

1.2 JDK 1.7 and Hadoop 0.23.9; Hadoop download location: http://mirror.esocc.com/apache/hadoop/common/hadoop-0.23.9/hadoop-0.23.9.tar.gz

2. Installation

2.1 Install sudo on Debian

root@debian:~# apt-get install sudo

2.2 Install JDK 1.7

First copy jdk-7u45-linux-i586.tar.gz to /root/ with an SSH client, then run the following (the target directory must exist before tar can extract into it):

root@debian:~# mkdir -p /usr/java
root@debian:~# tar -zxvf jdk-7u45-linux-i586.tar.gz -C /usr/java/
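The `tar ... -C` behavior can be sketched with a throwaway archive in a scratch directory; all paths below are illustrative stand-ins for the real JDK tarball:

```shell
#!/bin/sh
# Illustrative only: mimic "tar -zxvf jdk.tar.gz -C /usr/java/" with a stub archive.
set -e
work=$(mktemp -d)
mkdir -p "$work/jdk1.7.0_45/bin"
echo 'stub' > "$work/jdk1.7.0_45/bin/java"
tar -czf "$work/jdk.tar.gz" -C "$work" jdk1.7.0_45

# tar -C fails unless the destination directory exists, hence mkdir -p first.
mkdir -p "$work/usr/java"
tar -zxf "$work/jdk.tar.gz" -C "$work/usr/java/"
ls "$work/usr/java/jdk1.7.0_45/bin/java" && echo "extracted OK"
```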

2.3 Download and install Hadoop

root@debian:~# wget http://mirror.esocc.com/apache/hadoop/common/hadoop-0.23.9/hadoop-0.23.9.tar.gz
root@debian:~# tar zxvf hadoop-0.23.9.tar.gz -C /opt/
root@debian:~# cd /opt/
root@debian:/opt# ln -s hadoop-0.23.9/ hadoop

The symbolic link maps hadoop to hadoop-0.23.9, much like a Windows shortcut (.lnk), so the rest of the setup can use the version-independent path /opt/hadoop.
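The symlink idea can be sketched in an isolated scratch directory (paths are illustrative):

```shell
#!/bin/sh
# Illustrative sketch of the /opt/hadoop -> hadoop-0.23.9 symlink.
set -e
opt=$(mktemp -d)                       # stands in for /opt
mkdir "$opt/hadoop-0.23.9"
echo 'hadoop 0.23.9 stub' > "$opt/hadoop-0.23.9/VERSION"
ln -s hadoop-0.23.9 "$opt/hadoop"      # relative target, like "ln -s hadoop-0.23.9/ hadoop"

# The link resolves to the versioned directory, so version-independent
# paths such as /opt/hadoop/etc/hadoop keep working after an upgrade.
cat "$opt/hadoop/VERSION"
```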

2.4 Grant the hadoop user sudo privileges

root@debian:~# groupadd hadoop
root@debian:~# useradd -m -g hadoop hadoop
root@debian:~# passwd hadoop
root@debian:~# vi /etc/sudoers

In sudoers, below the line

root ALL=(ALL) ALL

add:

hadoop ALL=(ALL:ALL) ALL
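On Debian, a common alternative to editing /etc/sudoers directly is a drop-in fragment under /etc/sudoers.d/, which sudo requires to be mode 0440. This is a hedged sketch; a scratch directory stands in for /etc, and on a real system you would write the file as root and validate it with visudo -c:

```shell
#!/bin/sh
# Illustrative: stage the sudoers rule as a drop-in fragment
# (real path would be /etc/sudoers.d/hadoop, written as root).
set -e
etc=$(mktemp -d)
mkdir -p "$etc/sudoers.d"
echo 'hadoop ALL=(ALL:ALL) ALL' > "$etc/sudoers.d/hadoop"
chmod 0440 "$etc/sudoers.d/hadoop"     # sudo rejects fragments that are not 0440
grep -q '^hadoop ALL=(ALL:ALL) ALL$' "$etc/sudoers.d/hadoop" && echo "rule staged"
```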

2.5 Configure SSH login

root@debian:~# su - hadoop
hadoop@debian6-01:~$ ssh-keygen -t rsa -P ""     (use "" for a passphrase-less key, or supply your own passphrase)
hadoop@debian6-01:~$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
hadoop@debian6-01:~$ chmod 600 ~/.ssh/authorized_keys
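The chmod step matters: with StrictModes (the default), sshd ignores an authorized_keys file that is group- or world-accessible, and expects ~/.ssh itself to be private. A minimal illustration in a scratch directory, with no real keys involved:

```shell
#!/bin/sh
# Illustrative: the permissions sshd expects on ~/.ssh and authorized_keys.
set -e
sshdir=$(mktemp -d)                    # stands in for ~/.ssh
chmod 700 "$sshdir"                    # directory private to the user
touch "$sshdir/authorized_keys"
chmod 600 "$sshdir/authorized_keys"    # file private to the user
stat -c '%a' "$sshdir"                 # 700
stat -c '%a' "$sshdir/authorized_keys" # 600
```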

Test the login:

hadoop@debian6-01:~$ ssh localhost

  If you set an empty passphrase but are still prompted for a password, check the sshd configuration (requires root):
  root@debian:~# vi /etc/ssh/sshd_config
  Find the following lines and remove the leading "#":
     RSAAuthentication yes
     PubkeyAuthentication yes
     AuthorizedKeysFile     .ssh/authorized_keys
  Then restart sshd (skip this if you kept a passphrase):
  root@debian:~# service ssh restart

2.6 Configure the hadoop user

root@debian:~# chown -R hadoop:hadoop /opt/hadoop
root@debian:~# chown -R hadoop:hadoop /opt/hadoop-0.23.9
root@debian:~# su - hadoop
hadoop@debian6-01:~$ vi .bashrc

Append the following:
export JAVA_HOME=/usr/java/jdk1.7.0_45
export JRE_HOME=${JAVA_HOME}/jre
export HADOOP_HOME=/opt/hadoop
export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib
export PATH=${JAVA_HOME}/bin:$HADOOP_HOME/bin:$PATH
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export YARN_CONF_DIR=$HADOOP_HOME/etc/hadoop
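The exports above can be sanity-checked by sourcing them in a fresh shell. This sketch writes the same assignments to a temporary file and verifies how the derived variables expand (the directories need not exist for the expansion itself):

```shell
#!/bin/sh
# Illustrative: source the .bashrc fragment and confirm variable expansion.
env_file=$(mktemp)
cat > "$env_file" <<'EOF'
export JAVA_HOME=/usr/java/jdk1.7.0_45
export JRE_HOME=${JAVA_HOME}/jre
export HADOOP_HOME=/opt/hadoop
export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib
export PATH=${JAVA_HOME}/bin:$HADOOP_HOME/bin:$PATH
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
EOF
. "$env_file"
echo "$JRE_HOME"          # /usr/java/jdk1.7.0_45/jre
echo "$HADOOP_CONF_DIR"   # /opt/hadoop/etc/hadoop
```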

hadoop@debian6-01:~$ cd /opt/hadoop/etc/hadoop/
hadoop@debian6-01:/opt/hadoop/etc/hadoop$ vi yarn-env.sh

Append the following:
export HADOOP_PREFIX=/opt/hadoop
export HADOOP_COMMON_HOME=${HADOOP_PREFIX}
export HADOOP_HDFS_HOME=${HADOOP_PREFIX}
export PATH=$PATH:$HADOOP_PREFIX/bin
export PATH=$PATH:$HADOOP_PREFIX/sbin
export HADOOP_MAPRED_HOME=${HADOOP_PREFIX}
export YARN_HOME=${HADOOP_PREFIX}
export HADOOP_CONF_DIR=${HADOOP_PREFIX}/etc/hadoop
export YARN_CONF_DIR=${HADOOP_PREFIX}/etc/hadoop

root@debian6-01:/opt/hadoop/etc/hadoop# vi core-site.xml


 
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:12200</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/hadoop/hadoop-root</value>
  </property>
  <property>
    <name>fs.arionfs.impl</name>
    <value>org.apache.hadoop.fs.pvfs2.Pvfs2FileSystem</value>
    <description>The FileSystem for arionfs.</description>
  </property>
</configuration>


root@debian6-01:/opt/hadoop/etc/hadoop# vi hdfs-site.xml


 
<configuration>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/opt/hadoop/data/dfs/name</value>
    <final>true</final>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/opt/hadoop/data/dfs/data</value>
    <final>true</final>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.permissions.enabled</name>
    <value>false</value>
  </property>
</configuration>


root@debian6-01:/opt/hadoop/etc/hadoop# cp mapred-site.xml.template mapred-site.xml
root@debian6-01:/opt/hadoop/etc/hadoop# vi mapred-site.xml


   
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.job.tracker</name>
    <value>hdfs://localhost:9001</value>
    <final>true</final>
  </property>
  <property>
    <name>mapreduce.map.memory.mb</name>
    <value>1536</value>
  </property>
  <property>
    <name>mapreduce.map.java.opts</name>
    <value>-Xmx1024M</value>
  </property>
  <property>
    <name>mapreduce.reduce.memory.mb</name>
    <value>3072</value>
  </property>
  <property>
    <name>mapreduce.reduce.java.opts</name>
    <value>-Xmx2560M</value>
  </property>
  <property>
    <name>mapreduce.task.io.sort.mb</name>
    <value>512</value>
  </property>
  <property>
    <name>mapreduce.task.io.sort.factor</name>
    <value>100</value>
  </property>
  <property>
    <name>mapreduce.reduce.shuffle.parallelcopies</name>
    <value>50</value>
  </property>
  <property>
    <name>mapreduce.system.dir</name>
    <value>file:/opt/hadoop/data/mapred/system</value>
  </property>
  <property>
    <name>mapreduce.local.dir</name>
    <value>file:/opt/hadoop/data/mapred/local</value>
    <final>true</final>
  </property>
</configuration>



root@debian6-01:/opt/hadoop/etc/hadoop# vi yarn-site.xml



 
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce.shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>user.name</name>
    <value>hadoop</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>localhost:54311</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>localhost:54312</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>localhost:54313</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>localhost:54314</value>
  </property>
  <property>
    <name>yarn.web-proxy.address</name>
    <value>localhost:54315</value>
  </property>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost</value>
  </property>
</configuration>

2.7 Start Hadoop and run the WordCount example

Set JAVA_HOME in the Hadoop helper script:

root@debian6-01:~# vi /opt/hadoop/libexec/hadoop-config.sh

Add the export line just above the existing check, then save and quit with :wq!

# Attempt to set JAVA_HOME if it is not set
export JAVA_HOME=/usr/java/jdk1.7.0_45    <-- add this line
if [[ -z $JAVA_HOME ]]; then
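The "set it only if it is not already set" logic that hadoop-config.sh performs with `if [[ -z $JAVA_HOME ]]` can also be expressed with the standard shell default-assignment idiom; a small sketch (the JDK path is illustrative):

```shell
#!/bin/sh
# Illustrative: assign JAVA_HOME only when it is unset or empty,
# mirroring the check hadoop-config.sh does with: if [[ -z $JAVA_HOME ]]
unset JAVA_HOME
: "${JAVA_HOME:=/usr/java/jdk1.7.0_45}"   # default applies: variable was empty
export JAVA_HOME
echo "$JAVA_HOME"                          # /usr/java/jdk1.7.0_45

JAVA_HOME=/custom/jdk
: "${JAVA_HOME:=/usr/java/jdk1.7.0_45}"   # existing value wins, default ignored
echo "$JAVA_HOME"                          # /custom/jdk
```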

Format the NameNode (as the hadoop user, so the data directories get the right owner):

hadoop@debian6-01:~$ hadoop namenode -format

Start HDFS and YARN:

hadoop@debian6-01:~$ /opt/hadoop/sbin/start-dfs.sh
hadoop@debian6-01:~$ /opt/hadoop/sbin/start-yarn.sh

Check the running daemons:

hadoop@debian6-01:~$ jps

6365 SecondaryNameNode
7196 ResourceManager
6066 NameNode
7613 Jps
6188 DataNode
7311 NodeManager
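With all six daemons up, the WordCount example can be run against the config files themselves as input. The examples jar path below follows the usual 0.23 release layout (share/hadoop/mapreduce/) but is an assumption; adjust it if your tree differs, and note that this shell on 0.23 may lack `-mkdir -p`, hence the parent directories are created one at a time:

```shell
# Run as the hadoop user, once HDFS and YARN are up. Paths are assumptions.
hadoop fs -mkdir /user
hadoop fs -mkdir /user/hadoop
hadoop fs -mkdir /user/hadoop/input
hadoop fs -put /opt/hadoop/etc/hadoop/*.xml /user/hadoop/input
hadoop jar /opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-0.23.9.jar \
    wordcount /user/hadoop/input /user/hadoop/output
hadoop fs -cat /user/hadoop/output/part-r-00000
```

If the job completes, the final command prints each word from the XML files with its count; the output directory must not already exist, or the job will refuse to start.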

That completes the single-node installation of Hadoop 0.23.9 on Debian 6 under VMware 9. Thanks for reading, and I hope it was helpful!

