
HBase + Hadoop Installation and Deployment

Installing multiple RedHat Linux guests under VMware. Much of this was collected from material found online; following the steps in order, everything should install fine.
1. Create the user
groupadd bigdata
useradd -g bigdata hadoop
passwd hadoop
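A quick optional sanity check that the account landed in the bigdata group:

id hadoop    # the gid/groups fields should show bigdata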

2. Set up the JDK and environment variables
vi /etc/profile
Add:
export JAVA_HOME=/usr/lib/java-1.7.0_07
export CLASSPATH=.
export HADOOP_HOME=/home/hadoop/hadoop
export HBASE_HOME=/home/hadoop/hbase
export HADOOP_MAPRED_HOME=${HADOOP_HOME}
export HADOOP_COMMON_HOME=${HADOOP_HOME}
export HADOOP_HDFS_HOME=${HADOOP_HOME}
export YARN_HOME=${HADOOP_HOME}
export HADOOP_CONF_DIR=${HADOOP_HOME}/etc/hadoop
export HDFS_CONF_DIR=${HADOOP_HOME}/etc/hadoop
export YARN_CONF_DIR=${HADOOP_HOME}/etc/hadoop
export HBASE_CONF_DIR=${HBASE_HOME}/conf
export ZK_HOME=/home/hadoop/zookeeper
export PATH=$JAVA_HOME/bin:$HADOOP_HOME/bin:$HBASE_HOME/bin:$HADOOP_HOME/sbin:$ZK_HOME/bin:$PATH
source /etc/profile
chmod -R 777 /usr/lib/java-1.7.0_07
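After sourcing the profile, it is worth confirming the environment took effect; a quick check (assuming the JDK really is unpacked at /usr/lib/java-1.7.0_07):

java -version       # should report version 1.7.0_07
echo $HADOOP_HOME   # should print /home/hadoop/hadoop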

3. Edit the hosts file
vi /etc/hosts
Add:
172.16.254.215   master
172.16.254.216   salve1
172.16.254.217   salve2
172.16.254.218   salve3
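Optionally verify that the names resolve before moving on:

ping -c 1 salve1    # should show 172.16.254.216 replying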

4. Set up passwordless SSH
On the 215 server (master):
su - root
vi /etc/ssh/sshd_config
Make sure it contains the following:
RSAAuthentication yes
PubkeyAuthentication yes
AuthorizedKeysFile      .ssh/authorized_keys
Restart sshd:
service sshd restart

su - hadoop
ssh-keygen -t rsa
cd /home/hadoop/.ssh
cat id_rsa.pub >> authorized_keys
chmod 600 authorized_keys

On 216, 217, and 218, run:
mkdir /home/hadoop/.ssh
chmod 700 /home/hadoop/.ssh

On 215 (still inside /home/hadoop/.ssh), run:
scp id_rsa.pub hadoop@salve1:/home/hadoop/.ssh/
scp id_rsa.pub hadoop@salve2:/home/hadoop/.ssh/
scp id_rsa.pub hadoop@salve3:/home/hadoop/.ssh/

On 216, 217, and 218, run:
cat /home/hadoop/.ssh/id_rsa.pub >> /home/hadoop/.ssh/authorized_keys
chmod 600 /home/hadoop/.ssh/authorized_keys
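The master should now reach every node without a password; a quick test from 215 (the first connection may still ask to confirm the host key):

ssh salve1 hostname    # prints salve1 with no password prompt
ssh salve2 hostname
ssh salve3 hostname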

5. Create the Hadoop, HBase, and ZooKeeper directories
su - hadoop
mkdir /home/hadoop/hadoop
mkdir /home/hadoop/hbase
mkdir /home/hadoop/zookeeper
cp -r /home/hadoop/soft/hadoop-2.0.1-alpha/* /home/hadoop/hadoop/
cp -r /home/hadoop/soft/hbase-0.95.0-hadoop2/* /home/hadoop/hbase/
cp -r /home/hadoop/soft/zookeeper-3.4.5/* /home/hadoop/zookeeper/
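A quick way to confirm the copies are usable and that the PATH set up earlier works:

hadoop version    # should report 2.0.1-alpha
hbase version     # should report 0.95.0-hadoop2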

1) Hadoop configuration

vi /home/hadoop/hadoop/etc/hadoop/hadoop-env.sh
Modify:
export JAVA_HOME=/usr/lib/java-1.7.0_07

vi /home/hadoop/hadoop/etc/hadoop/core-site.xml
Add:
<property>
  <name>hadoop.tmp.dir</name>
  <value>/home/hadoop/hadoop/tmp</value>
  <description>A base for other temporary directories.</description>
</property>
<property>
  <name>fs.default.name</name>
  <value>hdfs://172.16.254.215:9000</value>
</property>
<property>
  <name>hadoop.proxyuser.root.hosts</name>
  <value>172.16.254.215</value>
</property>
<property>
  <name>hadoop.proxyuser.root.groups</name>
  <value>*</value>
</property>

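Once the file is saved, the effective value can be checked without starting any daemons (hdfs getconf reads the configuration directly):

hdfs getconf -confKey fs.default.name    # should print hdfs://172.16.254.215:9000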
vi /home/hadoop/hadoop/etc/hadoop/slaves
Add (the master is not used as a slave):
salve1
salve2
salve3

vi /home/hadoop/hadoop/etc/hadoop/hdfs-site.xml
Add:
<property>
  <name>dfs.replication</name>
  <value>3</value>
</property>
<property>
  <name>dfs.namenode.name.dir</name>
  <value>file:/home/hadoop/hdfs/name</value>
  <final>true</final>
</property>
<property>
  <name>dfs.federation.nameservice.id</name>
  <value>ns1</value>
</property>
<property>
  <name>dfs.namenode.backup.address.ns1</name>
  <value>172.16.254.215:50100</value>
</property>
<property>
  <name>dfs.namenode.backup.http-address.ns1</name>
  <value>172.16.254.215:50105</value>
</property>
<property>
  <name>dfs.federation.nameservices</name>
  <value>ns1</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.ns1</name>
  <value>172.16.254.215:9000</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.ns2</name>
  <value>172.16.254.215:9000</value>
</property>
<property>
  <name>dfs.namenode.http-address.ns1</name>
  <value>172.16.254.215:23001</value>
</property>
<property>
  <name>dfs.namenode.http-address.ns2</name>
  <value>172.16.254.215:13001</value>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>file:/home/hadoop/hdfs/data</value>
  <final>true</final>
</property>
<property>
  <name>dfs.namenode.secondary.http-address.ns1</name>
  <value>172.16.254.215:23002</value>
</property>
<property>
  <name>dfs.namenode.secondary.http-address.ns2</name>
  <value>172.16.254.215:23002</value>
</property>

vi /home/hadoop/hadoop/etc/hadoop/yarn-site.xml
Add:
<property>
  <name>yarn.resourcemanager.address</name>
  <value>172.16.254.215:18040</value>
</property>
<property>
  <name>yarn.resourcemanager.scheduler.address</name>
  <value>172.16.254.215:18030</value>
</property>
<property>
  <name>yarn.resourcemanager.webapp.address</name>
  <value>172.16.254.215:18088</value>
</property>
<property>
  <name>yarn.resourcemanager.resource-tracker.address</name>
  <value>172.16.254.215:18025</value>
</property>
<property>
  <name>yarn.resourcemanager.admin.address</name>
  <value>172.16.254.215:18141</value>
</property>
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce.shuffle</value>
</property>

2) HBase configuration

vi /home/hadoop/hbase/conf/hbase-site.xml
Add:
<property>
  <name>dfs.support.append</name>
  <value>true</value>
</property>
<property>
  <name>hbase.rootdir</name>
  <value>hdfs://172.16.254.215:9000/hbase</value>
</property>
<property>
  <name>hbase.cluster.distributed</name>
  <value>true</value>
</property>
<property>
  <name>hbase.config.read.zookeeper.config</name>
  <value>true</value>
</property>
<property>
  <name>hbase.master</name>
  <value>master</value>
</property>
<property>
  <name>hbase.zookeeper.quorum</name>
  <value>salve1,salve2,salve3</value>
</property>
<property>
  <name>zookeeper.session.timeout</name>
  <value>60000</value>
</property>
<property>
  <name>hbase.zookeeper.property.clientPort</name>
  <value>2181</value>
</property>
<property>
  <name>hbase.tmp.dir</name>
  <value>/home/hadoop/hbase/tmp</value>
  <description>Temporary directory on the local filesystem.</description>
</property>
<property>
  <name>hbase.client.keyvalue.maxsize</name>
  <value>10485760</value>
</property>

vi /home/hadoop/hbase/conf/regionservers
Add:
salve1
salve2
salve3

vi /home/hadoop/hbase/conf/hbase-env.sh
Modify:
export JAVA_HOME=/usr/lib/java-1.7.0_07
export HBASE_MANAGES_ZK=false

3) ZooKeeper configuration

vi /home/hadoop/zookeeper/conf/zoo.cfg
Add:
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/home/hadoop/zookeeper/data
clientPort=2181
server.1=salve1:2888:3888
server.2=salve2:2888:3888
server.3=salve3:2888:3888

Copy /home/hadoop/zookeeper/conf/zoo.cfg into /home/hadoop/hbase/conf/ (with hbase.config.read.zookeeper.config=true, HBase reads its ZooKeeper settings from that file, so it must be on HBase's classpath).

4) Sync from the master to the salves
scp -r /home/hadoop/hadoop hadoop@salve1:/home/hadoop
scp -r /home/hadoop/hbase hadoop@salve1:/home/hadoop
scp -r /home/hadoop/zookeeper hadoop@salve1:/home/hadoop

scp -r /home/hadoop/hadoop hadoop@salve2:/home/hadoop
scp -r /home/hadoop/hbase hadoop@salve2:/home/hadoop
scp -r /home/hadoop/zookeeper hadoop@salve2:/home/hadoop

scp -r /home/hadoop/hadoop hadoop@salve3:/home/hadoop
scp -r /home/hadoop/hbase hadoop@salve3:/home/hadoop
scp -r /home/hadoop/zookeeper hadoop@salve3:/home/hadoop

On salve1, salve2, and salve3 respectively, set the ZooKeeper myid; the number must match that node's server.N entry in zoo.cfg:
mkdir -p /home/hadoop/zookeeper/data        # on each node, in case the data dir does not exist yet
echo 1 > /home/hadoop/zookeeper/data/myid   # on salve1
echo 2 > /home/hadoop/zookeeper/data/myid   # on salve2
echo 3 > /home/hadoop/zookeeper/data/myid   # on salve3

5) Testing
Test Hadoop:
hadoop namenode -format -clusterid clustername

start-all.sh
hadoop fs -ls hdfs://172.16.254.215:9000/
hadoop fs -mkdir hdfs://172.16.254.215:9000/hbase
# hadoop fs -copyFromLocal ./install.log hdfs://172.16.254.215:9000/testfolder
# hadoop fs -ls hdfs://172.16.254.215:9000/testfolder
# hadoop fs -put /usr/hadoop/hadoop-2.0.1-alpha/*.txt hdfs://172.16.254.215:9000/testfolder
# cd /usr/hadoop/hadoop-2.0.1-alpha/share/hadoop/mapreduce
# hadoop jar hadoop-mapreduce-examples-2.0.1-alpha.jar wordcount hdfs://172.16.254.215:9000/testfolder hdfs://172.16.254.215:9000/output
# hadoop fs -ls hdfs://172.16.254.215:9000/output
# hadoop fs -cat hdfs://172.16.254.215:9000/output/part-r-00000
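Once start-all.sh returns, jps is a quick way to confirm the daemons are up (process names as in Hadoop 2.0.x):

jps    # on master: NameNode, SecondaryNameNode, ResourceManager
       # on each salve: DataNode, NodeManager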

Start ZooKeeper on salve1, salve2, and salve3:
zkServer.sh start
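After all three nodes are started, zkServer.sh status shows whether the ensemble elected a leader:

zkServer.sh status    # one node reports Mode: leader, the other two Mode: follower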

Start HBase: start-hbase.sh
Enter the shell: hbase shell
Test HBase:
list
create 'student','name','address'
put 'student','1','name','tom'
get 'student','1'
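A few more standard HBase shell commands to exercise and then clean up the test table (optional):

scan 'student'        # full table scan, should show the row just inserted
disable 'student'     # a table must be disabled before it can be dropped
drop 'student'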