We already have a namenode and datanode1; now we want to add a new node, datanode2.
Step 1: Change the hostname of the node being added
hadoop@datanode1:~$ sudo vim /etc/hostname
datanode2
Step 2: Edit the hosts file, adding an entry for the new node
hadoop@datanode1:~$ sudo vim /etc/hosts
127.0.0.1 localhost
127.0.1.1 ubuntu
192.168.8.2 namenode
192.168.8.3 datanode1
192.168.8.4 datanode2    (this is the added line)
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
Step 3: Change the IP address
Give the new node the static address used in the hosts file above (192.168.8.4 here).
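On the Ubuntu releases this tutorial targets, a static address is usually set in /etc/network/interfaces. A sketch for this cluster's addressing; the interface name eth0 and the gateway 192.168.8.1 are assumptions to adjust for your network:

```
# /etc/network/interfaces (assumed layout; adjust interface name and gateway)
auto eth0
iface eth0 inet static
    address 192.168.8.4
    netmask 255.255.255.0
    gateway 192.168.8.1
```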
Step 4: Reboot so the hostname and network changes take effect
Step 5: Set up passwordless SSH
1. Generate a key pair (note the capital -P, which supplies an empty passphrase; lowercase -p would instead try to change the passphrase of an existing key)
hadoop@datanode2:~$ ssh-keygen -t rsa -P ""
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hadoop/.ssh/id_rsa):
/home/hadoop/.ssh/id_rsa already exists.
Overwrite (y/n)? y
Your identification has been saved in /home/hadoop/.ssh/id_rsa.
Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.
The key fingerprint is:
34:45:84:85:6e:f3:9e:7a:c0:f1:a4:ef:bf:30:a6:74 hadoop@datanode2
The key's randomart image is:
+--[ RSA 2048]----+
| *= |
| o. |
| .o |
| .=.. |
| osb |
| + o |
| .+e. |
| . +=o |
| o+..o. |
+-----------------+
2. Copy the public key to the namenode
hadoop@datanode2:~$ cd ~/.ssh
hadoop@datanode2:~/.ssh$ ls
authorized_keys id_rsa id_rsa.pub known_hosts
hadoop@datanode2:~/.ssh$ scp ./id_rsa.pub hadoop@namenode:/home/hadoop
hadoop@namenode's password:
id_rsa.pub                                    100%  398     0.4KB/s   00:00
3. Append the public key to authorized_keys
hadoop@namenode:~/.ssh$ cat ../id_rsa.pub >> authorized_keys
hadoop@namenode:~/.ssh$ cat authorized_keys
ssh-rsa aaaab3nzac1yc2eaaaadaqabaaabaqduood8r7ofnsuhgpzhqwcfc0ytem6+txwso3lijjewzbh512ymkieinrjcztirjleqwgadapvbip3jluohfpk89v7d6q8qh4ilbjltsavxmhb77w3ygrxlhj8+g3qts8vmjgeyz86oem5f9um8f8qmk9mxxowhqt3xvufetr7o7acv3apehh1hvvkfimim2st/ini/nxsch176byus6y86gotgznvh8oix8mdmdksljqwpsctrpvxpeslzvplm4ysn2cyokaxcedaynzohxgac0gldq1k07efmeruwpbt+xtztrjpquyawk+mpf6+lnlm89u+bewdbzldunckhbck3 hadoop@ubuntu3
ssh-rsa aaaab3nzac1yc2eaaaadaqabaaabaqcssqndzo5uhpn93bvqj+nepzgqbipc1wgasoefqv7ljynlfhhopvs6g3ohpvsrbjg3ak1mqxmcw0vokuuo5eohwqh0alqw46eemunzrnwuhhfpau9v4t7lj5pyuxzoioxbsjkxcetoy6g2lkrmyk2z/mimppw+ufebt150+oyxckkysbbjolmthh3bww2cesaokie8gcq3riyshfa8rnuwxenrl8fc2xlwodtahjhd5bymbo4rd3uijxutv7/r243t0hrimjhj7uuiypcirydchpmmo9dfvebtylolmqqqs/zoxdix7gf+yk7kc7ayo1kl8vuwp90dqihpajmp96zv hadoop@ubuntu2
ssh-rsa aaaab3nzac1yc2eaaaadaqabaaabaqdbetmrotmz8gurjyzosvfpjbtxzuydelxjcfm0o+frpigxoiepphiqc5vi7kabnlsiev+94ydmclxzpxfjr0txz6ijovdpxfpqovy+gzryvxexj3hhbbwkc4sfuvgfgszr8rm3r5oe2wyizzokdx9c6ak5uie7busuxzaifctyxivu37tobyz44vdqgv9/mpsqp4qnyx4cztld1vmoeuha5iqtklt4k0hne3i+a3meebmxbwetui/6dcmvtxjee7cy48ypadr5ut0/xgtub/odmkbfvft6fpdvlhtrp5jqifapfyzl/bxiobqkslrjblkwtczs8j6sfskwsszfopzl hadoop@datanode2
4. Distribute authorized_keys to the other nodes
hadoop@namenode:~$ scp ./.ssh/authorized_keys hadoop@datanode1:/home/hadoop/.ssh/authorized_keys
authorized_keys                               100% 1190     1.2KB/s   00:00
hadoop@namenode:~$ scp ./.ssh/authorized_keys hadoop@datanode2:/home/hadoop/.ssh/authorized_keys
authorized_keys                               100% 1190     1.2KB/s   00:00
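With the keys distributed, logins between the nodes should no longer prompt for a password. A quick hedged check from any node (BatchMode makes ssh fail instead of prompting, so a broken key setup shows up immediately):

```shell
# Try a key-only login to every node; any failure means the key setup
# on that host still needs attention.
for host in namenode datanode1 datanode2; do
    ssh -o BatchMode=yes -o ConnectTimeout=5 "$host" hostname \
        || echo "WARNING: key-based login to $host failed"
done
```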
5. A common error: unprotected private key file
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@          WARNING: UNPROTECTED PRIVATE KEY FILE!          @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
Permissions 0644 for '/home/jiangqixiang/.ssh/id_dsa' are too open.
It is recommended that your private key files are NOT accessible by others.
This private key will be ignored.
bad permissions: ignore key: /home/youraccount/.ssh/id_dsa
Solution:
chmod 600 ~/.ssh/id_rsa
(600 removes all group/other access, which is what sshd checks for; the original post used 700, which also silences the warning but adds an unnecessary execute bit.)
Step 6: Edit the namenode's configuration
hadoop@namenode:~$ cd hadoop-1.2.1/conf
hadoop@namenode:~/hadoop-1.2.1/conf$ vim slaves
datanode1
datanode2
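A full cluster restart is not strictly required to bring the new node online: Hadoop 1.x ships hadoop-daemon.sh, which starts a single node's daemons. A sketch, to be run on datanode2 itself, guarded so it is a no-op on machines without Hadoop on the PATH:

```shell
# Run on datanode2: start just this node's DataNode and TaskTracker
# instead of restarting the whole cluster.
if command -v hadoop-daemon.sh >/dev/null 2>&1; then
    hadoop-daemon.sh start datanode
    hadoop-daemon.sh start tasktracker
else
    echo "hadoop-daemon.sh not on PATH; run these commands on datanode2"
fi
```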
Step 7: Rebalance the cluster
hadoop@namenode:~/hadoop-1.2.1/conf$ start-balancer.sh
Warning: $HADOOP_HOME is deprecated.
starting balancer, logging to /home/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-hadoop-balancer-namenode.out
The notes below are excerpted from another blog:
1) Without balancing, the cluster places most new data on the new node, which lowers MapReduce efficiency.
2) threshold is the balancing threshold. The default is 10%; a lower value yields a more evenly balanced cluster but takes longer to reach.
/app/hadoop/bin/start-balancer.sh -threshold 0.1
3) In the namenode's hdfs-site.xml you can also cap the bandwidth the balancer may use (the default is 1 MB/s):
<property>
  <name>dfs.balance.bandwidthPerSec</name>
  <value>1048576</value>
  <description>Specifies the maximum amount of bandwidth that each datanode
    can utilize for the balancing purpose in terms of
    the number of bytes per second.</description>
</property>
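To see whether balancing is needed, and whether it worked, compare each node's DFS usage before and after: hadoop dfsadmin -report prints it per datanode. A guarded sketch:

```shell
# Per-datanode usage summary; a large spread in "DFS Used%" between
# nodes is what the balancer is meant to even out.
if command -v hadoop >/dev/null 2>&1; then
    hadoop dfsadmin -report | grep -E "^Name:|DFS Used%"
else
    echo "hadoop not on PATH; run this on the namenode"
fi
```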
Step 8: Verify that the new node works
1. Start Hadoop
hadoop@namenode:~/hadoop-1.2.1$ start-all.sh
Warning: $HADOOP_HOME is deprecated.
starting namenode, logging to /home/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-hadoop-namenode-namenode.out
datanode2: starting datanode, logging to /home/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-hadoop-datanode-datanode2.out
datanode1: starting datanode, logging to /home/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-hadoop-datanode-datanode1.out
namenode: starting secondarynamenode, logging to /home/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-hadoop-secondarynamenode-namenode.out
starting jobtracker, logging to /home/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-hadoop-jobtracker-namenode.out
datanode2: starting tasktracker, logging to /home/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-hadoop-tasktracker-datanode2.out
datanode1: starting tasktracker, logging to /home/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-hadoop-tasktracker-datanode1.out
hadoop@namenode:~/hadoop-1.2.1$
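After start-all.sh, running jps on each machine is the quickest way to confirm which daemons actually came up: the namenode should show NameNode, SecondaryNameNode and JobTracker; each datanode should show a DataNode and a TaskTracker. A sketch that collects this over SSH:

```shell
# List the Java daemons on every node; compare against the expected
# set for each node's role.
for host in namenode datanode1 datanode2; do
    echo "== $host =="
    ssh -o BatchMode=yes -o ConnectTimeout=5 "$host" jps 2>/dev/null \
        || echo "(could not reach $host; run jps there manually)"
done
```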
2. An error
Running the wordcount example fails:
hadoop@namenode:~/hadoop-1.2.1$ hadoop jar hadoop-examples-1.2.1.jar wordcount in out
Warning: $HADOOP_HOME is deprecated.
14/09/12 08:40:39 ERROR security.UserGroupInformation: PriviledgedActionException as:hadoop cause:org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.mapred.SafeModeException: JobTracker is in safe mode
at org.apache.hadoop.mapred.JobTracker.checkSafeMode(JobTracker.java:5188)
at org.apache.hadoop.mapred.JobTracker.getStagingAreaDir(JobTracker.java:3677)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:587)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1432)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1428)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.mapred.SafeModeException: JobTracker is in safe mode
at org.apache.hadoop.mapred.JobTracker.checkSafeMode(JobTracker.java:5188)
at org.apache.hadoop.mapred.JobTracker.getStagingAreaDir(JobTracker.java:3677)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:587)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1432)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1428)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
at org.apache.hadoop.ipc.Client.call(Client.java:1113)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
at org.apache.hadoop.mapred.$Proxy2.getStagingAreaDir(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:85)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:62)
at org.apache.hadoop.mapred.$Proxy2.getStagingAreaDir(Unknown Source)
at org.apache.hadoop.mapred.JobClient.getStagingAreaDir(JobClient.java:1309)
at org.apache.hadoop.mapreduce.JobSubmissionFiles.getStagingDir(JobSubmissionFiles.java:102)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:942)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:936)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:936)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:550)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:580)
at org.apache.hadoop.examples.WordCount.main(WordCount.java:82)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139)
at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:64)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.main(RunJar.java:160)
Solution: take HDFS out of safe mode manually:
hadoop@namenode:~/hadoop-1.2.1$ hadoop dfsadmin -safemode leave
Warning: $HADOOP_HOME is deprecated.
Safe mode is OFF
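Forcing safe mode off works, but it is usually enough to wait: the NameNode leaves safe mode on its own once enough block reports have come in, and dfsadmin has get and wait subcommands for exactly this. A guarded sketch:

```shell
# Inspect safe mode instead of forcing it off:
#   -safemode get   prints the current state
#   -safemode wait  blocks until HDFS leaves safe mode by itself
if command -v hadoop >/dev/null 2>&1; then
    hadoop dfsadmin -safemode get
    hadoop dfsadmin -safemode wait
else
    echo "hadoop not on PATH; run on the namenode"
fi
```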
3. Run the test again
hadoop@namenode:~/hadoop-1.2.1$ hadoop jar hadoop-examples-1.2.1.jar wordcount in out
Warning: $HADOOP_HOME is deprecated.
14/09/12 08:48:26 INFO input.FileInputFormat: Total input paths to process : 2
14/09/12 08:48:26 INFO util.NativeCodeLoader: Loaded the native-hadoop library
14/09/12 08:48:26 WARN snappy.LoadSnappy: Snappy native library not loaded
14/09/12 08:48:28 INFO mapred.JobClient: Running job: job_201409120827_0003
14/09/12 08:48:29 INFO mapred.JobClient:  map 0% reduce 0%
14/09/12 08:48:47 INFO mapred.JobClient:  map 50% reduce 0%
14/09/12 08:48:48 INFO mapred.JobClient:  map 100% reduce 0%
14/09/12 08:48:57 INFO mapred.JobClient:  map 100% reduce 33%
14/09/12 08:48:59 INFO mapred.JobClient:  map 100% reduce 100%
14/09/12 08:49:02 INFO mapred.JobClient: Job complete: job_201409120827_0003
14/09/12 08:49:02 INFO mapred.JobClient: Counters: 30
14/09/12 08:49:02 INFO mapred.JobClient:   Job Counters
14/09/12 08:49:02 INFO mapred.JobClient:     Launched reduce tasks=1
14/09/12 08:49:02 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=27285
14/09/12 08:49:02 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0
14/09/12 08:49:02 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0
14/09/12 08:49:02 INFO mapred.JobClient:     Rack-local map tasks=1
14/09/12 08:49:02 INFO mapred.JobClient:     Launched map tasks=2
14/09/12 08:49:02 INFO mapred.JobClient:     Data-local map tasks=1
14/09/12 08:49:02 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=12080
14/09/12 08:49:02 INFO mapred.JobClient:   File Output Format Counters
14/09/12 08:49:02 INFO mapred.JobClient:     Bytes Written=48
14/09/12 08:49:02 INFO mapred.JobClient:   FileSystemCounters
14/09/12 08:49:02 INFO mapred.JobClient:     FILE_BYTES_READ=104
14/09/12 08:49:02 INFO mapred.JobClient:     HDFS_BYTES_READ=265
14/09/12 08:49:02 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=177680
14/09/12 08:49:02 INFO mapred.JobClient:     HDFS_BYTES_WRITTEN=48
14/09/12 08:49:02 INFO mapred.JobClient:   File Input Format Counters
14/09/12 08:49:02 INFO mapred.JobClient:     Bytes Read=45
14/09/12 08:49:02 INFO mapred.JobClient:   Map-Reduce Framework
14/09/12 08:49:02 INFO mapred.JobClient:     Map output materialized bytes=110
14/09/12 08:49:02 INFO mapred.JobClient:     Map input records=2
14/09/12 08:49:02 INFO mapred.JobClient:     Reduce shuffle bytes=110
14/09/12 08:49:02 INFO mapred.JobClient:     Spilled Records=18
14/09/12 08:49:02 INFO mapred.JobClient:     Map output bytes=80
14/09/12 08:49:02 INFO mapred.JobClient:     Total committed heap usage (bytes)=248127488
14/09/12 08:49:02 INFO mapred.JobClient:     CPU time spent (ms)=8560
14/09/12 08:49:02 INFO mapred.JobClient:     Combine input records=9
14/09/12 08:49:02 INFO mapred.JobClient:     SPLIT_RAW_BYTES=220
14/09/12 08:49:02 INFO mapred.JobClient:     Reduce input records=9
14/09/12 08:49:02 INFO mapred.JobClient:     Reduce input groups=7
14/09/12 08:49:02 INFO mapred.JobClient:     Combine output records=9
14/09/12 08:49:02 INFO mapred.JobClient:     Physical memory (bytes) snapshot=322252800
14/09/12 08:49:02 INFO mapred.JobClient:     Reduce output records=7
14/09/12 08:49:02 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=1042149376
14/09/12 08:49:02 INFO mapred.JobClient:     Map output records=9
hadoop@namenode:~/hadoop-1.2.1$ hadoop fs -cat out/*
Warning: $HADOOP_HOME is deprecated.
heheh 1
hello 2
it's 1
ll 1
the 2
think 1
why 1
cat: File does not exist: /user/hadoop/out/_logs
(This last message is harmless: the word counts above were printed correctly, and the complaint only concerns the job's out/_logs directory, which cat cannot print.)
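A successful job does not by itself prove that datanode2 is storing data. fsck can list which datanodes hold each block of the output, and dfsadmin -report shows whether the new node appears in the live list. A guarded sketch:

```shell
# Confirm the new node participates in storage:
#   -report lists live datanodes; fsck shows per-block placement.
if command -v hadoop >/dev/null 2>&1; then
    hadoop dfsadmin -report | grep -c "^Name:"   # count of live datanodes
    hadoop fsck /user/hadoop/out -files -blocks -locations
else
    echo "hadoop not on PATH; run on the namenode"
fi
```

If 192.168.8.4 shows up among the block locations, the new datanode is both registered and holding data.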
