# INDemo
**Repository Path**: hetaos/INDemo
## Basic Information
- **Project Name**: INDemo
- **Description**: Integrates the bootstrapUI IN+ framework, borrowing the H+ TAB effect, to implement AJAX DIV content loading
- **Primary Language**: Java
- **License**: Apache-2.0
- **Default Branch**: master
- **Homepage**: None
- **GVP Project**: No
## Statistics
- **Stars**: 1
- **Forks**: 1
- **Created**: 2016-01-29
- **Last Updated**: 2020-12-19
## Categories & Tags
**Categories**: Uncategorized
**Tags**: None
## README
## Installing CM (Cloudera Manager) and CDH (Cloudera Distribution Hadoop) on CentOS 7
> Note: this article uses the Cloudera Manager tarball for a manual installation. Cloudera officially offers three installation modes: one-click installation with cloudera-manager-installer.bin; online installation via rpm, yum, or apt-get; and manual installation from tarballs. The second and third modes are recommended. For the other installation methods, see the [official CM/CDH installation documentation](https://www.cloudera.com/documentation/enterprise/latest/topics/installation_installation.html)
### - What is CM
In short, Cloudera Manager is a tool for automated cluster installation, centralized management, cluster monitoring, and alerting. It cuts cluster installation from days to a few hours, reduces the operations staff needed from dozens of people to a handful, and greatly improves cluster-management efficiency.
Cloudera Manager has four main functions:
(1) Management: administer the cluster, e.g. adding and removing nodes.
(2) Monitoring: monitor cluster health, with full coverage of configured metrics and overall system status.
(3) Diagnostics: diagnose cluster problems and suggest solutions for the issues found.
(4) Integration: integrate the many components of the Hadoop ecosystem.
### - What is CDH
Hadoop is an open-source project, so many companies commercialize it; Cloudera ships its own modified distribution of Hadoop.
Cloudera's distribution is called CDH (Cloudera Distribution Hadoop). To date there have been five CDH releases. The first two are no longer updated; the two most recent are CDH4 (derived from Apache Hadoop 2.0.0) and CDH5, both of which receive periodic updates.
### - Version compatibility
The Cloudera Manager minor version must always be equal to or greater than the CDH minor version. Older versions of Cloudera Manager may not support features in newer versions of CDH. For example, to upgrade to CDH 5.7.1 you must first upgrade to Cloudera Manager 5.7.0.
### - Operating system requirements
When installing CM/CDH 5.x on RHEL-compatible, SLES, Ubuntu, or Debian operating systems, choose the 64-bit packages.
Supported operating systems for CDH and Cloudera Manager 5.12.x:
| Operating System | Versions |
| ------|:----:|
| RHEL / CentOS (max SELinux support: 7.2) | 7.3, 7.2, 7.1; 6.9, 6.8, 6.7, 6.6, 6.5, 6.4; 5.11, 5.10, 5.7 |
| Oracle Linux (OL) | 7.3, 7.2, 7.1 (UEK default); 6.9, 6.8 (UEK R2, R4); 6.7, 6.6, 6.5 (UEK R2, R3); 6.4 (UEK R2); 5.11, 5.10, 5.7 (UEK R2) |
| Ubuntu | 16.04 LTS (Xenial); 14.04 LTS (Trusty); 12.04 LTS (Precise) |
| Debian | 8.2, 8.4 (Jessie); 7.0, 7.1, 7.8 (Wheezy) |
### - Database requirements
`Cloudera Manager` and CDH ship with an embedded PostgreSQL database for non-production use. The embedded PostgreSQL database is not supported in production; for production environments, the cluster must be configured to use an external database.
Notes:
(1) The database encoding must be UTF-8.
(2) For MySQL 5.6 and 5.7, you must install the `MySQL-shared-compat` or `MySQL-shared` package; it is required by the `Cloudera Manager Agent` package.
(3) Cloudera Manager installation fails if MySQL `GTID-based replication` is enabled.
(4) When Cloudera Manager is restarted, each service's configuration is redeployed from information stored in the Cloudera Manager database. If that information is unavailable, the cluster cannot start or run correctly. You must schedule and maintain regular backups of the Cloudera Manager database so the cluster can be recovered if the database is lost.
MySQL support in CM/CDH 5 (columns: CM/CDH 5.x releases; rows: MySQL versions):
| MySQL | 5.12 | 5.11 | 5.10 | 5.9 | 5.8 | 5.6 | 5.5 | 5.4 | 5.3 | 5.2 | 5.1 | 5 |
| ------|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|
| 5.1 | yes | yes | yes | yes | yes | yes | yes | yes | yes | yes | yes | yes |
| 5.5 | yes | yes | yes | yes | yes | yes | yes | yes | yes | yes | yes | yes |
| 5.6 | yes | yes | yes | yes | yes | yes | yes | yes | yes | yes | yes | no |
| 5.7 | yes | yes | yes | no | no | no | no | no | no | no | no | no |
For other databases and version details, see the [official CM/CDH documentation](https://www.cloudera.com/documentation/enterprise/release-notes/topics/rn_consolidated_pcm.html#cdh_cm_supported_db)
### - Environment preparation
##### hostname configuration
Edit /etc/hosts on every cluster host and add the IP-to-hostname mapping for every machine, including the local host itself.
```
[root@CentOS /]# vi /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.2.50 master1
192.168.2.51 slave1
192.168.2.52 slave2
```
Send the hosts file to the other machines:
```
[root@CentOS /]# scp /etc/hosts root@192.168.2.51:/etc/hosts
The authenticity of host '192.168.2.51 (192.168.2.51)' can't be established.
ECDSA key fingerprint is 25:c5:e2:04:82:1b:d1:b8:c7:d4:01:b9:73:1e:59:a0.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.2.51' (ECDSA) to the list of known hosts.
root@192.168.2.51's password:
hosts                                         100%  219     0.2KB/s   00:00
[root@CentOS /]# scp /etc/hosts root@192.168.2.52:/etc/hosts
The authenticity of host '192.168.2.52 (192.168.2.52)' can't be established.
ECDSA key fingerprint is 25:c5:e2:04:82:1b:d1:b8:c7:d4:01:b9:73:1e:59:a0.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.2.52' (ECDSA) to the list of known hosts.
root@192.168.2.52's password:
hosts                                         100%  219     0.2KB/s   00:00
[root@CentOS /]#
```
Change the hostname on all three machines, then reboot with ```reboot```.
On 192.168.2.50:
```
[root@CentOS /]# vi /etc/sysconfig/network
# Created by anaconda
NETWORKING=yes
HOSTNAME=master1
```
On 192.168.2.51:
```
[root@CentOS /]# vi /etc/sysconfig/network
# Created by anaconda
NETWORKING=yes
HOSTNAME=slave1
```
On 192.168.2.52:
```
[root@CentOS /]# vi /etc/sysconfig/network
# Created by anaconda
NETWORKING=yes
HOSTNAME=slave2
```
If the new hostname does not take effect, run:
```
[root@CentOS /]# hostnamectl --static set-hostname master1
[root@CentOS /]# reboot
```
##### Passwordless SSH setup
Generate a key pair with `ssh-keygen -t rsa` on every machine (press Enter three times; the public and private keys are saved under /root/.ssh/ by default).
Back up the public keys on slave1 and slave2 under distinct names and send them to master1.
On master1:
```
[root@master1 ~]# ssh-keygen -t rsa
```
On slave1:
```
[root@slave1 ~]# ssh-keygen -t rsa
[root@slave1 ~]# cp /root/.ssh/id_rsa.pub /root/.ssh/slave1_id_rsa.pub
[root@slave1 ~]# scp /root/.ssh/slave1_id_rsa.pub master1:/root/.ssh/
```
On slave2:
```
[root@slave2 ~]# ssh-keygen -t rsa
[root@slave2 ~]# cp /root/.ssh/id_rsa.pub /root/.ssh/slave2_id_rsa.pub
[root@slave2 ~]# scp /root/.ssh/slave2_id_rsa.pub master1:/root/.ssh/
```
The first transfer prompts for a password. /root/.ssh on master1 now holds the public keys of master1, slave1, and slave2; on master1, append each public key to the authorized_keys file:
```
[root@master1 .ssh]# cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys
[root@master1 .ssh]# ll
total 24
-rw-r--r--. 1 root root  394 Aug 31 15:53 authorized_keys
-rw-------. 1 root root 1679 Aug 31 15:38 id_rsa
-rw-r--r--. 1 root root  394 Aug 31 15:38 id_rsa.pub
-rw-r--r--. 1 root root  348 Aug 31 14:39 known_hosts
-rw-r--r--. 1 root root  393 Aug 31 15:47 slave1_id_rsa.pub
-rw-r--r--. 1 root root  393 Aug 31 15:49 slave2_id_rsa.pub
[root@master1 .ssh]# cat /root/.ssh/slave1_id_rsa.pub >> /root/.ssh/authorized_keys
[root@master1 .ssh]# cat /root/.ssh/slave2_id_rsa.pub >> /root/.ssh/authorized_keys
[root@master1 .ssh]# cat authorized_keys
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCfhWwVwYjgHdGPxHDvCbZNujf4Am/2TP1Zm38ACKWRCkmxwGYVu4+YrbQg9h7DElcFRyU37ZtaoVp1yDyssugZ9alqsIT1K44j5veHaEWbq11bRGZ2gWkEOIS+7i6TWhJ3tkmZrK0TGn0I+kAEdkHApjZ/ZBARG2+R5Y3T43PKO3SEOlx5pixn5l3va1VUR6kipaYqQR5B2wGObxjle++HBSMj9Zap0sc7l5+DEkNeADFlO4cAUMgrzRiZv6Hfg3aXeYjDbHI986I1EMlt6LShcM8JLOgcBZc95sBBxsZJlwtniPErkw8YymCXS+f23U/xH2+xXP82jZdblX//b0Af root@master1
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCvQE8QVqhRC9kzFUM6huEVR0NiGOnOR6zFi2mcoxnNeEIZQZhZ8f1tkvIoMrT1Z8cjGQWwatsUIq7u0AeftPBr0x33FwM6U+DMZD3o6yQdqwOQdCd7GrelhUqTz2WU86zcohhEyj8zAXlJbAJ0zwZsRvQR6vvF5ufVB+JpsY8IkHy3T+Ozqo0EqLMl2xx0HT8gsgDPzClVzY9PA4Q6N/b2bnIZ6e34fP0DJ3I9LAUxNGAO9QoYPIyc2nv5MOV/gASPBpqG0X+6zoSc2nJM53ijYW9eYpn+x9IncPLX61lE9Qd3kADMhaFFI4Cspkq31Y5/FGq9rhcvY+R5SqaygxtB root@slave1
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCdXmgXNU4iO0oEkBNjWGcjPmF+ogAsdSDUIHrpfgIgN2HZjUYD8WXut0zAwGGNvtkWIasYpqIPgutbEATOKg66XdhyJmbatNWwvx/guJAFlo7wsZ0fIcE5Y/ELJfesgWkY4GF1ktL+j493aEtoABi4g6rI0qZq/NJBFAZ76+mBh8/lCNyrgs8xLD6mTtfeG88r//OK9ZKq9s8yP9Ss/e/W2pcJZf5eInkmAxzIsiBeJ/fHZ9+C3i9kjOB8dqTzIFANBFpCbixRl8h0bNsxsd+oJHut8Ctszsb38acgo/O/ppm9wQBy65/7Cdtcvqj/sb0KocYQivjZcuHq5/7iR7gT root@slave2
[root@master1 .ssh]#
```
authorized_keys now contains all three public keys. Copy it from master1 to slave1 and slave2, into /root/.ssh/:
```
[root@master1 ~]# scp /root/.ssh/authorized_keys slave1:/root/.ssh
[root@master1 ~]# scp /root/.ssh/authorized_keys slave2:/root/.ssh
```
Test: log in from master1 to slave1 and slave2; no username or password should be required.
```
[root@master1 ~]# ssh slave1
Last login: Thu Aug 31 15:34:51 2017 from 192.168.2.177
[root@slave1 ~]# exit
logout
Connection to slave1 closed.
[root@master1 ~]# ssh slave2
Last login: Thu Aug 31 15:35:11 2017 from 192.168.2.177
[root@slave2 ~]#
```
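The manual cp/scp/cat sequence above boils down to concatenating one public-key line per node into a single authorized_keys file and pushing that file to every node. A minimal local sketch of that merge, using placeholder key strings and throwaway paths under /tmp (where available, `ssh-copy-id root@host` performs the append for a single key pair in one step):

```shell
# Illustration of the merge step: each node's public key becomes one line
# in a shared authorized_keys file. Key contents here are placeholders.
mkdir -p /tmp/ssh_demo
printf 'ssh-rsa AAAA... root@master1\n' > /tmp/ssh_demo/master1.pub
printf 'ssh-rsa BBBB... root@slave1\n'  > /tmp/ssh_demo/slave1.pub
printf 'ssh-rsa CCCC... root@slave2\n'  > /tmp/ssh_demo/slave2.pub
cat /tmp/ssh_demo/master1.pub /tmp/ssh_demo/slave1.pub /tmp/ssh_demo/slave2.pub \
  > /tmp/ssh_demo/authorized_keys
chmod 600 /tmp/ssh_demo/authorized_keys   # sshd ignores overly permissive key files
wc -l < /tmp/ssh_demo/authorized_keys     # → 3
```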
##### - Enable NTP
The clocks of all cluster nodes must stay synchronized. To install the NTP service, run the following on every host:
```
[root@master1 ~]# yum install -y ntp
```
To check whether the NTP service starts automatically at boot, run the following on every host:
```
[root@master1 ~]# systemctl is-enabled ntpd
disabled
```
To enable the NTP service at boot, run the following on every host:
```
[root@master1 ~]# systemctl enable ntpd
Created symlink from /etc/systemd/system/multi-user.target.wants/ntpd.service to /usr/lib/systemd/system/ntpd.service.
[root@master1 ~]#
```
To start the NTP service, run the following on every host:
```
[root@master1 ~]# systemctl start ntpd
```
##### - Firewall and SELinux
Configure iptables.
During installation, CM must communicate with the hosts it deploys to and manages, so certain ports have to be open. The simplest approach is to disable the firewall temporarily; run the following on all three hosts:
```
[root@master1 ~]# systemctl disable firewalld
[root@master1 ~]# service firewalld stop
```
Disable SELinux: edit ```/etc/selinux/config```, change ```SELINUX=enforcing``` to ```SELINUX=disabled```, save, and reboot with ```reboot```.
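The SELinux edit above can also be done non-interactively with sed. A sketch against a stand-in copy of the file (on a real node you would run the same sed against /etc/selinux/config and then reboot):

```shell
# Stand-in for /etc/selinux/config so the edit can be shown safely.
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > /tmp/selinux.config
# Flip enforcing -> disabled, in place.
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /tmp/selinux.config
grep '^SELINUX=' /tmp/selinux.config   # → SELINUX=disabled
# On a real node:
# sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config && reboot
```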
##### - JDK and MySQL
If JDK and MySQL are not already installed, install them yourself (JDK 1.8, MySQL 5.7).
### - Installing CM
Database preparation: create the databases needed by CM and CDH.
> Note: components such as Hive and Oozie need their own databases; create them according to what you actually plan to install.
```
mysql> create database hive DEFAULT CHARSET utf8 COLLATE utf8_general_ci;
Query OK, 1 row affected (0.00 sec)
mysql> create database amon DEFAULT CHARSET utf8 COLLATE utf8_general_ci;
Query OK, 1 row affected (0.00 sec)
mysql> create database hue DEFAULT CHARSET utf8 COLLATE utf8_general_ci;
Query OK, 1 row affected (0.00 sec)
mysql> create database monitor DEFAULT CHARSET utf8 COLLATE utf8_general_ci;
Query OK, 1 row affected (0.00 sec)
mysql> create database oozie DEFAULT CHARSET utf8 COLLATE utf8_general_ci;
Query OK, 1 row affected (0.00 sec)
mysql> grant all on *.* to root@"%" identified by "123456";
```
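The statements above can be collected into one SQL script and fed to the mysql client in a single pass. A sketch (the database names and the password `123456` come from the example above; adjust them for your deployment):

```shell
# Batch the CREATE DATABASE statements used above into one script.
cat > /tmp/create_cdh_dbs.sql <<'EOF'
CREATE DATABASE hive    DEFAULT CHARSET utf8 COLLATE utf8_general_ci;
CREATE DATABASE amon    DEFAULT CHARSET utf8 COLLATE utf8_general_ci;
CREATE DATABASE hue     DEFAULT CHARSET utf8 COLLATE utf8_general_ci;
CREATE DATABASE monitor DEFAULT CHARSET utf8 COLLATE utf8_general_ci;
CREATE DATABASE oozie   DEFAULT CHARSET utf8 COLLATE utf8_general_ci;
GRANT ALL ON *.* TO 'root'@'%' IDENTIFIED BY '123456';
EOF
grep -c 'CREATE DATABASE' /tmp/create_cdh_dbs.sql   # → 5
# On master1: mysql -uroot -p < /tmp/create_cdh_dbs.sql
```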
Copy the MySQL JDBC driver (mysql-connector-java) to the expected directory on every node; CM needs the driver when initializing its database.
```
[root@master1 ~]# cp mysql-connector-java-5.1.36-bin.jar /usr/share/java/mysql-connector-java.jar
```
Download the Cloudera Manager offline installation tarball (large file; a download manager is recommended):
```
[root@master1 ~]# wget http://archive.cloudera.com/cm5/cm/5/cloudera-manager-centos7-cm5.12.0_x86_64.tar.gz
```
Download the CDH offline installation files (large files; a download manager is recommended):
```
[root@master1 ~]# wget http://archive.cloudera.com/cdh5/parcels/latest/CDH-5.12.0-1.cdh5.12.0.p0.29-el7.parcel
[root@master1 ~]# wget http://archive.cloudera.com/cdh5/parcels/latest/CDH-5.12.0-1.cdh5.12.0.p0.29-el7.parcel.sha1
[root@master1 ~]# wget http://archive.cloudera.com/cdh5/parcels/latest/manifest.json
```
Extract the CM tarball into the target directory on every server (or extract it on the master node and then scp it to the same directory on each node):
```
[root@master1 ~]# tar -axvf /data/soft/cloudera-manager-centos7-cm5.12.0_x86_64.tar.gz -C /data/soft/cloudera-manager
[root@master1 ~]# mkdir /data/cm
[root@master1 ~]# cp -R /data/soft/cloudera-manager /data/cm/
[root@master1 ~]# scp -r /data/cm/cloudera-manager/ root@slave1:/data/cm
[root@master1 ~]# scp -r /data/cm/cloudera-manager/ root@slave2:/data/cm
```
Create the cloudera-scm user (on all nodes):
```
[root@master1 ~]# useradd --system --home /data/cm/cloudera-manager/cm-5.12.0/run/cloudera-scm-server/ --no-create-home --shell=/bin/false --comment "cloudera SCM user" cloudera-scm
```
On the master node master1, create the local metadata directory for cloudera-manager-server:
```
[root@master1 ~]# mkdir /var/cloudera-scm-server
[root@master1 ~]# chown cloudera-scm:cloudera-scm /var/cloudera-scm-server
[root@master1 ~]# chown cloudera-scm:cloudera-scm /data/cm/cloudera-manager
```
On the worker nodes slave1 and slave2, point cloudera-scm-agent at the master node master1:
```
[root@slave1 ~]# vi /data/cm/cloudera-manager/cm-5.12.0/etc/cloudera-scm-agent/config.ini
[General]
# Hostname of the CM server.
server_host=master1
# Port that the CM server is listening on.
server_port=7182
## It should not normally be necessary to modify these.
# Port that the CM agent should listen on.
# listening_port=9000
# IP Address that the CM agent should listen on.
# listening_ip=
# Hostname that the CM agent reports as its hostname. If unset, will be
# obtained in code through something like this:
#
# python -c 'import socket; \
#            print socket.getfqdn(), \
#            socket.gethostbyname(socket.getfqdn())'
#
# listening_hostname=
# An alternate hostname to report as the hostname for this host in CM.
# Useful when this agent is behind a load balancer or proxy and all
# inbound communication must connect through that proxy.
# reported_hostname=
# Port that supervisord should listen on.
# NB: This only takes effect if supervisord is restarted.
# supervisord_port=19001
```
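Editing config.ini by hand on each agent node is error-prone; the one line that matters, server_host, can be set with sed instead. A sketch against a stand-in copy (on a real node the target path would be /data/cm/cloudera-manager/cm-5.12.0/etc/cloudera-scm-agent/config.ini):

```shell
# Stand-in config.ini with the shipped default value.
printf '[General]\nserver_host=localhost\nserver_port=7182\n' > /tmp/config.ini
# Point the agent at the CM server host.
sed -i 's/^server_host=.*/server_host=master1/' /tmp/config.ini
grep '^server_host=' /tmp/config.ini   # → server_host=master1
```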
On the master node master1, create the parcel-repo directory and copy in the offline repo files downloaded earlier. Note that ```CDH-5.12.0-1.cdh5.12.0.p0.29-el7.parcel.sha1``` must be renamed to ```CDH-5.12.0-1.cdh5.12.0.p0.29-el7.parcel.sha```.
```
[root@master1 ~]# mkdir -p /opt/cloudera/parcel-repo
[root@master1 ~]# chown cloudera-scm:cloudera-scm /opt/cloudera/parcel-repo
[root@master1 ~]# cp /data/soft/CDH-5.12.0-1.cdh5.12.0.p0.29-el7.parcel /opt/cloudera/parcel-repo/
[root@master1 ~]# cp /data/soft/CDH-5.12.0-1.cdh5.12.0.p0.29-el7.parcel.sha1 /opt/cloudera/parcel-repo/
[root@master1 ~]# cp /data/soft/manifest.json /opt/cloudera/parcel-repo/
[root@master1 ~]# mv /opt/cloudera/parcel-repo/CDH-5.12.0-1.cdh5.12.0.p0.29-el7.parcel.sha1 /opt/cloudera/parcel-repo/CDH-5.12.0-1.cdh5.12.0.p0.29-el7.parcel.sha
```
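The renamed .sha file holds the parcel's SHA-1 digest, which CM compares against the parcel during distribution; a mismatch (for example, from a truncated download) makes distribution fail, so it is worth checking up front. A sketch of the check using a stand-in file (a real run would hash the actual parcel in /opt/cloudera/parcel-repo):

```shell
# Stand-in parcel plus its recorded digest.
printf 'parcel-bytes' > /tmp/demo.parcel
sha1sum /tmp/demo.parcel | awk '{print $1}' > /tmp/demo.parcel.sha
# Verify: recompute the hash and compare with the recorded one.
[ "$(sha1sum /tmp/demo.parcel | awk '{print $1}')" = "$(cat /tmp/demo.parcel.sha)" ] \
  && echo 'parcel OK'   # → parcel OK
```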
Create the ```parcels``` directory on all nodes (```master1```, ```slave1```, ```slave2```). Note: ```Cloudera Manager``` extracts the CDH parcels from ```/opt/cloudera/parcel-repo``` on the master node, then distributes, unpacks, and activates them into ```/opt/cloudera/parcels``` on each node.
```
[root@master1 ~]# mkdir -p /opt/cloudera/parcels
[root@master1 ~]# chown cloudera-scm:cloudera-scm /opt/cloudera/parcels
[root@master1 ~]#
```
On the master node ```master1```, initialize the database with the ```scm_prepare_database.sh``` script.
> Note: this script creates and configures the database needed by the CM server. The parameters are:
1. `mysql`: the database type; if you installed Oracle instead, change this to oracle.
2. `-hmaster1`: the database lives on the master1 host, i.e. the master node.
3. `-uroot`: connect to MySQL as root; `-p123456`: the MySQL root password is 123456.
4. `--scm-host master1`: the host running the CM server, usually the same host as MySQL.
5. The last three arguments are the database name, database user, and database password.
```
[root@master1 ~]# /data/cm/cloudera-manager/cm-5.12.0/share/cmf/schema/scm_prepare_database.sh mysql -hmaster1 -uroot -p123456 --scm-host master1 scmdbm scmdbu scmdbp
JAVA_HOME=/usr/java/jdk1.8.0_144
Verifying that we can write to /data/cm/cloudera-manager/cm-5.12.0/etc/cloudera-scm-server
Creating SCM configuration file in /data/cm/cloudera-manager/cm-5.12.0/etc/cloudera-scm-server
Executing: /usr/java/jdk1.8.0_144/bin/java -cp /usr/share/java/mysql-connector-java.jar:/usr/share/java/oracle-connector-java.jar:/data/cm/cloudera-manager/cm-5.12.0/share/cmf/schema/../lib/* com.cloudera.enterprise.dbutil.DbCommandExecutor /data/cm/cloudera-manager/cm-5.12.0/etc/cloudera-scm-server/db.properties com.cloudera.cmf.db.
[ main] DbCommandExecutor INFO Successfully connected to database.
All done, your SCM database is configured correctly!
```
Start cloudera-scm-server on the master node:
```
[root@master1 ~]# cp /data/cm/cloudera-manager/cm-5.12.0/etc/init.d/cloudera-scm-server /etc/init.d/cloudera-scm-server
[root@master1 ~]# chkconfig cloudera-scm-server on
```
> At this point ```service cloudera-scm-server start``` fails with "File not found: /usr/sbin/cmf-server", because the path variable inside ```cloudera-scm-server``` is wrong: change ```CMF_DEFAULTS=${CMF_DEFAULTS:-/etc/default}``` so that it points at ```/data/cm/cloudera-manager/cm-5.12.0/etc/default```.
After that, ```service cloudera-scm-server start``` no longer fails.
To make sure ```cloudera-scm-server``` comes up on every reboot, add the command ```service cloudera-scm-server restart``` to the boot script ```/etc/rc.local```.
```
[root@master1 ~]# vi /etc/init.d/cloudera-scm-server
[root@master1 ~]# service cloudera-scm-server start
Starting cloudera-scm-server (via systemctl): Warning: cloudera-scm-server.service changed on disk. Run 'systemctl daemon-reload' to reload units.
[  OK  ]
[root@master1 ~]#
```
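The CMF_DEFAULTS fix described above is a one-line substitution, so it can be scripted instead of edited with vi, and the same edit applies to both init scripts. A sketch against a stand-in copy (on a real node the targets are /etc/init.d/cloudera-scm-server and /etc/init.d/cloudera-scm-agent):

```shell
# Stand-in init script containing the shipped default path.
printf 'CMF_DEFAULTS=${CMF_DEFAULTS:-/etc/default}\n' > /tmp/cloudera-scm-server
# Repoint the defaults directory into the tarball install tree.
sed -i 's|^CMF_DEFAULTS=.*|CMF_DEFAULTS=${CMF_DEFAULTS:-/data/cm/cloudera-manager/cm-5.12.0/etc/default}|' \
  /tmp/cloudera-scm-server
grep '^CMF_DEFAULTS=' /tmp/cloudera-scm-server
```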
> Start ```cloudera-scm-agent``` on all nodes. As with the server, ```service cloudera-scm-agent start``` initially fails with "File not found: /usr/sbin/cmf-agent", because the path variable inside ```cloudera-scm-agent``` is wrong; fix it the same way as for the server.
To make sure ```cloudera-scm-agent``` comes up on every reboot, add the command ```service cloudera-scm-agent restart``` to the boot script ```/etc/rc.local```.
```
[root@slave1 ~]# cp /data/cm/cloudera-manager/cm-5.12.0/etc/init.d/cloudera-scm-agent /etc/init.d/cloudera-scm-agent
[root@slave1 ~]# chkconfig cloudera-scm-agent on
[root@slave1 ~]# vi /etc/init.d/cloudera-scm-agent
[root@slave1 ~]# service cloudera-scm-agent start
Starting cloudera-scm-agent (via systemctl): Warning: cloudera-scm-agent.service changed on disk. Run 'systemctl daemon-reload' to reload units.
[  OK  ]
[root@slave1 ~]#
```
### - Installing, configuring, and deploying the CDH cluster
The rest is a straightforward, wizard-driven installation.
Once the master node has finished installing and is running, everything else happens in the browser.
Open 192.168.2.50:7180 and log in with the default credentials admin/admin.
The following steps are performed in the browser.

Choose the Express edition.

Component notes.

Configure the hosts. Because the agent was installed and started on every node, and each agent's config file points at master1 as the server, every agent reports in to the server; under "Currently Managed Hosts" all three hosts should appear. Select all of them and continue. Note: if cloudera-scm-agent was not set to start at boot and a machine was rebooted in the steps above, that host may not be detected here.

Select the CDH version. Note: since the offline files were already downloaded, be sure to choose the parcel method; the repository settings should show the local path where the offline files are stored.

Distribute the parcels to each node.

Host correctness check; the installed version information can be reviewed here.


Select the services to install according to your needs, then assign roles.


Database setup: configure the databases for components such as Hive and Oozie (provided you chose to install them and created the corresponding databases beforehand).

Cluster review; the defaults are fine here.

Begin the installation.

When installation finishes, open the home page to check the cluster.

Installation complete!