Atlas Installation and Deployment


Environment requirements: JDK 8 + Hadoop 2 + Maven 3 + ZooKeeper 3 + HBase 2 + Solr 5

All of these components need to be installed first.

Download: http://atlas.apache.org/#/Downloads

Do not download the latest version, or the build may fail; pick a stable release instead.

Building from source

Upload the source package, extract it, and build.

tar -zxvf apache-atlas-2.0.0-sources.tar.gz
mv apache-atlas-2.0.0-sources atlas
cd atlas
export MAVEN_OPTS="-Xms2g -Xmx2g"
mvn clean -DskipTests install

# There are several ways to package:
# without embedded HBase and Solr -- suitable when plugging in your own storage and index engines
mvn clean -DskipTests package -Pdist
# embedded mode -- suitable for testing
mvn clean -DskipTests package -Pdist,embedded-hbase-solr

After the build completes, the packaged artifacts are placed under distro/target, including the server package (apache-atlas-2.0.0-server.tar.gz) and hook packages such as apache-atlas-2.0.0-hive-hook.tar.gz, which is used later in this guide.
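The rest of this guide assumes the server package has been deployed to /opt/hdk/atlas. A minimal sketch of that step; the extracted directory name here is an assumption and may differ for your build, so adjust the mv accordingly:

# Deploy the built server package to /opt/hdk/atlas
cd distro/target
tar -zxvf apache-atlas-2.0.0-server.tar.gz -C /opt/hdk/
# Rename whatever directory was extracted (name assumed below) to "atlas"
mv /opt/hdk/apache-atlas-server-2.0.0 /opt/hdk/atlas
cd /opt/hdk/atlas/conf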

Modifying the configuration files

Integrating HBase

[root@ha1 conf]# vi atlas-application.properties
# Hosts where Atlas stores its data (the HBase ZooKeeper quorum)
atlas.graph.storage.hostname=ha1:2181,ha2:2181,ha3:2181,ha4:2181

# Create a symlink to the HBase configuration
[root@ha1 conf]# ln -s /opt/hdk/hbase/conf/ /opt/hdk/atlas/conf/hbase
# Add the HBase configuration file path
[root@ha1 conf]# vi atlas-env.sh
export HBASE_CONF_DIR=/opt/hdk/atlas/conf/hbase/conf
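Once Atlas has been started against this cluster, a quick sanity check is to look for its tables in HBase. The table names below are the Atlas defaults (the graph table is controlled by atlas.graph.storage.hbase.table), so treat them as an assumption:

# Verify from the HBase shell that the Atlas tables exist
[root@ha1 ~]# /opt/hdk/hbase/bin/hbase shell
hbase(main):001:0> list
# Expect entries such as apache_atlas_janus and ATLAS_ENTITY_AUDIT_EVENTS (default names; adjust if overridden)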

Integrating Solr

[root@ha1 conf]# vi atlas-application.properties
# Point the Atlas index at the Solr ZooKeeper ensemble
atlas.graph.index.search.solr.zookeeper-url=ha1:2181,ha2:2181,ha3:2181,ha4:2181

# Copy the solr directory shipped with Atlas to every node of the external Solr cluster
[root@ha1 conf]# cp -r solr /opt/hdk/solr/
# Rename it
[root@ha1 solr]# mv solr atlas-solr
# Start the Solr cluster and create the index collections
[root@ha1 solr]# bin/solr create -c vertex_index -d /opt/hdk/solr/atlas-solr/ -shards 3 -replicationFactor 2
[root@ha1 solr]# bin/solr create -c edge_index -d /opt/hdk/solr/atlas-solr/ -shards 3 -replicationFactor 2
[root@ha1 solr]# bin/solr create -c fulltext_index -d /opt/hdk/solr/atlas-solr/ -shards 3 -replicationFactor 2
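To confirm the three collections were created, either the Collections API or the solr healthcheck command can be used. The host and port below (ha1:8983) are assumptions; substitute one of your own Solr nodes:

# List all collections via the Collections API (assumes a Solr node on ha1:8983)
curl "http://ha1:8983/solr/admin/collections?action=LIST&wt=json"
# Or check one collection against the ZooKeeper ensemble
[root@ha1 solr]# bin/solr healthcheck -c vertex_index -z ha1:2181,ha2:2181,ha3:2181,ha4:2181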

Integrating Kafka

[root@ha1 conf]# vi atlas-application.properties
atlas.notification.embedded=false
atlas.kafka.data=/opt/hdk/kafka/logs
atlas.kafka.zookeeper.connect=ha1:2181,ha2:2181,ha3:2181,ha4:2181
# bootstrap.servers must point at the Kafka brokers (default port 9092), not at ZooKeeper; adjust to your broker list
atlas.kafka.bootstrap.servers=ha1:9092,ha2:9092,ha3:9092,ha4:9092
atlas.kafka.zookeeper.session.timeout.ms=4000
atlas.kafka.zookeeper.connection.timeout.ms=2000
atlas.kafka.zookeeper.sync.time.ms=20
atlas.kafka.auto.commit.interval.ms=1000
atlas.kafka.hook.group.id=atlas
atlas.kafka.enable.auto.commit=true
atlas.kafka.auto.offset.reset=earliest

# Start the Kafka cluster and create the notification topics
[root@ha1 kafka]# bin/kafka-server-start.sh -daemon config/server.properties
[root@ha1 kafka]# bin/kafka-topics.sh --zookeeper ha1:2181,ha2:2181,ha3:2181,ha4:2181 --create --replication-factor 3 --partitions 3 --topic ATLAS_HOOK
[root@ha1 kafka]# bin/kafka-topics.sh --zookeeper ha1:2181,ha2:2181,ha3:2181,ha4:2181 --create --replication-factor 3 --partitions 3 --topic ATLAS_ENTITIES

[root@ha1 conf]# vi atlas-log4j.xml
# Uncomment the perf-log section below
<!-- Uncomment the following for perf logs -->
<appender name="perf_appender" class="org.apache.log4j.DailyRollingFileAppender">
    <param name="file" value="${atlas.log.dir}/atlas_perf.log" />
    <param name="datePattern" value="'.'yyyy-MM-dd" />
    <param name="append" value="true" />
    <layout class="org.apache.log4j.PatternLayout">
        <param name="ConversionPattern" value="%d|%t|%m%n" />
    </layout>
</appender>
<logger name="org.apache.atlas.perf" additivity="false">
    <level value="debug" />
    <appender-ref ref="perf_appender" />
</logger>
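To confirm that the two notification topics exist, list the topics against the same ZooKeeper ensemble:

# ATLAS_HOOK and ATLAS_ENTITIES should both appear in the output
[root@ha1 kafka]# bin/kafka-topics.sh --zookeeper ha1:2181,ha2:2181,ha3:2181,ha4:2181 --list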

Integrating Hive

[root@ha1 conf]# vi atlas-application.properties
########## Hive Hook Configs ##########
atlas.hook.hive.synchronous=false
atlas.hook.hive.numRetries=3
atlas.hook.hive.queueSize=10000
atlas.cluster.name=primary

# Extract the hive-hook package
[root@ha1 hdk]# tar -zxvf apache-atlas-2.0.0-hive-hook.tar.gz
[root@ha1 hdk]# mv apache-atlas-hive-hook-2.0.0 hive-hook
# Copy the hook files into the atlas directory
[root@ha1 hdk]# cp -r hive-hook/* atlas/
# Embed the atlas configuration file into the plugin classloader jar
# (run from the conf directory so the properties file sits at the root of the jar)
[root@ha1 hdk]# cd atlas/conf
[root@ha1 conf]# zip -u ../hook/hive/atlas-plugin-classloader-2.0.0.jar atlas-application.properties
# Copy the configuration file to hive
[root@ha1 conf]# cp atlas-application.properties /opt/hdk/hive/conf
# Register the atlas hook in hive-site.xml
[root@ha1 conf]# vi hive-site.xml
<property>
    <name>hive.exec.post.hooks</name>
    <value>org.apache.atlas.hive.hook.HiveHook</value>
</property>
# Add the hook jars in hive-env.sh
export HIVE_AUX_JARS_PATH=/opt/hdk/atlas/hook/hive
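With the hook in place, existing Hive metadata can be imported into Atlas using the import script shipped in the hive-hook package. A hedged sketch, assuming the hook-bin directory was copied into /opt/hdk/atlas as above and that both Hive and Atlas are running:

# Import existing Hive databases and tables into Atlas
[root@ha1 hdk]# cd atlas/hook-bin
[root@ha1 hook-bin]# ./import-hive.sh
# When prompted, enter the Atlas username and password (admin / admin by default)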

Other configuration

[root@ha1 conf]# vi atlas-application.properties
atlas.rest.address=http://ha1:21000
atlas.server.run.setup.on.start=false
atlas.audit.hbase.zookeeper.quorum=ha1:2181,ha2:2181,ha3:2181,ha4:2181

Starting and stopping

[root@ha1 bin]# ./atlas_start.py
[root@ha1 bin]# ./atlas_stop.py

The web UI is served at http://ha1:21000 (the atlas.rest.address configured above); the default username and password are admin / admin.
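The first start can take several minutes while the HBase tables and Solr indexes are initialized. A quick check that the server is up:

# Query the admin API; a JSON response containing the version means Atlas is ready
curl -u admin:admin http://ha1:21000/api/atlas/admin/version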
