Requirements: JDK 8, Hadoop 2, Maven 3, ZooKeeper 3, HBase 2, Solr 5.
All of these components must be installed first.
Download: http://atlas.apache.org/#/Downloads
Do not download the latest version; it may fail to compile. Download a stable release instead.
Building
Upload the source archive, extract it, and build:
tar -zxvf apache-atlas-2.0.0-sources.tar
mv apache-atlas-2.0.0-sources atlas
cd atlas
export MAVEN_OPTS="-Xms2g -Xmx2g"
mvn clean -DskipTests install
There are several ways to package the build:
mvn clean -DskipTests package -Pdist                       # without embedded HBase and Solr; use this when plugging in your own storage and index engines
mvn clean -DskipTests package -Pdist,embedded-hbase-solr   # embedded mode; suitable for testing
The packaged artifacts (the server tarball plus the per-component hook tarballs) are then produced under distro/target.
Modifying the configuration files
Integrating with HBase
In conf/atlas-application.properties, point the graph storage at the HBase ZooKeeper quorum:
atlas.graph.storage.hostname=ha1:2181,ha2:2181,ha3:2181,ha4:2181
In conf/atlas-env.sh, tell Atlas where to find the HBase client configuration:
export HBASE_CONF_DIR=/opt/hdk/atlas/conf/hbase/conf
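The shell commands on the prompts above did not survive extraction; typically the HBase client configuration is linked into Atlas's conf directory so that the HBASE_CONF_DIR path above resolves. A minimal sketch, assuming HBase is installed under /opt/hdk/hbase:

```shell
# Create the directory the export above points into, then link the
# HBase client configs there (the HBase install path is an assumption).
mkdir -p /opt/hdk/atlas/conf/hbase
ln -s /opt/hdk/hbase/conf /opt/hdk/atlas/conf/hbase/conf
```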
Integrating with Solr
In conf/atlas-application.properties, point the index backend at Solr's ZooKeeper address:
atlas.graph.index.search.solr.zookeeper-url=ha1:2181,ha2:2181,ha3:2181,ha4:2181
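The Solr-side commands were lost from the original; in a typical SolrCloud setup you copy Atlas's bundled Solr config set into the Solr installation and create the three collections Atlas expects. A sketch, assuming Solr is installed under /opt/hdk/solr and running in SolrCloud mode:

```shell
# Copy the Solr config set shipped with Atlas (paths are assumptions).
cp -r /opt/hdk/atlas/conf/solr /opt/hdk/solr/atlas_conf

# Create the three index collections Atlas uses by default;
# adjust shard/replica counts to your cluster size.
/opt/hdk/solr/bin/solr create -c vertex_index   -d /opt/hdk/solr/atlas_conf -shards 3 -replicationFactor 2
/opt/hdk/solr/bin/solr create -c edge_index     -d /opt/hdk/solr/atlas_conf -shards 3 -replicationFactor 2
/opt/hdk/solr/bin/solr create -c fulltext_index -d /opt/hdk/solr/atlas_conf -shards 3 -replicationFactor 2
```

The collection names vertex_index, edge_index, and fulltext_index are the defaults Atlas looks for.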
Integrating with Kafka
In conf/atlas-application.properties, disable the embedded Kafka and point Atlas at the external cluster. Note that atlas.kafka.bootstrap.servers must list the Kafka brokers (port 9092 by default), not the ZooKeeper nodes:
atlas.notification.embedded=false
atlas.kafka.data=/opt/hdk/kafka/logs
atlas.kafka.zookeeper.connect=ha1:2181,ha2:2181,ha3:2181,ha4:2181
atlas.kafka.bootstrap.servers=ha1:9092,ha2:9092,ha3:9092,ha4:9092
atlas.kafka.zookeeper.session.timeout.ms=4000
atlas.kafka.zookeeper.connection.timeout.ms=2000
atlas.kafka.zookeeper.sync.time.ms=20
atlas.kafka.auto.commit.interval.ms=1000
atlas.kafka.hook.group.id=atlas
atlas.kafka.enable.auto.commit=true
atlas.kafka.auto.offset.reset=earliest
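The Kafka commands at this point did not survive; hook notifications travel over two topics that are normally created up front. A sketch, assuming a Kafka 2.x installation under /opt/hdk/kafka:

```shell
# Create the two notification topics Atlas uses by default
# (ATLAS_HOOK: hooks -> Atlas; ATLAS_ENTITIES: Atlas -> downstream consumers).
/opt/hdk/kafka/bin/kafka-topics.sh --zookeeper ha1:2181,ha2:2181,ha3:2181,ha4:2181 \
  --create --replication-factor 3 --partitions 3 --topic ATLAS_HOOK
/opt/hdk/kafka/bin/kafka-topics.sh --zookeeper ha1:2181,ha2:2181,ha3:2181,ha4:2181 \
  --create --replication-factor 3 --partitions 3 --topic ATLAS_ENTITIES
```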
Atlas can also log performance metrics. In conf/atlas-log4j.xml, uncomment the perf-log appender:
<!-- Uncomment the following for perf logs -->
<appender name="perf_appender" class="org.apache.log4j.DailyRollingFileAppender">
    <param name="file" value="${atlas.log.dir}/atlas_perf.log" />
    <param name="datePattern" value="'.'yyyy-MM-dd" />
    <param name="append" value="true" />
    <layout class="org.apache.log4j.PatternLayout">
        <param name="ConversionPattern" value="%d|%t|%m%n" />
    </layout>
</appender>

<logger name="org.apache.atlas.perf" additivity="false">
    <level value="debug" />
    <appender-ref ref="perf_appender" />
</logger>
Integrating with Hive
Append the Hive hook settings to conf/atlas-application.properties:
atlas.hook.hive.synchronous=false
atlas.hook.hive.numRetries=3
atlas.hook.hive.queueSize=10000
atlas.cluster.name=primary
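The commands on the prompts here were also lost; they typically install the Hive hook package produced by the build and make the Atlas client settings visible to Hive. A sketch under the paths used elsewhere in this guide (tarball name and locations are assumptions based on a 2.0.0 build):

```shell
# Unpack the hive-hook package built earlier into the Atlas home,
# producing /opt/hdk/atlas/hook/hive among others.
tar -zxvf apache-atlas-2.0.0-hive-hook.tar.gz
cp -r apache-atlas-hive-hook-2.0.0/* /opt/hdk/atlas/

# Let the hook pick up the Atlas client settings from Hive's conf dir.
cp /opt/hdk/atlas/conf/atlas-application.properties /opt/hdk/hive/conf/
```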
In hive-site.xml, register the Atlas post-execution hook:
<property>
    <name>hive.exec.post.hooks</name>
    <value>org.apache.atlas.hive.hook.HiveHook</value>
</property>
In hive-env.sh, add the hook jars to Hive's aux path:
export HIVE_AUX_JARS_PATH=/opt/hdk/atlas/hook/hive
Other configuration
Still in conf/atlas-application.properties:
atlas.rest.address=http://ha1:21000
atlas.server.run.setup.on.start=false
atlas.audit.hbase.zookeeper.quorum=ha1:2181,ha2:2181,ha3:2181,ha4:2181
Starting and stopping
Atlas is started and stopped with the Python scripts in the bin directory:
python atlas_start.py
python atlas_stop.py
The default username and password are admin / admin.
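Once started (the first startup can take several minutes while the HBase tables and Solr indexes are initialized), the server can be checked from the command line. A sketch using the address and default credentials configured above:

```shell
# Start Atlas, probe the REST API, then shut it down again.
python /opt/hdk/atlas/bin/atlas_start.py
curl -u admin:admin http://ha1:21000/api/atlas/admin/version
python /opt/hdk/atlas/bin/atlas_stop.py
```

If the curl call returns a JSON version payload, the server is up; the web UI is served at the same address, http://ha1:21000.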