Apache Atlas Deployment FAQ

  • Atlas deployment issues
    • Atlas fails to start (caused by ZooKeeper startup failure)
      • 1. Problem description
      • 2. Root cause
      • 3. Solution
    • Atlas fails to start (caused by Solr startup failure)
      • 1. Problem description
      • 2. Root cause
      • 3. Solution
    • Atlas fails to start (missing column family in HBase)
      • 1. Problem description
      • 2. Root cause
      • 3. Solution
    • Atlas import-hive.sh errors out
      • 1. Problem description
      • 2. Root cause
      • 3. Solution

Atlas deployment issues

Atlas fails to start (caused by ZooKeeper startup failure)

Scenario: the Atlas service is started with Atlas's embedded HBase and Solr.

1. Problem description

The Atlas web UI cannot be reached, even though the startup output reports that Atlas started successfully.

2. Root cause

Check the Atlas startup log: it reports that the expected znode (/hbase/hbaseid) is missing in ZooKeeper:

[root@kafka-dev-01 logs]# vim /opt/software/apache-atlas-sources-2.0.0/distro/target/apache-atlas-2.0.0-bin/apache-atlas-2.0.0/logs/application.log

########################################################################################
                               Atlas Server (STARTUP)

        project.name:   apache-atlas
        project.description:    Metadata Management and Data Governance Platform over Hadoop
        build.user:     root
        build.epoch:    1589879105880
        project.version:        2.0.0
        build.version:  2.0.0
        vc.revision:    release
        vc.source.url:  scm:git:git://git.apache.org/atlas.git/atlas-webapp
######################################################################################## (Atlas:215)
2020-05-19 19:37:38,083 INFO  - [main:] ~ >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> (Atlas:216)
2020-05-19 19:37:38,084 INFO  - [main:] ~ Server starting with TLS ? false on port 21000 (Atlas:217)
2020-05-19 19:37:38,084 INFO  - [main:] ~ <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< (Atlas:218)
2020-05-19 19:37:38,580 INFO  - [main:] ~ No authentication method configured.  Defaulting to simple authentication (LoginProcessor:102)
2020-05-19 19:37:38,722 WARN  - [main:] ~ Unable to load native-hadoop library for your platform... using builtin-java classes where applicable (NativeCodeLoader:60)
2020-05-19 19:37:38,739 INFO  - [main:] ~ Logged in user root (auth:SIMPLE) (LoginProcessor:77)
2020-05-19 19:37:39,144 INFO  - [main:] ~ Not running setup per configuration atlas.server.run.setup.on.start. (SetupSteps$SetupRequired:189)
2020-05-19 19:37:40,100 WARN  - [main:] ~ Retrieve cluster id failed (ConnectionImplementation:551)
java.util.concurrent.ExecutionException: org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /hbase/hbaseid
        at java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
        at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1895)
        at org.apache.hadoop.hbase.client.ConnectionImplementation.retrieveClusterId(ConnectionImplementation.java:549)
        at org.apache.hadoop.hbase.client.ConnectionImplementation.<init>(ConnectionImplementation.java:287)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:219)
        at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:114)
        at org.janusgraph.diskstorage.hbase2.HBaseCompat2_0.createConnection(HBaseCompat2_0.java:46)
        at org.janusgraph.diskstorage.hbase2.HBaseStoreManager.<init>(HBaseStoreManager.java:314)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at org.janusgraph.util.system.ConfigurationUtil.instantiate(ConfigurationUtil.java:58)
        at org.janusgraph.diskstorage.Backend.getImplementationClass(Backend.java:476)
        at org.janusgraph.diskstorage.Backend.getStorageManager(Backend.java:408)
        at org.janusgraph.graphdb.configuration.GraphDatabaseConfiguration.<init>(GraphDatabaseConfiguration.java:1254)
        at org.janusgraph.core.JanusGraphFactory.open(JanusGraphFactory.java:160)
        at org.janusgraph.core.JanusGraphFactory.open(JanusGraphFactory.java:131)
        at org.janusgraph.core.JanusGraphFactory.open(JanusGraphFactory.java:111)
        at org.apache.atlas.repository.graphdb.janus.AtlasJanusGraphDatabase.getGraphInstance(AtlasJanusGraphDatabase.java:165)
        at org.apache.atlas.repository.graphdb.janus.AtlasJanusGraphDatabase.getGraph(AtlasJanusGraphDatabase.java:263)
        at org.apache.atlas.repository.graph.AtlasGraphProvider.getGraphInstance(AtlasGraphProvider.java:52)
        at org.apache.atlas.repository.graph.AtlasGraphProvider.get(AtlasGraphProvider.java:98)
        at org.apache.atlas.repository.graph.AtlasGraphProvider$$EnhancerBySpringCGLIB$$375c3cc3.CGLIB$get$1(<generated>)
        at org.apache.atlas.repository.graph.AtlasGraphProvider$$EnhancerBySpringCGLIB$$375c3cc3$$FastClassBySpringCGLIB$$86ea7f4c.invoke(<generated>)
        at org.springframework.cglib.proxy.MethodProxy.invokeSuper(MethodProxy.java:228)
        at org.springframework.context.annotation.ConfigurationClassEnhancer$BeanMethodInterceptor.intercept(ConfigurationClassEnhancer.java:358)
        at org.apache.atlas.repository.graph.AtlasGraphProvider$$EnhancerBySpringCGLIB$$375c3cc3.get(<generated>)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:162)
        at org.springframework.beans.factory.support.ConstructorResolver.instantiateUsingFactoryMethod(ConstructorResolver.java:588)
        at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.instantiateUsingFactoryMethod(AbstractAutowireCapableBeanFactory.java:1181)
        at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1075)
        at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:513)
        at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:483)
        at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:312)
        at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:230)
        at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:308)
        at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:202)
        at org.springframework.beans.factory.config.DependencyDescriptor.resolveCandidate(DependencyDescriptor.java:208)
        at org.springframework.beans.factory.support.DefaultListableBeanFactory.doResolveDependency(DefaultListableBeanFactory.java:1138)
        at org.springframework.beans.factory.support.DefaultListableBeanFactory.resolveDependency(DefaultListableBeanFactory.java:1066)
        at org.springframework.beans.factory.support.ConstructorResolver.resolveAutowiredArgument(ConstructorResolver.java:835)
        at org.springframework.beans.factory.support.ConstructorResolver.createArgumentArray(ConstructorResolver.java:741)
        at org.springframework.beans.factory.support.ConstructorResolver.autowireConstructor(ConstructorResolver.java:189)
        at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.autowireConstructor(AbstractAutowireCapableBeanFactory.java:1201)
        at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1103)
        at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:513)
        at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:483)
        at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:312)
        at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:230)
        at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:308)
        at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:202)
        at org.springframework.beans.factory.config.DependencyDescriptor.resolveCandidate(DependencyDescriptor.java:208)
        at org.springframework.beans.factory.support.DefaultListableBeanFactory.doResolveDependency(DefaultListableBeanFactory.java:1138)
        at org.springframework.beans.factory.support.DefaultListableBeanFactory.resolveDependency(DefaultListableBeanFactory.java:1066)
        at org.springframework.beans.factory.support.ConstructorResolver.autowireConstructor(ConstructorResolver.java:189)
        at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.autowireConstructor(AbstractAutowireCapableBeanFactory.java:1201)
        at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1103)
        at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:513)
        at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:483)
        at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:312)
        at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:230)
        at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:308)
        at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:202)
        at org.springframework.aop.framework.autoproxy.BeanFactoryAdvisorRetrievalHelper.findAdvisorBeans(BeanFactoryAdvisorRetrievalHelper.java:92)
        at org.springframework.aop.framework.autoproxy.AbstractAdvisorAutoProxyCreator.findCandidateAdvisors(AbstractAdvisorAutoProxyCreator.java:102)
        at org.springframework.aop.aspectj.autoproxy.AspectJAwareAdvisorAutoProxyCreator.shouldSkip(AspectJAwareAdvisorAutoProxyCreator.java:103)
        at org.springframework.aop.framework.autoproxy.AbstractAutoProxyCreator.postProcessBeforeInstantiation(AbstractAutoProxyCreator.java:248)
        at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.applyBeanPostProcessorsBeforeInstantiation(AbstractAutowireCapableBeanFactory.java:1045)
        at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.resolveBeforeInstantiation(AbstractAutowireCapableBeanFactory.java:1019)
        at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:473)
        at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:312)
        at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:230)
        at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:308)
        at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:197)
        at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:761)
        at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:867)
        at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:543)
        at org.springframework.web.context.ContextLoader.configureAndRefreshWebApplicationContext(ContextLoader.java:443)
        at org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:325)
        at org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:107)
        at org.apache.atlas.web.setup.KerberosAwareListener.contextInitialized(KerberosAwareListener.java:31)
        at org.eclipse.jetty.server.handler.ContextHandler.callContextInitialized(ContextHandler.java:843)
        at org.eclipse.jetty.servlet.ServletContextHandler.callContextInitialized(ServletContextHandler.java:533)
        at org.eclipse.jetty.server.handler.ContextHandler.startContext(ContextHandler.java:816)
        at org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:345)
        at org.eclipse.jetty.webapp.WebAppContext.startWebapp(WebAppContext.java:1404)
        at org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1366)
        at org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:778)
        at org.eclipse.jetty.servlet.ServletContextHandler.doStart(ServletContextHandler.java:262)
        at org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:520)
        at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
        at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:131)
        at org.eclipse.jetty.server.Server.start(Server.java:422)
        at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:105)
        at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:61)
        at org.eclipse.jetty.server.Server.doStart(Server.java:389)
        at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
        at org.apache.atlas.web.service.EmbeddedServer.start(EmbeddedServer.java:98)
        at org.apache.atlas.Atlas.main(Atlas.java:133)
Caused by: org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /hbase/hbaseid
        at org.apache.zookeeper.KeeperException.create(KeeperException.java:111)
        at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
        at org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$ZKTask$1.exec(ReadOnlyZKClient.java:168)
        at org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:323)
        at java.lang.Thread.run(Thread.java:748)
2020-05-19 19:40:09,645 INFO  - [pool-1-thread-1:] ~ ==> Shutdown of Atlas (Atlas$1:63)
2020-05-19 19:40:19,646 WARN  - [Thread-0:] ~ ShutdownHook '' timeout, java.util.concurrent.TimeoutException (ShutdownHookManager$1:71)
java.util.concurrent.TimeoutException
        at java.util.concurrent.FutureTask.get(FutureTask.java:205)
        at org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:68)
2020-05-19 19:40:29,647 ERROR - [Thread-0:] ~ ShutdownHookManger shutdown forcefully. (ShutdownHookManager$1:82)
2020-05-19 20:45:14,847 INFO  - [main:] ~ Loading atlas-application.properties from file:/opt/software/apache-atlas-sources-2.0.0/distro/target/apache-atlas-2.0.0-bin/apache-atlas-2.0.0/conf/atlas-application.properties (ApplicationProperties:123)

Checking the running processes shows that HBase is not running:

[root@kafka-dev-01 logs]# jps
4159483 Jps

Checking the HBase service log shows that the ZooKeeper port is already occupied:

[root@kafka-dev-01 logs]# vim /opt/software/apache-atlas-sources-2.0.0/distro/target/apache-atlas-2.0.0-bin/apache-atlas-2.0.0/hbase/logs/hbase-root-master-kafka-dev-01.log

2020-05-19 19:37:36,849 INFO  [main] server.ZooKeeperServer: Server environment:java.compiler=<NA>
2020-05-19 19:37:36,849 INFO  [main] server.ZooKeeperServer: Server environment:os.name=Linux
2020-05-19 19:37:36,849 INFO  [main] server.ZooKeeperServer: Server environment:os.arch=amd64
2020-05-19 19:37:36,849 INFO  [main] server.ZooKeeperServer: Server environment:os.version=5.3.18-3-pve
2020-05-19 19:37:36,849 INFO  [main] server.ZooKeeperServer: Server environment:user.name=root
2020-05-19 19:37:36,849 INFO  [main] server.ZooKeeperServer: Server environment:user.home=/root
2020-05-19 19:37:36,850 INFO  [main] server.ZooKeeperServer: Server environment:user.dir=/opt/software/apache-atlas-sources-2.0.0/distro/target/apache-atlas-2.0.0-bin/apache-atlas-2.0.0/server/webapp/atlas
2020-05-19 19:37:36,861 INFO  [main] server.ZooKeeperServer: Created server with tickTime 2000 minSessionTimeout 4000 maxSessionTimeout 40000 datadir /opt/software/apache-atlas-sources-2.0.0/distro/target/apache-atlas-2.0.0-bin/apache-atlas-2.0.0/data/hbase-zookeeper-data/zookeeper_0/version-2 snapdir /opt/software/apache-atlas-sources-2.0.0/distro/target/apache-atlas-2.0.0-bin/apache-atlas-2.0.0/data/hbase-zookeeper-data/zookeeper_0/version-2
2020-05-19 19:37:36,861 INFO  [main] server.ZooKeeperServer: minSessionTimeout set to -1
2020-05-19 19:37:36,862 INFO  [main] server.ZooKeeperServer: maxSessionTimeout set to -1
2020-05-19 19:37:36,868 INFO  [main] server.NIOServerCnxnFactory: binding to port 0.0.0.0/0.0.0.0:2181
2020-05-19 19:37:36,869 INFO  [main] server.NIOServerCnxnFactory: binding to port 0.0.0.0/0.0.0.0:2182
2020-05-19 19:37:36,912 ERROR [main] server.ZooKeeperServer: ZKShutdownHandler is not registered, so ZooKeeper server won't take any action on ERROR or SHUTDOWN server state changes
2020-05-19 19:37:36,915 INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2182] server.NIOServerCnxnFactory: Accepted socket connection from /127.0.0.1:33280
2020-05-19 19:37:36,921 INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2182] server.ServerCnxn: The list of known four letter word commands is : [{1936881266=srvr, 1937006964=stat, 2003003491=wchc, 1685417328=dump, 1668445044=crst, 1936880500=srst, 1701738089=envi, 1668247142=conf, 2003003507=wchs, 2003003504=wchp, 1668247155=cons, 1835955314=mntr, 1769173615=isro, 1920298859=ruok, 1735683435=gtmk, 1937010027=stmk}]
2020-05-19 19:37:36,921 INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2182] server.ServerCnxn: The list of enabled four letter word commands is : [[wchs, stat, stmk, conf, ruok, mntr, srvr, envi, srst, isro, dump, gtmk, crst, cons]]
2020-05-19 19:37:36,921 INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2182] server.NIOServerCnxn: Processing stat command from /127.0.0.1:33280
2020-05-19 19:37:36,923 INFO  [Thread-2] server.NIOServerCnxn: Stat command output
2020-05-19 19:37:36,923 INFO  [Thread-2] server.NIOServerCnxn: Closed socket connection for client /127.0.0.1:33280 (no session established for client)
2020-05-19 19:37:36,923 INFO  [main] zookeeper.MiniZooKeeperCluster: Started MiniZooKeeperCluster and ran successful 'stat' on client port=2182
2020-05-19 19:37:36,924 ERROR [main] master.HMasterCommandLine: Master exiting
java.io.IOException: Could not start ZK at requested port of 2181.  ZK was started at port: 2182.  Aborting as clients (e.g. shell) will not be able to find this ZK quorum.
        at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:217)
        at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:140)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
        at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:149)
        at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2964)

3. Solution

Find the process occupying port 2181, kill it, and restart the HBase service:

[root@kafka-dev-01 logs]# netstat -nltp|grep 2181
tcp6       0      0 :::2181                 :::*                    LISTEN      4055296/java        
[root@kafka-dev-01 logs]# kill 4055296
[root@kafka-dev-01 logs]# /opt/software/apache-atlas-sources-2.0.0/distro/target/apache-atlas-2.0.0-bin/apache-atlas-2.0.0/hbase/bin/start-hbase.sh
running master, logging to /opt/software/apache-atlas-sources-2.0.0/distro/target/apache-atlas-2.0.0-bin/apache-atlas-2.0.0/hbase/bin/../logs/hbase-root-master-kafka-dev-01.out
[root@kafka-dev-01 bin]#
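
After HBase comes back up, it is worth confirming that the znode the startup failure complained about now exists. A minimal check, assuming the embedded HBase layout used above (the hbase zkcli subcommand wraps ZooKeeper's client shell):

# A non-error reply for /hbase/hbaseid means the embedded ZooKeeper and HBase are healthy again.
cd /opt/software/apache-atlas-sources-2.0.0/distro/target/apache-atlas-2.0.0-bin/apache-atlas-2.0.0
hbase/bin/hbase zkcli get /hbase/hbaseid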

Atlas fails to start (caused by Solr startup failure)

Scenario: the Atlas service is started with Atlas's embedded HBase and Solr.

1. Problem description

The Atlas web UI cannot be reached, even though the startup output reports that Atlas started successfully.

2. Root cause

Checking the Atlas startup log shows that Solr is not working correctly:

[root@kafka-dev-01 logs]# vim /opt/software/apache-atlas-sources-2.0.0/distro/target/apache-atlas-2.0.0-bin/apache-atlas-2.0.0/logs/application.log

2020-05-20 11:39:30,104 INFO  - [main:] ~ GraphTransaction intercept for org.apache.atlas.discovery.EntityLineageService.getAtlasLineageInfo (GraphTransactionAdvisor$1:41)
2020-05-20 11:39:30,106 INFO  - [main:] ~ GraphTransaction intercept for org.apache.atlas.discovery.EntityLineageService.getAtlasLineageInfo (GraphTransactionAdvisor$1:41)
2020-05-20 11:39:30,106 INFO  - [main:] ~ GraphTransaction intercept for org.apache.atlas.discovery.EntityLineageService.getSchemaForHiveTableByName (GraphTransactionAdvisor$1:41)
2020-05-20 11:39:30,107 INFO  - [main:] ~ GraphTransaction intercept for org.apache.atlas.discovery.EntityLineageService.getSchemaForHiveTableByGuid (GraphTransactionAdvisor$1:41)
2020-05-20 11:39:30,122 INFO  - [main:] ~ Creating indexes for graph. (GraphBackedSearchIndexer:248)
2020-05-20 11:39:31,215 INFO  - [main:] ~ Created index : vertex_index (GraphBackedSearchIndexer:253)
2020-05-20 11:39:31,234 INFO  - [main:] ~ Created index : edge_index (GraphBackedSearchIndexer:259)
2020-05-20 11:39:31,240 INFO  - [main:] ~ Created index : fulltext_index (GraphBackedSearchIndexer:265)
2020-05-20 11:39:31,386 ERROR - [main:] ~ GraphBackedSearchIndexer.initialize() failed (GraphBackedSearchIndexer:307)
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at http://10.90.0.94:8983/solr: Can not find the specified config set: vertex_index
        at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:627)
        at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:253)
        at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:242)
        at org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
        at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
        at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1121)
        at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:862)
        at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:793)
        at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:178)
        at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:195)
        at org.janusgraph.diskstorage.solr.Solr6Index.createCollectionIfNotExists(Solr6Index.java:1086)
        at org.janusgraph.diskstorage.solr.Solr6Index.register(Solr6Index.java:307)
        at org.janusgraph.diskstorage.indexing.IndexTransaction.register(IndexTransaction.java:96)
        at org.janusgraph.graphdb.database.IndexSerializer.register(IndexSerializer.java:105)
        at org.janusgraph.graphdb.database.management.ManagementSystem.addIndexKey(ManagementSystem.java:529)
        at org.apache.atlas.repository.graphdb.janus.AtlasJanusGraphManagement.addMixedIndex(AtlasJanusGraphManagement.java:202)
        at org.apache.atlas.repository.graph.GraphBackedSearchIndexer.<init>(GraphBackedSearchIndexer.java:126)
        at org.apache.atlas.repository.graph.GraphBackedSearchIndexer.<init>(GraphBackedSearchIndexer.java:117)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at org.springframework.beans.BeanUtils.instantiateClass(BeanUtils.java:142)
        at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:122)
        at org.springframework.beans.factory.support.ConstructorResolver.autowireConstructor(ConstructorResolver.java:271)
        at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.autowireConstructor(AbstractAutowireCapableBeanFactory.java:1201)
        at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1103)
        at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:513)
        at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:483)
        at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:312)
        at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:230)
        at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:308)
        at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:202)
        at org.springframework.beans.factory.config.DependencyDescriptor.resolveCandidate(DependencyDescriptor.java:208)
        at org.springframework.beans.factory.support.DefaultListableBeanFactory.doResolveDependency(DefaultListableBeanFactory.java:1138)
        at org.springframework.beans.factory.support.DefaultListableBeanFactory.resolveDependency(DefaultListableBeanFactory.java:1066)
        at org.springframework.beans.factory.support.ConstructorResolver.resolveAutowiredArgument(ConstructorResolver.java:835)
        at org.springframework.beans.factory.support.ConstructorResolver.createArgumentArray(ConstructorResolver.java:741)
        at org.springframework.beans.factory.support.ConstructorResolver.autowireConstructor(ConstructorResolver.java:189)
        at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.autowireConstructor(AbstractAutowireCapableBeanFactory.java:1201)
        at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1103)
        at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:513)
        at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:483)
        at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:312)
        at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:230)
        at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:308)
        at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:197)
        at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:761)
        at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:867)
        at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:543)
        at org.springframework.web.context.ContextLoader.configureAndRefreshWebApplicationContext(ContextLoader.java:443)
        at org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:325)
        at org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:107)
        at org.apache.atlas.web.setup.KerberosAwareListener.contextInitialized(KerberosAwareListener.java:31)
        at org.eclipse.jetty.server.handler.ContextHandler.callContextInitialized(ContextHandler.java:843)
        at org.eclipse.jetty.servlet.ServletContextHandler.callContextInitialized(ServletContextHandler.java:533)
        at org.eclipse.jetty.server.handler.ContextHandler.startContext(ContextHandler.java:816)
        at org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:345)
        at org.eclipse.jetty.webapp.WebAppContext.startWebapp(WebAppContext.java:1404)
        at org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1366)
        at org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:778)
        at org.eclipse.jetty.servlet.ServletContextHandler.doStart(ServletContextHandler.java:262)
        at org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:520)
        at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
        at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:131)
        at org.eclipse.jetty.server.Server.start(Server.java:422)
        at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:105)
        at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:61)
        at org.eclipse.jetty.server.Server.doStart(Server.java:389)
        at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
        at org.apache.atlas.web.service.EmbeddedServer.start(EmbeddedServer.java:98)
        at org.apache.atlas.Atlas.main(Atlas.java:133)
2020-05-20 12:00:33,545 INFO  - [pool-1-thread-1:] ~ ==> Shutdown of Atlas (Atlas$1:63)
2020-05-20 12:09:45,335 INFO  - [main:] ~ Loading atlas-application.properties from file:/opt/software/apache-atlas-sources-2.0.0/distro/target/apache-atlas-2.0.0-bin/apache-atlas-2.0.0/conf/atlas-application.properties (ApplicationProperties:123)

3. Solution

Create the missing index collections in Solr; ZooKeeper must already be running when the collections are created:

[root@kafka-dev-01 logs]# cd /opt/software/apache-atlas-sources-2.0.0/distro/target/apache-atlas-2.0.0-bin/apache-atlas-2.0.0/
[root@kafka-dev-01 apache-atlas-2.0.0]# solr/bin/solr create -c vertex_index -force -d conf/solr/
INFO  - 2020-05-20 12:07:24.129; org.apache.solr.util.configuration.SSLCredentialProviderFactory; Processing SSL Credential Provider chain: env;sysprop
Created collection 'vertex_index' with 1 shard(s), 1 replica(s) with config-set 'vertex_index'
[root@kafka-dev-01 apache-atlas-2.0.0]# solr/bin/solr create -c edge_index -force -d conf/solr/
INFO  - 2020-05-20 12:07:44.262; org.apache.solr.util.configuration.SSLCredentialProviderFactory; Processing SSL Credential Provider chain: env;sysprop
Created collection 'edge_index' with 1 shard(s), 1 replica(s) with config-set 'edge_index'
[root@kafka-dev-01 apache-atlas-2.0.0]# solr/bin/solr create -c fulltext_index -force -d conf/solr/
INFO  - 2020-05-20 12:08:01.040; org.apache.solr.util.configuration.SSLCredentialProviderFactory; Processing SSL Credential Provider chain: env;sysprop
Created collection 'fulltext_index' with 1 shard(s), 1 replica(s) with config-set 'fulltext_index'
[root@kafka-dev-01 apache-atlas-2.0.0]# solr/bin/solr start -c -z localhost:2181 -p 8983 -force    
*** [WARN] *** Your open file limit is currently 1024.  
 It should be set to 65000 to avoid operational disruption.
 If you no longer wish to see this warning, set SOLR_ULIMIT_CHECKS to false in your profile or solr.in.sh

Port 8983 is already being used by another process (pid: 4022294)
Please choose a different port using the -p option.

[root@kafka-dev-01 apache-atlas-2.0.0]# jps
4057719 Jps
4022294 jar
4055296 HMaster
[root@kafka-dev-01 apache-atlas-2.0.0]#
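
To double-check that all three collections were registered, the SolrCloud Collections API can be queried over HTTP. A quick sketch, assuming Solr is listening on the default port 8983 as above:

# List all collections known to SolrCloud; the output should include
# vertex_index, edge_index and fulltext_index.
curl "http://localhost:8983/solr/admin/collections?action=LIST"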

Atlas fails to start (missing column family in HBase)

Scenario: Atlas is integrated with CDH and uses the CDH-provided Hive, HBase, and Kafka.

1. Problem description

The Atlas web UI cannot be reached, even though the startup output reports that Atlas started successfully.

2. Root cause

Check the Atlas startup log:

2020-05-20 19:32:34,642 WARN  - [main:] ~ Unexpected exception during getDeployment() (HBaseStoreManager:399)
java.lang.RuntimeException: org.janusgraph.diskstorage.TemporaryBackendException: Temporary failure in storage backend
        at org.janusgraph.diskstorage.hbase2.HBaseStoreManager.getDeployment(HBaseStoreManager.java:358)
        at org.janusgraph.diskstorage.hbase2.HBaseStoreManager.getFeatures(HBaseStoreManager.java:397)
        at org.janusgraph.diskstorage.Backend.getStandaloneGlobalConfiguration(Backend.java:441)
        at org.janusgraph.graphdb.configuration.GraphDatabaseConfiguration.<init>(GraphDatabaseConfiguration.java:1257)
        at org.janusgraph.core.JanusGraphFactory.open(JanusGraphFactory.java:160)
        at org.janusgraph.core.JanusGraphFactory.open(JanusGraphFactory.java:131)
        at org.janusgraph.core.JanusGraphFactory.open(JanusGraphFactory.java:111)
        at org.apache.atlas.repository.graphdb.janus.AtlasJanusGraphDatabase.getGraphInstance(AtlasJanusGraphDatabase.java:165)
        at org.apache.atlas.repository.graphdb.janus.AtlasJanusGraph.<init>(AtlasJanusGraph.java:95)
        at org.apache.atlas.repository.graphdb.janus.AtlasJanusGraphDatabase.getGraphInstance(AtlasJanusGraphDatabase.java:174)
        at org.apache.atlas.repository.graphdb.janus.AtlasJanusGraph.<init>(AtlasJanusGraph.java:95)
        at org.apache.atlas.repository.graphdb.janus.AtlasJanusGraphDatabase.getGraphInstance(AtlasJanusGraphDatabase.java:174)
        at org.apache.atlas.repository.graphdb.janus.AtlasJanusGraph.<init>(AtlasJanusGraph.java:95)
        at org.apache.atlas.repository.graphdb.janus.AtlasJanusGraphDatabase.getGraphInstance(AtlasJanusGraphDatabase.java:174)
        at org.apache.atlas.repository.graphdb.janus.AtlasJanusGraph.<init>(AtlasJanusGraph.java:95)
...
Caused by: org.janusgraph.diskstorage.TemporaryBackendException: Temporary failure in storage backend
        at org.janusgraph.diskstorage.hbase2.HBaseStoreManager.ensureTableExists(HBaseStoreManager.java:732)
        at org.janusgraph.diskstorage.hbase2.HBaseStoreManager.getLocalKeyPartition(HBaseStoreManager.java:518)
        at org.janusgraph.diskstorage.hbase2.HBaseStoreManager.getDeployment(HBaseStoreManager.java:355)
        at org.janusgraph.diskstorage.hbase2.HBaseStoreManager.getFeatures(HBaseStoreManager.java:397)
        at org.janusgraph.diskstorage.common.AbstractStoreManager.getMetaDataSchema(AbstractStoreManager.java:58)
        at org.janusgraph.diskstorage.hbase2.HBaseKeyColumnValueStore.<init>(HBaseKeyColumnValueStore.java:103)
        at org.janusgraph.diskstorage.hbase2.HBaseStoreManager.openDatabase(HBaseStoreManager.java:457)
        at org.janusgraph.diskstorage.keycolumnvalue.KeyColumnValueStoreManager.openDatabase(KeyColumnValueStoreManager.java:43)
        at org.janusgraph.diskstorage.Backend.getStandaloneGlobalConfiguration(Backend.java:452)
        at org.janusgraph.graphdb.configuration.GraphDatabaseConfiguration.<init>(GraphDatabaseConfiguration.java:1257)
        at org.janusgraph.core.JanusGraphFactory.open(JanusGraphFactory.java:160)
        at org.janusgraph.core.JanusGraphFactory.open(JanusGraphFactory.java:131)
        at org.janusgraph.core.JanusGraphFactory.open(JanusGraphFactory.java:111)
        at org.apache.atlas.repository.graphdb.janus.AtlasJanusGraphDatabase.getGraphInstance(AtlasJanusGraphDatabase.java:165)
        ... 1010 more
Caused by: org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException: org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException: Column family table does not exist in region hbase:meta,,1.1588230740 in table 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}, {NAME => 'info', BLOOMFILTER => 'NONE', VERSIONS => '10', IN_MEMORY => 'true', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', COMPRESSION => 'NONE', CACHE_DATA_IN_L1 => 'true', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}
        at org.apache.hadoop.hbase.regionserver.HRegion.checkFamily(HRegion.java:7941)
        at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:6974)
        at org.apache.hadoop.hbase.regionserver.RSRpcServices.get(RSRpcServices.java:2027)
        at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33644)
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2191)
        at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:183)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:163)

        at sun.reflect.GeneratedConstructorAccessor5.newInstance(Unknown Source)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:100)
        at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:90)
        at org.apache.hadoop.hbase.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:279)
        at org.apache.hadoop.hbase.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:266)
        at org.apache.hadoop.hbase.client.RegionServerCallable.call(RegionServerCallable.java:129)
        at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:107)
        at org.apache.hadoop.hbase.client.HTable.get(HTable.java:386)
        at org.apache.hadoop.hbase.client.HTable.get(HTable.java:360)
        at org.apache.hadoop.hbase.MetaTableAccessor.getTableState(MetaTableAccessor.java:1066)
        at org.apache.hadoop.hbase.MetaTableAccessor.tableExists(MetaTableAccessor.java:389)
        at org.apache.hadoop.hbase.client.HBaseAdmin$6.rpcCall(HBaseAdmin.java:439)
        at org.apache.hadoop.hbase.client.HBaseAdmin$6.rpcCall(HBaseAdmin.java:436)
        at org.apache.hadoop.hbase.client.RpcRetryingCallable.call(RpcRetryingCallable.java:58)
        at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:107)
        at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3078)
        at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3070)
        at org.apache.hadoop.hbase.client.HBaseAdmin.tableExists(HBaseAdmin.java:436)
        at org.janusgraph.diskstorage.hbase2.HBaseAdmin2_0.tableExists(HBaseAdmin2_0.java:110)
        at org.janusgraph.diskstorage.hbase2.HBaseStoreManager.ensureTableExists(HBaseStoreManager.java:709)
        at org.janusgraph.diskstorage.hbase2.HBaseStoreManager.getLocalKeyPartition(HBaseStoreManager.java:518)
        at org.janusgraph.diskstorage.hbase2.HBaseStoreManager.getDeployment(HBaseStoreManager.java:355)
        at org.janusgraph.diskstorage.hbase2.HBaseStoreManager.getFeatures(HBaseStoreManager.java:397)
        at org.janusgraph.diskstorage.common.AbstractStoreManager.getMetaDataSchema(AbstractStoreManager.java:58)
        at org.janusgraph.diskstorage.hbase2.HBaseKeyColumnValueStore.<init>(HBaseKeyColumnValueStore.java:103)
        at org.janusgraph.diskstorage.hbase2.HBaseStoreManager.openDatabase(HBaseStoreManager.java:457)
        at org.janusgraph.diskstorage.keycolumnvalue.KeyColumnValueStoreManager.openDatabase(KeyColumnValueStoreManager.java:43)
        at org.janusgraph.diskstorage.Backend.getStandaloneGlobalConfiguration(Backend.java:452)
        at org.janusgraph.graphdb.configuration.GraphDatabaseConfiguration.<init>(GraphDatabaseConfiguration.java:1257)
        at org.janusgraph.core.JanusGraphFactory.open(JanusGraphFactory.java:160)
        at org.janusgraph.core.JanusGraphFactory.open(JanusGraphFactory.java:131)
        at org.janusgraph.core.JanusGraphFactory.open(JanusGraphFactory.java:111)
...
2020-05-20 19:32:34,852 WARN  - [main:] ~ JanusGraphException: Could not open global configuration (AtlasJanusGraphDatabase:167)
2020-05-20 19:32:34,853 WARN  - [main:] ~ Failed to obtain graph instance on attempt 3 of 3 (AtlasGraphProvider:118)
java.lang.IllegalArgumentException: Could not instantiate implementation: org.janusgraph.diskstorage.hbase2.HBaseStoreManager
        at org.janusgraph.util.system.ConfigurationUtil.instantiate(ConfigurationUtil.java:64)
        at org.janusgraph.diskstorage.Backend.getImplementationClass(Backend.java:476)
        at org.janusgraph.diskstorage.Backend.getStorageManager(Backend.java:408)
        at org.janusgraph.graphdb.configuration.GraphDatabaseConfiguration.<init>(GraphDatabaseConfiguration.java:1254)
        at org.janusgraph.core.JanusGraphFactory.open(JanusGraphFactory.java:160)
        at org.janusgraph.core.JanusGraphFactory.open(JanusGraphFactory.java:131)
        at org.janusgraph.core.JanusGraphFactory.open(JanusGraphFactory.java:111)

Checking the dependency versions in the Atlas POM shows that Atlas 2.0.0 builds against component versions that are newer than those bundled with CDH 5.16:

[root@kafka-dev-01 apache-atlas-2.0.0]# vim pom.xml
...
        <janus.version>0.3.1</janus.version>
        <hadoop.version>3.1.1</hadoop.version>
        <hbase.version>2.0.2</hbase.version>
        <solr.version>7.5.0</solr.version>
        <hive.version>3.1.0</hive.version>
        <kafka.version>2.0.0</kafka.version>
        <kafka.scala.binary.version>2.11</kafka.scala.binary.version>
        <calcite.version>1.16.0</calcite.version>
        <zookeeper.version>3.4.6</zookeeper.version>
        <falcon.version>0.8</falcon.version>
        <sqoop.version>1.4.6.2.3.99.0-195</sqoop.version>
        <storm.version>1.2.0</storm.version>
        <curator.version>4.0.1</curator.version>
        <elasticsearch.version>5.6.4</elasticsearch.version>
...
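
For comparison, the versions actually shipped with the cluster can be printed on any CDH node. A quick sketch using the standard version subcommands of the component CLIs:

# Print the versions bundled with the CDH parcel, to compare against the POM above.
hadoop version
hbase version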

3. Solution

Downgrade Atlas: Atlas 1.2.0 works with CDH 5.16.
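
A minimal build sketch for the downgraded version, assuming the apache-atlas-sources-1.2.0 source tarball and the standard Atlas distribution profile (adjust Maven options as needed):

cd /root/apache-atlas-sources-1.2.0
# Build the binary distribution; -DskipTests shortens the build considerably.
mvn clean -DskipTests package -Pdist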

Atlas import-hive.sh errors out

1. Problem description

[root@cdh-cluster-01 bin]# ./import-hive.sh
Using Hive configuration directory [/opt/cloudera/parcels/CDH-5.16.2-1.cdh5.16.2.p0.8/lib/hive/conf]
/opt/cloudera/parcels/CDH-5.16.2-1.cdh5.16.2.p0.8/lib/hive/conf:/opt/cloudera/parcels/CDH-5.16.2-1.cdh5.16.2.p0.8/lib/hadoop/libexec/../../hadoop/lib/*:/opt/cloudera/parcels/CDH-5.16.2-1.cdh5.16.2.p0.8/lib/hadoop/libexec/../../hadoop/.//*:/opt/cloudera/parcels/CDH-5.16.2-1.cdh5.16.2.p0.8/lib/hadoop/libexec/../../hadoop-hdfs/./:/opt/cloudera/parcels/CDH-5.16.2-1.cdh5.16.2.p0.8/lib/hadoop/libexec/../../hadoop-hdfs/lib/*:/opt/cloudera/parcels/CDH-5.16.2-1.cdh5.16.2.p0.8/lib/hadoop/libexec/../../hadoop-hdfs/.//*:/opt/cloudera/parcels/CDH-5.16.2-1.cdh5.16.2.p0.8/lib/hadoop/libexec/../../hadoop-yarn/lib/*:/opt/cloudera/parcels/CDH-5.16.2-1.cdh5.16.2.p0.8/lib/hadoop/libexec/../../hadoop-yarn/.//*:/opt/cloudera/parcels/CDH/lib/hadoop-mapreduce/lib/*:/opt/cloudera/parcels/CDH/lib/hadoop-mapreduce/.//*
Log file for import is /opt/software/apache-atlas-1.2.0/logs/import-hive.log
log4j:WARN No such property [maxFileSize] in org.apache.log4j.PatternLayout.
log4j:WARN No such property [maxBackupIndex] in org.apache.log4j.PatternLayout.
Enter username for atlas :- admin
Enter password for atlas :-
Exception in thread "main" java.lang.NoClassDefFoundError: com/fasterxml/jackson/jaxrs/json/JacksonJaxbJsonProvider
    at org.apache.atlas.AtlasBaseClient.getClient(AtlasBaseClient.java:270)
    at org.apache.atlas.AtlasBaseClient.initializeState(AtlasBaseClient.java:453)
    at org.apache.atlas.AtlasBaseClient.initializeState(AtlasBaseClient.java:448)
    at org.apache.atlas.AtlasBaseClient.<init>(AtlasBaseClient.java:132)
    at org.apache.atlas.AtlasClientV2.<init>(AtlasClientV2.java:82)
    at org.apache.atlas.hive.bridge.HiveMetaStoreBridge.main(HiveMetaStoreBridge.java:131)
Caused by: java.lang.ClassNotFoundException: com.fasterxml.jackson.jaxrs.json.JacksonJaxbJsonProvider
    at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:338)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
    ... 6 more
Failed to import Hive Meta Data!!!
[root@cdh-cluster-01 bin]#

2. Root cause

Inspecting the source of the corresponding Atlas version shows that the hive and hbase bridge modules depend on the stock Apache hbase-client. If the source is compiled as-is, metadata cannot be synchronized from the CDH HBase, and the modules are also missing some jars.

3. Solution

Modify the source before compiling.

# The edits to these two scripts are optional; the same effect can be achieved after compilation by other means
[root@kafka-dev-01 bin]# vim /root/apache-atlas-sources-1.2.0/addons/hive-bridge/src/bin/import-hive.sh

# Find this line in the script and add -Datlas.conf
"${JAVA_BIN}" ${JAVA_PROPERTIES} -Datlas.conf=/etc/hive/conf -cp "${CP}" org.apache.atlas.hive.bridge.HiveMetaStoreBridge $IMPORT_ARGS

[root@kafka-dev-01 bin]# vim /root/apache-atlas-sources-1.2.0/addons/hbase-bridge/src/bin/import-hbase.sh

# Find this line in the script and add -Datlas.conf
"${JAVA_BIN}" ${JAVA_PROPERTIES} -Datlas.conf=/etc/hbase/conf -cp "${CP}" org.apache.atlas.hbase.bridge.HBaseBridge $IMPORT_ARGS

Add the missing dependencies:

[root@kafka-dev-01 hive-bridge]# vim /root/apache-atlas-sources-1.2.0/addons/hive-bridge/pom.xml

<!-- Add these three dependencies; without them you will see the error shown in Example 1. Alternatively, download the corresponding jars from a repository and place them in /opt/software/apache-atlas-1.2.0/hook/hive/atlas-hive-plugin-impl -->
        <dependency>
            <groupId>com.fasterxml.jackson.jaxrs</groupId>
            <artifactId>jackson-jaxrs-json-provider</artifactId>
            <version>${jackson.version}</version>
        </dependency>

        <dependency>
            <groupId>com.fasterxml.jackson.module</groupId>
            <artifactId>jackson-module-jaxb-annotations</artifactId>
            <version>${jackson.version}</version>
        </dependency>

        <dependency>
            <groupId>com.fasterxml.jackson.jaxrs</groupId>
            <artifactId>jackson-jaxrs-base</artifactId>
            <version>${jackson.version}</version>
        </dependency>

<!-- Change the stock dependency versions to the CDH ones; without this you will see the error shown in Example 2. The Cloudera repository must also be declared explicitly, because the Aliyun mirror does not host these artifacts; without it you will see the error shown in Example 3 -->
    <properties>
        <hbase.version>1.2.0-cdh5.16.2</hbase.version>
        <calcite.version>0.9.2-incubating</calcite.version>
    </properties>
    <repositories>
        <repository>
          <id>cloudera</id>
          <url>https://repository.cloudera.com/artifactory/cloudera-repos/</url>
        </repository>
    </repositories>

Update the HBase bridge API calls.
The HBase bridge is written against the stock hbase-client jar, but CDH rewrote some of its methods in the jars it ships, so the calls must be adjusted: in HBaseAtlasHook.java and HBaseBridge.java, change getKeepDeletedCells to getKeepDeletedCellsAsEnum(). The files are located as follows, and a substitution sketch follows the listing. (Example 4 shows another compile failure caused by the mismatch between the bridge source and the CDH HBase API.)

[root@kafka-dev-01 apache-atlas-sources-1.2.0]# cd addons/hbase-bridge/src/main/java/org/apache/atlas/hbase/bridge/
[root@kafka-dev-01 bridge]# ll
total 64
-rw-r--r-- 1 root root 32542 Jun  3  2019 HBaseAtlasHook.java
-rw-r--r-- 1 root root 31320 Jun  3  2019 HBaseBridge.java
[root@kafka-dev-01 bridge]#
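
The rename can be applied by hand or with a quick in-place substitution. A sketch, assuming the method is always invoked without arguments (review the resulting diff before building):

# Replace the stock accessor with the enum variant expected by the CDH jars.
sed -i 's/getKeepDeletedCells()/getKeepDeletedCellsAsEnum()/g' HBaseAtlasHook.java HBaseBridge.java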

Example 1:

Exception in thread "main" java.lang.NoClassDefFoundError: com/fasterxml/jackson/jaxrs/json/JacksonJaxbJsonProvider
 at org.apache.atlas.AtlasBaseClient.getClient(AtlasBaseClient.java:253)
 at org.apache.atlas.AtlasBaseClient.initializeState(AtlasBaseClient.java:425)
 at org.apache.atlas.AtlasBaseClient.initializeState(AtlasBaseClient.java:420)
 at org.apache.atlas.AtlasBaseClient.<init>(AtlasBaseClient.java:115)
 at org.apache.atlas.AtlasClientV2.<init>(AtlasClientV2.java:77)
 at org.apache.atlas.hive.bridge.HiveMetaStoreBridge.main(HiveMetaStoreBridge.java:131)
Caused by: java.lang.ClassNotFoundException: com.fasterxml.jackson.jaxrs.json.JacksonJaxbJsonProvider
 at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
 at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
 ... 6 more

Example 2:

2020-05-21 11:46:02,586 INFO  - [main:] ~ checking HBase availability.. (HBaseBridge:204)
2020-05-21 11:46:02,932 ERROR - [main:] ~ ImportHBaseEntities failed (HBaseBridge:188)
java.io.IOException: java.lang.reflect.InvocationTargetException
        at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:240)
        at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:218)
        at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:119)
        at org.apache.hadoop.hbase.client.HBaseAdmin.checkHBaseAvailable(HBaseAdmin.java:3172)
        at org.apache.atlas.hbase.bridge.HBaseBridge.<init>(HBaseBridge.java:206)
        at org.apache.atlas.hbase.bridge.HBaseBridge.main(HBaseBridge.java:148)
Caused by: java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:238)
        ... 5 more
Caused by: java.lang.NoSuchFieldError: HBASE_CLIENT_PREFETCH
        at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.<init>(ConnectionManager.java:721)
        at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.<init>(ConnectionManager.java:654)
        ... 10 more

Example 3:

Downloading from repository.jboss.org-public: https://repository.jboss.org/nexus/content/groups/public/org/apache/hbase/hbase-hadoop2-compat/1.2.0-cdh5.16.2/hbase-hadoop2-compat-1.2.0-cdh5.16.2-tests.jar
Downloading from repository.jboss.org-public: https://repository.jboss.org/nexus/content/groups/public/org/apache/hbase/hbase-common/1.2.0-cdh5.16.2/hbase-common-1.2.0-cdh5.16.2.jar
Downloading from repository.jboss.org-public: https://repository.jboss.org/nexus/content/groups/public/org/apache/hbase/hbase-hadoop-compat/1.2.0-cdh5.16.2/hbase-hadoop-compat-1.2.0-cdh5.16.2-tests.jar
Downloading from typesafe: https://repo.typesafe.com/typesafe/releases/org/apache/hbase/hbase-server/1.2.0-cdh5.16.2/hbase-server-1.2.0-cdh5.16.2.jar
Downloading from typesafe: https://repo.typesafe.com/typesafe/releases/org/apache/hbase/hbase-server/1.2.0-cdh5.16.2/hbase-server-1.2.0-cdh5.16.2-tests.jar
Downloading from typesafe: https://repo.typesafe.com/typesafe/releases/org/apache/hbase/hbase-client/1.2.0-cdh5.16.2/hbase-client-1.2.0-cdh5.16.2.jar
Downloading from typesafe: https://repo.typesafe.com/typesafe/releases/org/apache/hbase/hbase-hadoop2-compat/1.2.0-cdh5.16.2/hbase-hadoop2-compat-1.2.0-cdh5.16.2-tests.jar
Downloading from typesafe: https://repo.typesafe.com/typesafe/releases/org/apache/hbase/hbase-common/1.2.0-cdh5.16.2/hbase-common-1.2.0-cdh5.16.2.jar
Downloading from typesafe: https://repo.typesafe.com/typesafe/releases/org/apache/hbase/hbase-hadoop-compat/1.2.0-cdh5.16.2/hbase-hadoop-compat-1.2.0-cdh5.16.2-tests.jar
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] Apache Atlas Server Build Tools 1.0 ................ SUCCESS [  0.442 s]
[INFO] apache-atlas 1.2.0 ................................. SUCCESS [  2.714 s]
[INFO] Apache Atlas Test Utility Tools .................... SUCCESS [  4.101 s]
[INFO] Apache Atlas Integration ........................... SUCCESS [  3.813 s]
[INFO] Apache Atlas Common ................................ SUCCESS [  1.035 s]
[INFO] Apache Atlas Client ................................ SUCCESS [  0.107 s]
[INFO] atlas-client-common ................................ SUCCESS [  0.533 s]
[INFO] atlas-client-v1 .................................... SUCCESS [  0.705 s]
[INFO] Apache Atlas Server API ............................ SUCCESS [  0.838 s]
[INFO] Apache Atlas Notification .......................... SUCCESS [  1.486 s]
[INFO] atlas-client-v2 .................................... SUCCESS [  0.526 s]
[INFO] Apache Atlas Graph Database Projects ............... SUCCESS [  0.065 s]
[INFO] Apache Atlas Graph Database API .................... SUCCESS [  0.599 s]
[INFO] Graph Database Common Code ......................... SUCCESS [  0.630 s]
[INFO] Apache Atlas JanusGraph DB Impl .................... SUCCESS [ 46.304 s]
[INFO] Apache Atlas Graph Database Implementation Dependencies SUCCESS [  1.849 s]
[INFO] Shaded version of Apache hbase client .............. SUCCESS [  5.264 s]
[INFO] Shaded version of Apache hbase server .............. SUCCESS [ 13.347 s]
[INFO] Apache Atlas Authorization ......................... SUCCESS [  0.826 s]
[INFO] Apache Atlas Repository ............................ SUCCESS [  7.587 s]
[INFO] Apache Atlas UI .................................... SUCCESS [ 54.525 s]
[INFO] Apache Atlas Web Application ....................... SUCCESS [ 58.020 s]
[INFO] Apache Atlas Documentation ......................... SUCCESS [  7.783 s]
[INFO] Apache Atlas FileSystem Model ...................... SUCCESS [  1.850 s]
[INFO] Apache Atlas Plugin Classloader .................... SUCCESS [  0.480 s]
[INFO] Apache Atlas Hive Bridge Shim ...................... SUCCESS [  0.727 s]
[INFO] Apache Atlas Hive Bridge ........................... SUCCESS [  2.478 s]
[INFO] Apache Atlas Falcon Bridge Shim .................... SUCCESS [  0.760 s]
[INFO] Apache Atlas Falcon Bridge ......................... SUCCESS [  0.931 s]
[INFO] Apache Atlas Sqoop Bridge Shim ..................... SUCCESS [  0.113 s]
[INFO] Apache Atlas Sqoop Bridge .......................... SUCCESS [  1.350 s]
[INFO] Apache Atlas Storm Bridge Shim ..................... SUCCESS [ 22.128 s]
[INFO] Apache Atlas Storm Bridge .......................... SUCCESS [  1.684 s]
[INFO] Apache Atlas Hbase Bridge Shim ..................... SUCCESS [  1.297 s]
[INFO] Apache Atlas Hbase Bridge .......................... FAILURE [02:19 min]
[INFO] Apache Atlas Kafka Bridge .......................... SKIPPED
[INFO] Apache Atlas Distribution 1.2.0 .................... SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 06:27 min
[INFO] Finished at: 2020-05-21T00:41:44+08:00
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal on project hbase-bridge: Could not resolve dependencies for project org.apache.atlas:hbase-bridge:jar:1.2.0: The following artifacts could not be resolved: org.apache.hbase:hbase-server:jar:1.2.0-cdh5.16.2, org.apache.hbase:hbase-server:jar:tests:1.2.0-cdh5.16.2, org.apache.hbase:hbase-client:jar:1.2.0-cdh5.16.2, org.apache.hbase:hbase-common:jar:1.2.0-cdh5.16.2, org.apache.hbase:hbase-hadoop2-compat:jar:tests:1.2.0-cdh5.16.2, org.apache.hbase:hbase-hadoop-compat:jar:tests:1.2.0-cdh5.16.2: Could not find artifact org.apache.hbase:hbase-server:jar:1.2.0-cdh5.16.2 in alimaven (http://maven.aliyun.com/nexus/content/repositories/central/) -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/DependencyResolutionException
[ERROR]
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hbase-bridge
[root@kafka-dev-01 apache-atlas-sources-1.2.0]#

Example 4:

[INFO] --- maven-compiler-plugin:3.7.0:compile (default-compile) @ hbase-bridge ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 6 source files to /root/apache-atlas-sources-1.2.0/addons/hbase-bridge/target/classes
[INFO] /root/apache-atlas-sources-1.2.0/addons/hbase-bridge/src/main/java/org/apache/atlas/hbase/hook/HBaseAtlasCoprocessorBase.java: Some input files use or override a deprecated API.
[INFO] /root/apache-atlas-sources-1.2.0/addons/hbase-bridge/src/main/java/org/apache/atlas/hbase/hook/HBaseAtlasCoprocessorBase.java: Recompile with -Xlint:deprecation for details.
[INFO] -------------------------------------------------------------
[ERROR] COMPILATION ERROR :
[INFO] -------------------------------------------------------------
[ERROR] /root/apache-atlas-sources-1.2.0/addons/hbase-bridge/src/main/java/org/apache/atlas/hbase/hook/HBaseAtlasCoprocessor.java:[37,8] org.apache.atlas.hbase.hook.HBaseAtlasCoprocessor is not abstract and does not override abstract method postBalanceRSGroup(org.apache.hadoop.hbase.coprocessor.ObserverContext<org.apache.hadoop.hbase.coprocessor.MasterCoprocessorEnvironment>,java.lang.String,boolean) in org.apache.hadoop.hbase.coprocessor.MasterObserver
[INFO] 1 error
[INFO] -------------------------------------------------------------
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] Apache Atlas Server Build Tools 1.0 ................ SUCCESS [  0.372 s]
[INFO] apache-atlas 1.2.0 ................................. SUCCESS [  2.329 s]
[INFO] Apache Atlas Test Utility Tools .................... SUCCESS [  3.784 s]
[INFO] Apache Atlas Integration ........................... SUCCESS [  4.000 s]
[INFO] Apache Atlas Common ................................ SUCCESS [  1.143 s]
[INFO] Apache Atlas Client ................................ SUCCESS [  0.099 s]
[INFO] atlas-client-common ................................ SUCCESS [  0.474 s]
[INFO] atlas-client-v1 .................................... SUCCESS [  0.621 s]
[INFO] Apache Atlas Server API ............................ SUCCESS [  0.782 s]
[INFO] Apache Atlas Notification .......................... SUCCESS [  1.510 s]
[INFO] atlas-client-v2 .................................... SUCCESS [  0.534 s]
[INFO] Apache Atlas Graph Database Projects ............... SUCCESS [  0.081 s]
[INFO] Apache Atlas Graph Database API .................... SUCCESS [  0.596 s]
[INFO] Graph Database Common Code ......................... SUCCESS [  0.600 s]
[INFO] Apache Atlas JanusGraph DB Impl .................... SUCCESS [  4.628 s]
[INFO] Apache Atlas Graph Database Implementation Dependencies SUCCESS [  1.751 s]
[INFO] Shaded version of Apache hbase client .............. SUCCESS [  5.282 s]
[INFO] Shaded version of Apache hbase server .............. SUCCESS [ 13.297 s]
[INFO] Apache Atlas Authorization ......................... SUCCESS [  0.804 s]
[INFO] Apache Atlas Repository ............................ SUCCESS [  7.324 s]
[INFO] Apache Atlas UI .................................... SUCCESS [ 47.830 s]
[INFO] Apache Atlas Web Application ....................... SUCCESS [ 35.681 s]
[INFO] Apache Atlas Documentation ......................... SUCCESS [  3.309 s]
[INFO] Apache Atlas FileSystem Model ...................... SUCCESS [  1.833 s]
[INFO] Apache Atlas Plugin Classloader .................... SUCCESS [  0.406 s]
[INFO] Apache Atlas Hive Bridge Shim ...................... SUCCESS [  0.698 s]
[INFO] Apache Atlas Hive Bridge ........................... SUCCESS [  2.365 s]
[INFO] Apache Atlas Falcon Bridge Shim .................... SUCCESS [  0.817 s]
[INFO] Apache Atlas Falcon Bridge ......................... SUCCESS [  1.007 s]
[INFO] Apache Atlas Sqoop Bridge Shim ..................... SUCCESS [  0.104 s]
[INFO] Apache Atlas Sqoop Bridge .......................... SUCCESS [  1.232 s]
[INFO] Apache Atlas Storm Bridge Shim ..................... SUCCESS [  0.387 s]
[INFO] Apache Atlas Storm Bridge .......................... SUCCESS [  1.605 s]
[INFO] Apache Atlas Hbase Bridge Shim ..................... SUCCESS [  1.295 s]
[INFO] Apache Atlas Hbase Bridge .......................... FAILURE [  4.167 s]
[INFO] Apache Atlas Kafka Bridge .......................... SKIPPED
[INFO] Apache Atlas Distribution 1.2.0 .................... SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:33 min
[INFO] Finished at: 2020-05-21T12:42:57+08:00
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.7.0:compile (default-compile) on project hbase-bridge: Compilation failure
[ERROR] /root/apache-atlas-sources-1.2.0/addons/hbase-bridge/src/main/java/org/apache/atlas/hbase/hook/HBaseAtlasCoprocessor.java:[37,8] org.apache.atlas.hbase.hook.HBaseAtlasCoprocessor is not abstract and does not override abstract method postBalanceRSGroup(org.apache.hadoop.hbase.coprocessor.ObserverContext<org.apache.hadoop.hbase.coprocessor.MasterCoprocessorEnvironment>,java.lang.String,boolean) in org.apache.hadoop.hbase.coprocessor.MasterObserver
[ERROR]
[ERROR] -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR]
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hbase-bridge
[root@kafka-dev-01 apache-atlas-sources-1.2.0]#