WSO2 BAM mediation stats DB (Hive does not create tables in the DB)

laximzn5 posted on 2021-06-03 in Hadoop

Hi everyone. I am configuring WSO2 ESB 4.8.1 and WSO2 BAM 2.4.0 to view statistics. I am sending data from the ESB to BAM, where it is stored in Cassandra. The problem is that when I deploy the Mediation Statistics Monitoring toolbox in BAM, I see the following errors in the console:

[2014-02-07 13:45:21,638] ERROR {org.apache.hadoop.hive.ql.exec.Task} -  /var/www/formascloud/apps/wso2bam-2.4.0/repository/logs//wso2carbon.log
[2014-02-07 13:45:21,638] ERROR {org.apache.hadoop.hive.ql.exec.ExecDriver} -  Execution failed with exit status: 2
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.MapRedTask
[2014-02-07 13:45:21,639] ERROR {org.apache.hadoop.hive.ql.Driver} -  FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.MapRedTask
[2014-02-07 13:45:21,640] ERROR {org.wso2.carbon.analytics.hive.impl.HiveExecutorServiceImpl} -  Error while executing Hive script.
Query returned non-zero code: 9, cause: FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.MapRedTask
java.sql.SQLException: Query returned non-zero code: 9, cause: FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.MapRedTask
        at org.apache.hadoop.hive.jdbc.HiveStatement.executeQuery(HiveStatement.java:189)
        at org.wso2.carbon.analytics.hive.impl.HiveExecutorServiceImpl$ScriptCallable.executeHiveQuery(HiveExecutorServiceImpl.java:569)
        at org.wso2.carbon.analytics.hive.impl.HiveExecutorServiceImpl$ScriptCallable.call(HiveExecutorServiceImpl.java:282)
        at org.wso2.carbon.analytics.hive.impl.HiveExecutorServiceImpl$ScriptCallable.call(HiveExecutorServiceImpl.java:189)
        at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
        at java.util.concurrent.FutureTask.run(FutureTask.java:166)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:722)
[2014-02-07 13:45:21,641] ERROR {org.wso2.carbon.analytics.hive.task.HiveScriptExecutorTask} -  Error while executing script : esb_stats_296
org.wso2.carbon.analytics.hive.exception.HiveExecutionException: Error while executing Hive script.Query returned non-zero code: 9, cause: FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.MapRedTask
        at org.wso2.carbon.analytics.hive.impl.HiveExecutorServiceImpl.execute(HiveExecutorServiceImpl.java:115)
        at org.wso2.carbon.analytics.hive.task.HiveScriptExecutorTask.execute(HiveScriptExecutorTask.java:60)
        at org.wso2.carbon.ntask.core.impl.TaskQuartzJobAdapter.execute(TaskQuartzJobAdapter.java:67)
        at org.quartz.core.JobRunShell.run(JobRunShell.java:213)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
        at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
        at java.util.concurrent.FutureTask.run(FutureTask.java:166)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:722)

[2014-02-07 13:45:21,443] ERROR {org.wso2.carbon.hadoop.hive.jdbc.storage.db.DBOperation} -  Failed to get total row count
org.postgresql.util.PSQLException: ERROR: relation "mediation_stats_summary_per_minute" does not exist
  Position: 38
        at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2103)
        at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1836)
        at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:257)
        at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:512)
        at org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:388)
        at org.postgresql.jdbc2.AbstractJdbc2Statement.executeQuery(AbstractJdbc2Statement.java:273)
        at org.wso2.carbon.hadoop.hive.jdbc.storage.db.DBOperation.getTotalCount(DBOperation.java:335)
        at org.wso2.carbon.hadoop.hive.jdbc.storage.input.JDBCSplit.getSplits(JDBCSplit.java:113)
        at org.wso2.carbon.hadoop.hive.jdbc.storage.input.JDBCDataInputFormat.getSplits(JDBCDataInputFormat.java:41)
        at org.apache.hadoop.hive.ql.io.HiveInputFormat.getSplits(HiveInputFormat.java:302)
        at org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getSplits(CombineHiveInputFormat.java:292)
        at org.apache.hadoop.mapred.JobClient.writeOldSplits(JobClient.java:933)
        at org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:925)
        at org.apache.hadoop.mapred.JobClient.access$500(JobClient.java:170)
        at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:839)
        at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:792)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1123)
        at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:792)
        at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:766)
        at org.apache.hadoop.hive.ql.exec.ExecDriver.execute(ExecDriver.java:460)
        at org.apache.hadoop.hive.ql.exec.ExecDriver.main(ExecDriver.java:733)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:601)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
java.lang.NullPointerException
        at org.wso2.carbon.hadoop.hive.jdbc.storage.db.DBOperation.getTotalCount(DBOperation.java:344)
        at org.wso2.carbon.hadoop.hive.jdbc.storage.input.JDBCSplit.getSplits(JDBCSplit.java:113)
        at org.wso2.carbon.hadoop.hive.jdbc.storage.input.JDBCDataInputFormat.getSplits(JDBCDataInputFormat.java:41)
        at org.apache.hadoop.hive.ql.io.HiveInputFormat.getSplits(HiveInputFormat.java:302)
        at org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getSplits(CombineHiveInputFormat.java:292)
        at org.apache.hadoop.mapred.JobClient.writeOldSplits(JobClient.java:933)
        at org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:925)
        at org.apache.hadoop.mapred.JobClient.access$500(JobClient.java:170)
        at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:839)
        at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:792)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1123)
        at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:792)
        at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:766)
        at org.apache.hadoop.hive.ql.exec.ExecDriver.execute(ExecDriver.java:460)
        at org.apache.hadoop.hive.ql.exec.ExecDriver.main(ExecDriver.java:733)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:601)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:156)

As you can see, I am using PostgreSQL. I created a database called BAM_STATS_DB, and this is the configuration stored in master-datasources.xml:

<datasource>
    <name>WSO2BAM_DATASOURCE</name>
    <description>The datasource used for analyzer data</description>
    <definition type="RDBMS">
        <configuration>
            <url>jdbc:postgresql://localhost:5432/BAM_STATS_DB</url>
            <username>gregadmin</username>
            <password>gregadmin</password>
            <driverClassName>org.postgresql.Driver</driverClassName>
            <maxActive>80</maxActive>
            <maxWait>60000</maxWait>
            <minIdle>5</minIdle>
            <testOnBorrow>true</testOnBorrow>
            <validationQuery>select version();</validationQuery>
            <validationInterval>30000</validationInterval>
        </configuration>
    </definition>
</datasource>

What am I doing wrong? I am following the configuration described in the WSO2 BAM documentation.


9fkzdhlc (Answer 1)

It looks like the BAM toolbox does not support PostgreSQL databases out of the box.
A BAM toolbox contains the following items:
Stream definitions
Analytics
Dashboard components
The analytics are Hive scripts, and they depend on the underlying database used for WSO2BAM_DATASOURCE.
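To make that dependency concrete, here is a minimal sketch of the pattern these analytics scripts follow: a Hive table is declared with the BAM JDBC storage handler, and the DDL for the RDBMS-side summary table is embedded as a table property, so it has to be written in the SQL dialect of the database behind WSO2BAM_DATASOURCE. The handler class is inferred from the package names in the stack trace above, and the column list and property names are illustrative only, so check them against the actual script you unpack.

-- Illustrative sketch, not the exact script shipped with BAM 2.4.0.
-- The value of 'hive.jdbc.table.create.query' is executed against the
-- RDBMS configured as WSO2BAM_DATASOURCE, so it must be valid SQL for
-- that database (PostgreSQL here; the default toolbox is MySQL-flavoured).
CREATE EXTERNAL TABLE IF NOT EXISTS MediationStatsPerMinute (
    med_time STRING,
    host STRING,
    request_count INT,
    fault_count INT
)
STORED BY 'org.wso2.carbon.hadoop.hive.jdbc.storage.JDBCStorageHandler'
TBLPROPERTIES (
    'wso2.carbon.datasource.name' = 'WSO2BAM_DATASOURCE',
    'hive.jdbc.table.create.query' = 'CREATE TABLE MEDIATION_STATS_SUMMARY_PER_MINUTE (med_time VARCHAR(30), host VARCHAR(100), request_count INT, fault_count INT)'
);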
If you look at wso2bam-2.4.0/samples/toolboxes, you will see several toolboxes, depending on the database.
For example, the Mediation Statistics Monitoring toolbox has three different .tbox files:
Mediation_Statistics_Monitoring_mssql.tbox
Mediation_Statistics_Monitoring_oracle.tbox
Mediation_Statistics_Monitoring.tbox
If you install the toolbox through the management console UI, the default toolbox is installed; in this case that is Mediation_Statistics_Monitoring.tbox.
This default toolbox works without any problems for a few databases, MySQL for example.
You just need to edit the analytics script so that it works with your PostgreSQL database.
You can unzip the toolbox (Mediation_Statistics_Monitoring.tbox) and edit /Mediation_Statistics_Monitoring/analytics/esb_stats to fix all the "CREATE TABLE" SQL used by the JDBCStorageHandler (along the lines of the sketch above).
Then create the .tbox archive again and copy it to wso2bam-2.4.0/repository/deployment/server/bam-toolbox, making sure to undeploy the existing toolbox first.
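Once the corrected toolbox is deployed and the Hive script has run at least once, a quick sanity check (assuming the default summary table name shown in the error above) is to query the table directly in the BAM_STATS_DB PostgreSQL database:

-- Run against BAM_STATS_DB; the table name is taken from the
-- "relation ... does not exist" error in the question.
SELECT COUNT(*) FROM mediation_stats_summary_per_minute;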
I hope this helps.
