JanusGraph: bulk loading a CSV with ScriptInputFormat

fdbelqdn · asked 2021-06-01 in Hadoop

I am trying to load a CSV file into JanusGraph. As far as I understand, I need to create my graph and schema, then use the BulkLoaderVertexProgram together with my own custom Groovy script that parses the CSV file. Doing that appears to work in the sense that I can see the vertices, but no edges are created.
My configuration looks almost identical to every example I could find for loading CSV files, so there must be something I don't understand or have forgotten.
Is it possible to bulk load edges from a CSV file?
Here is my setup:
I start Cassandra with the default bin/janusgraph.sh script.
My Gremlin commands:

gremlin> :load data/defineNCBIOSchema.groovy
==>true
gremlin> graph = JanusGraphFactory.open('conf/gremlin-server/socket-janusgraph-apr-test.properties')
==>standardjanusgraph[cassandrathrift:[127.0.0.1]]
gremlin> defineNCBIOSchema(graph)
==>null
gremlin> graph.close()
==>null

gremlin> graph = GraphFactory.open('conf/hadoop-graph/apr-test-hadoop-script.properties')
==>hadoopgraph[scriptinputformat->graphsonoutputformat]
gremlin> blvp = BulkLoaderVertexProgram.build().bulkLoader(OneTimeBulkLoader).writeGraph('conf/gremlin-server/socket-janusgraph-apr-test.properties').create(graph)
==>BulkLoaderVertexProgram[bulkLoader=IncrementalBulkLoader, vertexIdProperty=bulkLoader.vertex.id, userSuppliedIds=false, keepOriginalIds=true, batchSize=0]
gremlin> graph.compute(SparkGraphComputer).workers(1).program(blvp).submit().get()
==>result[hadoopgraph[scriptinputformat->graphsonoutputformat],memory[size:0]]
gremlin> graph.close()
==>null

gremlin> graph = GraphFactory.open('conf/hadoop-graph/apr-test-hadoop-load.properties')
==>hadoopgraph[cassandrainputformat->gryooutputformat]
gremlin> g = graph.traversal().withComputer(SparkGraphComputer)
==>graphtraversalsource[hadoopgraph[cassandrainputformat->gryooutputformat], sparkgraphcomputer]
gremlin> g.E() <--- returns nothing
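
(As an aside, a minimal sketch of how the loaded data could also be checked over plain OLTP, bypassing SparkGraphComputer entirely; only standard Gremlin is used, and the comments describe expectations, not actual output:)

gremlin> graph = JanusGraphFactory.open('conf/gremlin-server/socket-janusgraph-apr-test.properties')
gremlin> g = graph.traversal()
gremlin> g.V().count()   // vertices the bulk loader wrote
gremlin> g.E().count()   // edges the bulk loader wrote
gremlin> graph.close()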

My JanusGraph properties (conf/gremlin-server/socket-janusgraph-apr-test.properties):

gremlin.graph=org.janusgraph.core.JanusGraphFactory
storage.backend=cassandrathrift
storage.hostname=127.0.0.1
cache.db-cache = true
cache.db-cache-clean-wait = 20
cache.db-cache-time = 180000
cache.db-cache-size = 0.25
index.search.backend=elasticsearch
index.search.directory=/tmp/searchindex
index.search.elasticsearch.client-only=false
index.search.elasticsearch.local-mode=true
index.search.hostname=127.0.0.1

The graph used for the bulk loader (conf/hadoop-graph/apr-test-hadoop-script.properties):

gremlin.graph=org.apache.tinkerpop.gremlin.hadoop.structure.HadoopGraph
gremlin.hadoop.graphInputFormat=org.apache.tinkerpop.gremlin.hadoop.structure.io.script.ScriptInputFormat
gremlin.hadoop.graphOutputFormat=org.apache.hadoop.mapreduce.lib.output.NullOutputFormat
gremlin.hadoop.jarsInDistributedCache=true

gremlin.hadoop.inputLocation=data/apr-test-doc.csv
gremlin.hadoop.scriptInputFormat.script=data/apr-test-CSVInputScript.groovy
gremlin.hadoop.outputLocation=output

query.fast-property=false

spark.master=local[*]
spark.executor.memory=1g
spark.serializer=org.apache.spark.serializer.KryoSerializer
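
(Before bulk loading, one way to sanity-check the parse script on its own is to traverse this ScriptInputFormat graph directly with SparkGraphComputer; that only reads through the input format and should already show the parsed edges. A sketch using the same properties file, with expected usage rather than actual output:)

gremlin> graph = GraphFactory.open('conf/hadoop-graph/apr-test-hadoop-script.properties')
gremlin> g = graph.traversal().withComputer(SparkGraphComputer)
gremlin> g.V().count()   // vertices produced by the parse script
gremlin> g.E().count()   // edges produced by the parse script
gremlin> graph.close()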

The graph used for reading back (conf/hadoop-graph/apr-test-hadoop-load.properties):

gremlin.graph=org.apache.tinkerpop.gremlin.hadoop.structure.HadoopGraph
gremlin.hadoop.graphInputFormat=org.janusgraph.hadoop.formats.cassandra.CassandraInputFormat
gremlin.hadoop.graphOutputFormat=org.apache.tinkerpop.gremlin.hadoop.structure.io.gryo.GryoOutputFormat

gremlin.hadoop.jarsInDistributedCache=true
gremlin.hadoop.inputLocation=none
gremlin.hadoop.outputLocation=output
janusgraphmr.ioformat.conf.storage.backend=cassandra
janusgraphmr.ioformat.conf.storage.hostname=localhost
janusgraphmr.ioformat.conf.storage.port=9160
janusgraphmr.ioformat.conf.storage.cassandra.keyspace=janusgraph
cassandra.thrift.framed.size_mb=60
cassandra.input.partitioner.class=org.apache.cassandra.dht.Murmur3Partitioner
spark.master=local[*]
spark.serializer=org.apache.spark.serializer.KryoSerializer

My Groovy script (data/apr-test-CSVInputScript.groovy):

class Globals {
    static String[] h = [];
    static int lineNumber = 0;
}

def parse(line, factory) {
    def vertexType = 'Disease'
    def edgeLabel = 'parent'
    def parentsIndex = 2;

    Globals.lineNumber++

    // split into columns, ignoring commas inside double quotes
    def c = line.split(/,(?=(?:[^\"]*\"[^\"]*\")*[^\"]*$)/)

    // if the first column is ClassID this is the header line: remember it and skip the line
    if (c[0] == /ClassID/) {
        Globals.h = c
        return null
    }

    def v1 = graph.addVertex(T.id, c[0], T.label, vertexType)

    for (i = 0; i < c.length; ++i) {
        if (i != parentsIndex) { // Ignore parent
            def f = removeInvalidChar(c[i])
            if (f?.trim()) {
                v1.property(Globals.h[i], f)
            }
        }
    }

    def parents = []    
    if (c.length > parentsIndex) {
        parents = c[parentsIndex].split(/\|/)
    }

    for (i = 0; i < parents.size(); ++i) {
        def v2 = graph.addVertex(T.id, parents[i], T.label, vertexType)
        v1.addInEdge(edgeLabel, v2)             
    }

    return v1
}

def removeInvalidChar(col) {

    def f = col.replaceAll(/^\"|\"$/, "") // Remove quotes
    f = f.replaceAll(/\{/, /(/) // Replace { with (
    f = f.replaceAll(/\}/, /)/) // Replace } with )

    if (f == /label/) {
        f = /label2/
    }

    return f
}
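
(The parse function can also be exercised by hand in the Gremlin console against a StarGraph, which is what ScriptInputFormat binds as graph for each input line according to the TinkerPop Hadoop-Gremlin docs. A rough sketch; it assumes the console resolves the script's free graph reference from the shell binding, the same way the script engine does:)

gremlin> graph = org.apache.tinkerpop.gremlin.structure.util.star.StarGraph.open()
gremlin> :load data/apr-test-CSVInputScript.groovy
gremlin> parse('ClassID,PreferredLabel,Parents', null)                  // header line: fills Globals.h, returns null
gremlin> v1 = parse('Vertex1,Prefered Label 1,Vertex2|Vertex3', null)   // star vertex for Vertex1
gremlin> graph.traversal().E().count()                                  // expect 2 'parent' edges attached to the star vertex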

The schema (data/defineNCBIOSchema.groovy):

def defineNCBIOSchema(graph) {

    mgmt = graph.openManagement()

    // vertex labels
    vertexLabel = mgmt.makeVertexLabel('Disease').make()

    // edge labels
    parent = mgmt.makeEdgeLabel('parent').multiplicity(MULTI).make()

    // vertex and edge properties
    blid = mgmt.makePropertyKey('bulkLoader.vertex.id').dataType(String.class).make()
    classID = mgmt.makePropertyKey('ClassID').dataType(String.class).cardinality(Cardinality.SINGLE).make()
    preferedLabel = mgmt.makePropertyKey('PreferredLabel').dataType(String.class).cardinality(Cardinality.SINGLE).make()

    // global indices
    mgmt.buildIndex('ClassIDIndex', Vertex.class).addKey(classID).unique().buildCompositeIndex()

    mgmt.commit()
}
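
(If useful, a sketch of double-checking that the schema committed, with graph being the open JanusGraph from the first console session; the contains* lookups are part of the JanusGraph management API, and the lines below are only illustrative:)

gremlin> mgmt = graph.openManagement()
gremlin> mgmt.containsVertexLabel('Disease')
gremlin> mgmt.containsEdgeLabel('parent')
gremlin> mgmt.containsPropertyKey('bulkLoader.vertex.id')
gremlin> mgmt.rollback()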

The CSV file (data/apr-test-doc.csv):

ClassID,PreferredLabel,Parents
Vertex3,Prefered Label 3,
Vertex2,Prefered Label 2,Vertex3
Vertex1,Prefered Label 1,Vertex2|Vertex3
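
Given this data and the way parse() attaches each parent with addInEdge (parent as the out-vertex, child as the in-vertex), a successful load should give three Disease vertices and three parent edges: Vertex2 -> Vertex1, Vertex3 -> Vertex1 and Vertex3 -> Vertex2. As a rough illustration of what to expect, with g being a traversal over the loaded graph (for example the OLAP read graph above); the comments are expectations, not actual output:

gremlin> g.V().count()                                                      // expect 3
gremlin> g.E().count()                                                      // expect 3
gremlin> g.V().has('ClassID', 'Vertex3').out('parent').values('ClassID')    // expect Vertex1 and Vertex2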

No answers yet.
