
apoc's Introduction

Neo4j: Graphs for Everyone

Neo4j is the world's leading Graph Database. It is a high-performance graph store with all the features expected of a mature and robust database, like a friendly query language and ACID transactions. The programmer works with a flexible network structure of nodes and relationships rather than static tables, yet enjoys all the benefits of an enterprise-quality database. For many applications, Neo4j offers orders-of-magnitude performance benefits compared to relational databases.

Learn more on the Neo4j website.


Using Neo4j

Neo4j is available both as a standalone server and as an embeddable component. You can download it or try it online.

Extending Neo4j

We encourage experimentation with Neo4j. You can build extensions to Neo4j, develop libraries or drivers atop the product, or make contributions directly to the product core. You'll need to sign a Contributor License Agreement in order for us to accept your patches.

Dependencies

Neo4j is built using Apache Maven version 3.8.2 and a recent version of a supported JVM. Bash and Make are also required. Note that Maven needs more memory than its standard configuration provides; this can be arranged with export MAVEN_OPTS="-Xmx2048m".

macOS users need to have Homebrew installed.

With brew on macOS

brew install maven

Please note that we do not support building Debian packages on macOS.

With apt-get on Ubuntu

sudo apt install maven openjdk-17-jdk

Be sure that the JAVA_HOME environment variable points to /usr/lib/jvm/java-17-openjdk-amd64 (you may have various java versions installed).

Building Neo4j

Before you start running the unit and integration tests in the Neo4j Maven project on a Linux-like system, you should ensure your limit on open files is set to a reasonable value. You can test it with ulimit -n. We recommend you have a limit of at least 40K.
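A quick pre-flight check for the open-file limit can be scripted as below. This is a sketch: raising the limit for one shell session is done with ulimit, while a permanent change typically goes through /etc/security/limits.conf or systemd, depending on the distribution.

```shell
# Check the current open-file limit before running Neo4j's test suite;
# the build docs recommend at least 40000.
limit=$(ulimit -n)
echo "open-file limit: $limit"
# A too-low limit causes spurious test failures; raising it is per-session.
if [ "$limit" != "unlimited" ] && [ "$limit" -lt 40000 ]; then
  echo "limit too low for the Neo4j test suite; raise with: ulimit -n 40000"
fi
```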

  • A plain mvn clean install -T1C will only build the individual jar files.

  • Test execution is, of course, part of the build.

  • In case you just want the jars, without running tests, this is for you: mvn clean install -DskipTests -T1C.

  • You may need to increase the memory available to Maven: export MAVEN_OPTS="-Xmx2048m" (try this first if you get build errors).

  • You may run into problems resolving org.neo4j.build:build-resources due to a bug in Maven. To resolve this, simply invoke mvn clean install -pl build-resources.

Running Neo4j

After running mvn clean install, cd into packaging/standalone/target and extract the version you want, then:

bin/neo4j-admin server start

in the extracted folder to start Neo4j on localhost:7474. On Windows you want to run:

bin\neo4j-admin server start

instead.

Neo4j Desktop

Neo4j Desktop is a convenient way for developers to work with local Neo4j databases.

To install Neo4j Desktop, go to Neo4j Download Center and follow the instructions.

Licensing

Neo4j Community Edition is an open source product licensed under GPLv3.

Neo4j Enterprise Edition includes additional closed-source components not available in this repository and requires a commercial license from Neo4j or one of its affiliates.

Trademark

Neo4j’s trademark policy is available at our trademark policy page.

apoc's People

Contributors

adam-cowley, adtc, albertodelazzari, alexiudice, angelobusato, azuobs, burqen, conker84, fbiville, gem-neo4j, github-actions[bot], ikwattro, inversefalcon, jexp, jmhreif, klaren, lojjs, loveleif, mishademianenko, mnd999, mneedham, moxious, nadja-muller, ncordon, neo4j-oss-build, sarmbruster, szarnyasg, tomasonjo, vga91, zimmre


apoc's Issues

3.5.0.3 apoc.export.json.query does not allow ARRAY_JSON

Issue by Rezu
Thursday Nov 04, 2021 at 23:51 GMT
Originally opened as neo4j-contrib/neo4j-apoc-procedures#2282


Greetings,

Cross posting with Neo4j forum: https://community.neo4j.com/t/apoc-3-5-0-3-exact-syntax-for-jsonformat-array-json/46718

I am trying to pin down the exact apoc syntax for producing array json using apoc 3.5.0.3. I am having a very hard time even finding apoc 3.5.0.3 documentation.

Here is what I am trying, it does not work:

CALL apoc.export.json.query( "Match (b:Beer) Return b.name as name Limit 5", "bnl.json", {config:{jsonFormat:'ARRAY_JSON'}})

I have tried all kinds of iterations of the config param, but nothing seems to work, and the only real documentation with examples that I can find is for apoc 4.*. I can find nothing for 3.5.

Of note, the following works for 4.1.9 Community, but I want it to work in 3.5 Community:

CALL apoc.export.json.query( "Match (b:Beer) Return b.name as name Limit 5", "bnl.json", {jsonFormat:'ARRAY_JSON'})

Grateful for any help.

Many Thanks,

Keith

Improve docs about configuration with env variables

Issue by akshaymhetre
Wednesday Jul 28, 2021 at 02:44 GMT
Originally opened as neo4j-contrib/neo4j-apoc-procedures#2120


I am using neo4j with docker version 4.2.5 and cypher file to initialize data.

I am setting following environment property to initialize data with docker-compose -

NEO4J_apoc_initializer_cypher=CALL apoc.cypher.runFile ("file:///sample.cypher")

This works fine and loads the data after the server starts. However, you cannot create indexes in this file; you need a runSchema call to create indexes. As per the docs for Neo4j 4.2, you can run multiple cypher statements,

NEO4J_apoc_initializer_cypher_0=CALL apoc.cypher.runSchemaFile ("file:///schema.cypher")

NEO4J_apoc_initializer_cypher_1=CALL apoc.cypher.runFile ("file:///sample.cypher")

However, this throws errors:

unknown settings: apoc.initializer.cypher.0

unknown settings: apoc.initializer.cypher.1

Raised this on neo4j docker site as well.
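The "unknown settings" rejection is consistent with how the official Neo4j Docker image translates NEO4J_* environment variables into setting names. The sketch below illustrates that documented convention (single underscore becomes a dot, double underscore becomes a literal underscore); the helper name is ours, not part of any Neo4j API:

```python
# Sketch of the name mangling the Neo4j Docker image applies to NEO4J_*
# environment variables, per the documented convention:
#   '_'  -> '.'   (component separator)
#   '__' -> '_'   (literal underscore in a setting name)
def env_to_setting(name: str) -> str:
    if not name.startswith("NEO4J_"):
        raise ValueError("not a Neo4j setting variable")
    body = name[len("NEO4J_"):]
    # Protect literal underscores (written as '__') before replacing the
    # single underscores that separate setting-name components.
    placeholder = "\x00"
    body = body.replace("__", placeholder)
    body = body.replace("_", ".")
    return body.replace(placeholder, "_")

print(env_to_setting("NEO4J_apoc_initializer_cypher_0"))
# apoc.initializer.cypher.0
```

So NEO4J_apoc_initializer_cypher_0 does produce apoc.initializer.cypher.0; the error comes from the server not recognising the numbered variants as valid settings, which is what this issue asks to have documented.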

file:/// urls don't work with apoc.export.*

Issue by jexp
Tuesday Aug 17, 2021 at 11:00 GMT
Originally opened as neo4j-contrib/neo4j-apoc-procedures#2179


neo4j@neo4j> call apoc.export.csv.all("file:///movies.csv",{});
Failed to invoke procedure `apoc.export.csv.all`: Caused by: java.io.FileNotFoundException: /var/lib/neo4j/import/file:/movies.csv (No such file or directory)
neo4j@neo4j> call apoc.export.csv.all("file://movies.csv",{});
Failed to invoke procedure `apoc.export.csv.all`: Caused by: java.io.FileNotFoundException: /var/lib/neo4j/import/file:/movies.csv (No such file or directory)

Round trip with apoc graphml coerces types to string

Issue by Rosswart
Tuesday Jul 20, 2021 at 10:03 GMT
Originally opened as neo4j-contrib/neo4j-apoc-procedures#2099


Expected Behavior

Exporting a graph with

    CALL apoc.export.graphml.all('types.graphml', {useTypes: true, storeNodeIds: false})

into graphml

    <?xml version="1.0" encoding="UTF-8"?>
    <graphml xmlns="http://graphml.graphdrawing.org/xmlns" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://graphml.graphdrawing.org/xmlns http://graphml.graphdrawing.org/xmlns/1.0/graphml.xsd">
    <key id="b" for="node" attr.name="b" attr.type="string"/>
    <key id="c" for="node" attr.name="c" attr.type="string" attr.list="string"/>
    <key id="d" for="node" attr.name="d" attr.type="long"/>
    <graph id="G" edgedefault="directed">
    <node id="n0"><data key="a">string</data><data key="b">string</data></node>
    <node id="n1"><data key="a">string</data><data key="b">string</data></node>
    <node id="n2"><data key="a">["array","string"]</data><data key="c">["array","string"]</data></node>
    <node id="n3"><data key="a">["array","string"]</data><data key="c">["array","string"]</data></node>
    <node id="n4"><data key="a">123</data><data key="d">123</data></node>
    <node id="n5"><data key="a">123</data><data key="d">123</data></node>
    </graph>
    </graphml>

and then importing this again with

    CALL apoc.import.graphml('types.graphml', {readLabels: true})

should reproduce the same graph

    MATCH (n)
    RETURN
      apoc.meta.type(n.a), n.a,
      apoc.meta.type(n.b), n.b,
      apoc.meta.type(n.c), n.c,
      apoc.meta.type(n.d), n.d

with all properties having the same type as in the original graph:

    ╒═════════════════════╤══════════════════╤═════════════════════╤════════╤═════════════════════╤══════════════════╤═════════════════════╤═════╕
    │"apoc.meta.type(n.a)"│"n.a"             │"apoc.meta.type(n.b)"│"n.b"   │"apoc.meta.type(n.c)"│"n.c"             │"apoc.meta.type(n.d)"│"n.d"│
    ╞═════════════════════╪══════════════════╪═════════════════════╪════════╪═════════════════════╪══════════════════╪═════════════════════╪═════╡
    │"STRING"             │"string"          │"STRING"             │"string"│"NULL"               │null              │"NULL"               │null │
    ├─────────────────────┼──────────────────┼─────────────────────┼────────┼─────────────────────┼──────────────────┼─────────────────────┼─────┤
    │"STRING"             │"string"          │"STRING"             │"string"│"NULL"               │null              │"NULL"               │null │
    ├─────────────────────┼──────────────────┼─────────────────────┼────────┼─────────────────────┼──────────────────┼─────────────────────┼─────┤
    │"String[]"           │["array","string"]│"NULL"               │null    │"String[]"           │["array","string"]│"NULL"               │null │
    ├─────────────────────┼──────────────────┼─────────────────────┼────────┼─────────────────────┼──────────────────┼─────────────────────┼─────┤
    │"String[]"           │["array","string"]│"NULL"               │null    │"String[]"           │["array","string"]│"NULL"               │null │
    ├─────────────────────┼──────────────────┼─────────────────────┼────────┼─────────────────────┼──────────────────┼─────────────────────┼─────┤
    │"INTEGER"            │123               │"NULL"               │null    │"NULL"               │null              │"INTEGER"            │123  │
    ├─────────────────────┼──────────────────┼─────────────────────┼────────┼─────────────────────┼──────────────────┼─────────────────────┼─────┤
    │"INTEGER"            │123               │"NULL"               │null    │"NULL"               │null              │"INTEGER"            │123  │
    └─────────────────────┴──────────────────┴─────────────────────┴────────┴─────────────────────┴──────────────────┴─────────────────────┴─────┘

Actual Behavior

If the original node properties are of mixed type (see "n.a"), then the imported properties are coerced to string. All properties with pure types do have the original type.

    ╒═════════════════════╤════════════════════╤═════════════════════╤════════╤═════════════════════╤══════════════════╤═════════════════════╤═════╕
    │"apoc.meta.type(n.a)"│"n.a"               │"apoc.meta.type(n.b)"│"n.b"   │"apoc.meta.type(n.c)"│"n.c"             │"apoc.meta.type(n.d)"│"n.d"│
    ╞═════════════════════╪════════════════════╪═════════════════════╪════════╪═════════════════════╪══════════════════╪═════════════════════╪═════╡
    │"STRING"             │"string"            │"STRING"             │"string"│"NULL"               │null              │"NULL"               │null │
    ├─────────────────────┼────────────────────┼─────────────────────┼────────┼─────────────────────┼──────────────────┼─────────────────────┼─────┤
    │"STRING"             │"string"            │"STRING"             │"string"│"NULL"               │null              │"NULL"               │null │
    ├─────────────────────┼────────────────────┼─────────────────────┼────────┼─────────────────────┼──────────────────┼─────────────────────┼─────┤
    │"STRING"             │"["array","string"]"│"NULL"               │null    │"String[]"           │["array","string"]│"NULL"               │null │
    ├─────────────────────┼────────────────────┼─────────────────────┼────────┼─────────────────────┼──────────────────┼─────────────────────┼─────┤
    │"STRING"             │"["array","string"]"│"NULL"               │null    │"String[]"           │["array","string"]│"NULL"               │null │
    ├─────────────────────┼────────────────────┼─────────────────────┼────────┼─────────────────────┼──────────────────┼─────────────────────┼─────┤
    │"STRING"             │"123"               │"NULL"               │null    │"NULL"               │null              │"INTEGER"            │123  │
    ├─────────────────────┼────────────────────┼─────────────────────┼────────┼─────────────────────┼──────────────────┼─────────────────────┼─────┤
    │"STRING"             │"123"               │"NULL"               │null    │"NULL"               │null              │"INTEGER"            │123  │
    └─────────────────────┴────────────────────┴─────────────────────┴────────┴─────────────────────┴──────────────────┴─────────────────────┴─────┘

How to Reproduce the Problem

Simple Dataset

    CREATE ({a: "string",           b: "string"})
    CREATE ({a: "string",           b: "string"})
    CREATE ({a: ["array", "string"], c: ["array", "string"]})
    CREATE ({a: ["array", "string"], c: ["array", "string"]})
    CREATE ({a: 123,                d: 123})
    CREATE ({a: 123,                d: 123})

Steps (Mandatory)

  1. Create graph
  2. Export
  3. Import

Specifications

Versions

  • OS: Windows 10 Enterprise 10.0.17763 Build 17763
  • Neo4j: neo4j-3.5.28
  • Neo4j-Apoc: apoc-3.5.0.15

Providing s3 authentication through environment variables fails when using 'apoc.load.csv'

Issue by mskyttner
Wednesday Sep 08, 2021 at 06:50 GMT
Originally opened as neo4j-contrib/neo4j-apoc-procedures#2198


Looking at https://neo4j.com/labs/apoc/4.1/import/web-apis/#_using_s3_protocol, I tried using environment variables for setting "accessKey" and "secretKey" values. I verified that such environment variables are set when neo4j starts, env | grep Key gives:

accessKey=mysecretaccesskey
secretKey=mysecretkey

I couldn't find any example where those credentials are provided through environment variables. The docs say that this format for the S3 connection string should be valid: "s3://endpoint:port/bucket/key if the accessKey, secretKey, and the optional sessionToken are provided in the environment variables". Should those environment variables be named differently to be picked up?

Actual Behavior (Mandatory)

Running CALL apoc.load.csv('s3://myminio.object.storage/bibliometrics/authors.csv',{sep:","}) YIELD lineNo, list, strings, map, stringMap fails with:

Neo.ClientError.Procedure.ProcedureCallFailed
Failed to invoke procedure `apoc.load.csv`: Caused by: java.lang.NullPointerException

How to Reproduce the Problem

Use a valid s3 connection string, make sure it works with accessKey and secretKey parts of the URL, then remove those parts of the connection string and expect values set in environment variables "accessKey" and "secretKey" to be automatically used.

Specifications (Mandatory)

Currently used versions

Versions

  • OS: Linux
  • Neo4j: 4.0.7
  • Neo4j-Apoc: version provided when installing apoc using NEO4JLABS_PLUGINS '["apoc"]' in Dockerfile

[DUPLICATED] Don't support relationship indexes in apoc.export.cypher.all

Issue by alizadehei
Wednesday Jun 30, 2021 at 10:03 GMT
Originally opened as neo4j-contrib/neo4j-apoc-procedures#2048


Hi
I tried to export the database using the following query:

CALL apoc.export.cypher.all("db.cypher",{format: "neo4j-shell", useOptimizations: {type: "UNWIND_BATCH",unwindBatchSize: 20}})
After running, I encountered the following error:

Failed to invoke procedure 'apoc.export.cypher.all' : Caused by: java.lang.IllegalStateException: This is not a node index.

It seems that the export procedure only supports node indexes. So what about the relationship indexes?

Versions

  • OS: Linux 20.04
  • Neo4j: 4.3.1
  • Neo4j-Apoc: 4.2.0.4

[DUPLICATED] Detect missing procedure at build time

Issue by fbiville
Thursday Jun 24, 2021 at 14:26 GMT
Originally opened as neo4j-contrib/neo4j-apoc-procedures#2033


Problem

Several releases of APOC have been missing procedures/functions.

The last incident of this kind happened because of an issue with the scope of Guava: it was available in tests, thus passing CI, but it was not included in the production core/full JARs, resulting in loading errors for some procedures/functions.

Proposed Solution

Given that all APOC procedures and functions are available from the APOC sources, we can define a compile-time annotation processor to list them and dump them to a flat file.
We can then introduce a new testing module, configured like a regular APOC customer project, with a Docker container configured with APOC. Based on that setup, we can detect any discrepancy between the procedures/extensions available from the sources and the ones actually available at runtime.

This might be implemented in two phases, as standing up a Neo4j container with all the extra dependencies required by APOC Full might be tricky. We can at least start with APOC Core and then extend the logic to Full.

`NoSuchMethodError` exception for `getReadCapabilities` when trying to use `apoc.convert.fromJsonList`

Issue by tekrei
Friday May 06, 2022 at 10:53 GMT
Originally opened as neo4j-contrib/neo4j-apoc-procedures#2861


I am receiving NoSuchMethodError for com.fasterxml.jackson.core.JsonParser.getReadCapabilities() when I try to use apoc.convert.fromJsonList.

Expected Behavior (Mandatory)

Query should return nodes.

Actual Behavior (Mandatory)

It is giving the following error:

Failed to invoke function `apoc.convert.fromJsonList`: Caused by: java.lang.NoSuchMethodError: 'com.fasterxml.jackson.core.util.JacksonFeatureSet com.fasterxml.jackson.core.JsonParser.getReadCapabilities()'

How to Reproduce the Problem

The following simple apoc.convert.fromJsonList call gives the error:

RETURN apoc.convert.fromJsonList("[{'token': '1', 'scope': 'test'},{'token': '2','scope': 'test'}]") AS data
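For reference, the payload above uses single quotes, which strict JSON rejects but APOC tolerates. The expected result of a successful parse is a list of two maps; a quick Python sketch (ast.literal_eval is used here only because the strict json module rejects single-quoted strings):

```python
import ast

payload = "[{'token': '1', 'scope': 'test'},{'token': '2','scope': 'test'}]"
# Single-quoted keys/values are not strict JSON, so parse as a Python literal.
expected = ast.literal_eval(payload)
print(expected)
# [{'token': '1', 'scope': 'test'}, {'token': '2', 'scope': 'test'}]
```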

Steps (Mandatory)

  1. Just run the given query.


Specifications (Mandatory)

Currently used versions

Versions

  • OS: Neo4j is deployed as a Docker container using neo4j:latest image over Ubuntu 22.04 host
  • Neo4j: 4.4.6
  • Neo4j-Apoc: 4.4.0.4

The apoc.periodic.repeat and apoc.periodic.submit procedures don't work with some statements which return something

Issue by vga91
Thursday May 26, 2022 at 14:07 GMT
Originally opened as neo4j-contrib/neo4j-apoc-procedures#2942


See also: https://community.neo4j.com/t/apoc-periodic-repeat-with-node-similarity/55782/2

How to Reproduce the Problem

Create 2 similar jobs with the apoc.log.info procedure. The 1st returns void:

CALL apoc.periodic.repeat('jobOne', "CALL apoc.log.info('testOne')", 2)

and the 2nd returns 1:

CALL apoc.periodic.repeat('jobTwo', "CALL apoc.log.info('testTwo') RETURN 1", 2)

Note that only jobOne creates log entries.


The same bug affects apoc.periodic.submit, for example:

CALL apoc.periodic.submit('submit-2', "CALL apoc.log.info('testThree') RETURN 1", {})

Versions

  • OS: MacOs
  • Neo4j: 4.4.5
  • Neo4j-Apoc: 4.4.0.5

XML DOCTYPE is disallowed when the feature is set to true

Issue by tomasonjo
Sunday Mar 06, 2022 at 11:31 GMT
Originally opened as neo4j-contrib/neo4j-apoc-procedures#2608



I am trying to open an xml.gz file with the help of apoc.load.xml, but I get the following error:

Failed to invoke procedure apoc.load.xml: Caused by: org.xml.sax.SAXParseException; lineNumber: 2; columnNumber: 10; DOCTYPE is disallowed when the feature "http://apache.org/xml/features/disallow-doctype-decl" set to true.

I have no idea why this happens; it is independent of the compression config I use and of whether I load from a local file or from the internet.

Expected Behavior (Mandatory)

apoc.load.xml should open the file and display some values.

Actual Behavior (Mandatory)

I get the following error when executing apoc.load.xml

Failed to invoke procedure apoc.load.xml: Caused by: org.xml.sax.SAXParseException; lineNumber: 2; columnNumber: 10; DOCTYPE is disallowed when the feature "http://apache.org/xml/features/disallow-doctype-decl" set to true.

How to Reproduce the Problem

Simple Dataset (where it's possible)

The XML GZ files are available on https://ftp.ncbi.nlm.nih.gov/pubmed/updatefiles/

Steps (Mandatory)

If you simply execute the load.xml function you get the following error:

WITH "https://ftp.ncbi.nlm.nih.gov/pubmed/updatefiles/pubmed22n1219.xml.gz" AS url
CALL apoc.load.xml(url) YIELD value
UNWIND value AS article
RETURN article LIMIT 5

Failed to invoke procedure apoc.load.xml: Caused by: org.xml.sax.SAXParseException; lineNumber: 2; columnNumber: 10; DOCTYPE is disallowed when the feature "http://apache.org/xml/features/disallow-doctype-decl" set to true.

You can add the compression config but it doesn't help at all:

WITH "https://ftp.ncbi.nlm.nih.gov/pubmed/updatefiles/pubmed22n1219.xml.gz" AS url
CALL apoc.load.xml(url, "/", {compression:"GZIP"}) YIELD value
UNWIND value AS article
RETURN article LIMIT 5

Specifications (Mandatory)

Currently used versions

Versions

  • OS: Ubuntu 20.04
  • Neo4j: 4.4.2
  • Neo4j-Apoc: 4.4.0.3



apoc.load.* procedures don't actually support tar and tar.gz

Issue by nitay-bachrach
Thursday Aug 25, 2022 at 17:53 GMT
Originally opened as neo4j-contrib/neo4j-apoc-procedures#3151


Expected Behavior (Mandatory)

According to https://neo4j.com/labs/apoc/4.3/import/web-apis/ and other sources, apoc.load.json, apoc.load.csv, etc. should support .tar, .tar.gz, and .tgz:

CALL apoc.load.json("https://github.com/neo4j-contrib/neo4j-apoc-procedures/blob/4.3/core/src/test/resources/testload.tgz?raw=true!person.json");

Actual Behavior (Mandatory)

In practice, only .zip is supported.
In fact, all the test files here:
https://github.com/neo4j-contrib/neo4j-apoc-procedures/tree/4.4/full/src/test/resources
and here:
https://github.com/neo4j-contrib/neo4j-apoc-procedures/tree/4.4/core/src/test/resources

are just zip files with different extensions. They all share the same md5.

How to Reproduce the Problem

Create an actual tar/tar.gz/tgz file (using tar -cf or tar -czf) and try to load it.


Steps (Mandatory)

  1. echo '{"a":"b"}' > test.json
  2. tar -czf test.tar.gz test.json
  3. python -m http.server 8080
  4. CALL apoc.load.json("http://localhost:8080/test.tar.gz!test.json")
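The first two steps above can be run directly; listing the archive afterwards confirms it is a genuine gzipped tar rather than a renamed zip, which is exactly the kind of file the loader then fails on:

```shell
# Create a real gzipped tar archive, as in the reproduction steps.
echo '{"a":"b"}' > test.json
tar -czf test.tar.gz test.json
# Listing the archive proves it is a tar; a renamed zip would fail here.
tar -tzf test.tar.gz
```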

Specifications (Mandatory)

All versions

Versions

  • OS: Windows/Mac/Linux
  • Neo4j: 4.*
  • Neo4j-Apoc: *

apoc.export.csv.query() overwrites the CSV file every time it executes; need an option to append query results to the CSV file instead of overwriting it.

Issue by ThulasiPaladugu
Tuesday Nov 02, 2021 at 06:23 GMT
Originally opened as neo4j-contrib/neo4j-apoc-procedures#2273



Expected Behavior (Mandatory)

Query results need to be appended to the CSV file.

Actual Behavior (Mandatory)

The CSV file is overwritten on each write.

How to Reproduce the Problem

By calling apoc.export.csv.query().


Specifications (Mandatory)

Currently used versions

Versions

  • OS: Windows and Linux
  • Neo4j: 4.3.3
  • Neo4j-Apoc: 4.3.0.0

apoc.refactor.rename.label deletes the label if oldLabel is equal to newLabel

Issue by NicolaAggio
Thursday Mar 17, 2022 at 11:24 GMT
Originally opened as neo4j-contrib/neo4j-apoc-procedures#2646


I understand this seems like a silly problem, but I think it is a bug.

Expected Behavior (Mandatory)

The function should not delete the label.

Actual Behavior (Mandatory)

The function deletes the label.

How to Reproduce the Problem

Simple Dataset (where it's possible)

CREATE (n:Label)

Steps (Mandatory)

  1. CALL apoc.refactor.rename.label("Label", "Label")


Specifications (Mandatory)

Currently used versions

Versions

  • OS: Windows 11
  • Neo4j: 4.4.3
  • Neo4j-Apoc: 4.4.0.3

apoc.merge.node just for Match-only

Issue by zirkelc
Thursday Mar 31, 2022 at 14:35 GMT
Originally opened as neo4j-contrib/neo4j-apoc-procedures#2674


Feature description (Mandatory)

apoc.merge.node
apoc.merge.relationship

apoc.merge.node allows you to dynamically MERGE a node with labels and properties that can be provided as parameters instead of building a static Cypher query beforehand. This is useful in cases where we unwind over multiple rows and the MERGE clause would be different for each row:

UNWIND $rows AS row
CALL apoc.merge.node(row.labels, row.match, row.onCreateProps, row.onMatchProps) YIELD node
RETURN collect(id(node)) as nodeIds

In the same way, we can use apoc.merge.relationship to dynamically MERGE a relationship with a type and properties provided as parameters. However, we must also provide the start and end node of the relationship. Both nodes can be matched with apoc.merge.node without providing properties to be merged (the last two params):

UNWIND $rows AS row
CALL apoc.merge.node(row.start.labels, row.start.match) YIELD node as startNode
CALL apoc.merge.node(row.end.labels, row.end.match) YIELD node as endNode
CALL apoc.merge.relationship(startNode, row.relType, row.match, row.onCreateProps, endNode, row.onMatchProps) YIELD rel
RETURN collect(id(rel)) as relationshipIds

In the second example, I'm using apoc.merge.node to MATCH the start and end nodes without actually creating or changing them. My assumption, though, is that Neo4j still write-locks these nodes, which affects performance and would hinder optimisations such as parallelisation via apoc.periodic.iterate due to deadlocks on the same node.

I would like to suggest a new procedure, apoc.match.node or apoc.nodes.match, to dynamically MATCH nodes. The implementation would be fairly easy, as we could copy apoc.merge.node and change the Cypher query in line 77 from MERGE to MATCH:
https://github.com/neo4j-contrib/neo4j-apoc-procedures/blob/c56b754bf41d16a78ace806e42e293bdb7d13e13/core/src/main/java/apoc/merge/Merge.java#L67-L79

Considered alternatives

The default value of apoc.merge.node for both onCreateProps and onMatchProps is {}, which means no change to the properties on CREATE or MATCH. Therefore, we could alternatively add a condition in Java to check whether properties have been provided; if not, we could switch the Cypher query from MERGE to MATCH.


If you find this request useful and would be willing to add it to APOC, I would go ahead and submit a PR for it! Let me know what you think! :-)

Unable to import json file with apoc.import.json

Issue by krishnan-pb
Monday Apr 18, 2022 at 08:07 GMT
Originally opened as neo4j-contrib/neo4j-apoc-procedures#2759



Using apoc version "4.3.0.5.0", trying to import json using the procedure apoc.import.json("file:///all_6_apr.jsonl") fails with an exception. The node that fails during import is in the sample below, which was output by apoc.export.json. It's only this node that failed.

Expected Behavior (Mandatory)

Import should be successful

Actual Behavior (Mandatory)

"ERRORNeo.ClientError.Procedure.ProcedureCallFailed
Failed to invoke procedure apoc.import.json: Caused by: java.lang.NullPointerException"

How to Reproduce the Problem

Perform apoc json import with the node

{"type":"node","id":"8214","labels":["VALUE"],"properties":{"modificationDate":1650264267125,"oid":"b3a7d3dc-5746-441a-9f02-c78d884a7826","creationDate":1650264267125,"value":"cases://cases/813b1f38-0159-467e-bb86-a15d13bf7f9f"}}

Simple Dataset (where it's possible)

apoc.import.json("file:///all_6_apr.jsonl")

Steps (Mandatory)

  1. Create a json file with the above node
  2. Perform apoc.import.json("file:///all_6_apr.jsonl")

The issue seems to be with "labels":["VALUE"]; when I change this to "labels":["VALUE1"], the import works.


Specifications (Mandatory)

Currently used versions

Versions

  • OS: macOS Monterey 12.3.1
  • Neo4j: 4.3.10
  • Neo4j-Apoc: 4.3.0.5.0

The apoc.export.cypher.* procedures don't create relationships whose start/end node has no properties.

Issue by vga91
Friday May 13, 2022 at 09:50 GMT
Originally opened as neo4j-contrib/neo4j-apoc-procedures#2886


Expected Behavior (Mandatory)

The relationship should be created.

Actual Behavior (Mandatory)

No relationships created.

Steps (Mandatory)

Create a rel with an end node without properties:

create (:Start {a: 1})-[:WORKS_FOR {id: 1}]->(:Project)

Execute this statement:

CALL apoc.export.cypher.all(null, {useOptimizations: { type: 'none'}, cypherFormat: 'create' })

Copy the cypherStatements result from the above procedure, delete all entities, and reimport them:

CALL apoc.cypher.runMany(<cypherStatements>, {})

No relationships are created.

Versions

  • OS: OSX
  • Neo4j: 4.4.5
  • Apoc: 4.4.0.5

Improve error handling with s3 and hdfs protocols

Issue by vga91
Friday Sep 17, 2021 at 13:38 GMT
Originally opened as neo4j-contrib/neo4j-apoc-procedures#2212


Expected Behavior (Mandatory)

Should return an apoc.util.MissingDependencyException, similarly to the MongoDB and Couchbase procedures.

Actual Behavior (Mandatory)

Returns java.net.MalformedURLException: unknown protocol: s3 or unknown protocol: hdfs

How to Reproduce the Problem

Call any export/import/load procedure with a protocol s3 or hdfs. For example:

CALL apoc.load.csv('s3://MY_S3_URL')

Specifications (Mandatory)

Currently used versions

Versions

  • OS: Mac BigSur
  • Neo4j: 4.2.7
  • Neo4j-Apoc: 4.2.0.5

Incorrect behaviour when formatting durations

Issue by vga91
Wednesday Mar 16, 2022 at 08:59 GMT
Originally opened as neo4j-contrib/neo4j-apoc-procedures#2642


WITH duration.between(  datetime("2019-06-02T19:40:32"), datetime("2019-06-03T19:40:32") ) AS duration
RETURN apoc.temporal.format(duration, "dd 'days'") AS value

returns "02 days", and

WITH duration.between(  datetime("2019-06-02T19:40:32"), datetime("2019-06-02T19:40:32") ) AS duration
RETURN apoc.temporal.format(duration, "dd 'days'") AS value

returns "01 days".
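A likely explanation (an assumption, not confirmed from the APOC sources) is that the duration is projected onto a calendar date before the pattern is applied: `dd` is day-of-month, which starts at 1, so every value comes out one too high. The same off-by-one is reproducible outside Neo4j with Python's strftime:

```python
from datetime import datetime, timedelta

# Anchoring a duration at an epoch date and formatting with a
# day-of-month pattern shifts the result by one, because
# day-of-month starts at 1, not 0.
epoch = datetime(1970, 1, 1)

one_day = epoch + timedelta(days=1)
zero_days = epoch + timedelta(days=0)

print(one_day.strftime("%d 'days'"))    # prints "02 'days'" for a 1-day duration
print(zero_days.strftime("%d 'days'"))  # prints "01 'days'" for a zero duration
```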

Shaded APOC dependencies pollute classpath

Issue by nioertel
Friday Mar 04, 2022 at 10:24 GMT
Originally opened as neo4j-contrib/neo4j-apoc-procedures#2604


Situation

We have a Java application that works with Neo4j (version 4.4). The application connects to Neo4j via Bolt using the Neo4j Java driver.
For unit tests we spin up Neo4j inside the same JVM using the Neo4j harness, expose the Bolt port and connect from the application via Bolt. As we are using APOC procedures, we also need to put APOC on the classpath.
So overall the classpath contains:

  • Neo4j 4.4 (Enterprise)
  • Neo4j Java Driver 4.4.3
  • APOC 4.4.0.3
  • Our application
  • Our application's dependencies

To add APOC to our application, from my understanding of the official documentation there are two options:

  • APOC Core: [...] battle hardened procedures and functions that don’t have external dependencies [...] -> this doesn't seem correct, as it has lots of external dependencies which are shaded into one fat jar
  • APOC Full: [...] additional procedures and functions [...]

Looking at the Github releases, I find the download links for both. The jars are huge, containing shaded versions of all external dependencies of APOC.
Looking at Maven Central, I find a version with classifier all that appears to be the same as APOC Full and a version without classifier that seems to have the same dependencies as APOC Core (but not shaded) but doesn't have the feature set of APOC core.

Problem Description

There are multiple issues:

  1. Having APOC Full 4.4.0.3 and Neo4j Java Driver 4.4.3 on the classpath doesn't work as APOC brings a shaded version of the old Neo4j Java Driver 4.0.0 which is not API compatible.
  2. For our use case APOC Core 4.4.0.3, which doesn't contain a shaded Neo4j Java driver, would be fine, but it is not available on a public Maven repository.
  3. We could work around issue 2. by uploading APOC Core to our internal Maven repository, but that still leaves lots of other shaded dependencies inside APOC Core which may lead to side effects with dependencies of our application. Therefore I'd prefer to use a version of APOC Core that does not contain shaded dependencies, so I have full control of everything (knowing that APOC functionality may break if I override dependency versions).

Questions / Remarks / Suggestions

  1. Please explain the focus/scope of the apoc jar (without classifier) on Maven Central.
  2. Is it possible to publish apoc-core to Maven Central as a short term solution?
  3. Is it possible to publish a version of apoc-core to Maven Central that does not contain any shaded dependencies?
  4. Is there a reason for not upgrading the Neo4j Java Driver in APOC?
  5. In the long term it probably would be good to add a prefix to the java package names of shaded dependencies. Lots of other libraries are doing this (e.g., neo4j-logging with Log4j).

Further Notes

I am aware that using Testcontainers instead of the Neo4j harness would solve the issue, as APOC could be put into the Neo4j container; however, that makes the development, testing and debugging process slower, so we don't want to use this as the default.

The issue was already discussed in a couple of related issues without a solution:

  • #1656 ends with the note We'll add a test-harness example soon. -> this would be pretty much what I described here and it currently won't work
  • #1825 mentioned that there is only the 'small' apoc jar on Maven Central, but the fix only added apoc-full, not apoc-core
  • neo4j/neo4j#12575 was also about the issue that apoc from Maven Central does not contain the core procedures. No real solution for Maven was created but only a Gradle solution that downloads the dependency directly from APOC's GitHub releases during build.

Incorrect statistics

Issue by li1191863273
Wednesday Sep 01, 2021 at 07:11 GMT
Originally opened as neo4j-contrib/neo4j-apoc-procedures#2192


Guidelines

Call apoc.meta.stats():

labelCount  relTypeCount  propertyKeyCount
35          36            70

Get labelCount with MATCH (n) RETURN count(distinct labels(n)): the result is 34, which is the correct number.

The real propertyKeyCount is 69.
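As an aside about the verification query (an observation, not part of the original report): `count(distinct labels(n))` counts distinct label *combinations*, since `labels(n)` returns a list. Counting the individual labels in use corresponds to unwinding first, as this Python sketch mirrors:

```python
def distinct_labels(node_label_lists):
    """Flatten per-node label lists and count distinct labels,
    mirroring: MATCH (n) UNWIND labels(n) AS l RETURN count(DISTINCT l)."""
    return len({label for labels in node_label_lists for label in labels})

nodes = [["Person"], ["Person", "Actor"], ["Movie"]]
print(distinct_labels(nodes))  # 3 distinct labels across these 3 nodes
```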

How to Reproduce the Problem

https://neo4j.com/labs/apoc/4.3/export/cypher/#export-cypher-cypher-shell

  1. Export Cypher statements from the existing Neo4j:
CALL apoc.export.cypher.all("all.cypher", {
    format: "cypher-shell",
    useOptimizations: {type: "UNWIND_BATCH", unwindBatchSize: 20}
})
YIELD file, batches, source, format, nodes, relationships, properties, time, rows, batchSize
RETURN file, batches, source, format, nodes, relationships, properties, time, rows, batchSize;
  2. Recreate a new Neo4j container, and then run the Cypher statements obtained in the previous step:
cat all.cypher | ./bin/cypher-shell -a <bolt-url> -u neo4j -p <password> --format verbose
  3. Restart the container, run the statistics statement Call apoc.meta.stats(), and find that the statistics are incorrect

Versions

  • OS: CentOS Linux 7 (Core) x86_64
  • Neo4j-docker: neo4j:4.3.3
  • Neo4j-Apoc: apoc-4.3.0.0-all.jar

The `apoc.import.json` constraint check should work only when a unique constraint exists

Issue by vga91
Tuesday May 24, 2022 at 10:52 GMT
Originally opened as neo4j-contrib/neo4j-apoc-procedures#2930


Expected Behavior (Mandatory)

Missing constraint error

Actual Behavior (Mandatory)

The import works

How to Reproduce the Problem

Create a file all.json with label User:

{"type":"node","id":"0","labels":["User"],"properties":{"born":"2015-07-04T19:32:24","name":"Adam","place":{"crs":"wgs-84","latitude":33.46789,"longitude":13.1,"height":null},"male":true,"age":42,"kids":["Sam","Anna","Grace"]}}

Try to import the file without constraints:

CALL apoc.import.json('all.json')

this will throw an error message:

Missing constraint required for import. Execute this query: 
CREATE CONSTRAINT ON (n:User) assert n.neo4jImportId IS UNIQUE;

Create an existence constraint instead of a unique one:

CREATE CONSTRAINT ON (n:User) assert n.neo4jImportId IS NOT NULL

Retry the import and note that it works.

Versions

  • OS: OSX
  • Neo4j: 4.3.11
  • Neo4j-Apoc: 4.3.0.5

The `apoc.path.expand` procedure doesn't stop if `dbms.memory.transaction.database_max_size` is reached

Issue by vga91
Wednesday Oct 27, 2021 at 09:48 GMT
Originally opened as neo4j-contrib/neo4j-apoc-procedures#2264


With dbms.memory.transaction.database_max_size enabled and using apoc.path.expand in a query, it is possible for a query to exhaust the heap.
The query should be terminated if the amount of memory used breaches the defined figure in dbms.memory.transaction.database_max_size

Steps to Reproduce

  1. Deploy Neo4j 4.3 and enable dbms.memory.transaction.database_max_size=10M
  2. Create a graph using the following:
CREATE CONSTRAINT a_id IF NOT EXISTS ON (a:A) ASSERT a.id IS UNIQUE;
WITH range(0,9) AS ids
UNWIND ids as id
MERGE (a:A {id: id})
SET a.prop = toInteger(10 * rand())
RETURN a;
WITH range(0,0) AS ids
UNWIND ids as id
MATCH (a1:A), (a2:A)
WHERE a1 <> a2
MERGE (a1)-[r:REL {id: id}]->(a2);
  3. Query the graph using apoc.path.expand:
MATCH (srcA:A {id:0})
CALL apoc.path.expand(srcA, ">REL", "+A", 1, 100)
YIELD path
WITH srcA,
relationships(path)[0] AS srcRel,
relationships(path)[length(path) - 1] AS dstRel,
nodes(path)[length(path)] AS dstA,
path AS path
RETURN DISTINCT srcA.id, srcA.prop, dstA.id, dstA.prop
LIMIT 1000;

Observed Behaviour

The heap is consumed which triggers large GC pauses. The system performance degrades significantly.
The query is not terminated as expected with dbms.memory.transaction.database_max_size enabled.

Expected Behaviour

As per the documentation on dbms.memory.transaction.database_max_size, the amount of memory the query can use should be restricted. If the set threshold is breached, the query should be terminated.

The apoc.export.graphml.query procedure exports additional unwanted nodes

Issue by vga91
Monday May 16, 2022 at 10:05 GMT
Originally opened as neo4j-contrib/neo4j-apoc-procedures#2890


If we want to execute something like this use case, but via apoc.export.graphml.query instead of apoc.export.graphml.data, the export will add additional unwanted nodes to the relationships file.

Expected Behavior (Mandatory)

Should export only the relationships

Actual Behavior (Mandatory)

Exports the start and end nodes as well

How to Reproduce the Problem

Create a simple rel:

CREATE (:Start {id: 1})-[:REL {foo: 'bar'}]->(:End {id: '1'})

Execute this query:

call apoc.export.graphml.query("MATCH (start:Start)-[rel:REL]->(end:End) RETURN rel", null, {stream: true})
YIELD  data

Note that the data result contains <node...> tags:

"<?xml version="1.0" encoding="UTF-8"?>
<graphml xmlns="http://graphml.graphdrawing.org/xmlns" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://graphml.graphdrawing.org/xmlns http://graphml.graphdrawing.org/xmlns/1.0/graphml.xsd">
<key id="labels" for="node" attr.name="labels"/>
<key id="foo" for="edge" attr.name="foo"/>
<key id="label" for="edge" attr.name="label"/>
<graph id="G" edgedefault="directed">
<node id="n14" labels=":Start"><data key="labels">:Start</data><data key="id">1</data></node>
<node id="n15" labels=":End"><data key="labels">:End</data><data key="id">1</data></node>
<edge id="e13" source="n14" target="n15" label="REL"><data key="label">REL</data><data key="foo">bar</data></edge>
</graph>
</graphml>
"

Versions

  • OS: OSX
  • Neo4j: 4.4.5
  • Neo4j-Apoc: 4.4.0.5

Detect missing procedure at build time

Issue by fbiville
Thursday Jun 24, 2021 at 14:26 GMT
Originally opened as neo4j-contrib/neo4j-apoc-procedures#2033


Problem

Several releases of APOC have been missing procedures/functions.

The last incident of this kind happened because of an issue with the scope of Guava: it was available in tests, thus passing CI, but was not included in the production core/full JARs, resulting in loading errors for some procedures/functions.

Proposed Solution

Given all APOC procedures and functions are available from APOC sources, we can define a compile-time annotation processor to list and dump these to a flat file.
We can then introduce a new testing module, configured like a regular APOC customer project, with a Docker container configured with APOC. Based on that setup, we can detect any discrepancy between the available procedure/extension from sources and the ones actually available at runtime.

This might be implemented in two phases, as standing up a Neo4j container with all the extra dependencies required by APOC Full might be tricky. We can at least start with APOC Core and then extend the logic to Full.
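The discrepancy check at the heart of the proposal is a set difference between the procedures dumped from the sources and those registered at runtime; a minimal sketch (hypothetical inputs, assuming one fully qualified name per side):

```python
def missing_procedures(declared, runtime):
    """Procedures listed by the annotation processor but absent at runtime."""
    return set(declared) - set(runtime)

# `declared` would come from the processor's flat file;
# `runtime` from querying the Docker-based Neo4j (e.g. SHOW PROCEDURES).
declared = ["apoc.load.json", "apoc.coll.sum", "apoc.text.join"]
runtime = ["apoc.load.json", "apoc.text.join"]
print(missing_procedures(declared, runtime))  # {'apoc.coll.sum'}
```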

Not able to import 'JSON' data using apoc.import.json procedure.

Issue by amit-kumaryadav
Thursday Aug 18, 2022 at 05:57 GMT
Originally opened as neo4j-contrib/neo4j-apoc-procedures#3141


I'm trying to import the JSON data (not JSONL) generated by the apoc.export.json.all procedure, but I'm getting Failed to invoke procedure apoc.import.json: Caused by: java.lang.NullPointerException. I also tried to import ARRAY_JSON data using apoc.import.json, but that doesn't work either. It looks like the import only works with JSONL data, even though the documentation clearly states that we can import data generated by the apoc.export.json.* procedures.


Exported Data:

{"nodes":[{"type":"node","id":"0","labels":["User"],"properties":{"neo4jImportId":"2","age":12}}],"rels":[]}
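As a possible workaround (a sketch, assuming the export has the {"nodes": [...], "rels": [...]} shape shown above — this is not an APOC feature), the exported file can be flattened into the JSONL form the import accepts:

```python
import json

def to_jsonl(exported: str) -> str:
    """Flatten a {"nodes": [...], "rels": [...]} export into one entity per line,
    which is the shape apoc.import.json expects."""
    doc = json.loads(exported)
    entities = doc.get("nodes", []) + doc.get("rels", [])
    return "\n".join(json.dumps(entity) for entity in entities)

exported = '{"nodes":[{"type":"node","id":"0","labels":["User"],"properties":{"neo4jImportId":"2","age":12}}],"rels":[]}'
print(to_jsonl(exported))
```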


Versions

  • OS: Windows
  • Neo4j: 4.4.5
  • Neo4j-Apoc: 4.4.0.3

The `apoc.periodic.truncate` procedure doesn't provide feedback if it fails

Issue by vga91
Wednesday Oct 27, 2021 at 12:26 GMT
Originally opened as neo4j-contrib/neo4j-apoc-procedures#2266


The apoc.periodic.truncate procedure doesn't provide feedback if it fails (for example if dbms.memory.transaction.database_max_size is reached)

Expected Behavior (Mandatory)

Should fail

Actual Behavior (Mandatory)

Returns nothing and doesn't delete the whole database.

Steps (Mandatory)

  1. Create a relatively big graph, for example UNWIND range(0, 9999) AS id CREATE (:Node)-[:REL]->(:Other)
  2. Set dbms.memory.transaction.database_max_size=10M in neo4j.conf and restart
  3. CALL apoc.periodic.truncate

Versions

  • OS: Mac BigSur
  • Neo4j: 4.3.3
  • Neo4j-Apoc: 4.3.0.3

Exception while adding/removing apoc.trigger in neo4j causal cluster

Issue by SergeyPlatonov
Thursday Jul 21, 2022 at 17:41 GMT
Originally opened as neo4j-contrib/neo4j-apoc-procedures#3073


Guidelines

In cluster mode (Neo4j Causal Cluster), Apoc triggers can only be added/removed using the bolt scheme and IP where the system database has the role of leader.
I think they should work like indexes and constraints - we can apply them using the neo4j scheme with SSR (Server Side Routing) enabled without thinking about who is the leader now.

Expected Behavior (Mandatory)

Add/remove triggers in neo4j causal cluster using neo4j scheme and any server (any IP).

Actual Behavior (Mandatory)

Triggers can only be applied using the bolt scheme and IP where the system database has the leader role.
Using the neo4j scheme I see an exception:
No longer possible to write to server at 10.62.62.180:7687.

Using a bolt scheme and not a leader for the system database:
Neo.ClientError.Cluster.NotALeader
No write operations are allowed directly on this database. Writes must pass through the leader. The role of this server is: FOLLOWER

How to Reproduce the Problem

Remove any trigger (even a non-existent one):
CALL apoc.trigger.remove('testTrigger');
You should use a Neo4j causal cluster and connect to Neo4j using the neo4j scheme.

Simple Dataset (where it's possible)

Remove any trigger (even a non-existent one):
CALL apoc.trigger.remove('testTrigger');
You should use a Neo4j causal cluster and connect to Neo4j using the neo4j scheme.

Steps (Mandatory)

  1. CALL apoc.trigger.remove('testTrigger');

Screenshots (where it's possible)

Specifications (Mandatory)

Currently used versions

Versions

  • OS: Windows 10 Enterprise
  • Neo4j: 4.4.2 enterprise
  • Neo4j-Apoc: 4.4.0.4

Add option to include direction of relations in apoc.convert.toTree

Issue by mrveld94
Wednesday Sep 08, 2021 at 09:54 GMT
Originally opened as neo4j-contrib/neo4j-apoc-procedures#2199


The output of apoc.convert.toTree does not include the direction of the relations.

For example, consider a path bound to the variable path (screenshot omitted).

I can turn this path into a tree (with the pink node as root node) using the following query.

CALL apoc.convert.toTree(path, false) YIELD output RETURN output

This results in the following json.

{
  "_type": "Resource:prov__Entity:provone__Data",
  "uri": "urn:uuid:output_adaguc_Zmax_202108091550.h5",
  "prov__hadMember": [
    {
      "_type": "Resource:prov__Entity:prov__Collection",
      "uri": "urn:uuid:output_collection_Output_202108091550",
      "prov__wasDerivedFrom": [
        {
          "_type": "Resource:prov__Entity:prov__Collection",
          "uri": "urn:uuid:input_collection_Model_202108091550"
        },
        {
          "_type": "Resource:prov__Entity:prov__Collection",
          "uri": "urn:uuid:input_collection_Radar_202108091550"
        },
        {
          "_type": "Resource:prov__Entity:prov__Collection",
          "uri": "urn:uuid:input_collection_IVS_202108091550"
        }
      ]
    }
  ]
}

Looking at this output I cannot determine if urn:uuid:output_adaguc_Zmax_202108091550.h5 <prov__hadMember urn:uuid:output_collection_Output_202108091550 or if urn:uuid:output_adaguc_Zmax_202108091550.h5 prov__hadMember> urn:uuid:output_collection_Output_202108091550.
In other words I cannot determine which entity is a member of which.

I would like to see an option in apoc.convert.toTree, which would return the direction of the relations. For example by using the following query.

CALL apoc.convert.toTree(path, false, { includeDirection: true }) YIELD output RETURN output

Which would return something like this.

{
  "_type": "Resource:prov__Entity:provone__Data",
  "uri": "urn:uuid:output_adaguc_Zmax_202108091550.h5",
  "<prov__hadMember": [
    {
      "_type": "Resource:prov__Entity:prov__Collection",
      "uri": "urn:uuid:output_collection_Output_202108091550",
      "prov__wasDerivedFrom>": [
        {
          "_type": "Resource:prov__Entity:prov__Collection",
          "uri": "urn:uuid:input_collection_Model_202108091550"
        },
        {
          "_type": "Resource:prov__Entity:prov__Collection",
          "uri": "urn:uuid:input_collection_Radar_202108091550"
        },
        {
          "_type": "Resource:prov__Entity:prov__Collection",
          "uri": "urn:uuid:input_collection_IVS_202108091550"
        }
      ]
    }
  ]
}
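The proposed option essentially amounts to annotating each relationship-type key with its direction; a sketch of that key transformation (the `includeDirection` option and the arrow notation are taken from the suggestion above, not from an existing APOC API):

```python
def direction_key(rel_type: str, incoming: bool) -> str:
    """Prefix incoming edges with '<' and suffix outgoing ones with '>',
    matching the annotated tree keys proposed above."""
    return f"<{rel_type}" if incoming else f"{rel_type}>"

print(direction_key("prov__hadMember", incoming=True))      # <prov__hadMember
print(direction_key("prov__wasDerivedFrom", incoming=False))  # prov__wasDerivedFrom>
```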

Allow to treat JSON numbers as strings

Issue by odessit55
Tuesday Jul 20, 2021 at 19:18 GMT
Originally opened as neo4j-contrib/neo4j-apoc-procedures#2109


When processing following JSON file:
{"info_file_properties_modified_time":"18446744062065078016",
"info_file_properties_modified_time2":18446744062065078016}
using
apoc.load.json()
We get the following error:
Failed to invoke procedure apoc.load.json: Caused by: com.fasterxml.jackson.core.exc.InputCoercionException: Numeric value (18446744062065078016) out of range of long (-9223372036854775808 - 9223372036854775807)
at [Source: (apoc.export.util.CountingInputStream); line: 3, column: 59]

The following JSON works fine, as I removed the trailing "6" from the second value:
{"info_file_properties_modified_time":"18446744062065078016",
"info_file_properties_modified_time2":1844674406206507801}

It would be great if we could specify that certain JSON fields should be treated as strings/larger numbers, or at least treat all numbers as strings.

Thank you!
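For comparison (a Python sketch of the kind of hook being requested, not an existing APOC option): many JSON parsers expose a number-decoding hook, which makes it possible to keep oversized integers as strings instead of failing:

```python
import json

LONG_MIN, LONG_MAX = -2**63, 2**63 - 1

def int_or_string(raw: str):
    """Keep integers that fit a 64-bit long; fall back to the raw string."""
    value = int(raw)
    return value if LONG_MIN <= value <= LONG_MAX else raw

doc = '{"t1":"18446744062065078016","t2":18446744062065078016}'
parsed = json.loads(doc, parse_int=int_or_string)
# parsed["t2"] becomes the string "18446744062065078016" instead of overflowing
```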

[DUPLICATED] Round trip with apoc graphml coerces types to string

Issue by Rosswart
Tuesday Jul 20, 2021 at 10:03 GMT
Originally opened as neo4j-contrib/neo4j-apoc-procedures#2099


Expected Behavior

Exporting a graph with

    CALL apoc.export.graphml.all('types.graphml', {useTypes: true, storeNodeIds: false})

into graphml

    <?xml version="1.0" encoding="UTF-8"?>
    <graphml xmlns="http://graphml.graphdrawing.org/xmlns" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://graphml.graphdrawing.org/xmlns http://graphml.graphdrawing.org/xmlns/1.0/graphml.xsd">
    <key id="b" for="node" attr.name="b" attr.type="string"/>
    <key id="c" for="node" attr.name="c" attr.type="string" attr.list="string"/>
    <key id="d" for="node" attr.name="d" attr.type="long"/>
    <graph id="G" edgedefault="directed">
    <node id="n0"><data key="a">string</data><data key="b">string</data></node>
    <node id="n1"><data key="a">string</data><data key="b">string</data></node>
    <node id="n2"><data key="a">["array","string"]</data><data key="c">["array","string"]</data></node>
    <node id="n3"><data key="a">["array","string"]</data><data key="c">["array","string"]</data></node>
    <node id="n4"><data key="a">123</data><data key="d">123</data></node>
    <node id="n5"><data key="a">123</data><data key="d">123</data></node>
    </graph>
    </graphml>

and then importing this again with

    CALL apoc.import.graphml('types.graphml', {readLabels: true})

should reproduce the same graph

    MATCH (n)
    RETURN
      apoc.meta.type(n.a), n.a,
      apoc.meta.type(n.b), n.b,
      apoc.meta.type(n.c), n.c,
      apoc.meta.type(n.d), n.d

with all properties having the same type as in the original graph:

    ╒═════════════════════╤══════════════════╤═════════════════════╤════════╤═════════════════════╤══════════════════╤═════════════════════╤═════╕
    │"apoc.meta.type(n.a)"│"n.a"             │"apoc.meta.type(n.b)"│"n.b"   │"apoc.meta.type(n.c)"│"n.c"             │"apoc.meta.type(n.d)"│"n.d"│
    ╞═════════════════════╪══════════════════╪═════════════════════╪════════╪═════════════════════╪══════════════════╪═════════════════════╪═════╡
    │"STRING"             │"string"          │"STRING"             │"string"│"NULL"               │null              │"NULL"               │null │
    ├─────────────────────┼──────────────────┼─────────────────────┼────────┼─────────────────────┼──────────────────┼─────────────────────┼─────┤
    │"STRING"             │"string"          │"STRING"             │"string"│"NULL"               │null              │"NULL"               │null │
    ├─────────────────────┼──────────────────┼─────────────────────┼────────┼─────────────────────┼──────────────────┼─────────────────────┼─────┤
    │"String[]"           │["array","string"]│"NULL"               │null    │"String[]"           │["array","string"]│"NULL"               │null │
    ├─────────────────────┼──────────────────┼─────────────────────┼────────┼─────────────────────┼──────────────────┼─────────────────────┼─────┤
    │"String[]"           │["array","string"]│"NULL"               │null    │"String[]"           │["array","string"]│"NULL"               │null │
    ├─────────────────────┼──────────────────┼─────────────────────┼────────┼─────────────────────┼──────────────────┼─────────────────────┼─────┤
    │"INTEGER"            │123               │"NULL"               │null    │"NULL"               │null              │"INTEGER"            │123  │
    ├─────────────────────┼──────────────────┼─────────────────────┼────────┼─────────────────────┼──────────────────┼─────────────────────┼─────┤
    │"INTEGER"            │123               │"NULL"               │null    │"NULL"               │null              │"INTEGER"            │123  │
    └─────────────────────┴──────────────────┴─────────────────────┴────────┴─────────────────────┴──────────────────┴─────────────────────┴─────┘

Actual Behavior

If the original node properties are of mixed type (see "n.a"), then the imported properties are coerced to string. All properties with pure types do have the original type.

    ╒═════════════════════╤════════════════════╤═════════════════════╤════════╤═════════════════════╤══════════════════╤═════════════════════╤═════╕
    │"apoc.meta.type(n.a)"│"n.a"               │"apoc.meta.type(n.b)"│"n.b"   │"apoc.meta.type(n.c)"│"n.c"             │"apoc.meta.type(n.d)"│"n.d"│
    ╞═════════════════════╪════════════════════╪═════════════════════╪════════╪═════════════════════╪══════════════════╪═════════════════════╪═════╡
    │"STRING"             │"string"            │"STRING"             │"string"│"NULL"               │null              │"NULL"               │null │
    ├─────────────────────┼────────────────────┼─────────────────────┼────────┼─────────────────────┼──────────────────┼─────────────────────┼─────┤
    │"STRING"             │"string"            │"STRING"             │"string"│"NULL"               │null              │"NULL"               │null │
    ├─────────────────────┼────────────────────┼─────────────────────┼────────┼─────────────────────┼──────────────────┼─────────────────────┼─────┤
    │"STRING"             │"["array","string"]"│"NULL"               │null    │"String[]"           │["array","string"]│"NULL"               │null │
    ├─────────────────────┼────────────────────┼─────────────────────┼────────┼─────────────────────┼──────────────────┼─────────────────────┼─────┤
    │"STRING"             │"["array","string"]"│"NULL"               │null    │"String[]"           │["array","string"]│"NULL"               │null │
    ├─────────────────────┼────────────────────┼─────────────────────┼────────┼─────────────────────┼──────────────────┼─────────────────────┼─────┤
    │"STRING"             │"123"               │"NULL"               │null    │"NULL"               │null              │"INTEGER"            │123  │
    ├─────────────────────┼────────────────────┼─────────────────────┼────────┼─────────────────────┼──────────────────┼─────────────────────┼─────┤
    │"STRING"             │"123"               │"NULL"               │null    │"NULL"               │null              │"INTEGER"            │123  │
    └─────────────────────┴────────────────────┴─────────────────────┴────────┴─────────────────────┴──────────────────┴─────────────────────┴─────┘
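The coercion follows from GraphML's model: each <key> element declares a single attr.type, so a property observed with mixed types across nodes can only round-trip as the common denominator. A sketch of that widening rule (an illustration of the constraint, not APOC's actual code):

```python
def widened_attr_type(observed_types):
    """GraphML allows one attr.type per key: a single observed type is
    kept, while mixed observations collapse to 'string'."""
    unique = set(observed_types)
    return unique.pop() if len(unique) == 1 else "string"

print(widened_attr_type(["long", "long"]))    # long
print(widened_attr_type(["long", "string"]))  # string
```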

How to Reproduce the Problem

Simple Dataset

    CREATE ({a: "string",           b: "string"})
    CREATE ({a: "string",           b: "string"})
    CREATE ({a: ["array", "string"], c: ["array", "string"]})
    CREATE ({a: ["array", "string"], c: ["array", "string"]})
    CREATE ({a: 123,                d: 123})
    CREATE ({a: 123,                d: 123})

Steps (Mandatory)

  1. Create graph
  2. Export
  3. Import

Specifications

Versions

  • OS: Windows 10 Enterprise 10.0.17763 Build 17763
  • Neo4j: neo4j-3.5.28
  • Neo4j-Apoc: apoc-3.5.0.15

Failed to invoke procedure `apoc.export.cypher.all`:

Issue by ayushKataria
Wednesday Apr 13, 2022 at 09:12 GMT
Originally opened as neo4j-contrib/neo4j-apoc-procedures#2747


Getting the below error when trying to run apoc.export.cypher.all:

Caused by: org.neo4j.internal.kernel.api.exceptions.EntityNotFoundException: Unable to load NODE with id 783244.

The node id changes every time we run it. If we restart Neo4j and rerun, it works fine, but after a few days we start getting this error again.

Neo4j version: 4.2.8
apoc version: 4.0.1

Server no longer accepts writes, during apoc.trigger.add

Hi,

We have 3 cores and are trying to add a trigger via the apoc.trigger.add method. The cypher we are using looks like this CALL apoc.trigger.add('<name>', 'UNWIND apoc.trigger.nodesByLabel($assignedNodeProperties, "<node>") AS node CALL rabbitmq.event("<queue>", labels(node), node) return count(*)', { phase: 'before' })

We have a custom apoc that puts any changes to nodes onto a rabbitmq queue and this is to setup that link.
It works fine in a standalone setup, but with more than one core it keeps returning
Server at <server> no longer accepts writes

Is there something I can do to my cypher in order to make it work in a cluster setup?

We have the following setup:
Neo4j version 4.1.2
OS: docker
Host: GKE

Default parallel concurrency of 50 is too high

Issue by wdroste
Wednesday Jun 08, 2022 at 18:27 GMT
Originally opened as neo4j-contrib/neo4j-apoc-procedures#2971


Expected Behavior (Mandatory)

For parallel execution there are often only 4 CPUs, and even when 50 CPUs are available, 50 threads is just too many. I would expect a default of 4.
apoc.periodic.iterate('delete query', { batchSize: 1000, parallel: true})

I expected this to be 4-6

Actual Behavior (Mandatory)

The actual behavior is 50 threads, which either tends to lock on deletes or overtax the system.
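A CPU-derived default along the lines requested above could look like this sketch (a hypothetical heuristic, not APOC's implementation):

```python
import os

def default_concurrency(cap: int = 6) -> int:
    """Derive a parallel batch concurrency from the machine's CPU count,
    capped so small boxes aren't handed 50 threads."""
    cpus = os.cpu_count() or 4  # cpu_count() can return None
    return max(1, min(cpus, cap))

print(default_concurrency())  # between 1 and 6 on any machine
```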

How to Reproduce the Problem

Run a delete on 1m nodes on a small box.

CALL apoc.periodic.iterate("MATCH (n) RETURN n", "DETACH DELETE n", { batchSize: 1000, parallel: true})

Versions

  • OS: NA
  • Neo4j: 4.0+
  • Neo4j-Apoc: 4.0+

`apoc.meta.relTypeProperties` does not return relationship information

Issue by conker84
Wednesday May 25, 2022 at 11:35 GMT
Originally opened as neo4j-contrib/neo4j-apoc-procedures#2934


Expected Behavior (Mandatory)

Given the movie database if we run the following query:

CALL apoc.meta.relTypeProperties({includeRels: ['REVIEWED']})

I would expect the following result

+-------------------------------------------------------------------------------------------------------------------------------------------+
| relType       | sourceNodeLabels | targetNodeLabels | propertyName | propertyTypes | mandatory | propertyObservations | totalObservations |
+-------------------------------------------------------------------------------------------------------------------------------------------+
| ":`REVIEWED`" | ["Person"]       | ["Movie"]        | "rating"     | ["Long"]      | false     | 1                    | 1                 |
| ":`REVIEWED`" | ["Person"]       | ["Movie"]        | "summary"    | ["String"]    | false     | 1                    | 1                 |
+-------------------------------------------------------------------------------------------------------------------------------------------+
2 rows

Actual Behavior (Mandatory)

The actual response from the procedure is:

+-------------------------------------------------------------------------------------------------------------------------------------------+
| relType       | sourceNodeLabels | targetNodeLabels | propertyName | propertyTypes | mandatory | propertyObservations | totalObservations |
+-------------------------------------------------------------------------------------------------------------------------------------------+
+-------------------------------------------------------------------------------------------------------------------------------------------+
0 rows

How to Reproduce the Problem

Simple Dataset (where it's possible)

Just use the movie database

Steps (Mandatory)

  1. CALL apoc.meta.relTypeProperties({includeRels: ['REVIEWED']})

Versions

  • OS: MacOS
  • Neo4j: AuraDB
  • Neo4j-Apoc: 4.4.0.4
