

java-ipfs-http-client


A Java client for the IPFS HTTP API

Table of Contents

  • Install
  • Building
  • Running tests
  • Usage
  • Dependencies
  • Releasing
  • Contribute
  • License

Install

Official releases

You can use this project by including ipfs.jar from one of the releases along with the dependencies.

Maven, Gradle, SBT

Package managers are supported through JitPack, which supports Maven, Gradle, SBT, etc.

For Maven, add the following sections to your pom.xml (replacing $LATEST_VERSION):

  <repositories>
    <repository>
        <id>jitpack.io</id>
        <url>https://jitpack.io</url>
    </repository>
  </repositories>

  <dependencies>
    <dependency>
      <groupId>com.github.ipfs</groupId>
      <artifactId>java-ipfs-http-client</artifactId>
      <version>$LATEST_VERSION</version>
    </dependency>
  </dependencies>
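
For Gradle, the equivalent configuration would look like the following (a sketch following JitPack's usual convention, again replacing $LATEST_VERSION):

  repositories {
      maven { url 'https://jitpack.io' }
  }

  dependencies {
      implementation 'com.github.ipfs:java-ipfs-http-client:$LATEST_VERSION'
  }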

Building

  • Clone this repository
  • Run ant dist
  • Copy dist/ipfs.jar into your project. Appropriate versions of other dependencies are also included in dist/lib/.
  • Run tests using ant test.

Running tests

To run the tests, an IPFS daemon must be running on the 127.0.0.1 interface.

IPFS installation

Command line

Download ipfs from https://dist.ipfs.io/#go-ipfs and run it with ipfs daemon --enable-pubsub-experiment

Docker Compose

Run docker-compose up from the project's root directory. Check docker-compose.yml for more details.

Usage

Create an IPFS instance with:

IPFS ipfs = new IPFS("/ip4/127.0.0.1/tcp/5001");

Then run commands like:

ipfs.refs.local();

To add a file, use the following (the add method returns a list of MerkleNodes; in this case there is only one element):

NamedStreamable.FileWrapper file = new NamedStreamable.FileWrapper(new File("hello.txt"));
MerkleNode addResult = ipfs.add(file).get(0);

To add a byte[] use:

NamedStreamable.ByteArrayWrapper file = new NamedStreamable.ByteArrayWrapper("hello.txt", "G'day world! IPFS rocks!".getBytes());
MerkleNode addResult = ipfs.add(file).get(0);

To get a file use:

Multihash filePointer = Multihash.fromBase58("QmPZ9gcCEpqKTo6aq61g2nXGUhM4iCL3ewB6LDXZCtioEB");
byte[] fileContents = ipfs.cat(filePointer);
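
The returned byte array holds the raw file contents; writing it back out is plain JDK I/O (java.nio.file, not part of this client):

// assumes: import java.nio.file.Files; import java.nio.file.Paths;
Files.write(Paths.get("hello.txt"), fileContents);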

More example usage can be found here.

Dependencies

Current versions of dependencies are included in the ./lib directory.

Releasing

The version number is specified in build.xml and pom.xml and must be changed in both places in order to be accurately reflected in the JAR file manifest. A git tag must be added in the format vx.x.x for JitPack to work.

Contribute

Feel free to join in. All welcome. Open an issue!

This repository falls under the IPFS Code of Conduct.

License

MIT


java-ipfs-http-client's Issues

Implement Java API in servlet

Hi All,
I want to know how to use java-ipfs-api from a servlet page. Is there any example code available for this? How can I do it?
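
A minimal sketch, assuming the standard Servlet API on the classpath and a local daemon; the class name and parameter are illustrative only, and it simply reuses the cat call from the README above:

    import java.io.IOException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import io.ipfs.api.IPFS;
    import io.ipfs.multihash.Multihash;

    public class IpfsServlet extends HttpServlet {
        private final IPFS ipfs = new IPFS("/ip4/127.0.0.1/tcp/5001");

        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
            // e.g. GET /cat?hash=Qm... streams back the raw file bytes
            Multihash hash = Multihash.fromBase58(req.getParameter("hash"));
            resp.getOutputStream().write(ipfs.cat(hash));
        }
    }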

The java-ipfs-api related class documentation

Is there any class-level documentation for java-ipfs-api that I can refer to? I can't find any relevant class description documents. Do I need to analyze the source code directly?

Want to get some suggestions from everyone

Pubsub API

Any plans to support the pubsub API in Java?
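
A hedged sketch of what already appears possible, based on the pubsub calls quoted in issues further down this page (ipfs.pubsub.sub returning a Supplier, and ipfs.pubsub.pub taking a topic and a payload):

    // assumes: import java.util.function.Supplier; import io.ipfs.api.IPFS;
    IPFS ipfs = new IPFS("/ip4/127.0.0.1/tcp/5001");
    Supplier<Object> sub = ipfs.pubsub.sub("my-topic"); // subscribe; each get() blocks for the next message
    ipfs.pubsub.pub("my-topic", "hello");               // publish a string payload
    Object message = sub.get();                         // a Map describing the received message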

Android Looping on ipfs.add(file)

Hi,

I have written:

NamedStreamable.ByteArrayWrapper ipfsFile = new NamedStreamable.ByteArrayWrapper(usr + ".txt", userJSON.toString().getBytes());
MerkleNode addFile = ipfs.add(ipfsFile);

As given in the README, but it goes to the Looper class and just loops forever, causing the app to crash on MerkleNode addFile = ipfs.add(ipfsFile);. I have tried with a FileWrapper as well, and can't see anything in the source code explaining why this would happen.

Any help?

Thanks.
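
A hedged aside on the snippet above: in recent versions of the client, add returns a List<MerkleNode> (see the MerkleNode issue further down this page), so the assignment would not compile against a current jar in any case; the matching call is:

    // add() returns a List<MerkleNode>; take the first element for a single file
    MerkleNode addFile = ipfs.add(ipfsFile).get(0);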

Build issues in Android target API:23

Hello, I have issues building the IPFS library on Android with Java 1.8 support for target API 23. When I build against API 24 or 25, everything works fine.
I'm using Gradle 3.3 with the 2.3.0-beta4 plugin.

Is there any known working configuration?

build.gradle

buildscript {
    repositories {
        jcenter()
    }

    dependencies {
        classpath 'com.android.tools.build:gradle:2.3.0-beta4'
    }
}

apply plugin: 'com.android.application'

repositories {
    jcenter()
    mavenCentral()
    maven { url "https://jitpack.io" }
}

dependencies {
    compile 'com.android.support:support-v4:25.0.1'
    compile 'com.android.support:support-v13:25.0.1'
    compile 'com.android.support:cardview-v7:25.0.1'
    compile 'com.android.support:appcompat-v7:25.0.1'
    compile 'org.ethereum:geth:1.5.7'
    compile 'com.squareup.okhttp3:okhttp:3.6.0'
    compile 'com.github.ipfs:java-ipfs-api:master'
}

// The sample build uses multiple directories to
// keep boilerplate and common code separate from
// the main sample code.
List<String> dirs = [
    'main',     // main sample code; look here for the interesting stuff.
    'common',   // components that are reused by multiple samples
    'template'] // boilerplate code that is generated by the sample template process

android {
    compileSdkVersion 25
    buildToolsVersion '25.0.2'

    defaultConfig {
        minSdkVersion 23
        targetSdkVersion 25

        jackOptions {
            enabled true
        }
    }

    compileOptions {
        sourceCompatibility JavaVersion.VERSION_1_8
        targetCompatibility JavaVersion.VERSION_1_8
    }

    sourceSets {
        main {
            dirs.each { dir ->
                java.srcDirs "src/${dir}/java"
                res.srcDirs "src/${dir}/res"
            }
        }
        androidTest.setRoot('tests')
        androidTest.java.srcDirs = ['tests/src']

    }

}

Errors:

Information:Gradle tasks [:Application:assembleDebug]
Error:Lambda coming from jar file need their interfaces on the classpath to be compiled, unknown interfaces are java.util.function.Function
...
Error:Lambda coming from jar file need their interfaces on the classpath to be compiled, unknown interfaces are java.util.function.BinaryOperator
Error:Lambda coming from jar file need their interfaces on the classpath to be compiled, unknown interfaces are java.util.function.Supplier
...
Error:Lambda coming from jar file need their interfaces on the classpath to be compiled, unknown interfaces are java.util.function.Function
Error:Lambda coming from jar file need their interfaces on the classpath to be compiled, unknown interfaces are java.util.function.Predicate
...
Error:Lambda coming from jar file need their interfaces on the classpath to be compiled, unknown interfaces are java.util.function.Function
Error:Default method io.ipfs.cid.Cid cid() not supported in Android API level less than 24
Error:Static method io.ipfs.api.IpldNode fromCBOR(io.ipfs.api.cbor.CborObject cbor) not supported in Android API level less than 24
Error:Static method io.ipfs.api.IpldNode fromJSON(java.lang.Object json) not supported in Android API level less than 24
Error:Default method byte[] serialize() not supported in Android API level less than 24
Error:Default method byte[] getContents() not supported in Android API level less than 24
Error:Default method byte[] toByteArray() not supported in Android API level less than 24
Error:Static method io.ipfs.api.cbor.CborObject fromByteArray(byte[] cbor) not supported in Android API level less than 24
Error:Static method io.ipfs.api.cbor.CborObject deserialize(io.ipfs.api.cbor.CborDecoder decoder) not supported in Android API level less than 24
Error:Execution failed for task ':Application:transformClassesWithPreJackPackagedLibrariesForDebug'.
> com.android.build.api.transform.TransformException: com.android.builder.core.JackToolchain$ToolchainException: Jack compilation exception
Information:BUILD FAILED
Information:Total time: 0.564 secs
Information:37 errors
Information:0 warnings
Information:See complete output in console

NOTE: When compiling version v1.0.0 instead of master there are fewer errors:

Error:Lambda coming from jar file need their interfaces on the classpath to be compiled, unknown interfaces are java.util.function.Function
...
Error:Lambda coming from jar file need their interfaces on the classpath to be compiled, unknown interfaces are java.util.function.Predicate
...
Error:Default method byte[] getContents() not supported in Android API level less than 24
Error:Execution failed for task ':Application:transformClassesWithPreJackPackagedLibrariesForDebug'.
> com.android.build.api.transform.TransformException: com.android.builder.core.JackToolchain$ToolchainException: Jack compilation exception

[pubsub] Resubscribe when losing connection/Reactive Stream support

I currently encounter a problem when I am subscribed to a topic and the IPFS node goes down.
Consumer threads block indefinitely (as intended), as no more elements are pushed into the queue.
Is there any reasonable way to handle such situations?

I have wrapped all calls to sub to return a Reactive Stream but I have no way of detecting EOF.
Here is a quick example (note the missing complete call):

Flux<Map<String, ?>> ipfsMessageFlux = Flux.create(fluxSink -> {
    try {
        Supplier<Object> sub = ipfs.pubsub.sub(topic);

        while (true) {
            Object o = sub.get();
            if (fluxSink.isCancelled()) {
                return;
            }

            if (o != null) {
                if (Map.class.isAssignableFrom(o.getClass())) {
                    Map<String, ?> map = (Map<String, ?>) o;
                    fluxSink.next(map);
                } else {
                    log.warn("Refuse to emit message - object not of instance Map: {}", o);
                }
            }

            if (fluxSink.isCancelled()) {
                return;
            }
        }
    } catch (Exception e) {
        fluxSink.error(e);
    }
});

What do you think about returning a Flux (Project Reactor) or an Observable (RxJava) in Pubsub? Then we'd be able to signal the end of the stream. Would something like this fit the project's roadmap? I'd be very happy to provide a pull request.

Question: How do I generate a Cid before the data is stored?

I know java-ipfs-api just invokes the IPFS server.
But now I want to get the Cid of a byte[] before invoking it. I tried java-cid, but the Cid from java-ipfs-api is different from the Cid from java-cid (version 0).

This is my code:

        String ss = "123321";
        // IPFS data storage logic
        try {
            IPFS ipfs = new IPFS(Constants.IPFS_URL);//    "/ip4/0.0.0.0/tcp/5001"
            ipfs.refs.local();

            NamedStreamable.ByteArrayWrapper file = new NamedStreamable.ByteArrayWrapper(ss.getBytes());
            MerkleNode addResult = ipfs.add(file).get(0);
            if (null != addResult && null != addResult.hash) {
                System.out.println(addResult.hash.toString());
            }
            MessageDigest hasher = MessageDigest.getInstance("SHA-256");
            byte[] hash = hasher.digest(ss.getBytes());
            Multihash mhash = new Multihash(Multihash.Type.sha2_256, hash);
            Cid cid = Cid.buildCidV0(mhash);
            System.out.println(cid.toString());
        } catch (IOException | IllegalArgumentException | NoSuchAlgorithmException e) {
            e.printStackTrace();
        }

I get different results:

QmZE8aUxitmPdBjriC82rhhUmQbeh7KP4oEbgqBEATMct5
QmZKRiNkuVutGrvxWDr6dhp5zJa7sBM5HPWhuWF2mZgScm

Why? I suspect the IPFS server does not just SHA-256 the byte[], but performs other operations as well.

If anyone can give advice, thanks a lot!
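
A hedged note on the mismatch: go-ipfs chunks the input and wraps it in a UnixFS protobuf node before hashing, so a plain SHA-256 of the raw bytes will never reproduce the hash that add returns. One way to get the exact Cid without storing the data is the daemon's only-hash option, which this client did not expose at the time (see the "Add a method to get the hash only" issue below). A minimal raw-HTTP sketch, assuming a local daemon on port 5001:

    import java.io.*;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class OnlyHashExample {
        public static void main(String[] args) throws IOException {
            byte[] data = "123321".getBytes("UTF-8");
            String boundary = "----ipfs" + System.nanoTime();
            URL url = new URL("http://127.0.0.1:5001/api/v0/add?only-hash=true");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("POST");
            conn.setDoOutput(true);
            conn.setRequestProperty("Content-Type", "multipart/form-data; boundary=" + boundary);
            try (OutputStream out = conn.getOutputStream()) {
                // one multipart part carrying the raw bytes
                out.write(("--" + boundary + "\r\n"
                        + "Content-Disposition: form-data; name=\"file\"; filename=\"data\"\r\n"
                        + "Content-Type: application/octet-stream\r\n\r\n").getBytes("UTF-8"));
                out.write(data);
                out.write(("\r\n--" + boundary + "--\r\n").getBytes("UTF-8"));
            }
            try (BufferedReader r = new BufferedReader(new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
                System.out.println(r.readLine()); // JSON containing the Hash field
            }
        }
    }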

Directory test on Windows system seems wrong

When I run the directory test on Windows, it returns something like "DIR14930592264188763745\abcd\3.jpg", which is treated as a single file name rather than a directory structure. Under Unix/Linux it works fine and I get "DIR14930592264188763745/abcd/3.jpg". What should I do?

MerkleNode is not functioning as intended anymore (solved?)

Note: I have mostly solved this issue now

I have followed along with several examples and tested with old .jar releases.

The function below no longer works as it used to, and I can't figure out what the new implementation is supposed to be...

MerkleNode addResult = new MerkleNode(ipfs.add(file));

This function seems to want a String now... but it doesn't... and it thinks the result is a list that is not compatible with MerkleNode and won't let me cast it. But I'm probably doing or overlooking something obvious.

I'm also getting erroneous errors on the line below if I make a new project using the newest .jar, but it works if I use an old project that was built from the old Maven setup. By that I mean I delete the dependency from the old project, import the new .jar, and it magically works fine.
ipfs.refs.local();

Quite frankly, I'm really at a loss here. Is everything working as intended? Could you possibly release a full example project for us to build off of and see working...

Note that I was able to build and compile the master branch from a download, and I will do some more testing from here. Maybe the .jar included in your last push was an improper or old build, as I can see it is clearly working when running the API test...

Thanks!!! Great work btw.

update:
I was finally able to figure this out. I copied and pasted the actual source into my packages instead of using a dependency, and then I loaded all the dependencies from your original jar directly into my project. Everything seems fine now.

But the MerkleNode has definitely changed!

This is the new formatting if anyone needs to know :)

MerkleNode addResult = ipfs.add(file).get(0);
or
MerkleNode file2 = ipfs.add(new NamedStreamable.ByteArrayWrapper("Some data".getBytes())).get(0);

Here is the current code I was playing with; it successfully posted a file, which was retrievable via my IPFS node through the command prompt.

package rotatewheel;

import io.ipfs.multihash.Multihash;
import java.io.*;
import java.nio.file.*;
import java.util.*;
import io.ipfs.api.*;
import io.ipfs.multiaddr.MultiAddress;
import java.io.File;
import java.io.IOException;

public class ipfstest {

public File ifFile = null;

public void whatfile(File file) {
    ifFile = file;
    ifFile = new File("C://i.mp3");   //comment out this line in production   
    launch();
}

private void launch() {

    IPFS ipfs = new IPFS(new MultiAddress("/ip4/127.0.0.1/tcp/5001"));
    System.out.println("Connected");

    try {
        System.out.println("hostfile test");
        NamedStreamable hostFile = null;
        File holdFile = null;
        if (ifFile == null) {
            Path tempFile = Files.createTempFile("IPFS", "tmp");
            BufferedWriter w = new BufferedWriter(new FileWriter(tempFile.toFile()));
            w.append("My Data is Kenny22gdfgd.");
            w.flush();
            w.close();
            hostFile = new NamedStreamable.FileWrapper(tempFile.toFile());
            holdFile = tempFile.toFile();
        } else {
            hostFile = new NamedStreamable.FileWrapper(ifFile);
        }
        NamedStreamable file = hostFile;

     
        System.out.println("pin test");
        MerkleNode file2 = ipfs.add(file).get(0);


        Multihash hash = file2.hash;
        System.out.println(file2.hash + "hash: ");
        Map<Multihash, Object> ls1 = ipfs.pin.ls(IPFS.PinType.all);
        boolean pinned = ls1.containsKey(hash);
        List<Multihash> rm = ipfs.pin.rm(hash);
        List<Multihash> add2 = ipfs.pin.add(hash);
        List<Multihash> add3 = ipfs.pin.add(hash);
        Map<Multihash, Object> ls = ipfs.pin.ls(IPFS.PinType.recursive);
        ipfs.repo.gc();
        Map<Multihash, Object> ls2 = ipfs.pin.ls(IPFS.PinType.recursive);
        boolean stillPinned = ls2.containsKey(hash);
    } catch (IOException er) {
    }
    System.out.println("file put done: ");
}

public static void main(String[] args) {
    ipfstest app = new ipfstest();
    //app.launch();
    app.whatfile(null);
}

private IPFS ipfs;
}

ipfs.add() with FileWrapper should not URL-encode filenames.

If I try to add a file whose name contains a character affected by URLEncoder.encode, the file name in IPFS ends up as the encoded version of the original. To reproduce:

$ mkdir /tmp/foo
$ touch '/tmp/foo/bar$baz.txt'

Run the following program:

import java.io.File;
import java.io.IOException;
import java.util.List;

import io.ipfs.api.IPFS;
import io.ipfs.api.MerkleNode;
import io.ipfs.api.NamedStreamable;
import io.ipfs.multiaddr.MultiAddress;

public class Test {
  public static void main(String... args) throws IOException {
    IPFS ipfs = new IPFS(new MultiAddress("/ip4/127.0.0.1/tcp/5001"));
    NamedStreamable dir = new NamedStreamable.FileWrapper(new File("/tmp/foo"));
    List<MerkleNode> nodes = ipfs.add(dir);
    System.out.println("/ipfs/" + nodes.get(nodes.size() - 1).hash.toBase58());
  }
}

This prints out: /ipfs/QmSuq7496UJ3b5fZJWWnZknWxtfazGDQ3Gw5gfG1YNfvdd

$ ipfs ls /ipfs/QmSuq7496UJ3b5fZJWWnZknWxtfazGDQ3Gw5gfG1YNfvdd
QmbFMke1KXqnYyBBWxB74N4c5SBnJMVAiMNRcGu6x1AwQH 6 bar%24baz.txt

Compare this with the behavior using ipfs add -r:

$ ipfs add -r /tmp/foo
added QmbFMke1KXqnYyBBWxB74N4c5SBnJMVAiMNRcGu6x1AwQH foo/bar$baz.txt
added QmYYohh3cVpiMCwsr4hLpRdN6qmsKrrPDdNJMYVqekk2cq foo
$ ipfs ls /ipfs/QmYYohh3cVpiMCwsr4hLpRdN6qmsKrrPDdNJMYVqekk2cq
QmbFMke1KXqnYyBBWxB74N4c5SBnJMVAiMNRcGu6x1AwQH 6 bar$baz.txt

Api structure

The HTTP API looks the way it does because of the technology (REST) used.
And even that API shows that its designers thought in an object-oriented way.
I believe doing things like this would be more natural:

@Test
public void we_can_add_a_link_to_an_object() throws IOException {
	IPFSObject myOldObj = IPFSObject.getMyObj(ipfs);
	IPFSObject myNewObj = new IPFSObject(ipfs);
	myNewObj.addLink("next", myOldObj);
	assertEquals(myOldObj, myNewObj.getLinked("next"));
}

I am in the process of creating such an API as a side effect of one of my projects. Currently it uses java-ipfs-api as a low-level API. However, I believe there are things which should be remediated there as well (173 warnings in a single project and more than 10 classes in a single file are sure signs of it), and I am not even convinced that having the low-level API is a good idea in the first place.
I would be happy to discuss it.

Inner classes of IPFS are not public

The inner classes of the IPFS object aren't public, so they can't be accessed from outside the org.ipfs package.
My current workaround is to put code interfacing with those parts in a class in org.ipfs, but that's not exactly a good solution...

Here's an example of the kind of error, produced by attempting:

    IPFS myself = new IPFS("/ip4/127.0.0.1/tcp/5001");
    myself.name.resolve(Multihash.fromBase58("QmarsUDwQ5ak34KU6xJbr2iMuatB7e69WfuSjEBG3v7zvu"));

Error:(18, 20) java: resolve(org.ipfs.Multihash) in org.ipfs.IPFS.Name is defined in an inaccessible class or interface
I'm using OpenJDK, javac -version reports "javac 1.8.0_65".
If the IDE matters (it shouldn't, given these are compiler errors, but still), I'm using JetBrains IDEA Community Edition.

IPNS support

Hi, may I know whether this library supports IPNS?

There is a bug in Windows

When I try to add a directory to IPFS, the hash of the directory and the file information are wrong.
The directory hash is always QmUNLLsPACCz1vLxQVkXqqLX5R1X345qqfHbsf67hvA3Nn, which is an empty directory, and file names contain many backslashes, like "upload\\a.txt" and "\\subdir\\a.txt".
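
A hedged workaround until this is fixed in the client: IPFS paths always use forward slashes regardless of platform, so Windows separators can be normalized before names are sent, e.g.:

    // given a Windows-style path in localPath, normalize its separators
    // to the "/" that IPFS paths expect
    String ipfsPath = localPath.replace('\\', '/');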

Issue when calling pub()

I have a solution locally. I will be submitting a merge request shortly.

The code below will break with an error 400:

        Object ls = ipfs.pubsub.ls();
        Object peers = ipfs.pubsub.peers();
        String topic = "topic" + System.nanoTime();
        Supplier<Object> sub = ipfs.pubsub.sub(topic);
        Object first = sub.get();
        Assert.assertTrue(first.equals(Collections.emptyMap()));
        String data = "Hello World!";
        Object pub = ipfs.pubsub.pub(topic, data);
        Object second = sub.get();
        Assert.assertTrue( ! second.equals(Collections.emptyMap()));
java.lang.RuntimeException: IOException contacting IPFS daemon.
Trailer: null 400 Bad Request
	at io.ipfs.api.IPFS.get(IPFS.java:592)
	at io.ipfs.api.IPFS.retrieve(IPFS.java:571)
	at io.ipfs.api.IPFS.retrieveAndParse(IPFS.java:553)
	at io.ipfs.api.IPFS.access$100(IPFS.java:15)
	at io.ipfs.api.IPFS$Pubsub.pub(IPFS.java:235)
	at io.ipfs.api.APITest.pubsub(APITest.java:429)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
	at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
	at com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
	at com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:47)
	at com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:242)
	at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70)
Caused by: java.io.IOException: Server returned HTTP response code: 400 for URL: http://127.0.0.1:5001/api/v0/pubsub/pub?arg=topic4554109927506&arg=Hello World!
	at sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1876)
	at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1474)
	at io.ipfs.api.IPFS.get(IPFS.java:580)
	... 27 more

ipfs.dht.findpeer()

Not actually sure whether this is an issue or not, but I figured I'd bring it up anyway:

ipfs.dht.findpeer() currently expects a MultiAddress, while the command-line version expects a peer ID:

From the Commands | IPFS Docs:

ipfs dht findpeer

USAGE
ipfs dht findpeer <peerID>... - Query the DHT for all of the multiaddresses associated with a Peer ID.

SYNOPSIS
ipfs dht findpeer [--verbose | -v] [--] <peerID>...

ARGUMENTS

<peerID>... - The ID of the peer to search for.

OPTIONS

-v, --verbose bool - Print extra information. Default: false.

DESCRIPTION

Outputs a list of newline-delimited multiaddresses.

Shouldn't this function take in a Multihash, namely the peer ID that you are trying to "find"?
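
A hedged sketch of what the call would look like under the CLI semantics quoted above (this findpeer signature is hypothetical; the current code wants a MultiAddress):

    // Hypothetical: findpeer taking the peer ID as a Multihash,
    // returning that peer's multiaddresses.
    Multihash peerId = Multihash.fromBase58("QmarsUDwQ5ak34KU6xJbr2iMuatB7e69WfuSjEBG3v7zvu");
    ipfs.dht.findpeer(peerId);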

Unable to add file to IPFS using add method

I'm using IPFS version 0.4.2 and have cloned branch 0.4.2. Using this code:

package org.ipfs.api;

import java.io.*;

public class Main {
    public static void main(String[] args) throws Exception {
        IPFS m = new IPFS("myhost", 8080);
        NamedStreamable.ByteArrayWrapper file = new NamedStreamable.ByteArrayWrapper("hello.txt", "G'day world! IPFS rocks!".getBytes());

        try {
            System.out.println(m.add(file));
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}


This produces the following error:

java.io.IOException: Server returned status: 404 with body:  and Trailer header: null
    at org.ipfs.api.Multipart.finish(Multipart.java:106)
    at org.ipfs.api.IPFS.add(IPFS.java:64)
    at org.ipfs.api.IPFS.add(IPFS.java:52)
    at org.ipfs.api.Main.main(Main.java:16)



Should this work as is? Or am I missing something?
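
A hedged observation: port 8080 is normally the go-ipfs gateway, while the HTTP API that add talks to listens on port 5001 by default, which would explain the 404. Under that assumption the constructor call would be:

    IPFS m = new IPFS("myhost", 5001); // API port, rather than the 8080 gateway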

I don't understand the constructor

I started using the API with 127.0.0.1 as the address in the IPFS constructor, and all worked fine, because I had my local IPFS node up, of course. But I'd like to try accessing another node, so I changed to a different IP address and turned off my local node, and got the exception below. I used this constructor argument:

new IPFS("/ip4/ipfs.innovandalism.eu/tcp/5001")

It turns out this wouldn't work anyway, because innovandalism.eu doesn't allow the kind of access I was thinking about, but why would the Java API access localhost if I set it to connect to another machine?


Connected to the target VM, address: '127.0.0.1:45596', transport: 'socket'

Connection refused
java.net.ConnectException: Connection refused
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:345)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:589)
at java.net.Socket.connect(Socket.java:538)
at sun.net.NetworkClient.doConnect(NetworkClient.java:180)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:432)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:527)
at sun.net.www.http.HttpClient.(HttpClient.java:211)
at sun.net.www.http.HttpClient.New(HttpClient.java:308)
at sun.net.www.http.HttpClient.New(HttpClient.java:326)
at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:1168)
at sun.net.www.protocol.http.HttpURLConnection.plainConnect0(HttpURLConnection.java:1104)
at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:998)
at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:932)
at sun.net.www.protocol.http.HttpURLConnection.getOutputStream0(HttpURLConnection.java:1282)
at sun.net.www.protocol.http.HttpURLConnection.getOutputStream(HttpURLConnection.java:1257)
at org.ipfs.api.Multipart.(Multipart.java:27)
at org.ipfs.api.IPFS.add(IPFS.java:54)
at org.ipfs.api.IPFS.add(IPFS.java:50)
at xyztr.IPFSProxy$.send(IPFSProxy.scala:26)
...
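
A hedged note: the /ip4/ multiaddr component expects a literal IPv4 address rather than a hostname, which may be why the connection did not go where expected. Under that assumption, one workaround is to resolve the hostname first:

    // Resolve the hostname to an IPv4 address before building the multiaddr;
    // /ip4/ expects a literal address (a hostname would notionally need /dns4/).
    // InetAddress.getByName throws UnknownHostException if resolution fails.
    String ip = java.net.InetAddress.getByName("ipfs.innovandalism.eu").getHostAddress();
    IPFS ipfs = new IPFS("/ip4/" + ip + "/tcp/5001");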

IPNS - Publish a CID

Hello,
I'm currently trying to replicate the following behavior with java-ipfs-api:

Add a DAG node to IPFS

$ echo '{"name":"blogpost","documents":[]}'|ipfs dag put
zdpuAknRh1Kro2r2xBDKiXyTiwA3Nu5XcmvjRPA1VNjH41NF7

Publish the CID of the DAG Object to an IPNS namespace

$ ipfs name publish zdpuAknRh1Kro2r2xBDKiXyTiwA3Nu5XcmvjRPA1VNjH41NF7 --key=test-dag
Published to QmdCSru6FJRp3gYSuAcg64az6mkc7SXhSvEHry8aKUeMwu: /ipfs/zdpuAknRh1Kro2r2xBDKiXyTiwA3Nu5XcmvjRPA1VNjH41NF7

Resolve the IPNS namespace and get the DAG node

$ echo $(ipfs name resolve QmdCSru6FJRp3gYSuAcg64az6mkc7SXhSvEHry8aKUeMwu) | ipfs dag get
{"documents":[],"name":"blogpost"}

Using java-ipfs-api, I'm doing:

  1. Running a go-ipfs node
$ ipfs daemon
  2. Executing the following code
    @Test
    public void issue() throws Exception {
        
        try { 
            IPFS ipfs = new IPFS("localhost", 5001);
            
            // JSON document
            String json = "{\"name\":\"blogpost\",\"documents\":[]}";
            log.info("json: {}", json);
            
            // Add a DAG node to IPFS
            MerkleNode merkleNode = ipfs.dag.put("json", json.getBytes());
            log.info("store [json: {}] - {}", json, merkleNode.toJSON());
            Assert.assertEquals("expected to be zdpuAknRh1Kro2r2xBDKiXyTiwA3Nu5XcmvjRPA1VNjH41NF7" , "zdpuAvQHo4UMMFvGiYJ5yptX4JFZtX77jz457ebwQiToG26TJ", merkleNode.hash.toString());
            
            // Get a DAG node
            byte[] res = ipfs.dag.get(Cid.buildCidV0(merkleNode.hash));
            log.info("fetch({}): {}", merkleNode.hash.toString(), new String(res));
            Assert.assertEquals("Should be equals", json, new String(res));
            
            // Publish to IPNS
            Map result = ipfs.name.publish(merkleNode.hash);
            log.info("result {}", result);
            
            // Resolve from IPNS
            String resolved = ipfs.name.resolve(Multihash.fromBase58((String) result.get("Name")));
            log.info("resolved {}", resolved);
            Assert.assertEquals("Should be equals", resolved, merkleNode.hash.toBase58());
               
        } catch(Exception e) {
            log.error("Error", e);
            throw e;
        }
    }

Unfortunately it failed on the IPNS publication ipfs.name.publish(merkleNode.hash)

09:55:01.578 [main] INFO TestIPNS - json: {"name":"blogpost","documents":[]}
09:55:01.606 [main] INFO TestIPNS - store [json: {"name":"blogpost","documents":[]}] - {Hash=zdpuAvQHo4UMMFvGiYJ5yptX4JFZtX77jz457ebwQiToG26TJ, Links=[], Size=34}
09:55:01.609 [main] INFO TestIPNS - fetch(zdpuAvQHo4UMMFvGiYJ5yptX4JFZtX77jz457ebwQiToG26TJ): {"name":"blogpost","documents":[]}
09:55:01.616 [main] ERROR TestIPNS - Error
java.lang.RuntimeException: IOException contacting IPFS daemon.
Trailer: [X-Stream-Error] {"Message":"unexpected EOF","Code":0,"Type":"error"}

	at io.ipfs.api.IPFS.get(IPFS.java:592)
	at io.ipfs.api.IPFS.retrieve(IPFS.java:571)
	at io.ipfs.api.IPFS.retrieveAndParse(IPFS.java:553)
	at io.ipfs.api.IPFS.retrieveMap(IPFS.java:549)
	at io.ipfs.api.IPFS.access$200(IPFS.java:15)
	at io.ipfs.api.IPFS$Name.publish(IPFS.java:361)
	at io.ipfs.api.IPFS$Name.publish(IPFS.java:357)
	at TestIPNS.issue(TestIPNS.java:41)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
	at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:86)
	at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
	at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:538)
	at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:760)
	at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:460)
	at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:206)
Caused by: java.io.IOException: Server returned HTTP response code: 500 for URL: http://localhost:5001/api/v0/name/publish?arg=/ipfs/zdpuAvQHo4UMMFvGiYJ5yptX4JFZtX77jz457ebwQiToG26TJ
	at sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1894)
	at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1492)
	at io.ipfs.api.IPFS.get(IPFS.java:580)
	... 30 common frames omitted

The IPFS daemon doesn't log anything.

I acknowledge the issue might be related to the API itself and not the Java client directly. Still investigating...

Wrapping files

How do I wrap a file while adding it to IPFS?
Or, how do I set the --wrap-with-directory flag through this client?
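
A hedged pointer: the daemon's add endpoint accepts the same flag as a query parameter, so until the client exposes an option for it, a raw call could reuse the multipart upload from the only-hash sketch earlier on this page, aimed at:

    // Endpoint sketch only; the multipart body is the same as for a normal add.
    URL url = new URL("http://127.0.0.1:5001/api/v0/add?wrap-with-directory=true");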

Support for optional url params from API?

Is there any support for optional URL params for an endpoint? For most of the endpoints, I only see the required params as settable. For instance, /name/publish only has the "arg" parameter available.

go-ipfs Version v0.4.13 for windows 64bit

Hi,

The Java API library functions fine with the 28/09/2017 version of the go-ipfs client. Now that I have updated to the current version (v0.4.13 for Windows 64-bit), the JSON messages from the server no longer seem to be compatible with the Java API library (java-ipfs-api v1.1.1).

My code:
NamedStreamable.ByteArrayWrapper file = new NamedStreamable.ByteArrayWrapper(guid,
r1.getCanonicalString().getBytes());
MerkleNode node = ipfs.add(file);
It now gets the following error:

java.lang.ClassCastException: java.lang.String cannot be cast to java.lang.Integer
at io.ipfs.api.MerkleNode.fromJSON(MerkleNode.java:55)
at io.ipfs.api.IPFS.lambda$add$1(IPFS.java:89)
at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1380)
at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
at io.ipfs.api.IPFS.add(IPFS.java:90)
at io.ipfs.api.IPFS.add(IPFS.java:70)

Kind regards,

J.

MerkleNode.class : ClassCastException (go-ipfs:v0.4.11)

Since version v0.4.11 of the IPFS client, I get an error in the MerkleNode class.
The error is a ClassCastException in the following piece of code (lines 69-75).

     if (json.containsKey("Size")) {
        Optional.empty();
        var10000 = Optional.of((Integer)json.get("Size"));
      } else {
        Optional.empty();
        var10000 = Optional.empty();
      }

The ClassCastException indicates that json.get("Size") is actually a String and cannot be cast to Integer.

Unit tests do not pass

Environment: JDK 1.8, Windows 7, IPFS 0.4.2
There are 5 tests that do not pass; they are refsTest, pinTest, singleFileTest, fileContentsTest, and hostFileTest.

Incorrect api call in Pubsub.pub()

Hi,
looks like the current lines #236-238 in IPFS.java should read:
public Object pub(String topic, String data) throws IOException { return IPFS.this.retrieveAndParse("pubsub/pub?arg="+topic + "&arg=" + data); }
instead of
public Object pub(String topic, String data) throws IOException { return IPFS.this.retrieveAndParse("pubsub/peers?arg=" + topic + "&arg=" + data); }
Cheers

Add a method to get the hash only

We need a new method to grab only the hash, without loading the file into IPFS. This can be done by passing an only-hash parameter to the API:

add?only-hash=true

Minimal version check critical error

Since version v0.4.10, the java-ipfs-api library is no longer usable.
The cause is the following code:

       try {
            String ipfsVersion = version();
            String[] parts = ipfsVersion.split("\\.");
            String[] minParts = MIN_VERSION.split("\\.");
            if (parts[0].compareTo(minParts[0]) < 0
                    || parts[1].compareTo(minParts[1]) < 0
                    || parts[2].compareTo(minParts[2]) < 0)
                throw new IllegalStateException("You need to use a more recent version of IPFS! >= " + MIN_VERSION);
        } catch (IOException e) {
            throw new RuntimeException(e);
        }

See pull request : ipfs/java-ipfs-api#29

The library is not compatible with version v0.4.10 as long as this issue is not fixed.
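
The underlying problem, as a hedged reading of the snippet above: the components are compared as strings, so "10" sorts before "4" and v0.4.10 fails the check, and the ||-chained comparison also rejects versions whose later components are smaller even when an earlier component is larger. A minimal numeric comparison sketch (assuming purely numeric components) that avoids both issues:

    // Compare dotted version strings numerically, component by component.
    private static boolean atLeast(String version, String min) {
        String[] a = version.split("\\.");
        String[] b = min.split("\\.");
        for (int i = 0; i < Math.min(a.length, b.length); i++) {
            int cmp = Integer.compare(Integer.parseInt(a[i]), Integer.parseInt(b[i]));
            if (cmp != 0)
                return cmp > 0;
        }
        return a.length >= b.length; // all shared components were equal
    }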

dependencies included in the distribution

Having binary blobs which are outputs of other projects is not good.
The transition could be made in the following way:

  1. document which dependency is obtained from where
    my guesses:
    multiaddr.jar multibase.jar multihash.jar: https://github.com/multiformats
    hamcrest-core-1.3.jar junit-4.12.jar: maven repo
    cid.jar: ??? I have found only https://github.com/ipfs/go-cid
  2. modify pom.xml to obtain the dependencies at compile time (and build.xml, but actually you should delete it)
  3. delete the lib folder
  4. nudge multiformats to be good citizens and publish to Maven. Update pom.xml when done.
    You should also publish to Maven, but that's another issue.
    If you do 1), I might do 2) for you.

ipfs.add() Seemingly produces random results when adding a directory that contains a .jpeg

The following code seems to produce random results:

boolean flag = false;
while (!flag) {
    addResult = ipfs.add(file);
    System.out.println(addResult.size());
    for (MerkleNode mn : addResult) {
        System.out.println(mn.name);
        if (mn.name.get().equals("home")) {
            flag = true;
        }
    }
}

example output:

.jpeg included in directory:

5
Optional[home/examples+in+examples/examples+in+examples+in+examples/Exampleception]
Optional[home/examples+in+examples/Examples%5E2]
Optional[home/hello.txt]
Optional[home/hello2.txt]
Optional[home/hello3]
5
Optional[home/examples+in+examples/examples+in+examples+in+examples/Exampleception]
Optional[home/examples+in+examples/Examples%5E2]
Optional[home/hello.txt]
Optional[home/hello2.txt]
Optional[home/hello3]
12
Optional[home/examples+in+examples/examples+in+examples+in+examples/Exampleception]
Optional[home/examples+in+examples/Examples%5E2]
Optional[home/hello.txt]
Optional[home/hello2.txt]
Optional[home/hello3]
Optional[home/trifocal.jpeg]
Optional[home/undo.txt]
Optional[home/Zexamples/Zexample]
Optional[home/Zexamples]
Optional[home/examples+in+examples/examples+in+examples+in+examples]
Optional[home/examples+in+examples]
Optional[home]

2nd run:
5
Optional[home/examples+in+examples/examples+in+examples+in+examples/Exampleception]
Optional[home/examples+in+examples/Examples%5E2]
Optional[home/hello.txt]
Optional[home/hello2.txt]
Optional[home/hello3]
5
Optional[home/examples+in+examples/examples+in+examples+in+examples/Exampleception]
Optional[home/examples+in+examples/Examples%5E2]
Optional[home/hello.txt]
Optional[home/hello2.txt]
Optional[home/hello3]
5
Optional[home/examples+in+examples/examples+in+examples+in+examples/Exampleception]
Optional[home/examples+in+examples/Examples%5E2]
Optional[home/hello.txt]
Optional[home/hello2.txt]
Optional[home/hello3]
5
Optional[home/examples+in+examples/examples+in+examples+in+examples/Exampleception]
Optional[home/examples+in+examples/Examples%5E2]
Optional[home/hello.txt]
Optional[home/hello2.txt]
Optional[home/hello3]
5
Optional[home/examples+in+examples/examples+in+examples+in+examples/Exampleception]
Optional[home/examples+in+examples/Examples%5E2]
Optional[home/hello.txt]
Optional[home/hello2.txt]
Optional[home/hello3]
5
Optional[home/examples+in+examples/examples+in+examples+in+examples/Exampleception]
Optional[home/examples+in+examples/Examples%5E2]
Optional[home/hello.txt]
Optional[home/hello2.txt]
Optional[home/hello3]
5
Optional[home/examples+in+examples/examples+in+examples+in+examples/Exampleception]
Optional[home/examples+in+examples/Examples%5E2]
Optional[home/hello.txt]
Optional[home/hello2.txt]
Optional[home/hello3]
5
Optional[home/examples+in+examples/examples+in+examples+in+examples/Exampleception]
Optional[home/examples+in+examples/Examples%5E2]
Optional[home/hello.txt]
Optional[home/hello2.txt]
Optional[home/hello3]
5
Optional[home/examples+in+examples/examples+in+examples+in+examples/Exampleception]
Optional[home/examples+in+examples/Examples%5E2]
Optional[home/hello.txt]
Optional[home/hello2.txt]
Optional[home/hello3]
5
Optional[home/examples+in+examples/examples+in+examples+in+examples/Exampleception]
Optional[home/examples+in+examples/Examples%5E2]
Optional[home/hello.txt]
Optional[home/hello2.txt]
Optional[home/hello3]
5
Optional[home/examples+in+examples/examples+in+examples+in+examples/Exampleception]
Optional[home/examples+in+examples/Examples%5E2]
Optional[home/hello.txt]
Optional[home/hello2.txt]
Optional[home/hello3]
12
Optional[home/examples+in+examples/examples+in+examples+in+examples/Exampleception]
Optional[home/examples+in+examples/Examples%5E2]
Optional[home/hello.txt]
Optional[home/hello2.txt]
Optional[home/hello3]
Optional[home/trifocal.jpeg]
Optional[home/undo.txt]
Optional[home/Zexamples/Zexample]
Optional[home/Zexamples]
Optional[home/examples+in+examples/examples+in+examples+in+examples]
Optional[home/examples+in+examples]
Optional[home]

3rd run:

5
Optional[home/examples+in+examples/examples+in+examples+in+examples/Exampleception]
Optional[home/examples+in+examples/Examples%5E2]
Optional[home/hello.txt]
Optional[home/hello2.txt]
Optional[home/hello3]
5
Optional[home/examples+in+examples/examples+in+examples+in+examples/Exampleception]
Optional[home/examples+in+examples/Examples%5E2]
Optional[home/hello.txt]
Optional[home/hello2.txt]
Optional[home/hello3]
5
Optional[home/examples+in+examples/examples+in+examples+in+examples/Exampleception]
Optional[home/examples+in+examples/Examples%5E2]
Optional[home/hello.txt]
Optional[home/hello2.txt]
Optional[home/hello3]
5
Optional[home/examples+in+examples/examples+in+examples+in+examples/Exampleception]
Optional[home/examples+in+examples/Examples%5E2]
Optional[home/hello.txt]
Optional[home/hello2.txt]
Optional[home/hello3]
5
Optional[home/examples+in+examples/examples+in+examples+in+examples/Exampleception]
Optional[home/examples+in+examples/Examples%5E2]
Optional[home/hello.txt]
Optional[home/hello2.txt]
Optional[home/hello3]
5
Optional[home/examples+in+examples/examples+in+examples+in+examples/Exampleception]
Optional[home/examples+in+examples/Examples%5E2]
Optional[home/hello.txt]
Optional[home/hello2.txt]
Optional[home/hello3]
5
Optional[home/examples+in+examples/examples+in+examples+in+examples/Exampleception]
Optional[home/examples+in+examples/Examples%5E2]
Optional[home/hello.txt]
Optional[home/hello2.txt]
Optional[home/hello3]
5
Optional[home/examples+in+examples/examples+in+examples+in+examples/Exampleception]
Optional[home/examples+in+examples/Examples%5E2]
Optional[home/hello.txt]
Optional[home/hello2.txt]
Optional[home/hello3]
5
Optional[home/examples+in+examples/examples+in+examples+in+examples/Exampleception]
Optional[home/examples+in+examples/Examples%5E2]
Optional[home/hello.txt]
Optional[home/hello2.txt]
Optional[home/hello3]
5
Optional[home/examples+in+examples/examples+in+examples+in+examples/Exampleception]
Optional[home/examples+in+examples/Examples%5E2]
Optional[home/hello.txt]
Optional[home/hello2.txt]
Optional[home/hello3]
5
Optional[home/examples+in+examples/examples+in+examples+in+examples/Exampleception]
Optional[home/examples+in+examples/Examples%5E2]
Optional[home/hello.txt]
Optional[home/hello2.txt]
Optional[home/hello3]
5
Optional[home/examples+in+examples/examples+in+examples+in+examples/Exampleception]
Optional[home/examples+in+examples/Examples%5E2]
Optional[home/hello.txt]
Optional[home/hello2.txt]
Optional[home/hello3]
5
Optional[home/examples+in+examples/examples+in+examples+in+examples/Exampleception]
Optional[home/examples+in+examples/Examples%5E2]
Optional[home/hello.txt]
Optional[home/hello2.txt]
Optional[home/hello3]
5
Optional[home/examples+in+examples/examples+in+examples+in+examples/Exampleception]
Optional[home/examples+in+examples/Examples%5E2]
Optional[home/hello.txt]
Optional[home/hello2.txt]
Optional[home/hello3]
12
Optional[home/examples+in+examples/examples+in+examples+in+examples/Exampleception]
Optional[home/examples+in+examples/Examples%5E2]
Optional[home/hello.txt]
Optional[home/hello2.txt]
Optional[home/hello3]
Optional[home/trifocal.jpeg]
Optional[home/undo.txt]
Optional[home/Zexamples/Zexample]
Optional[home/Zexamples]
Optional[home/examples+in+examples/examples+in+examples+in+examples]
Optional[home/examples+in+examples]
Optional[home]

Results from a smaller directory (just an example text file, the .jpeg, and the undo.txt file, which just contains the previous hash from my /ipns/peerid):

2
Optional[home/example]
Optional[home/trifocal.jpeg]
4
Optional[home/example]
Optional[home/trifocal.jpeg]
Optional[home/undo.txt]
Optional[home]

2nd run on same smaller dir:
1
Optional[home/example]
4
Optional[home/example]
Optional[home/trifocal.jpeg]
Optional[home/undo.txt]
Optional[home]

3rd run on same smaller dir:

1
Optional[home/example]
2
Optional[home/example]
Optional[home/trifocal.jpeg]
2
Optional[home/example]
Optional[home/trifocal.jpeg]
1
Optional[home/example]
4
Optional[home/example]
Optional[home/trifocal.jpeg]
Optional[home/undo.txt]
Optional[home]

run with the .jpeg removed from the dir:
11
Optional[home/examples+in+examples/examples+in+examples+in+examples/Exampleception]
Optional[home/examples+in+examples/Examples%5E2]
Optional[home/hello.txt]
Optional[home/hello2.txt]
Optional[home/hello3]
Optional[home/undo.txt]
Optional[home/Zexamples/Zexample]
Optional[home/Zexamples]
Optional[home/examples+in+examples/examples+in+examples+in+examples]
Optional[home/examples+in+examples]
Optional[home]

run on smaller dir with jpeg removed:

3
Optional[home/example]
Optional[home/undo.txt]
Optional[home]

add() seems to have no issues adding simple text files, but something is wonky with adding pics.

I will post updates to this as I try other file formats (PNG and MP4 are mainly the two other file types I plan on trying).

java.lang.OutOfMemoryError: Java heap space when adding file

When adding a "large" file of 100 MB, I got this error.

OS: Windows

Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
    at java.util.Arrays.copyOf(Arrays.java:3236)
    at java.io.ByteArrayOutputStream.grow(ByteArrayOutputStream.java:118)
    at java.io.ByteArrayOutputStream.ensureCapacity(ByteArrayOutputStream.java:93)
    at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:153)
    at sun.net.www.http.PosterOutputStream.write(PosterOutputStream.java:78)
    at io.ipfs.api.Multipart.addFilePart(Multipart.java:102)
    at io.ipfs.api.IPFS.add(IPFS.java:83)
    at io.ipfs.api.IPFS.add(IPFS.java:74)
    at io.ipfs.api.IPFS.add(IPFS.java:70)

The code that triggers it:

IPFS ipfs = new IPFS("/ip4/127.0.0.1/tcp/5001");
NamedStreamable streamable = new NamedStreamable.FileWrapper(new File("D:\\testfile"));
List<MerkleNode> addParts = ipfs.add(streamable);
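
A hedged reading of the trace: the multipart request body is being buffered in a ByteArrayOutputStream (via PosterOutputStream) before it is sent, so the whole file lands on the heap. With java.net.HttpURLConnection that buffering can be avoided by enabling chunked streaming before the body is written, e.g.:

    // given a java.net.URL url for the add endpoint:
    // stream the request body instead of buffering it in memory
    // (a chunk length of 0 lets the runtime pick a default size)
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setDoOutput(true);
    conn.setChunkedStreamingMode(0);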

Three failing tests when I try to build for 0.4.0-dev

Environment:
Ubuntu 15.10
Plenty of RAM, plenty of disk

openjdk version "1.8.0_66-internal"
OpenJDK Runtime Environment (build 1.8.0_66-internal-b17)
OpenJDK 64-Bit Server VM (build 25.66-b17, mixed mode)

Apache Maven 3.3.9 (bb52d8502b132ec0a5a3f4c09453c07478323dc5; 2015-11-10T17:41:47+01:00)
Maven home: /home/mats/programs/apache-maven-3.3.9
Java version: 1.8.0_45, vendor: Oracle Corporation
Java home: /usr/lib/jvm/java-8-oracle/jre
Default locale: en_US, platform encoding: UTF-8
OS name: "linux", version: "4.2.0-30-generic", arch: "amd64", family: "unix"

Could it be some Java version mixup? (The java version shown above is OpenJDK 1.8.0_66, while Maven reports Oracle 1.8.0_45.)

Here is the full log from my run:

mats@mats /code/java-ipfs-api[master*]$ mvn test
[INFO] Scanning for projects...
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Building java-ipfs-api 0.0.1-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO]
[INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ api ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory /code/java-ipfs-api/src/main/resources
[INFO]
[INFO] --- maven-compiler-plugin:3.1:compile (default-compile) @ api ---
[INFO] Nothing to compile - all classes are up to date
[INFO]
[INFO] --- maven-resources-plugin:2.6:testResources (default-testResources) @ api ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory /code/java-ipfs-api/src/test/resources
[INFO]
[INFO] --- maven-compiler-plugin:3.1:testCompile (default-testCompile) @ api ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 1 source file to /code/java-ipfs-api/target/test-classes
[WARNING] /code/java-ipfs-api/src/test/java/org/ipfs/api/Test.java: /code/java-ipfs-api/src/test/java/org/ipfs/api/Test.java uses unchecked or unsafe operations.
[WARNING] /code/java-ipfs-api/src/test/java/org/ipfs/api/Test.java: Recompile with -Xlint:unchecked for details.
[INFO]
[INFO] --- maven-surefire-plugin:2.12.4:test (default-test) @ api ---
[INFO] Surefire report directory: /code/java-ipfs-api/target/surefire-reports


T E S T S

Running org.ipfs.api.Test
[/ip4/104.236.176.52/tcp/4001/ipfs/QmSoLnSGccFuZQJzRadHn95W2CrSFmZuTdDWP8HXaHca9z, /ip4/104.236.179.241/tcp/4001/ipfs/QmSoLPppuBtQSGwKDZT2M73ULpjvfd3aZ6ha4oFGL1KrGM, /ip4/128.199.219.111/tcp/14001/ipfs/QmSoLSafTMBsPKadTEgaXctDQVcqN88CNLHXMkTNwMKPnu, /ip4/162.243.248.213/tcp/4001/ipfs/QmSoLueR4xBeUbY9WZ9xGUUxunbKWcrNFTDAadQJmocnWm, /ip4/178.62.158.247/tcp/14001/ipfs/QmSoLer265NRgSp2LA3dPaeykiS1J6DifTC88f5uVQKNAd, /ip4/188.226.242.195/tcp/4001/ipfs/QmbFUxDeSVt1ec5DHg2P84b34dX6uXREDhruWYUEAMrfFu, /ip4/212.227.249.191/tcp/4001/ipfs/QmeRsC5MK1sqVJuzVu1VZYMb2ZUtZtPxS2A8ohFdW5Hwih, /ip4/42.159.241.143/tcp/5986/ipfs/QmR5Cr1vsspa8tdwYEUNCfgpY6BkSR5PSYjTkhyquxwv7Z, /ip4/46.101.117.27/tcp/4001/ipfs/QmZWbhnx78ed2g9pBq368oTNDHFLXfeoBREVA5ZvgcYcQk, /ip4/70.81.176.29/tcp/35098/ipfs/QmNfCubGpwYZAQxX8LQDsYgB48C4GbfZHuYdexpX9mbNyT, /ip4/93.233.180.132/tcp/4001/ipfs/QmduXhVzjvsVguUZ1vg25BRAiEV2AAhYTm8x8vqrNvWXfe]
[/ip4/104.131.131.82/tcp/4001/ipfs/QmaCpDMGvV2BGHeYERUEnRQAwe3N8SzbUtfsmvsqQLuvuJ, /ip4/104.236.176.52/tcp/4001/ipfs/QmSoLnSGccFuZQJzRadHn95W2CrSFmZuTdDWP8HXaHca9z, /ip4/104.236.179.241/tcp/4001/ipfs/QmSoLPppuBtQSGwKDZT2M73ULpjvfd3aZ6ha4oFGL1KrGM, /ip4/162.243.248.213/tcp/4001/ipfs/QmSoLueR4xBeUbY9WZ9xGUUxunbKWcrNFTDAadQJmocnWm, /ip4/128.199.219.111/tcp/4001/ipfs/QmSoLSafTMBsPKadTEgaXctDQVcqN88CNLHXMkTNwMKPnu, /ip4/104.236.76.40/tcp/4001/ipfs/QmSoLV4Bbm51jM9C4gDYZQ9Cy3U6aXMJDAbzgu2fzaDs64, /ip4/178.62.158.247/tcp/4001/ipfs/QmSoLer265NRgSp2LA3dPaeykiS1J6DifTC88f5uVQKNAd, /ip4/178.62.61.185/tcp/4001/ipfs/QmSoLMeWqB7YGVLJN3pNLQpmmEk35v6wYtsMGLzSr5QBU3, /ip4/104.236.151.122/tcp/4001/ipfs/QmSoLju6m7xTh3DuokvT3886QRYqxAzb1kShaanJgW36yx]

{QmZGPrS9wcmEzj7sETGWzuJY7Dvmfj3TmsUz1BfSTC7Xss={Type=recursive}, QmR6i6avfPFFPw12JHhz9uPmSxmbGcKxM32sBHMbssTUv8={Type=recursive}, QmVv4Wz46JaZJeH5PMV4LGbRiiMKEmszPYY3g6fjGnVXBS={Type=recursive}, QmXarR6rgkQ2fDSHjSY5nM2kuCXKYGViky5nohtwgF65Ec={Type=recursive}, QmaNEmEHsZxhgcSoR5yjw6RkqxV7P5uxTwq6AotFZuXaCB={Type=recursive}, QmVxpwGgMjDipJWChbqGyGQd459U5ctFDzjWBE9h79jpGT={Type=recursive}, QmUNLLsPACCz1vLxQVkXqqLX5R1X345qqfHbsf67hvA3Nn={Type=recursive}}
Tests run: 18, Failures: 0, Errors: 3, Skipped: 0, Time elapsed: 109.6 sec <<< FAILURE!
indirectPinTest(org.ipfs.api.Test) Time elapsed: 0.012 sec <<< ERROR!
java.lang.RuntimeException: java.io.IOException: Server returned status: 500 with body: and Trailer header: [X-Stream-Error]
at org.ipfs.api.Test.indirectPinTest(Test.java:145)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)
at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)
at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
at org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
at org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:115)
at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75)
Caused by: java.io.IOException: Server returned status: 500 with body: and Trailer header: [X-Stream-Error]
at org.ipfs.api.Multipart.finish(Multipart.java:106)
at org.ipfs.api.IPFS$IPFSObject.patch(IPFS.java:236)
at org.ipfs.api.Test.indirectPinTest(Test.java:122)
... 29 more

objectPatch(org.ipfs.api.Test) Time elapsed: 0.054 sec <<< ERROR!
java.lang.RuntimeException: java.io.IOException: Server returned status: 500 with body: and Trailer header: [X-Stream-Error]
at org.ipfs.api.Test.objectPatch(Test.java:182)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)
at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)
at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
at org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
at org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:115)
at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75)
Caused by: java.io.IOException: Server returned status: 500 with body: and Trailer header: [X-Stream-Error]
at org.ipfs.api.Multipart.finish(Multipart.java:106)
at org.ipfs.api.IPFS$IPFSObject.patch(IPFS.java:236)
at org.ipfs.api.Test.objectPatch(Test.java:168)
... 29 more

fileTest(org.ipfs.api.Test) Time elapsed: 0.018 sec <<< ERROR!
java.lang.RuntimeException: Trailer: [X-Stream-Error]
at org.ipfs.api.IPFS.get(IPFS.java:436)
at org.ipfs.api.IPFS.retrieve(IPFS.java:418)
at org.ipfs.api.IPFS.retrieveAndParse(IPFS.java:412)
at org.ipfs.api.IPFS.retrieveMap(IPFS.java:408)
at org.ipfs.api.IPFS.access$200(IPFS.java:8)
at org.ipfs.api.IPFS$File.ls(IPFS.java:284)
at org.ipfs.api.Test.fileTest(Test.java:232)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)
at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)
at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
at org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
at org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:115)
at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75)
Caused by: java.io.IOException: Server returned HTTP response code: 500 for URL: http://127.0.0.1:5001/api/v0/file/ls?arg=QmWRi12tVKmQpgqWEfAfhRKLgRnPNJqB825RRmzRq967LS
at sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1839)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1440)
at org.ipfs.api.IPFS.get(IPFS.java:427)
... 35 more

Results :

Tests in error:
indirectPinTest(org.ipfs.api.Test): java.io.IOException: Server returned status: 500 with body: and Trailer header: [X-Stream-Error]
objectPatch(org.ipfs.api.Test): java.io.IOException: Server returned status: 500 with body: and Trailer header: [X-Stream-Error]
fileTest(org.ipfs.api.Test): Trailer: [X-Stream-Error]

Tests run: 18, Failures: 0, Errors: 3, Skipped: 0

[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 01:51 min
[INFO] Finished at: 2016-03-13T17:47:11+01:00
[INFO] Final Memory: 16M/296M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.12.4:test (default-test) on project api: There are test failures.
[ERROR]
[ERROR] Please refer to /code/java-ipfs-api/target/surefire-reports for the individual test results.
[ERROR] -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException

go-ipfs 0.4.11-rc1: Crashing error when adding file

Version: 1.1.1
Incompatibility with go-ipfs 0.4.11-rc1

java.lang.ClassCastException: java.lang.String cannot be cast to java.lang.Integer
at io.ipfs.api.MerkleNode.fromJSON(MerkleNode.java:55)
at io.ipfs.api.IPFS.lambda$add$1(IPFS.java:89)
at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1374)
at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
at io.ipfs.api.IPFS.add(IPFS.java:90)
at io.ipfs.api.IPFS.add(IPFS.java:70)
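The cast failure in MerkleNode.fromJSON suggests a numeric field in the daemon's JSON add response (likely Size) started arriving as a string in 0.4.11-rc1. A defensive parse would tolerate both forms; a minimal sketch (hypothetical helper, not the library's actual fix):

static int asInt(Object value) {
    // Accept either a JSON number or a JSON string for the field, since
    // go-ipfs 0.4.11-rc1 appears to return what was previously a number
    // as a string.
    if (value instanceof Number)
        return ((Number) value).intValue();
    return Integer.parseInt(value.toString());
}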

pubsub has very high latency

I was testing the simplest form of pub/sub with the Java API.

On localhost I see 250-300 ms of latency per message, while with the Go tools it's down to just a few milliseconds. Is there a way to get low latency from the Java implementation as well?

PubSub API

Could someone explain how I'm supposed to properly use the subscribe part of the pubsub API? I'm new to multithreading and concurrency, and I'm finding it really confusing. I'm using Scala right now, but any Java examples would be a huge help!

Thank you in advance
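
For anyone else landing here, a minimal subscribe/publish sketch, assuming (as in recent versions of this client) that pubsub.sub(topic) returns a lazy Stream of decoded message maps and blocks while the daemon streams, and that pubsub.pub(topic, data) publishes a message:

import io.ipfs.api.IPFS;
import java.util.Map;
import java.util.stream.Stream;

public class PubsubExample {
    public static void main(String[] args) throws Exception {
        IPFS ipfs = new IPFS("/ip4/127.0.0.1/tcp/5001");

        // Subscribing blocks while the daemon streams messages over HTTP,
        // so consume the message stream on its own thread.
        new Thread(() -> {
            try {
                Stream<Map<String, Object>> messages = ipfs.pubsub.sub("demo-topic");
                messages.forEach(msg -> System.out.println("received: " + msg));
            } catch (Exception e) {
                e.printStackTrace();
            }
        }).start();

        Thread.sleep(1000); // crude pause so the subscription is live first
        ipfs.pubsub.pub("demo-topic", "hello");
    }
}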

javadoc

Hey, is there any Javadoc associated with this project?

ipfs.add() throws IOException

It seems that the latest pubsub implementation broke the ipfs.add() functionality:
MerkleNode addResult = ipfs.add(new NamedStreamable.FileWrapper(path.toFile())).get(0);

The above statement leads to the following exception:

IOException contacting IPFS daemon.
Trailer: [X-Stream-Error] {"Message":"merkledag node was not a directory or shard","Code":0}

java.lang.RuntimeException: IOException contacting IPFS daemon.
Trailer: [X-Stream-Error] {"Message":"merkledag node was not a directory or shard","Code":0}

at io.ipfs.api.IPFS.get(IPFS.java:594)
at io.ipfs.api.IPFS.retrieve(IPFS.java:573)
at io.ipfs.api.IPFS.retrieveAndParse(IPFS.java:555)
at io.ipfs.api.IPFS.retrieveMap(IPFS.java:551)
at io.ipfs.api.IPFS.ls(IPFS.java:94)
at com.ipfs.server.cratesapp.App.addFolderToIPFS(App.java:21)
at com.ipfs.server.cratesapp.App.lambda$0(App.java:34)
at java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:184)
at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
at java.util.Iterator.forEachRemaining(Iterator.java:116)
at java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1801)
at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
at java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:151)
at java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:174)
at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:418)
at com.ipfs.server.cratesapp.App.walkFolderFiles(App.java:32)
at com.ipfs.server.cratesapp.App.main(App.java:51)

Caused by: java.io.IOException: Server returned HTTP response code: 500 for URL: http://127.0.0.1:5001/api/v0/ls/QmaQqhi4FjmeRqKoTvEcJMHCKxPZCbSShZE8G5TdTdwh4e
at sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1876)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1474)
at io.ipfs.api.IPFS.get(IPFS.java:582)
... 18 more

The relevant unit test also fails:

directoryTest(io.ipfs.api.APITest) Time elapsed: 0.594 sec <<< ERROR!
java.lang.RuntimeException:
IOException contacting IPFS daemon.
Trailer: [X-Stream-Error] {"Message":"merkledag node was not a directory or shard","Code":0}
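
One thing worth checking before blaming add(): the failing call in the trace is ls, and "merkledag node was not a directory or shard" is what the daemon returns when ls is pointed at a plain file node rather than a directory. A sketch of the distinction, assuming the hash in question refers to a single added file:

import io.ipfs.api.IPFS;
import io.ipfs.api.MerkleNode;
import io.ipfs.api.NamedStreamable;
import io.ipfs.multihash.Multihash;
import java.io.File;
import java.util.List;

public class LsVsCat {
    public static void main(String[] args) throws Exception {
        IPFS ipfs = new IPFS("/ip4/127.0.0.1/tcp/5001");

        // Adding a single file yields a file node, not a directory node.
        List<MerkleNode> added = ipfs.add(new NamedStreamable.FileWrapper(new File("hello.txt")));
        Multihash hash = added.get(0).hash;

        // cat works on file nodes; calling ls on this node instead would
        // trigger the "merkledag node was not a directory or shard" error.
        byte[] contents = ipfs.cat(hash);
        System.out.println(new String(contents));
    }
}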

ipfs.id()

Not too much of an issue, but here we go:

Running ipfs id at the command prompt/terminal prints out my local IPFS ID/public key, but calling ipfs.id() in Java throws an error because the method expects a string. Passing it a blank string (ipfs.id("")) just makes it fail to find that blank ID instead of returning my local information.

Is there any particular reason why we can't use this method to get our local information?

Am I missing something and doing this wrong?

The id command description, from Commands | IPFS Docs:

DESCRIPTION

Prints out information about the specified peer.
If no peer is specified, prints out information for local peers.

'ipfs id' supports the format option for output with the following keys:
<id>: The peer's ID.
<aver>: Agent version.
<pver>: Protocol version.
<pubkey>: Public key.
<addrs>: Addresses (newline delimited).

EXAMPLE:

  ipfs id Qmece2RkXhsKe5CRooNisBTh4SK119KrXXGmoK6V3kb8aH -f="<addrs>\n"
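
Until the build you're on exposes a no-argument id(), a possible workaround is to ask the daemon for your own peer ID via the config API and feed it back in. A sketch, assuming config.get is available and "Identity.PeerID" is the standard config key (both are assumptions about your build):

// Hypothetical workaround: read our own peer ID from the node config,
// then call id(String) with it. Assumes ipfs.config.get exists.
String self = (String) ipfs.config.get("Identity.PeerID");
Map id = ipfs.id(self);
System.out.println(id);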

Trying to use with android studio error

Hi,

Sorry if this is a stupid question, but I am getting an error - "Error:(39, 21) error: cannot access MultiAddress
class file for io.ipfs.multiaddr.MultiAddress not found" - when using:

IPFS ipfs = new IPFS("/ip4/127.0.0.1/tcp/5001");

I have included the jar file in the project; I just want to be able to upload and read files from IPFS. Any help would be great.

Cheers!

Non-runtime dependencies included in distribution

During ant dist, all of the JAR files in the ./lib directory are copied to the ./dist directory. Multihash.jar and Multiaddr.jar are runtime dependencies, but junit-4.11.jar and hamcrest-core-1.3.jar are not required at runtime: they are used only during tests.

Android SDK 25 and Lower Fix for java.nio.file

Hi,

I have made a few changes just to get rid of java.nio.file, because it is not supported below Android SDK 26 (Oreo). It was very quick and I haven't really tested it, but it seems to be working. If anyone wants to see it, just comment.

Cheers.
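
For context, the kind of change involved is mostly mechanical: java.nio.file only arrived in Android SDK 26, so calls such as Files.readAllBytes need java.io equivalents. A sketch of one such replacement (hypothetical helper, not the actual patch):

import java.io.ByteArrayOutputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;

// Hypothetical stand-in for java.nio.file.Files.readAllBytes(path), using
// only java.io so it also runs below Android SDK 26.
static byte[] readAllBytes(File file) throws IOException {
    try (InputStream in = new FileInputStream(file)) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buffer = new byte[8192];
        int read;
        while ((read = in.read(buffer)) != -1)
            out.write(buffer, 0, read);
        return out.toByteArray();
    }
}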
