
This is an IDS Connector reference implementation.

Home Page: https://www.isst.fraunhofer.de/de/geschaeftsfelder/datenwirtschaft/technologien/Dataspace-Connector.html

License: Apache License 2.0


dataspaceconnector's Introduction

UPDATE TO V7.X.X
Before updating, please read this guide!

Please note that, as of now, sovity maintains the Dataspace Connector and is responsible for further development, covering bug fixes, security issues, and feature implementations. Fraunhofer ISST will no longer implement any new features within this repository.


Dataspace Connector

Contact · Contribute · Docs · Issues · License

The Dataspace Connector is an implementation of an IDS connector component following the IDS Reference Architecture Model. It integrates the IDS Information Model and uses the IDS Messaging Services for IDS functionalities and message handling. The core component in this repository provides a REST API for loading, updating, and deleting resources with local or remote data, enriched by their metadata. It supports IDS-conformant message handling with other IDS connectors and components and implements usage control for selected IDS usage policy patterns.



Quick Start

The official Docker images of the Dataspace Connector can be found here.

For an easy deployment, make sure that you have Docker installed. Then, execute the following command:

docker run -p 8080:8080 --name connector ghcr.io/international-data-spaces-association/dataspace-connector:latest

If everything went well, the connector is available at https://localhost:8080/. The API can be accessed at https://localhost:8080/api. The Swagger UI can be found at https://localhost:8080/api/docs.

For certain REST endpoints, you will be asked to log in. The default credentials are admin and password. Please take care to change these when deploying and hosting the connector yourself!
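For example, assuming the default setup with its self-signed certificate (hence -k to skip certificate verification), a quick smoke test of an authenticated endpoint could look like this:

curl -k -u admin:password https://localhost:8080/api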

For a more detailed explanation of deployment and configurations, see here.

Next, please take a look at our communication guide.

Note: For a more detailed or advanced Docker or Kubernetes deployment, as well as a full setup with the Connector and its GUI, see here. If you want to build and run locally, follow these steps.

Security and Verification

The Docker images are signed using cosign. The public key of the Dataspace Connector can be found at the root of the project structure (here).

For verifying that you have received an official image from a trusted source, run:

cosign verify --key dsc.pub ghcr.io/international-data-spaces-association/dataspace-connector:latest

Software Bill of Material (SBoM)

The Software Bill of Material (SBoM) for every Docker image is supplied as SPDX-JSON and can be found by appending -sbom to the image tag. For example, the SBoM for ghcr.io/international-data-spaces-association/dataspace-connector:latest is ghcr.io/international-data-spaces-association/dataspace-connector:latest-sbom.

The SBoM can be pulled via tools like oras.

oras pull ghcr.io/international-data-spaces-association/dataspace-connector:latest-sbom -a

Note: The SBoM images can also be verified using cosign, as shown above.

Contributing

You are very welcome to contribute to this project when you find a bug, want to suggest an improvement, or have an idea for a useful feature. Please find a set of guidelines at the CONTRIBUTING.md and the CODE_OF_CONDUCT.md.

The core development is driven by

with significant contributions, comments, and support by (in alphabetical order):

License

Copyright © 2020-2022 Fraunhofer ISST. This project is licensed under the Apache License 2.0 - see here for details.

dataspaceconnector's People

Contributors

brianjahnke, dependabot[bot], domreuter, edgardmarx, goekhankahriman, heinrichpet, hqarawlus, juliapampus, phertweck, renebrinkhege, ronjaquensel, sebastianopriel, steffen-biehs, tmberthold, vbasem, vdakker


dataspaceconnector's Issues

Testing Error: Message Deserialization

6 of 12 test classes fail on mvn clean package

Failures: 
  ArtifactRequestMessageHandlingTest.requestArtifact_invalidId:140 expected:<https://w3id.org/idsa/code/NOT_FOUND> but was:<https://w3id.org/idsa/code/MALFORMED_MESSAGE>
  ArtifactRequestMessageHandlingTest.requestArtifact_validId_provisionInhibited:120 expected:<https://w3id.org/idsa/code/NOT_AUTHORIZED> but was:<https://w3id.org/idsa/code/MALFORMED_MESSAGE>
  DescriptionRequestMessageHandlingTest.requestArtifactDescription_invalidId:110 expected:<https://w3id.org/idsa/code/NOT_FOUND> but was:<https://w3id.org/idsa/code/MALFORMED_MESSAGE>
Errors: 
  ArtifactRequestMessageHandlingTest.requestArtifact_validId_provisionAllowed:93 » InvalidTypeId
  DescriptionRequestMessageHandlingTest.requestArtifactDescription_validId:89 » InvalidTypeId
  DescriptionRequestMessageHandlingTest.requestSelfDescription:67 » InvalidTypeId

DSC_error_20201016_JPa.txt

Improve function uuidFromUri

The function is quite simple but lacks exception handling and some functionality, such as selecting which of the UUIDs in a URI should be returned. A sketch of a possible improvement follows below.
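A minimal sketch of what an improved version could look like (hypothetical code; names and behavior are assumptions, not the actual DSC implementation):

    import java.net.URI;
    import java.util.UUID;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public final class UuidUtils {
        private static final Pattern UUID_PATTERN = Pattern.compile(
            "[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}");

        /** Extracts the n-th UUID found in the given URI (0-based index). */
        public static UUID uuidFromUri(final URI uri, final int index) {
            final Matcher matcher = UUID_PATTERN.matcher(uri.toString());
            int found = 0;
            while (matcher.find()) {
                if (found++ == index) {
                    return UUID.fromString(matcher.group());
                }
            }
            throw new IllegalArgumentException("No UUID at index " + index + " in " + uri);
        }
    }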

Add default policy

Creating a new resource in the Swagger GUI with the default settings creates a resource without a default policy. This leads to the error Metadata could not be saved: Cannot invoke "de.fraunhofer.iais.eis.ContractOffer.toRdf()" because the return value of "java.util.ArrayList.get(int)" is null when requesting that resource at the admin/api/request/description endpoint.

Error 500 when requesting another connector's description

Dear DataspaceConnector-Team,

we've set up two connectors on a server. After successfully registering a data resource and a corresponding representation on one, we tried accessing it from the other one (which runs on another port). First, we tried requesting the connector's description using POST /admin/api/request/description, passing the URL http://{ip-address}:{port}/api/ids/data. We configured the relevant files in the repository for HTTP to work.

This worked in older versions. With the latest commit however, we obtain an error message in the response body:

Failed to send description request message.

Did anyone else encounter this yet?

Configurable configuration

The configuration, as far as I can see, is currently static within the JAR file of the project. This entails that each time the configuration changes, the JAR has to be rebuilt.

Would it be an option to mount the configuration into the file system of a Docker container (see the sketch below)? This would enable the connector to use the same Docker image for different configurations, which enhances the ability to use the DataspaceConnector in practice.
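For illustration only (the in-container path /app/conf/config.json is an assumption, not the connector's actual layout), such a bind mount could look like this:

docker run -p 8080:8080 -v $(pwd)/config.json:/app/conf/config.json --name connector ghcr.io/international-data-spaces-association/dataspace-connector:latest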

Provide ARM compatible Docker Images

"For our Supply Chain Execution Project, I want to use the IDS Connector on an ARM based Raspbery Pi 4 B+ 4GB with Raspbian 32 Bit. I can't use the provided docker images. It would be good to provide some ARM images in the future."

Add detailed logging

On resource creation, changes, metadata/data requests, contract negotiation, data accesses etc.

  • Log locally
  • Log to external file

Introduce simple Maven build profiles

By using Maven build profiles, different build configurations can be enabled or disabled. This way, different build goals can be reached while the build complexity, and thereby the build time of the project, can be adjusted.

The following build profiles should be introduced:

  • No testing
  • No documentation
  • Release build

No testing:
This profile should disable the test phase of the build. While implementing new features, running the tests with every compilation adds a significant amount of build time.

No documentation:
This profile should disable the update and generation of the documentation. While implementing new features, updating and generating the documentation is not necessary for every compilation. Skipping this step reduces the build time, resulting in faster access to the application.

Release build:
The release build should enable the developer to build the application with the strictest rule sets and may run additional steps that ensure the quality of the build (e.g., not allowing the application to build while there are unchecked warnings). A sketch of possible profile declarations follows below.
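A minimal sketch of how such profiles could be declared in the pom.xml (the profile ids are illustrative; the skip properties are standard Maven Surefire/Javadoc settings):

    <profiles>
      <profile>
        <id>no-tests</id>
        <properties>
          <maven.test.skip>true</maven.test.skip>
        </properties>
      </profile>
      <profile>
        <id>no-docs</id>
        <properties>
          <maven.javadoc.skip>true</maven.javadoc.skip>
        </properties>
      </profile>
    </profiles>

A profile would then be activated on the command line, e.g. mvn clean package -P no-tests.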

Nullpointer on test run (loading of truststore fails)

The test run fails; the same applies to building the Docker container.
Building and running with -DskipTests works.

Details:
Paths to the keystore and truststore are provided as URIs, e.g. "@id" : "file:///conf/keystore-localhost.p12".
KeyStoreManager extracts the path but does not remove the leading /, which classLoader.getResourceAsStream(...) expects to be absent.

Stacktrace:

Caused by: java.lang.NullPointerException
	at de.fraunhofer.isst.ids.framework.configuration.KeyStoreManager.loadKeyStore(KeyStoreManager.java:105)
	at de.fraunhofer.isst.ids.framework.configuration.KeyStoreManager.<init>(KeyStoreManager.java:65)
	at de.fraunhofer.isst.ids.framework.spring.starter.ConfigProducer.<init>(ConfigProducer.java:52)
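A minimal sketch of the suspected fix (hypothetical code, not the actual KeyStoreManager implementation): getResourceAsStream expects a path relative to the classpath root, so the leading slash of the URI path has to be stripped.

    import java.io.InputStream;
    import java.net.URI;

    public final class KeyStorePathFix {
        static InputStream openKeyStore(final URI location) {
            // "file:///conf/keystore.p12" yields the path "/conf/keystore.p12";
            // the class loader needs "conf/keystore.p12" instead.
            final String path = location.getPath();
            final String resourcePath = path.startsWith("/") ? path.substring(1) : path;
            return KeyStorePathFix.class.getClassLoader().getResourceAsStream(resourcePath);
        }
    }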

Moritz Keppler [email protected], Daimler TSS GmbH, legal info/Impressum

Merging information in project.properties into application.properties

The POM information is currently passed through via the project.properties file. The file application.properties already contains configuration settings. Merging the files would make the configuration more centralized.

The problem that needs to be investigated (and the reason two files exist in the first place) is that resource filtering causes problems when turned on for the resource directory containing the application.properties file.

Build single JAR for Docker container and make build faster

We suggest the following Docker build improvements/fixes:

  • remove the javadoc JAR so that a single JAR is built; cp *.jar will fail otherwise
  • remove testing from the mvn build
  • use a multi-stage Dockerfile: stage 1 Maven build, stage 2 runtime
  • introduce a separate layer for mvn dependencies to speed up the build process: this layer can be reused in subsequent builds as long as the dependencies stay the same (see the sketch below)
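A minimal multi-stage Dockerfile sketch along these lines (base images and paths are assumptions, not the project's actual Dockerfile):

    FROM maven:3-openjdk-11 AS build
    WORKDIR /app
    # Dependency layer: reused by Docker as long as pom.xml is unchanged.
    COPY pom.xml .
    RUN mvn dependency:go-offline -B
    # Build layer: only re-run when sources change; tests are skipped here.
    COPY src ./src
    RUN mvn package -B -DskipTests

    FROM openjdk:11-jre-slim
    COPY --from=build /app/target/*.jar /app/app.jar
    ENTRYPOINT ["java", "-jar", "/app/app.jar"]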

Moritz Keppler [email protected], Daimler TSS GmbH, legal info/Impressum

Unable to publish OR read resource data string in demo application

When following the tutorial "Hands-on IDS Communication" using the provided demo application running in Docker, it is not possible to publish a resource data string or to read it afterwards.

First, we register a resource through POST /admin/api/resources/resource using the Swagger UI running on localhost:8080/admin. We use the provided JSON as request body.

Subsequently, we register a corresponding resource representation using POST /admin/api/resources/{resource-id}/representation, passing the returned UUID from the previous step. We amended the JSON in the request body as follows:

{ "type": "json", "byteSize": 105, "sourceType": "local", "source": { "username": "-", "password": "-" } }

Subsequently, we try publishing a resource data string "Test Data" using PUT /admin/api/resources/{resource-id}/data, passing the previously returned UUID. We receive a 201 status code with the response "Resource published".

However, once we try requesting the data string using GET /admin/api/resources/{resource-id}/data, we receive a 404 error telling us "Resource not found".

Project version passthrough

The version of the Dataspace Connector has to be set manually in multiple project files. There is therefore a high risk that the version is not updated in all of them, which could lead to major problems down the line, since the version is used for runtime descriptions of the connector.

I suggest that the version is only set once, in the pom.xml. The other locations can pull the information from the POM (see the sketch after the list below).

There are currently (at least) three locations where the version has to be set manually:

  1. pom.xml
  2. ConnectorApplication.java
  3. config.json
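A minimal sketch of how the version could be pulled from the POM via Maven resource filtering (assuming the Spring Boot parent, which filters @...@ placeholders in application.properties; the property and field names are illustrative):

    # application.properties
    dataspace.connector.version=@project.version@

    // read in ConnectorApplication.java (or any Spring bean)
    @Value("${dataspace.connector.version}")
    private String connectorVersion;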

H2 console error: No suitable driver found for 08001/0.

The H2 console cannot be accessed. The console reports: H2 console error: No suitable driver found for 08001/0.

The problem seems to stem from the usage of custom filtering.

Hotfix: Disable the @Component annotation at the HttpTraceFilter. It is the only custom filter at the moment. Note that this will disable HTTP tracing!

Check the project's dependencies

After running the Maven license plugin and the Maven dependency plugin, there appear to be unused dependencies included.

Getting rid of those dependencies removes unused license dependencies. The build time of images will also decrease, since those dependencies won't be downloaded.

Proxy in config file

The default config.json contains a proxy definition with the hard-coded address http://proxy.dortmund.isst.fraunhofer.de:3128. I cannot reach this proxy, and I don't think it is necessary for users outside the ISST. In fact, not being able to connect to the proxy causes issues when trying to run the connector. Would it be possible to remove the proxy definition? Or did I overlook another use case for it? Thank you :)

Endpoint for online status

For the Configuration Manager, and probably other services, it would be useful to regularly check whether the connector is still "online"/accessible.
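As a sketch, assuming the connector exposes Spring Boot Actuator's health endpoint (an assumption, not a confirmed feature of the DSC), such a check could be as simple as:

curl -k https://localhost:8080/actuator/health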

How to deal with dynamic/complex REST services

Hi All,
This issue mostly documents our discussion.
Currently, when using the DataspaceConnector on the server side to expose an existing REST service, one has to specify the exact REST request that corresponds to a given resource. However, many REST APIs require passing query parameters to be useful.

For example the service at https://airquality-frost.k8s.ilt-dmz.iosb.fraunhofer.de/v1.1 provides air quality measurements for all of Europe. A typical use-case for this service would be:

  1. Request Things from https://airquality-frost.k8s.ilt-dmz.iosb.fraunhofer.de/v1.1/Things
    Either all, or using a filter.
    • Stations that measure NO2: https://airquality-frost.k8s.ilt-dmz.iosb.fraunhofer.de/v1.1/Things?$filter=Datastreams/ObservedProperty/Name eq 'NO2'
    • Stations in a certain bounding box: https://airquality-frost.k8s.ilt-dmz.iosb.fraunhofer.de/v1.1/Things?$filter=geo.intersects(Locations/location,geography'POLYGON ((0 55.7,0 52.4,5.6 52.4,5.6 55.7,0 55.7))')
  2. Fetch, when needed, for a station the Datastreams, with ObservedProperty
    https://airquality-frost.k8s.ilt-dmz.iosb.fraunhofer.de/v1.1/Things(2281)/Datastreams?$select=name,id,unitOfMeasurement,observationType,properties&$expand=ObservedProperty
    The 2281 would be replaced for each call.
  3. Fetch the Observations from a Datastream, for a given time-frame.
    https://airquality-frost.k8s.ilt-dmz.iosb.fraunhofer.de/v1.1/Datastreams(245)/Observations?$filter=phenomenonTime ge 2020-08-01T00:00:00Z and phenomenonTime lt 2020-09-01T20:00:00Z&$select=result,phenomenonTime&$orderby=phenomenonTime asc
    Again, the 245 would be replaced for each call; the times depend on the use case.

In all of these requests, other parameters can be used to tune the data that is returned, both to minimise the number of requests required and to minimise the amount of data that needs to be sent, for instance with the $select and $expand parameters.

Furthermore, for each request that returns a list of items, the response can be a subset of the list and contain an @iot.nextLink property that points to the next subset. This nextLink is dynamically generated by the server. The DataspaceConnector on the client side would have to replace those links with links pointing to itself.

The service contains data from January 2018 until now: about 300 million Observations in 17974 Datastreams for 4382 Stations.
A demo map with the data of this service can be found at https://datacoveeu.github.io/API4INSPIRE/maps/AirQuality.html
The complete API standard document can be found here: https://www.ogc.org/standards/sensorthings

Add missing version tags

The last tagged version in Git is "v.3.2.1", while the current Dataspace Connector is at version 3.3.0. The tags from v.3.2.1 onward should be added.

Improve metadata deserialization

The payload of the DescriptionResponseMessage is mapped to the DSC's metadata model. This causes errors when communicating with other connectors.

Persist data in Docker setup

Currently, all data stored in the PostgreSQL database in the Docker setup is lost after a restart. Add a persistent volume to keep data across restarts (see the sketch below).
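A minimal docker-compose sketch of such a volume (service and volume names are assumptions, not the repository's actual compose file):

    services:
      postgres:
        image: postgres:13
        volumes:
          - connector-data:/var/lib/postgresql/data
    volumes:
      connector-data: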

rejectionReason: MALFORMED_MESSAGE after including IDS-Certificate

I have problems with including the IDS certificate. Steps done:

  1. added new certificate to conf-directory
  2. modified application.properties:
    server.ssl.key-store=classpath:conf/connector.p12
  3. changed deployMode and keystore in config.json:
    "ids:connectorDeployMode" : { "@id" : "idsc:PRODUCTIVE_DEPLOYMENT" },
    "ids:keyStore" : { "@id" : "file:///conf/ieeconnector.p12" }

Starting the connector works without problems. When making a description request to another connector, I get the answer:

"ids:rejectionReason" : { "@id" : "idsc:MALFORMED_MESSAGE" }, "ids:securityToken" : { "@type" : "ids:DynamicAttributeToken", "@id" : "https://w3id.org/idsa/autogen/dynamicAttributeToken/60027237-5eff-465f-afeb-71dfd7769086", "ids:tokenValue" : "rejected!", "ids:tokenFormat" : { "@id" : "idsc:JWT" } }, "ids:senderAgent" : { "@id" : "https://w3id.org/idsa/autogen/baseConnector/42d834ec-855b-456e-8cac-009d5d56593a" }, "ids:correlationMessage" : { "@id" : "https://INVALID" },
and the text:
Token could not be parsed! JWT strings must contain exactly 2 period characters. Found: 0

The console produces the following error:

2020-11-26 09:47:20 ERROR TokenManagerService:164 - Error retrieving token: Unexpected code Response{protocol=http/1.1, code=400, message=Bad Request, url=https://daps.aisec.fraunhofer.de/v2/token}

Where is the problem?

Description Request results in HTTP 500 response if resource does not exist on requested connector

If a description request (/admin/api/request/description) is sent to the connector with a requested resource URI that does not exist on the requested connector, the connector returns HTTP status code 500 with the message shown below.

In the example setup with two connectors the following request to the data-consumer connector:

https://localhost:8080/admin/api/request/description?recipient=https%3A%2F%2Flocalhost%3A8081%2Fapi%2Fids%2Fdata&requestedArtifact=https%3A%2F%2Fw3id.org%2Fidsa%2Fautogen%2FpublicKey%2F78eb73a3-3a2a-4626-a0ff-631ab50a00f9

would give the following response:

Metadata could not be saved: Could not resolve type id 'ids:RejectionMessage' as a subtype of [simple type, class de.fraunhofer.iais.eis.DescriptionResponseMessage]: known type ids = [ids:DescriptionResponseMessage] at [Source: (String)"{"@context":{"ids":"...

Expected:
Status Code 404 with the rejection message from the requested connector as payload.

Add Basic Policy Negotiation

Two connectors automatically negotiate a contract before the actual data is exchanged.

  • Fill out a contract
  • Exchange contract: reject or accept
  • Send contract to clearing house
  • Test the compatibility with EI Connector
  • Optional: Edit Policy Enforcement

Readme mentions files that don't exist

The readme mentions two ZIP files that seem to be missing: java-setup.zip and docker-setup.zip.
This makes it impossible to follow the "Getting Started" chapter.

Add database table for connectors

For advanced policy negotiation, it would be helpful if requested self-descriptions were deserialized and interpreted, or at least if a user were able to add known connectors to a list that is persisted by the connector.

A connector object in the repository could have the following information (see the entity sketch after this list):

  • UUID: id
  • URI: connectorId
  • URI: maintainer
  • URI: accessUrl
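A minimal JPA entity sketch for such a table (hypothetical; class and field names are illustrative, and URI fields would need an attribute converter in practice):

    import java.net.URI;
    import java.util.UUID;
    import javax.persistence.Entity;
    import javax.persistence.GeneratedValue;
    import javax.persistence.Id;

    @Entity
    public class KnownConnector {
        @Id
        @GeneratedValue
        private UUID id;          // internal primary key

        private URI connectorId;  // IDS identifier of the remote connector
        private URI maintainer;   // maintainer of the remote connector
        private URI accessUrl;    // endpoint under which the connector is reachable
    }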
