
OpenTOSCA Container - TOSCA Runtime


Part of the OpenTOSCA Ecosystem

Info

The OpenTOSCA Container is a Java/Maven-based runtime for deploying and managing TOSCA-based applications. The backend uses Winery; therefore, all CSARs exported from a Winery repository should be compatible with the runtime.

Development & Stable Versions

master is the main development branch; stable represents the latest stable release and is also available as tags.

Build

  1. Run git update-index --assume-unchanged ./org.opentosca.container.core/src/main/resources/application.properties to ignore custom configuration changes inside the application.properties.
  2. Update application.properties and replace localhost with your external IP address, e.g., 192.168.1.100.
  3. Run mvn package -DskipTests inside the root folder to build without tests (See also Tests in the next section).
  4. Afterwards, the OpenTOSCA-container.war can be deployed on a Tomcat web server.

Tests

  1. Update application.properties and replace localhost with your external IP address, e.g., 192.168.1.100.
  2. Make sure your Docker Engine is running and accessible via its REST API on tcp://your-ip:2375 (or change the port in the test cases under org.opentosca.container.war/src/test).
  3. Make sure the Docker containers defined in ./test.yml are running (e.g., via docker compose -f test.yml up) and that their ports match the ports in your application.properties.
  4. Afterwards, you can either run the tests via mvn package or start the JUnit tests under org.opentosca.container.war/src/test directly in your preferred IDE.
  5. The test cases download the test applications from the test repository themselves. However, you can configure a local clone or another test repository by adding the line org.opentosca.test.local.repository.path=/path/to/repository/tosca-definitions-test-applications to application.properties.

Setup in IntelliJ

  1. Open the project using File > Open and navigate to the container folder.
  2. Right click the pom.xml and select Add as Maven project.
  3. Run the Container run configuration.

Setup in Eclipse

  1. Import the project via Import existing Maven projects.
  2. Add the created WAR file of the project org.opentosca.container.war to a suitable server configured within your Eclipse, e.g., Tomcat.
  3. (Additional info) Usually the application runs on port 1337 and without a path prefix, so change the Tomcat port to 1337 and remove the path of the added WAR project.

Run via SpringBoot

  1. Run mvn install in the root of the project.
  2. Go to the directory org.opentosca.container.war and run mvn spring-boot:run; the runtime should then be available under localhost:1337.

Creating a new stable tag

  1. Run mvn release:update-versions -DautoVersionSubmodules=true and set the version to the preferred version for the container, or just use mvn --batch-mode release:update-versions -DautoVersionSubmodules=true to increment the current version. Remove -SNAPSHOT via mvn versions:set -DremoveSnapshot (More Info).
  2. Lock the Winery SNAPSHOT version via mvn versions:lock-snapshots.
  3. Then run git tag <tagname>, where tagname is the version (for a major release, append a name to it); afterwards, run git push origin --tags.

Disclaimer of Warranty

Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License.

Haftungsausschluss (Disclaimer)

This is a research prototype. Liability for lost profits, loss of production, business interruption, lost usage, loss of data and information, financing costs, and other pecuniary and consequential damages is excluded, except in cases of gross negligence, intent, and personal injury.

Acknowledgements

The initial code contribution has been supported by the Federal Ministry for Economic Affairs and Energy as part of the CloudCycle project (01MD11023). Current development is supported by the Federal Ministry for Economic Affairs and Energy as part of the PlanQK project (01MK20005N), the DFG (Deutsche Forschungsgemeinschaft) project ReSUS (425911815), as well as the DFG’s Excellence Initiative project SimTech (EXC 2075 - 390740016). Additional development has been funded by the Federal Ministry for Economic Affairs and Energy projects SmartOrchestra (01MD16001F) and SePiA.Pro (01MD16013F), as well as by the DFG projects SustainLife (641730) and ADDCompliance (636503). Further development is also funded by the European Union’s Horizon 2020 project RADON (825040).


Issues

Invalid JSON response when requesting NodeTemplate Resource

Current Behavior:
Requesting a NodeTemplate Resource returns an invalid JSON response.
In detail, the response contains multiple "_links" entries that are simply empty:

[...]
   "interfaces":{
      "interfaces":[
         {
            "name":"ContainerManagementInterface",
            "operations":{},
            "_links"
         },
         {
            "name":"http://opentosca.org/interfaces/connections",
            "operations":{},
            "_links"
         }
      ],
      "_links"
   },
[...]

Expected Behavior:
Receiving a well-formatted/valid JSON response (e.g. by omitting "_links" or by returning an empty object/array)

Steps to Reproduce:

  1. Upload an application to Container
  2. Use Postman to perform GET request (e.g. http://localhost:1337/csars/MyTinyToDo_Bare_Docker.csar/servicetemplates/%257Bhttp%253A%252F%252Fopentosca.org%252Fservicetemplates%257DMyTinyToDo_Bare_Docker/nodetemplates/MyTinyToDoDockerContainer)
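One way to get a valid response, sketched below with a hypothetical helper (not the container's actual serialization code): either omit "_links" entirely when there are no links, or emit it as a proper empty value, so a bare `"_links"` key never appears.

```java
import java.util.List;

public class LinksJsonSketch {
    // Hypothetical helper illustrating the suggested fix: render an interface
    // entry and omit "_links" entirely when no links are present, so the
    // overall document stays valid JSON.
    static String renderInterface(String name, List<String> links) {
        StringBuilder sb = new StringBuilder();
        sb.append("{\"name\":\"").append(name).append("\",");
        sb.append("\"operations\":{}");
        if (!links.isEmpty()) {
            sb.append(",\"_links\":[");
            for (int i = 0; i < links.size(); i++) {
                if (i > 0) sb.append(',');
                sb.append('"').append(links.get(i)).append('"');
            }
            sb.append(']');
        }
        sb.append('}');
        return sb.toString();
    }

    public static void main(String[] args) {
        // With no links, the "_links" key is simply left out.
        System.out.println(renderInterface("ContainerManagementInterface", List.of()));
    }
}
```

With a JSON library such as Jackson, the equivalent effect could presumably be achieved declaratively (e.g., by suppressing empty members), but the point is the same: never emit a key without a value.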

NullPointer in CSARsResource.uploadCSARAdminUI

This error occurs in the injector branch:

INTERNAL_SERVER_ERROR

java.lang.NullPointerException at org.opentosca.toscaengine.service.impl.resolver.ServiceTemplateResolver.resolveTopologyTemplate(ServiceTemplateResolver.java:207)
	at org.opentosca.toscaengine.service.impl.resolver.ServiceTemplateResolver.resolve(ServiceTemplateResolver.java:103)
	at org.opentosca.toscaengine.service.impl.resolver.DefinitionsResolver.resolveDefinitions(DefinitionsResolver.java:129)
	at org.opentosca.toscaengine.service.impl.ToscaEngineServiceImpl.resolveDefinitions(ToscaEngineServiceImpl.java:106)
	at org.opentosca.opentoscacontrol.service.impl.OpenToscaControlServiceImpl.invokeTOSCAProcessing(OpenToscaControlServiceImpl.java:74)
	at org.opentosca.containerapi.resources.csar.CSARsResource.handleCSAR(CSARsResource.java:316)
	at org.opentosca.containerapi.resources.csar.CSARsResource.uploadCSARAdminUI(CSARsResource.java:227)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
	at java.lang.reflect.Method.invoke(Unknown Source)
	at com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
	at com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$ResponseOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:205)
	at com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
	at com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:302)
	at com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
	at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
	at com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
	at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1511)
	at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1442)
	at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1391)
	at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1381)
	at com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:416)
	at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:538)
	at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:716)
	at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
	at org.eclipse.equinox.http.servlet.internal.ServletRegistration.service(ServletRegistration.java:61)
	at org.eclipse.equinox.http.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:38)
	at org.opentosca.containerapi.CorsFilter.doFilter(CorsFilter.java:32)
	at org.eclipse.equinox.http.servlet.internal.FilterRegistration.doFilter(FilterRegistration.java:81)
	at org.eclipse.equinox.http.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:35)
	at org.eclipse.equinox.http.servlet.internal.ProxyServlet.processAlias(ProxyServlet.java:130)
	at org.eclipse.equinox.http.servlet.internal.ProxyServlet.service(ProxyServlet.java:68)
	at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
	at org.eclipse.equinox.http.jetty.internal.HttpServerManager$InternalHttpServiceServlet.service(HttpServerManager.java:317)
	at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
	at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:390)
	at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
	at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:765)
	at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
	at org.mortbay.jetty.Server.handle(Server.java:326)
	at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
	at org.mortbay.jetty.HttpConnection$RequestHandler.content(HttpConnection.java:939)
	at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:756)
	at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:218)
	at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
	at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:409)
	at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)

BpsConnector : RemoteException: Server not available

Provision New instance of MyTinyToDo_Bare_Docker fails with error:

o.o.c.connector.bps.BpsConnector         : Setting address
o.o.c.connector.bps.BpsConnector         : Setting login data
o.o.c.connector.bps.BpsConnector         : Logging in to BPS
o.o.c.connector.bps.BpsConnector         : RemoteException: Server not available
org.apache.axis2.AxisFault: Connection refused (Connection refused)
	at org.apache.axis2.AxisFault.makeFault(AxisFault.java:430) ~[na:na]
	at org.apache.axis2.transport.http.HTTPSender.sendViaPost(HTTPSender.java:197) ~[na:na]
 	at org.apache.axis2.transport.http.HTTPSender.send(HTTPSender.java:75) ~[na:na]
 	at org.apache.axis2.transport.http.CommonsHTTPTransportSender.writeMessageWithCommons(CommonsHTTPTransportSender.java:404) ~[na:na]
	at org.apache.axis2.transport.http.CommonsHTTPTransportSender.invoke(CommonsHTTPTransportSender.java:231) ~[na:na]
	at org.apache.axis2.engine.AxisEngine.send(AxisEngine.java:443) ~[na:na]
 	at org.apache.axis2.description.OutInAxisOperationClient.send(OutInAxisOperation.java:406) ~[na:na]
 	at org.apache.axis2.description.OutInAxisOperationClient.executeImpl(OutInAxisOperation.java:229) ~[na:na]
	at org.apache.axis2.client.OperationClient.execute(OperationClient.java:165) ~[na:na]
 	at org.wso2.carbon.core.services.authentication.AuthenticationAdminStub.login(AuthenticationAdminStub.java:539) ~[na:na]
 	at org.opentosca.container.connector.bps.BpsConnector.login(BpsConnector.java:453) ~[org.opentosca.container.connector.bps_1.0.0.201706291221.jar:na]
	at org.opentosca.container.connector.bps.BpsConnector.deploy(BpsConnector.java:132) ~[org.opentosca.container.connector.bps_1.0.0.201706291221.jar:na]

Docker Container of the runtime fails to start with Nullpointer on Ubuntu VM

Current Behavior:
The runtime doesn't seem to be able to start up on some platforms (at least on Ubuntu VMs on bwCloud), even though we use Docker. It seems like a problem with log4j or slf4j. It could be that the startup order when initializing the runtime differs slightly on each platform, maybe a race-condition-like problem.

Here are the logs from Tomcat:

root@d256ad76a66d:/usr/local/tomcat/logs# cat localhost.2020-08-10.log
10-Aug-2020 09:02:54.579 INFO [main] org.apache.catalina.core.ApplicationContext.log 1 Spring WebApplicationInitializers detected on classpath
10-Aug-2020 09:02:54.698 INFO [main] org.apache.catalina.core.ApplicationContext.log Set web app root system property: 'webapp.root' = [/usr/local/tomcat/webapps/ROOT/]
10-Aug-2020 09:02:54.706 INFO [main] org.apache.catalina.core.ApplicationContext.log Initializing Logback from [classpath:logback.xml]
10-Aug-2020 09:02:54.710 SEVERE [main] org.apache.catalina.core.StandardContext.listenerStart Exception sending context initialized event to listener instance of class [ch.qos.logback.ext.spring.web.LogbackConfigListener]
java.lang.ClassCastException: org.slf4j.impl.Log4jLoggerFactory cannot be cast to ch.qos.logback.classic.LoggerContext
at ch.qos.logback.ext.spring.LogbackConfigurer.initLogging(Unknown Source)
at ch.qos.logback.ext.spring.web.WebLogbackConfigurer.initLogging(Unknown Source)
at ch.qos.logback.ext.spring.web.LogbackConfigListener.contextInitialized(Unknown Source)
at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4678)
at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5139)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183)
at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:717)
at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:690)
at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:705)
at org.apache.catalina.startup.HostConfig.deployDirectory(HostConfig.java:1133)
at org.apache.catalina.startup.HostConfig$DeployDirectory.run(HostConfig.java:1866)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at org.apache.tomcat.util.threads.InlineExecutorService.execute(InlineExecutorService.java:75)
at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112)
at org.apache.catalina.startup.HostConfig.deployDirectories(HostConfig.java:1045)
at org.apache.catalina.startup.HostConfig.deployApps(HostConfig.java:429)
at org.apache.catalina.startup.HostConfig.start(HostConfig.java:1576)
at org.apache.catalina.startup.HostConfig.lifecycleEvent(HostConfig.java:309)
at org.apache.catalina.util.LifecycleBase.fireLifecycleEvent(LifecycleBase.java:123)
at org.apache.catalina.util.LifecycleBase.setStateInternal(LifecycleBase.java:423)
at org.apache.catalina.util.LifecycleBase.setState(LifecycleBase.java:366)
at org.apache.catalina.core.ContainerBase.startInternal(ContainerBase.java:936)
at org.apache.catalina.core.StandardHost.startInternal(StandardHost.java:841)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183)
at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1384)
at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1374)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at org.apache.tomcat.util.threads.InlineExecutorService.execute(InlineExecutorService.java:75)
at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:134)
at org.apache.catalina.core.ContainerBase.startInternal(ContainerBase.java:909)
at org.apache.catalina.core.StandardEngine.startInternal(StandardEngine.java:262)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183)
at org.apache.catalina.core.StandardService.startInternal(StandardService.java:421)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183)
at org.apache.catalina.core.StandardServer.startInternal(StandardServer.java:930)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183)
at org.apache.catalina.startup.Catalina.start(Catalina.java:738)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:342)
at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:473)
10-Aug-2020 09:02:54.713 INFO [main] org.apache.catalina.core.ApplicationContext.log Initializing Spring root WebApplicationContext
10-Aug-2020 09:03:10.625 INFO [main] org.apache.catalina.core.ApplicationContext.log Closing Spring root WebApplicationContext
10-Aug-2020 09:03:10.922 INFO [main] org.apache.catalina.core.ApplicationContext.log Uninstalling JUL to SLF4J bridge
10-Aug-2020 09:03:10.922 INFO [main] org.apache.catalina.core.ApplicationContext.log Shutting down Logback
10-Aug-2020 09:03:10.928 SEVERE [main] org.apache.catalina.core.StandardContext.listenerStop Exception sending context destroyed event to listener instance of class [ch.qos.logback.ext.spring.web.LogbackConfigListener]
java.lang.NullPointerException
at ch.qos.logback.ext.spring.LogbackConfigurer.shutdownLogging(Unknown Source)
at ch.qos.logback.ext.spring.web.WebLogbackConfigurer.shutdownLogging(Unknown Source)
at ch.qos.logback.ext.spring.web.LogbackConfigListener.contextDestroyed(Unknown Source)
at org.apache.catalina.core.StandardContext.listenerStop(StandardContext.java:4724)
at org.apache.catalina.core.StandardContext.stopInternal(StandardContext.java:5395)
at org.apache.catalina.util.LifecycleBase.stop(LifecycleBase.java:257)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:187)
at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:717)
at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:690)
at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:705)
at org.apache.catalina.startup.HostConfig.deployDirectory(HostConfig.java:1133)
at org.apache.catalina.startup.HostConfig$DeployDirectory.run(HostConfig.java:1866)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at org.apache.tomcat.util.threads.InlineExecutorService.execute(InlineExecutorService.java:75)
at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112)
at org.apache.catalina.startup.HostConfig.deployDirectories(HostConfig.java:1045)
at org.apache.catalina.startup.HostConfig.deployApps(HostConfig.java:429)
at org.apache.catalina.startup.HostConfig.start(HostConfig.java:1576)
at org.apache.catalina.startup.HostConfig.lifecycleEvent(HostConfig.java:309)
at org.apache.catalina.util.LifecycleBase.fireLifecycleEvent(LifecycleBase.java:123)
at org.apache.catalina.util.LifecycleBase.setStateInternal(LifecycleBase.java:423)
at org.apache.catalina.util.LifecycleBase.setState(LifecycleBase.java:366)
at org.apache.catalina.core.ContainerBase.startInternal(ContainerBase.java:936)
at org.apache.catalina.core.StandardHost.startInternal(StandardHost.java:841)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183)
at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1384)
at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1374)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at org.apache.tomcat.util.threads.InlineExecutorService.execute(InlineExecutorService.java:75)
at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:134)
at org.apache.catalina.core.ContainerBase.startInternal(ContainerBase.java:909)
at org.apache.catalina.core.StandardEngine.startInternal(StandardEngine.java:262)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183)
at org.apache.catalina.core.StandardService.startInternal(StandardService.java:421)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183)
at org.apache.catalina.core.StandardServer.startInternal(StandardServer.java:930)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183)
at org.apache.catalina.startup.Catalina.start(Catalina.java:738)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:342)
at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:473)

Expected Behavior:
It should start up.

Steps to Reproduce:

  1. Use an Ubuntu virtual machine, for example on bwCloud
  2. Install Docker Engine and Docker Compose
  3. Use the docker-compose from https://github.com/OpenTOSCA/opentosca-docker

Deploying a CSAR with contained Plans Does Not Work

If a CSAR contains plans that have already been generated, e.g., using Winery's Generate Plans button, the deployment does not work: the container generates the plans again and deploys both the newly generated ones and the already contained ones.


container_1         | o.o.c.c.impl.service.CsarStorageServiceImpl       :196  : Successfully stored Csar as PetClinic_MySQL-OpenStack-w1.csar
container_1         | o.o.c.control.winery.WineryConnector              :87   : Exception while checking for availability of Container Repository:
container_1         | iceTemplateBoundaryPropertyMappingsToOutputHandler:173  : ServiceTemplate has no Properties defined
container_1         | iceTemplateBoundaryPropertyMappingsToOutputHandler:64   : Couldn't generate mapping, BuildPlan Output may be empty
container_1         | o.o.p.c.b.t.BPELBuildProcessBuilder               :238  : Created 1 build plans for CSAR PetClinic_MySQL-OpenStack-w1.csar
container_1         | o.o.p.c.b.t.BPELTerminationProcessBuilder         :211  : Created 1 termination plans for CSAR PetClinic_MySQL-OpenStack-w1.csar
container_1         | o.o.p.c.b.t.BPELBackupManagementProcessBuilder    :289  : Created 1 backup plans for CSAR PetClinic_MySQL-OpenStack-w1.csar
container_1         | o.o.c.p.d.plugin.bpel.BpelPlanEnginePlugin        :146  : Deploying Plan: PetClinic_MySQL-OpenStack-w1_backupManagementPlan.zip
container_1         | o.o.c.p.d.plugin.bpel.BpelPlanEnginePlugin        :146  : Deploying Plan: PetClinic_MySQL-OpenStack-w1_buildPlan.zip
container_1         | o.o.c.p.d.plugin.bpel.BpelPlanEnginePlugin        :146  : Deploying Plan: PetClinic_MySQL-OpenStack-w1_terminationPlan.zip
container_1         | o.o.c.p.d.plugin.bpel.BpelPlanEnginePlugin        :146  : Deploying Plan: PetClinic_MySQL-OpenStack-w1_buildPlan.zip
container_1         | o.o.c.p.d.plugin.bpel.BpelPlanEnginePlugin        :146  : Deploying Plan: PetClinic_MySQL-OpenStack-w1_terminationPlan.zip
container_1         | o.o.c.p.d.plugin.bpel.BpelPlanEnginePlugin        :146  : Deploying Plan: PetClinic_MySQL-OpenStack-w1_backupManagementPlan.zip
container_1         | o.o.container.api.controller.CsarController       :333  : Uploading and storing CSAR "PetClinic_MySQL-OpenStack-w1.csar" was successful
container_1         | o.o.container.api.controller.CsarController       :337  : Csar handling took 0:0:26
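A possible shape of the fix, sketched with illustrative names (this is not the container's actual API): select at most one plan per plan type before deployment, preferring plans already contained in the CSAR over freshly generated ones.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class PlanDeduplicationSketch {
    // Hypothetical sketch: given plans found in the CSAR and plans generated
    // by the container, keyed by plan type, keep one plan per type and let
    // the contained plan win over the generated one.
    static List<String> selectPlans(Map<String, String> containedPlans,
                                    Map<String, String> generatedPlans) {
        Map<String, String> byType = new LinkedHashMap<>(generatedPlans);
        byType.putAll(containedPlans); // contained plans override generated ones
        return List.copyOf(byType.values());
    }

    public static void main(String[] args) {
        Map<String, String> contained = Map.of("buildPlan", "contained_buildPlan.zip");
        Map<String, String> generated = new LinkedHashMap<>();
        generated.put("buildPlan", "generated_buildPlan.zip");
        generated.put("terminationPlan", "generated_terminationPlan.zip");
        // Only one build plan survives: the one shipped inside the CSAR.
        System.out.println(selectPlans(contained, generated));
    }
}
```

Alternatively, the container could skip plan generation altogether for plan types it already finds in the CSAR; either way, each plan type should be deployed exactly once.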

POST operation for deletion of instance

Idea: POST {delete} on servicetemplates/instances. The termination plan is then invoked, which in turn performs a DELETE on servicetemplates/instances/ID.

Reason: If you perform a DELETE on the ID directly, a termination plan is invoked that (only on success!) would have to perform another DELETE on it. What happens on failure? -- So it is better to use a WSDL-style POST as a command.

Spaces in CSAR Filename

Current Behavior:
When a CSAR has spaces in its filename, the plan generator creates variables etc. whose names contain spaces, which breaks the deployment because the generated BPEL plan cannot be deployed.

Expected Behavior:
Spaces should not break the deployment.
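One way to address this, sketched as an illustrative helper (not the container's actual code): sanitize the CSAR name before deriving BPEL/XML names from it, since XML NCNames allow neither spaces nor a leading digit.

```java
public class NameSanitizerSketch {
    // Illustrative sketch: derive a BPEL/XML-safe name from a CSAR file name
    // by replacing characters that are invalid in XML NCNames, such as spaces.
    static String toSafeName(String csarName) {
        String safe = csarName.replaceAll("[^A-Za-z0-9._-]", "_");
        // NCNames must not start with a digit, dot, or hyphen.
        if (!safe.isEmpty() && !Character.isLetter(safe.charAt(0)) && safe.charAt(0) != '_') {
            safe = "_" + safe;
        }
        return safe;
    }

    public static void main(String[] args) {
        System.out.println(toSafeName("My Tiny ToDo.csar")); // spaces become underscores
    }
}
```

Applying such a mapping consistently wherever variable names are derived would let CSARs with spaces in their filenames deploy without renaming them first.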

No consumers available on endpoint: Endpoint[direct://Async-WS-Callback]

Hi.

After running for a few days, the container got an error like the one below. After a start/stop, everything is OK.
It looks very much like the issue in https://issues.apache.org/jira/browse/JAMES-1026, and that page says it was "upgraded to camel 2.4.0 which fixes the problem". However, I found that the container depends on Camel 2.10.4. Could you help locate how this happened? Thanks.

Error:
2016-09-20 17:32:59,883 DEBUG o.o.s.p.s.s.i.p.CallbackProcessor:70 Found MessageID: ID-mano-44356-1474281956908-8-107
2016-09-20 17:32:59,887 DEBUG org.apache.camel.processor.SendProcessor:121 >>>> Endpoint[stream://out] Exchange[Message: <sch:invokeResponse xmlns:sch="http://siserver.org/schema"> sch:MessageIDID-mano-44356-1474281956908-8-107/sch:MessageID 133.168.1.16 false succss /sch:invokeResponse]
2016-09-20 17:32:59,887 DEBUG o.a.c.component.stream.StreamProducer:127 Writing as byte[]: [60, 115, 99, 104, 58, 105, 110, 118, 111, 107, 101, 82, 101, 115, 112, 111, 110, 115, 101, 32, 120, 109, 108, 110, 115, 58, 115, 99, 104, 61, 34, 104, 116, 116, 112, 58, 47, 47, 115, 105, 115, 101, 114, 118, 101, 114, 46, 111, 114, 103, 47, 115, 99, 104, 101, 109, 97, 34, 62, 32, 32, 32, 32, 32, 32, 32, 32, 32, 60, 115, 99, 104, 58, 77, 101, 115, 115, 97, 103, 101, 73, 68, 62, 73, 68, 45, 109, 97, 110, 111, 45, 52, 52, 51, 53, 54, 45, 49, 52, 55, 52, 50, 56, 49, 57, 53, 54, 57, 48, 56, 45, 56, 45, 49, 48, 55, 60, 47, 115, 99, 104, 58, 77, 101, 115, 115, 97, 103, 101, 73, 68, 62, 32, 32, 32, 32, 32, 32, 32, 32, 32, 60, 110, 101, 116, 119, 111, 114, 107, 73, 112, 62, 49, 51, 51, 46, 49, 54, 56, 46, 49, 46, 49, 54, 60, 47, 110, 101, 116, 119, 111, 114, 107, 73, 112, 62, 32, 32, 32, 32, 32, 32, 32, 32, 32, 60, 105, 115, 83, 117, 99, 99, 101, 115, 115, 62, 102, 97, 108, 115, 101, 60, 47, 105, 115, 83, 117, 99, 99, 101, 115, 115, 62, 32, 32, 32, 32, 32, 32, 32, 32, 32, 60, 115, 116, 97, 116, 117, 115, 62, 115, 117, 99, 99, 115, 115, 60, 47, 115, 116, 97, 116, 117, 115, 62, 32, 32, 32, 32, 32, 32, 60, 47, 115, 99, 104, 58, 105, 110, 118, 111, 107, 101, 82, 101, 115, 112, 111, 110, 115, 101, 62] to java.io.PrintStream@2c68d32d
2016-09-20 17:32:59,888 DEBUG o.apache.camel.processor.ChoiceProcessor:75 #0 - header{header(AvailableMessageID)} == true matches: true for: Exchange[Message: <sch:invokeResponse xmlns:sch="http://siserver.org/schema"> sch:MessageIDID-mano-44356-1474281956908-8-107/sch:MessageID 133.168.1.16 false succss /sch:invokeResponse]
2016-09-20 17:32:59,889 DEBUG o.a.camel.processor.WireTapProcessor:97 >>>> (wiretap) Endpoint[direct://Async-WS-Callback] Exchange[Message: <sch:invokeResponse xmlns:sch="http://siserver.org/schema"> sch:MessageIDID-mano-44356-1474281956908-8-107/sch:MessageID 133.168.1.16 false succss /sch:invokeResponse]
2016-09-20 17:32:59,891 WARN o.a.c.component.direct.DirectProducer:54 No consumers available on endpoint: Endpoint[direct://Async-WS-Callback] to process: Exchange[Message: <sch:invokeResponse xmlns:sch="http://siserver.org/schema"> sch:MessageIDID-mano-44356-1474281956908-8-107/sch:MessageID 133.168.1.16 false succss /sch:invokeResponse]
2016-09-20 17:32:59,907 DEBUG o.a.camel.processor.DefaultErrorHandler:170 Failed delivery for (MessageId: ID-mano-44356-1474281956908-10-198 on ExchangeId: ID-mano-44356-1474281956908-10-199). On delivery attempt: 0 caught: org.apache.camel.CamelExchangeException: No consumers available on endpoint: Endpoint[direct://Async-WS-Callback]. Exchange[Message: <sch:invokeResponse xmlns:sch="http://siserver.org/schema"> sch:MessageIDID-mano-44356-1474281956908-8-107/sch:MessageID 133.168.1.16 false succss /sch:invokeResponse]
2016-09-20 17:32:59,912 DEBUG o.a.c.component.cxf.CxfClientCallback:64 default-workqueue-2 calling handleResponse
2016-09-20 17:32:59,912 DEBUG org.apache.camel.impl.ConsumerCache:102 <<<< Endpoint[direct://Async-WS-Callback]
2016-09-20 17:32:59,913 DEBUG org.apache.camel.impl.ConsumerCache:92 Adding to consumer cache with key: Endpoint[direct://Async-WS-Callback] for consumer: PollingConsumer on Endpoint[direct://Async-WS-Callback]
2016-09-20 17:32:59,929 ERROR o.a.camel.processor.DefaultErrorHandler:215 Failed delivery for (MessageId: ID-mano-44356-1474281956908-10-198 on ExchangeId: ID-mano-44356-1474281956908-10-199). Exhausted after delivery attempt: 1 caught: org.apache.camel.CamelExchangeException: No consumers available on endpoint: Endpoint[direct://Async-WS-Callback]. Exchange[Message: <sch:invokeResponse xmlns:sch="http://siserver.org/schema"> sch:MessageIDID-mano-44356-1474281956908-8-107/sch:MessageID 133.168.1.16 false succss /sch:invokeResponse]
org.apache.camel.CamelExchangeException: No consumers available on endpoint: Endpoint[direct://Async-WS-Callback]. Exchange[Message: <sch:invokeResponse xmlns:sch="http://siserver.org/schema"> sch:MessageIDID-mano-44356-1474281956908-8-107/sch:MessageID 133.168.1.16 false succss /sch:invokeResponse]
at org.apache.camel.component.direct.DirectProducer.process(DirectProducer.java:56) ~[org.apache.camel.camel-core_2.10.4.jar:2.10.4]
at org.apache.camel.util.AsyncProcessorHelper.process(AsyncProcessorHelper.java:73) [org.apache.camel.camel-core_2.10.4.jar:2.10.4]
at org.apache.camel.processor.RedeliveryErrorHandler.processErrorHandler(RedeliveryErrorHandler.java:334) [org.apache.camel.camel-core_2.10.4.jar:2.10.4]
at org.apache.camel.processor.RedeliveryErrorHandler.process(RedeliveryErrorHandler.java:220) [org.apache.camel.camel-core_2.10.4.jar:2.10.4]
at org.apache.camel.processor.RouteContextProcessor.processNext(RouteContextProcessor.java:46) [org.apache.camel.camel-core_2.10.4.jar:2.10.4]
at org.apache.camel.processor.DelegateAsyncProcessor.process(DelegateAsyncProcessor.java:90) [org.apache.camel.camel-core_2.10.4.jar:2.10.4]
at org.apache.camel.processor.UnitOfWorkProcessor.processAsync(UnitOfWorkProcessor.java:150) [org.apache.camel.camel-core_2.10.4.jar:2.10.4]
at org.apache.camel.processor.UnitOfWorkProcessor.process(UnitOfWorkProcessor.java:117) [org.apache.camel.camel-core_2.10.4.jar:2.10.4]
at org.apache.camel.util.AsyncProcessorHelper.process(AsyncProcessorHelper.java:99) [org.apache.camel.camel-core_2.10.4.jar:2.10.4]
at org.apache.camel.processor.DelegateAsyncProcessor.process(DelegateAsyncProcessor.java:86) [org.apache.camel.camel-core_2.10.4.jar:2.10.4]
at org.apache.camel.processor.WireTapProcessor$1.call(WireTapProcessor.java:98) [org.apache.camel.camel-core_2.10.4.jar:2.10.4]
at org.apache.camel.processor.WireTapProcessor$1.call(WireTapProcessor.java:94) [org.apache.camel.camel-core_2.10.4.jar:2.10.4]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [na:1.8.0_45]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_45]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_45]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_45]

Regards.

Return correct mime-types

Resource /containerapi/CSARs/{c}/Content/SELFSERVICE-Metadata/image.jpg should respond with the correct mime-type (image/jpeg) in order to display the image properly.
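A minimal sketch of the kind of extension-to-MIME-type lookup the resource could use. The class and method names here are illustrative, not the container's actual API:

```java
import java.util.Map;

public class MimeTypes {
    private static final Map<String, String> TYPES = Map.of(
        "jpg", "image/jpeg",
        "jpeg", "image/jpeg",
        "png", "image/png",
        "gif", "image/gif");

    // Resolve a MIME type from the file extension; fall back to a generic binary type.
    public static String forFilename(String filename) {
        int dot = filename.lastIndexOf('.');
        String ext = dot < 0 ? "" : filename.substring(dot + 1).toLowerCase();
        return TYPES.getOrDefault(ext, "application/octet-stream");
    }

    public static void main(String[] args) {
        System.out.println(forFilename("image.jpg")); // image/jpeg
    }
}
```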

Support Devices for Docker Containers

Should be possible to mount devices into docker containers.

  • adjust management bus
  • adjust docker container node type
  • adjust docker engine node type
  • write test for starting a container without any mounts
  • write test for starting a container with a mounted device

soap request callback match messageId error

Hi,

I found an issue while invoking a SOAP IA via the container. The reason is that the CallbackProcessor matches the web service IA callback via message.contains(messageId) (org.opentosca.siengine.plugins.soaphttp.service.impl.processor.CallbackProcessor, line 67).

Below is my scenario:
First, somehow, a messageId 'ID-host-172-66-1-78-39831-1472835311831-8-5' was stored, and then a new request B was sent to the web service IA with the messageId 'ID-host-172-66-1-78-39831-1472835311831-8-55'.
Then request B's callback arrived with the messageId 'ID-host-172-66-1-78-39831-1472835311831-8-55'. The CallbackProcessor matched the stored messageIds against request B's callback message and found that it contains 'ID-host-172-66-1-78-39831-1472835311831-8-5', so it assumed this is the callback for 'ID-host-172-66-1-78-39831-1472835311831-8-5', while in fact it is the callback for 'ID-host-172-66-1-78-39831-1472835311831-8-55'.

Below is part of logs:
Stored messageIDs: [ID-host-172-66-1-78-39831-1472835311831-8-5, ID-host-172-66-1-78-39831-1472835311831-8-55]
Found MessageID: ID-host-172-66-1-78-39831-1472835311831-8-5
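The mismatch can be reproduced with plain string logic: substring containment lets a shorter id "steal" the callback of a longer id it is a prefix of. A longest-match strategy (or an exact match against the MessageID element) avoids the collision. The class and method names below are illustrative, not the actual CallbackProcessor code:

```java
import java.util.List;

public class MessageIdMatch {
    // Buggy style: the first stored id contained in the message wins,
    // so "...-8-5" also matches a callback that carries "...-8-55".
    public static String firstContained(List<String> storedIds, String message) {
        for (String id : storedIds) {
            if (message.contains(id)) {
                return id;
            }
        }
        return null;
    }

    // Safer: prefer the longest stored id contained in the message.
    public static String longestContained(List<String> storedIds, String message) {
        String best = null;
        for (String id : storedIds) {
            if (message.contains(id) && (best == null || id.length() > best.length())) {
                best = id;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        List<String> stored = List.of("ID-host-...-8-5", "ID-host-...-8-55");
        String callback = "<MessageID>ID-host-...-8-55</MessageID>";
        System.out.println(firstContained(stored, callback));   // ID-host-...-8-5 (wrong id)
        System.out.println(longestContained(stored, callback)); // ID-host-...-8-55
    }
}
```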

Regards.

Failures in IA Deployment aren't reflected in the API

Current Behavior:
When the IA Engine tries to deploy an IA and fails (which is clearly stated in the logs), the UI still marks it as a successful deployment.

Expected Behavior:
An error should be propagated from the API to the UI.

Steps to Reproduce:
Take any CSAR, configure the container with wrong endpoints for the Tomcat, deploy the CSAR, see the deployment marked as successful although no IA was ever deployed.

System Information:
Tested with local setup (Container in Eclipse, Tomcat and BPS running locally) on Mac

PlanInvocation fails repeatedly after first fail of invocation

As the MBEventHandler class has this synchronized block beginning at line 119:

synchronized (this) {
    try {
        consumer.start();
        Exchange exchange = consumer.receive("direct-vm:" + Activator.apiID);
        response = exchange.getIn().getBody();
        callbackMessageID = exchange.getIn().getMessageId();
        consumer.stop();
    } catch (Exception e) {
        MBEventHandler.LOG.error("Some error occured.");
        e.printStackTrace();
    }
}

The whole object is blocked when no proper response (success or fail) is returned, which effectively blocks all plan invocations thereafter.
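One way out, sketched here with a plain BlockingQueue standing in for the Camel consumer (Camel's ConsumerTemplate also offers a receive(endpointUri, timeout) overload), is to poll with a timeout so a lost response yields null instead of holding the synchronized block forever:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class TimedReceive {
    // Stand-in for consumer.receive(endpoint, timeoutMillis): returns null on timeout
    // instead of blocking the caller (and the synchronized block) indefinitely.
    public static String receive(BlockingQueue<String> responses, long timeoutMillis)
            throws InterruptedException {
        return responses.poll(timeoutMillis, TimeUnit.MILLISECONDS);
    }

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> responses = new LinkedBlockingQueue<>();
        String response = receive(responses, 100); // no response ever arrives
        if (response == null) {
            System.out.println("timed out, releasing lock and reporting plan failure");
        }
    }
}
```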

I just can't compile it; lots of errors.

errors like:
The method debug(String, Object, Object) in the type Logger is not applicable for the arguments (String, int, String, CSARID) CSARContent.java /container/org.opentosca.core.model.csar/src/org/opentosca/core/model/csar line 252

The method getLogger(String) in the type LoggerFactory is not applicable for the arguments (Class)

I'm using Eclipse Luna (JDK 1.7); changing to JDK 1.6 gives the same problems.

Support Interface Inheritance

Currently, the Plan Builder does not scan through the inheritance hierarchy to find a specific interface and operation.

Thus, the plan is to implement this as a utility method in Winery and use it here.

Also consider the implementation of this in the Management Bus (as it is already capable of resolving the interface inheritance).

Why is it called container?

Judging by the source code and the description, this repo contains an orchestration app, like the terraform CLI for Terraform and kubectl for Kubernetes, or ansible-playbook if we talk about Ansible.

But the tool doesn't orchestrate containers such as Docker, CRI-O, LXD, etc. Nor does it provide a runtime/VM to run containers in. So why call it container?

ServiceInstances are not properly deleted after CSAR deletion

Current Behavior:
If a CSAR is deleted and it has/had running ServiceInstances, the instances won't be deleted. If you re-upload the CSAR, the ServiceInstance data is back again => no real delete in the backend.

Expected Behavior:
After deleting a CSAR and re-uploading it, there shouldn't be any ServiceInstance available afterwards.

Steps to Reproduce:

  1. Use any CSAR and create an Instance from the Service within (and optionally terminate the instance)
  2. Undeploy the CSAR and redeploy it
  3. You will clearly see that the ServiceInstance from Step 1 is still there

System Information:
Issue is independent of Platform

Support Privileged Mode for Docker Containers

Should be possible to run docker container in privileged mode.
For example, to initialize a vcan inside the container.

  • adjust management bus
  • adjust docker container node type
  • adjust docker engine node type
  • write test for starting a non privileged container
  • write test for starting a privileged container

Idle Timeout

Current Behavior:
For some reason, there appears this error sometimes during the execution of a plan:

container_1    | 2021-11-17 12:50:10.316 ERROR [qtp244180360-142] o.a.c.c.jetty.CamelContinuationServlet   : Error processing request
container_1    | java.io.IOException: java.util.concurrent.TimeoutException: Idle timeout expired: 30000/30000 ms
container_1    | 	at org.eclipse.jetty.util.SharedBlockingCallback$Blocker.block(SharedBlockingCallback.java:234)
container_1    | 	at org.eclipse.jetty.server.HttpOutput.channelWrite(HttpOutput.java:269)
container_1    | 	at org.eclipse.jetty.server.HttpOutput.write(HttpOutput.java:861)
container_1    | 	at org.apache.camel.util.IOHelper.copy(IOHelper.java:193)
container_1    | 	at org.apache.camel.util.IOHelper.copy(IOHelper.java:148)
container_1    | 	at org.apache.camel.http.common.DefaultHttpBinding.copyStream(DefaultHttpBinding.java:492)
container_1    | 	at org.apache.camel.http.common.DefaultHttpBinding.doWriteDirectResponse(DefaultHttpBinding.java:558)
container_1    | 	at org.apache.camel.http.common.DefaultHttpBinding.doWriteResponse(DefaultHttpBinding.java:431)
container_1    | 	at org.apache.camel.http.common.DefaultHttpBinding.writeResponse(DefaultHttpBinding.java:354)
container_1    | 	at org.apache.camel.component.jetty.CamelContinuationServlet.doService(CamelContinuationServlet.java:262)
container_1    | 	at org.apache.camel.http.common.CamelServlet.service(CamelServlet.java:130)
container_1    | 	at javax.servlet.http.HttpServlet.service(HttpServlet.java:764)
container_1    | 	at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:791)
container_1    | 	at org.eclipse.jetty.servlet.ServletHandler$ChainEnd.doFilter(ServletHandler.java:1626)
container_1    | 	at org.apache.camel.component.jetty.CamelFilterWrapper.doFilter(CamelFilterWrapper.java:47)
container_1    | 	at org.eclipse.jetty.servlet.FilterHolder.doFilter(FilterHolder.java:193)
container_1    | 	at org.eclipse.jetty.servlet.ServletHandler$Chain.doFilter(ServletHandler.java:1601)
container_1    | 	at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:548)
container_1    | 	at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:233)
container_1    | 	at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1435)
container_1    | 	at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:188)
container_1    | 	at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:501)
container_1    | 	at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:186)
container_1    | 	at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1350)
container_1    | 	at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
container_1    | 	at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
container_1    | 	at org.eclipse.jetty.server.Server.handleAsync(Server.java:559)
container_1    | 	at org.eclipse.jetty.server.HttpChannel.lambda$handle$2(HttpChannel.java:396)
container_1    | 	at org.eclipse.jetty.server.HttpChannel.dispatch(HttpChannel.java:633)
container_1    | 	at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:396)
container_1    | 	at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:273)
container_1    | 	at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311)
container_1    | 	at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:105)
container_1    | 	at org.eclipse.jetty.io.ChannelEndPoint$1.run(ChannelEndPoint.java:104)
container_1    | 	at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:773)
container_1    | 	at org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:905)
container_1    | 	at java.base/java.lang.Thread.run(Thread.java:829)
container_1    | Caused by: java.util.concurrent.TimeoutException: Idle timeout expired: 30000/30000 ms
container_1    | 	at org.eclipse.jetty.io.IdleTimeout.checkIdleTimeout(IdleTimeout.java:171)
container_1    | 	at org.eclipse.jetty.io.IdleTimeout.idleCheck(IdleTimeout.java:113)
container_1    | 	at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
container_1    | 	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
container_1    | 	at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304)
container_1    | 	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
container_1    | 	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
container_1    | 	... 1 common frames omitted
container_1    | 2021-11-17 12:50:10.317 WARN  [qtp244180360-142] org.eclipse.jetty.server.HttpChannel     : handleException /callback java.io.IOException: java.util.concurrent.TimeoutException: Idle timeout expired: 30000/30000 ms

I don't know how to reproduce it or when it happens. It just occurs sometimes in the logs but does not seem to break anything. Yet.

connectTo gets executed multiple times

Current Behavior:


The connectTo operation between the QHana-Backend and the QHana-PluginRunner gets executed twice.
Both times code gets executed at the node of the QHana-Backend.
In the first execution, the QHana-Backend is the source node and the QHana-PluginRunner is the target node (expected behavior).
In the second execution, the QHana-Backend itself is the target node.
The connectTo operation has two properties, IP and Port, that match both nodes, the QHana-Backend and the QHana-PluginRunner.
Presumably, the second execution comes from the other connection between QHana-UI and QHana-Backend.

Expected Behavior:

The connectTo operation between the QHana-Backend and the QHana-PluginRunner should be executed only once at the QHana-Backend with the QHana-PluginRunner as the target node.

Local CSARs are deleted when they are not accessed for too long

Hi.

I'm not sure whether this is an issue or an intentional temporary strategy.

Right now, the CSARs are stored to a temp path (java.io.tmpdir). But files under the temp path are deleted automatically; for example, on my system, temp files are deleted if they are not accessed for 10 days. CSARs are application data, so they should not be deleted automatically. Should the storage location be changed to a new path, so that the files are not deleted unexpectedly?
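A hedged illustration of the difference: java.io.tmpdir is subject to OS cleanup policies, while an application-owned directory survives them. The .opentosca path below is just an example, not the container's actual configuration:

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class CsarStorage {
    // Temp dir: may be purged by the OS (e.g. after 10 days without access).
    public static Path tempLocation() {
        return Paths.get(System.getProperty("java.io.tmpdir"), "opentosca-csars");
    }

    // Application-owned dir under the user's home: not touched by temp cleaners.
    public static Path persistentLocation() {
        return Paths.get(System.getProperty("user.home"), ".opentosca", "csars");
    }

    public static void main(String[] args) throws Exception {
        Path store = persistentLocation();
        Files.createDirectories(store); // idempotent, creates parents as needed
        System.out.println("storing CSARs under " + store);
    }
}
```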

Regards.

NullPointer in CSARsResource.uploadCSARAdminUI for ModelUtils.hasOpenRequirements Call

Error occurs in injector-branch:

INTERNAL_SERVER_ERROR

	at org.opentosca.containerapi.resources.utilities.ModelUtils.hasOpenRequirements(ModelUtils.java:32)
	at org.opentosca.containerapi.resources.csar.CSARsResource.handleCSAR(CSARsResource.java:318)
	at org.opentosca.containerapi.resources.csar.CSARsResource.uploadCSARAdminUI(CSARsResource.java:227)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
	at java.lang.reflect.Method.invoke(Unknown Source)
	at com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
	at com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$ResponseOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:205)
	at com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
	at com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:302)
	at com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
	at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
	at com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
	at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1511)
	at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1442)
	at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1391)
	at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1381)
	at com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:416)
	at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:538)
	at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:716)
	at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
	at org.eclipse.equinox.http.servlet.internal.ServletRegistration.service(ServletRegistration.java:61)
	at org.eclipse.equinox.http.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:38)
	at org.opentosca.containerapi.CorsFilter.doFilter(CorsFilter.java:32)
	at org.eclipse.equinox.http.servlet.internal.FilterRegistration.doFilter(FilterRegistration.java:81)
	at org.eclipse.equinox.http.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:35)
	at org.eclipse.equinox.http.servlet.internal.ProxyServlet.processAlias(ProxyServlet.java:130)
	at org.eclipse.equinox.http.servlet.internal.ProxyServlet.service(ProxyServlet.java:68)
	at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
	at org.eclipse.equinox.http.jetty.internal.HttpServerManager$InternalHttpServiceServlet.service(HttpServerManager.java:317)
	at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
	at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:390)
	at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
	at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:765)
	at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
	at org.mortbay.jetty.Server.handle(Server.java:326)
	at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
	at org.mortbay.jetty.HttpConnection$RequestHandler.content(HttpConnection.java:939)
	at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:756)
	at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:218)
	at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
	at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:409)
	at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)

Planning fails if DeploymentArtifacts XML Tag is empty

If there is an empty DeploymentArtifacts XML Tag inside a Node Type Implementation then planning throws a Null Pointer Exception.
This happens e.g. when a Deployment Artifact is added in Winery and then removed.

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<Definitions targetNamespace="http://opentosca.org/nodetypeimplementations" id="nodeTypeImplementations-DockerContainer-Implementation_w1" xmlns="http://docs.oasis-open.org/tosca/ns/2011/12" xmlns:yml="http://docs.oasis-open.org/tosca/ns/simple/yaml/1.3" xmlns:selfservice="http://www.eclipse.org/winery/model/selfservice" xmlns:winery="http://www.opentosca.org/winery/extensions/tosca/2013/02/12" xmlns:testwineryopentoscaorg="http://test.winery.opentosca.org">
    <NodeTypeImplementation targetNamespace="http://opentosca.org/nodetypeimplementations" name="DockerContainer-Implementation_w1" abstract="no" final="no" nodeType="nodeTypes:DockerContainer_w1" xmlns:nodeTypes="http://opentosca.org/nodetypes">
        <ImplementationArtifacts>
            <ImplementationArtifact interfaceName="ContainerManagementInterface" name="DockerContainer_ContainerManagementInterface_IA" artifactType="artifactTypes:WAR-Java8" artifactRef="artifactTemplates:DockerContainer_ContainerManagementInterface-w1" xmlns:artifactTemplates="http://opentosca.org/artifacttemplates" xmlns:artifactTypes="http://opentosca.org/artifacttypes"/>
        </ImplementationArtifacts>
        <DeploymentArtifacts/>   <---- THIS WILL BREAK
    </NodeTypeImplementation>
</Definitions>
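A defensive-iteration sketch: when the <DeploymentArtifacts/> element is empty, the generated model object may expose a null list, so the plan builder should fall back to an empty list instead of dereferencing it. The types here are simplified stand-ins for the TOSCA model classes, not the actual implementation:

```java
import java.util.Collections;
import java.util.List;

public class NodeTypeImpl {
    // Simplified stand-in: an empty <DeploymentArtifacts/> tag unmarshals to null here.
    private final List<String> deploymentArtifacts;

    public NodeTypeImpl(List<String> deploymentArtifacts) {
        this.deploymentArtifacts = deploymentArtifacts;
    }

    // Guarded accessor: never returns null, so callers can iterate safely.
    public List<String> getDeploymentArtifacts() {
        return deploymentArtifacts == null ? Collections.emptyList() : deploymentArtifacts;
    }

    public static void main(String[] args) {
        NodeTypeImpl impl = new NodeTypeImpl(null); // the empty-tag case
        for (String da : impl.getDeploymentArtifacts()) {
            System.out.println(da); // loop body never runs, but no NPE either
        }
        System.out.println("planning continues");
    }
}
```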

Provide status information about OpenTOSCA components

Current Behavior:
Starting OpenTOSCA inside Docker containers on OpenStack VMs takes up to 35 min (wso2server.sh).
If the user starts provisioning new instances before all OpenTOSCA components are ready, org.opentosca.container.connector.bps.BpsConnector.login fails.

Expected Behavior:
The user should be informed about the status of all OpenTOSCA components and provisioning new instances of applications must be blocked until all components are ready.

Steps to Reproduce:

  1. ssh user@openstackVM
  2. git clone https://github.com/OpenTOSCA/opentosca-dockerfiles.git && cd opentosca-dockerfiles
  3. sudo docker-compose build
  4. sudo docker-compose up
  5. open :8088
  6. Upload New Application
  7. Provision New Instance
  8. Error

Other Information:

  • Possible Solution: Analyze log files StartupFinalizerServiceComponent - WSO2 Carbon started in 64 sec or test org.opentosca.container.connector.bps.BpsConnector.login
  • Block Provision New Instance until all OpenTOSCA components are ready
  • Additional: Test if components work properly (BSP server available + Docker-Container-component can create new container with dns working)

System Information:

  • Openstack VM:
    • image: ubuntu-16.04-LTS-xenial-server-cloudimg
    • flavor: m1-lager (8 VCPUs, 16 GB RAM)

containerapi service is not available

Hi:
everything is OK, but when I click Administrative UI or Vinothek Self-Service Portal from the main page,
the following error is displayed:

Vinothek failed to connect to 'http://127.0.0.1:1337/containerapi/CSARs'! Please make sure OpenTOSCA
runs properly on this machine and is accessible. In particular, check if the listed port is
accessible from the outside, i.e., from the machine Vinothek is running on and from the
user's machine. If OpenTOSCA runs (in its default configuration) on a virtual machine, you
need to configure the firewall so at least ports 22, 1337, 8080, 9443 and 9763 are open.

I then used a browser to open http://127.0.0.1:1337/containerapi/CSARs and got the following error:
HTTP ERROR 404

Problem accessing /containerapi/CSARs. Reason:

ProxyServlet: /containerapi/CSARs

I had started the OpenTOSCA Container through the Eclipse launch configuration ContainerAPI_ALL.launch, and it reported a successful launch.

Add feature to cleanup the container at startup

Right now we have a bundle that is able to clean up the core by removing every CSAR inside. The problem is that the bundle runs that logic when it is started, without any control by the user.
It would be better for the user if we implemented a check against the settings bundle for a "-clean" parameter and only then performed the cleanup.

Refactor list resource /servicetemplates to object resource

Current Behavior:
The CSAR resource of the API delivers _links.servicetemplates.

Expected Behavior:
Since we can only handle one servicetemplate per CSAR, the key should be _links.servicetemplate for proper semantics.

Steps to Reproduce:

  1. Upload MyTinyToDo_Bare_Docker.csar to container
  2. Request http.get to http://localhost:1337/csars/MyTinyToDo_Bare_Docker.csar with header 'accept':'application/json'

Other Information:

{
    "id": "MyTinyToDo_Bare_Docker.csar",
    "name": "MyTinyToDo_Bare_Docker.csar",
    "display_name": "MyTinyToDo Docker Service",
    "authors": [],
    "description": "Installs the PHP Application MyTinyToDo as a Docker Container on a locally installed DockerEngine.<br>",
    "icon_url": "http://localhost:1337/containerapi/CSARs/MyTinyToDo_Bare_Docker.csar/Content/SELFSERVICE-Metadata/icon.jpg",
    "image_url": "http://localhost:1337/containerapi/CSARs/MyTinyToDo_Bare_Docker.csar/Content/SELFSERVICE-Metadata/image.jpg",
    "_links": {
        "servicetemplates": {
            "href": "http://localhost:1337/csars/MyTinyToDo_Bare_Docker.csar/servicetemplates"
        },
        "self": {
            "href": "http://localhost:1337/csars/MyTinyToDo_Bare_Docker.csar"
        }
    }
}

System Information:
Current master of container

Instance stuck at migration during transformation

Current Behavior:
Create two versions of MyTinyToDo_Bare_Docker (one with only DockerEngine, one with DockerEngine and MyTinyToDo), taken from tosca-definitions-public.
When transforming a running instance between two different versions of MyTinyToDo_Bare_Docker, the transformation fails and the instance remains in state MIGRATING.

Expected Behavior:
Old instance has state MIGRATED (listed under MyTinyToDo_Bare_Docker_DockerEngine.csar) and new instance is CREATED (listed under MyTinyToDo_Bare_Docker_Complete.csar)

Steps to Reproduce:

  1. Upload both CSARs to Container
  2. Create new instance using MyTinyToDo_Bare_Docker_DockerEngine.csar
  3. Create transformation plan (POST request to /csars/transform with MyTinyToDo_Bare_Docker_Only_DockerEngine.csar as source and MyTinyToDo_Bare_Docker_Complete.csar as target)
  4. Execute transformation plan (listed under instance created in step 2)
  5. Transformation is stuck at migration

Other Information:
ODE returns the following error message:

'//*[local-name()='NodeTemplateInstanceResources']/*[local-name()='NodeTemplateInstances']/*[local-name()='NodeTemplateInstance'][1]/*[local-name()='Links']/*[local-name()='Link']/@*[local-name()='href']/string()' against '<?xml version="1.0" encoding="UTF-8"?>
<NodeTemplateInstanceResources><Links><Link href="http://192.168.2.213:1337/csars/MyTinyToDo_Bare_Docker_DockerEngine.csar/servicetemplates/%257Bhttp%253A%252F%252Fopentosca.org%252Fservicetemplates%257DMyTinyToDo_Bare_Docker_w1-wip1/nodetemplates/DockerEngine/instances" rel="self"/></Links><NodeTemplateInstances/></NodeTemplateInstanceResources>'

Performing a GET request to the respective URL returns the DockerEngine node instance with state DELETED

System Information:
Operating System: Win10
Container locally hosted using Eclipse (master branch)
Remaining components (UI, engine-plan, ...) run using docker-compose
CONTAINER_BUS_MANAGEMENT_MOCK set to true

Hibernate.initialize() shouldn't be used

Current Behavior:
Currently, when objects can't be loaded in lazy mode from the database, they are loaded via the Hibernate.initialize() method,
see e.g. here:

Expected Behavior:
We shouldn't load objects by relying on the underlying implementation of JPA. A solution would be to deactivate lazy loading via JPA annotations, but then again, we lose lazy loading.

Refactor Application/Management Bus

Current Behavior:
Currently the management bus has become a mess, as it combines multiple techniques, styles, and so on.
E.g., we use dependency injection, Camel, and the bus also calls itself, making it really hard to debug, especially considering that even the core contains some parts of the logic.

Expected Behavior:
Maybe(!) a proper way to refactor the bus (and maybe the whole control flow) would be to use only Camel, instead of mixing multiple things together.

Steps to Reproduce:

  1. Just try to debug a single deployment and look at all messages running through the bus
    OR
  2. Try to debug the distributed features of the container such as choreographies or messaging via MQTT

Other Information:
We could also bump the version of Camel, as our current version has security problems.

Situationtriggers aren't reflected correctly in the API

Hello everyone,

I am currently integrating situations and situationtriggers into the OpenTOSCA UI as part of my bachelor thesis. Unfortunately, the following problems sometimes occur when creating and managing situation triggers:

  1. The situations of situation triggers sometimes disappear.
  2. Input parameters disappear after the POST request.
  3. The CsarID disappears after the POST request.

Steps to Reproduce:

<?xml version="1.0" encoding="UTF-8"?>
<Situation>
    <ThingId>Kalle</ThingId>
    <SituationTemplateId>AtHome</SituationTemplateId>
    <Active>false</Active>
</Situation>
<?xml version="1.0" encoding="UTF-8"?>
<SituationTrigger>
    <Situations><SituationId>74</SituationId></Situations>
    <CsarId>MyTinyToDo_Bare_Docker.csar</CsarId>
<onActivation>true</onActivation>
<isSingleInstance>true</isSingleInstance>
<InterfaceName>OpenTOSCA-Lifecycle-Interface</InterfaceName>
<OperationName>initiate</OperationName>
<InputParameters>
    <InputParameter>
        <name>ApplicationPort</name>
        <Value>9990</Value>
        <Type>String</Type>
    </InputParameter>
    <InputParameter>
        <name>ContainerSSHPort</name>
        <Value>9991</Value>
        <Type>String</Type>
    </InputParameter>
    <InputParameter>
        <name>DockerEngineURL</name>
        <Value>tcp://dind:2375</Value>
        <Type>String</Type>
    </InputParameter>
</InputParameters>
</SituationTrigger>

After GET Request:

{
    "id": 343,
    "situation_ids": [
        74
    ],
    "on_activation": true,
    "interface_name": "OpenTOSCA-Lifecycle-Interface",
    "operation_name": "initiate",
    "input_params": [],
    "event_probability": -1.0,
    "single_instance": false,
    "_links"
}

Thank you in advance.

Running DB migration in a bash script

One of the OpenTOSCA presentation slides mentions that data migration is not covered by the TOSCA standard. So that means I had to implement the logic myself.

I already got my CI/CD bash script running python migrate with some logic setting environment variables.

What do I need to do to execute the same operation (bash script + python) in OpenTOSCA during deployment?

P.S. I am a DevOps engineer and I am trying to get a hands-on understanding of how to use OpenTOSCA to extend or replace an existing pipeline.

Windows Line Endings throws "No such file or directory" during Execution

If the ScriptArtifact contains windows line endings the execution throws the following error.

engine-ia-jdk8_1        | sudo: unable to execute /virtual-Raspberry-Software-Minimal_w1-wip1.csar/artifacttemplates/http%253A%252F%252Fwww.example.org%252Ftosca%252Fartifacttemplates/virtual-ECU-Software-Configure_configure-w1-wip1/files/configure.sh: No such file or directory

The error is misleading since the file actually exists, but bash cannot process it correctly.
We need to discuss how to mitigate this problem.

We could clean the script before execution using e.g. sed -i -e 's/\r$//' scriptname.sh.
However, this only applies to shell scripts.
What about Python scripts etc.?
See https://stackoverflow.com/a/39527986 for more information.
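The sed-style cleanup could also be done generically, independent of script language, by normalizing CRLF on upload or before execution. A sketch (where to hook this in is exactly the open question above):

```java
public class LineEndings {
    // Replace Windows (CRLF) and old-Mac (CR) endings with Unix LF.
    public static String normalize(String script) {
        return script.replace("\r\n", "\n").replace("\r", "\n");
    }

    // Detection hook, e.g. for a warning in Winery at upload time.
    public static boolean hasWindowsLineEndings(String script) {
        return script.contains("\r\n");
    }

    public static void main(String[] args) {
        String script = "#!/bin/sh\r\necho configure\r\n";
        System.out.println(hasWindowsLineEndings(script));    // true
        System.out.println(normalize(script).contains("\r")); // false
    }
}
```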

We could improve logging so that it's clear during debugging that the file actually exists and that the problem might indicate a line-endings issue.

We could warn the user in winery that the uploaded script has windows line endings.

What do you think?


Tasks

  • Docker Container
  • Ubuntu VM
  • Documentation

Xml convert error while InvokeIA

Hi,
I'm testing invoking an IA after importing the CSAR. The invocation succeeded after I started the container. But I found that if I continuously invoke the IA, an issue occurs. The issue happens after a few invocations when I invoke the IA in a loop. Once it happens, it happens every time I invoke the IA. When I add a time interval between two invocations, the issue never happens.

Hope you can help me with this problem, thanks.

The exceptions look like below:

2016-06-01 14:56:28,497 ERROR o.a.camel.processor.DefaultErrorHandler Failed delivery for (MessageId: ID-host-10-10-1-67-45893-1464763858788-4-54 on ExchangeId: ID-host-10-10-1-67-45893-1464763858788-4-55). Exhausted after delivery attempt: 1 caught: java.lang.ClassCastException: org.opentosca.model.tosca.TInterface cannot be cast to com.sun.xml.internal.bind.v2.runtime.unmarshaller.DomLoader$State

at org.opentosca.toscaengine.xmlserializer.service.impl.XMLSerializer.unmarshal(XMLSerializer.java:282) ~[na:na]
at org.opentosca.toscaengine.service.impl.toscareferencemapping.ToscaReferenceMapper.getJAXBReference(ToscaReferenceMapper.java:335) ~[na:na]
at org.opentosca.toscaengine.service.impl.ToscaEngineServiceImpl.getNodeTypeOfNodeTemplate(ToscaEngineServiceImpl.java:1412) ~[na:na]
at org.opentosca.siengine.service.impl.SIEngineServiceImpl.invokeIA(SIEngineServiceImpl.java:149) ~[na:na]

Wrong response entity for State

Resource /containerapi/CSARs/{c}/ServiceTemplates/{t}/Instances/{id}/State should return the correct response entity. Currently Properties are returned by this resource...

Plan Service to generate plans via API not working

Current Behavior:
When Winery requests plan generation, the container throws a 404 error.

Expected Behavior:
The plan service should retrieve a CSAR, generate the needed plans, and afterwards push the plans to Winery.

Other Information:
I was informed that it doesn't work because of namespace encodings.

Query node / relation instances based on service instances (not only the state or relation)

Current Behavior:
Querying of node/relationship template instances via the container API is done using only the state, without an option to filter by the service instance id.

Expected Behavior:
Querying of node/relationship template instances via the container API should support filtering by both the state and the service instance id.

Steps to Reproduce:
Look here:
org.opentosca.container.api.controller.NodeTemplateInstanceController.getNodeTemplateInstances(List, List)
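Conceptually, the controller's query would add the service instance id as a second filter next to the state. A sketch with plain records standing in for the instance entities; the names are illustrative, not the container API:

```java
import java.util.List;
import java.util.stream.Collectors;

public class InstanceQuery {
    record NodeInstance(long id, long serviceInstanceId, String state) {}

    // Filter by state AND owning service instance, not by state alone.
    public static List<NodeInstance> query(List<NodeInstance> all, String state,
                                           long serviceInstanceId) {
        return all.stream()
            .filter(n -> n.state().equals(state))
            .filter(n -> n.serviceInstanceId() == serviceInstanceId)
            .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<NodeInstance> all = List.of(
            new NodeInstance(1, 10, "STARTED"),
            new NodeInstance(2, 20, "STARTED"));
        System.out.println(query(all, "STARTED", 10)); // only instance 1 matches
    }
}
```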

bundle dependency problem

When I start the containerapi bundle, it needs org.opentosca.planbuilder.export (which is not configured in config.ini, so I added it).
But when starting org.opentosca.planbuilder.export, it needs Import-Package: org.apache.ode.activityrecovery; this package (bundle) is not available.

I then downloaded an ODE bundle (ode-jbi-bundle-1.3.5.jar), but it needs jbi...

OpenAPI spec generates duplicate parameters for operations

Current Behavior:
The generated OpenAPI spec under ..:1337/openapi.yaml is invalid: it contains operations with duplicate parameters, e.g.:

/csars/{csar}/servicetemplates/{servicetemplate}/nodetemplates/{nodetemplate}:
    get:
      operationId: getNodeTemplate
      parameters:
      - name: csar
        in: path
        required: true
        schema:
          type: string
      - name: servicetemplate
        in: path
        required: true
        schema:
          type: string
      - name: nodetemplate
        in: path
        required: true
        schema:
          type: string
      - name: csar
        in: path
        required: true
        schema:
          type: string
      - name: servicetemplate
        in: path
        required: true
        schema:
          type: string
      responses:
        default:
          description: default response
          content:
            application/json: {}
            application/xml: {}

Expected Behavior:
Each path parameter should appear only once; the duplicated parameters should not be there.
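The duplication is easy to detect mechanically; a minimal sketch (not tied to any OpenAPI library, the class name is made up) that flags repeated parameter names in one operation:

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class DuplicateParamCheck {
    // Returns the parameter names that occur more than once, in first-seen order.
    public static List<String> findDuplicates(List<String> paramNames) {
        Set<String> seen = new HashSet<>();
        Set<String> reported = new HashSet<>();
        List<String> duplicates = new ArrayList<>();
        for (String name : paramNames) {
            // seen.add returns false when the name was already present.
            if (!seen.add(name) && reported.add(name)) {
                duplicates.add(name);
            }
        }
        return duplicates;
    }
}
```

Run against the parameter list of the getNodeTemplate operation above, this would report csar and servicetemplate as duplicates.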
