carp.core-kotlin's People

Contributors

cortinico, jakdan99, jeyjey626, ltj, patrickusiewicz, whathecode, xelahalo

carp.core-kotlin's Issues

Can 'multiplatform' structure for domain submodules be cleaned up?

Now that all 'common' non-domain specific code has been moved to carp.common and carp.test, the js and jvm parts of the submodules that do contain domain specific code (carp.protocols, carp.deployment, carp.studies) no longer contain any code. All code is common.

Navigating these projects in the project hierarchy is quite verbose.

  • Do we ever expect jvm or js code in domain submodules? I would presume most of that (as happened now) would become part of carp.common (e.g., having to rely on platform-specific date/time representations).
  • Is there a nicer project structure when we only expect common code in those submodules?

(How) should data types be serialized?

Data types are the 'glue' between measure requests and device capabilities:

  • Devices will define the available data streams (by data type).
  • Measures specify the type of data they want to measure.

[Figure: entity model relating devices, measures, and data types]

During study protocol creation, this allows raising protocol warnings/errors when measures are assigned to devices which do not support them. At startup of the study environment (e.g., smartphone), after loading the study protocol, a warning can be raised when not all data streams to be measured have a suitable 'probe' registered for data collection.

Both devices and measures thus define data types in the study protocol. However, (how) should these be serialized?

  • In the case of device data streams, we could simply hard code them so they are available if the corresponding device classes are loaded. The only downside is that we cannot interpret data streams of unknown devices (specified in extending libraries) on the server side. Is there a use case where we have to?
  • In essence, we only really need the type name in order to 'match' device data streams with measures. Any additional data in data types is likely only needed once we want to start specifying 'secondary' data streams which process other data streams on the device, e.g., higher-order interpretations of data. If we want to write triggers which are reusable for 'custom' data streams they will need 'some' info on how to interpret said data; this was the initial purpose of a 'data type' definition.
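
The matching described above only needs the type name. A minimal sketch of that matching, with all type and property names hypothetical (the actual CARP classes differ):

```kotlin
// Hypothetical sketch: matching measures to device data streams by data type only.
data class DataType( val namespace: String, val name: String )

data class DeviceSketch(
    val roleName: String,
    val supportedDataStreams: Set<DataType> )

data class MeasureSketch( val type: DataType )

/**
 * A protocol warning can be raised when a measure is assigned to a device
 * which does not list the measure's data type among its data streams.
 */
fun isSupported( device: DeviceSketch, measure: MeasureSketch ): Boolean =
    measure.type in device.supportedDataStreams
```

Any additional payload on data types only becomes relevant once 'secondary' data streams need to be interpreted generically.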

Wrap unsupported protocol types upon deserialization of protocol snapshot

Currently, deserialization fails when encountering types unknown at compile time. This is undesirable since we want to be able to deserialize a study protocol to interpret the relations between devices, tasks, and triggers, without having to know about the specifics (the concrete types). Concretely, a study deployment server should be able to interpret a study protocol so it knows which triggers and tasks to send to which device, but (for now) does not need to know about the concrete types. This way, a common deployment server can be used for any concrete study implementation.

I propose to deal with this by wrapping unknown types in an 'Unknown' wrapper extending from the base type, extracting the base member information for the common interface while storing the full serialized type alongside it. Important to note is that this type should only exist in deserialized form. When serializing it, the original (unsupported type) serialization should be used. This ensures study protocol snapshot equality, while allowing systems unaware of the concrete types to interpret the base interface and redistribute the full type.
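
A minimal sketch of the proposed wrapper, with hypothetical type names and the actual serialization machinery omitted:

```kotlin
// Hypothetical base type; in practice this would be, e.g., DeviceDescriptor.
abstract class DescriptorBase { abstract val roleName: String }

// The wrapper only exists in deserialized form: it exposes the extracted base
// members and keeps the full original JSON of the unsupported type.
class UnknownDescriptor(
    override val roleName: String,
    /** The complete, original serialization of the unsupported type. */
    val originalJson: String ) : DescriptorBase()

/**
 * Serializing the wrapper must emit the original JSON verbatim,
 * which is what preserves study protocol snapshot equality.
 */
fun serialize( descriptor: DescriptorBase ): String =
    when ( descriptor ) {
        is UnknownDescriptor -> descriptor.originalJson
        else -> error( "Normal serialization; omitted in this sketch." )
    }
```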

Unnecessary @Transient serialization attributes in abstract classes

Currently, there is a bug in kotlinx.serialization which causes serialization to fail when abstract class members are overridden. To circumvent this, I found out that adding @Transient to those specific members in the abstract class definitions resolves the problem.

However, since this is now recognized as a bug by Kotlin developers this is likely to change in the future. I would expect that the attribute should simply be dropped. In fact, (depending on the specification) keeping it might just as well prevent the overridden members from being serialized in the future.

Prevent deriving types from Measure having to specify DataTypeSerializer

Currently, since DataType is polymorphic, we use DataTypeSerializer in order to support loading any extending types unknown at runtime by wrapping them in CustomDataType upon deserialization.

To make this work, all types extending from Measure need to specify to use the DataTypeSerializer:

@Serializable
data class StubMeasure(
    @Serializable( with = DataTypeSerializer::class )
    override val type: DataType = StubDataType() ) : Measure()

At the time of writing, there seems to be no support for specifying this serializer on the base property of the abstract Measure class.

Can we deal with this in a nicer way?

Use or remove custom `InvalidConfigurationError` for study protocol?

During the implementation of the 'Deployment' domain model, I relied on the default IllegalArgumentException instead of a custom exception as I did for StudyProtocol (InvalidConfigurationError).

Coming to think of it now, it is not all that clear why some method calls in StudyProtocol don't simply rely on IllegalArgumentException as well. Is there a good motivation on where to draw the line between relying on custom exceptions in this case?
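
For illustration, one possible dividing line (names hypothetical, not the actual StudyProtocol API): use Kotlin's require, which throws IllegalArgumentException, for plain preconditions, and reserve the custom error for violations of protocol-level invariants:

```kotlin
// Sketch only; the real InvalidConfigurationError and StudyProtocol differ.
class InvalidConfigurationError( message: String ) : Throwable( message )

class ProtocolSketch {
    private val deviceRoleNames = mutableSetOf<String>()

    fun addDevice( roleName: String ) {
        // Plain argument precondition: `require` throws IllegalArgumentException.
        require( roleName.isNotBlank() ) { "Role name may not be blank." }

        // Protocol-level invariant: a custom error makes domain failures distinguishable.
        if ( roleName in deviceRoleNames )
            throw InvalidConfigurationError( "A device with this role name already exists." )

        deviceRoleNames.add( roleName )
    }
}
```

Whether that distinction carries its weight is exactly the open question here.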

JS `UnknownPolymorphicSerializer` does not enforce that wrapper types should implement `UnknownPolymorphicWrapper`

Reflection in the JavaScript runtime is limited, and therefore I currently see no way to access the implemented interfaces of the wrapper type to verify whether it implements UnknownPolymorphicWrapper (required for the code to work).

This causes a discrepancy in runtime behavior between JVM and JS. On JS, the verifyUnknownPolymorphicWrapper parameter of the UnknownPolymorphicSerializer constructor is unused.

However, this is not a major issue since JVM runtime unit tests do cover this.

Build can fail the first time (class X must have no arg constructor); rerunning the build makes it succeed

Seemingly due to some bug in the kotlinx.serialization compiler plugin, the build can fail at times: Kotlin/kotlinx.serialization#152

The cause mentioned is that a particular non-serializable base class needs a constructor without parameters. However, the base class in question has a default parameter value specified, so a constructor without parameters exists. Even more, the base class is marked as @Serializable.

Rerunning the build a second time 'resolves' the issue. From then on, the build error no longer happens until you modify the class (or source file?) where the error came from.

Turn request objects into sealed class once serialization supports it

Currently, kotlinx.serialization does not fully support serialization of sealed classes.

Therefore, request objects (such as those in ProtocolServiceRequests) are specified as extending from an abstract class, and all the extending classes need to be registered in a SerialModule.

Once kotlinx.serialization provides full support for sealed classes, all application service request objects (which can be found in carp.<subsystem>.infrastructure.<service>Requests.kt) can be turned into sealed classes, and the polymorphic registration of extending classes may be removed.
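
A sketch of what such a sealed hierarchy could look like (request names hypothetical). With a sealed class, `when` expressions over requests become exhaustive and no manual polymorphic registration is needed:

```kotlin
// Hypothetical sealed request hierarchy; the real ProtocolServiceRequest
// classes carry actual service call parameters.
sealed class ProtocolServiceRequestSketch {
    data class Add( val protocolName: String ) : ProtocolServiceRequestSketch()
    data class GetBy( val protocolName: String ) : ProtocolServiceRequestSketch()
}

// The compiler verifies all subclasses are handled; no `else` branch needed.
fun describe( request: ProtocolServiceRequestSketch ): String =
    when ( request ) {
        is ProtocolServiceRequestSketch.Add -> "Add '${request.protocolName}'"
        is ProtocolServiceRequestSketch.GetBy -> "Get '${request.protocolName}'"
    }
```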

Reuse one node installation across different modules

Right now, for each JS module in this repository, an individual node installation is requested when building (to install the JS test runner, Mocha).

Does this make sense? Could one node installation be shared across modules?

Log when running unit tests shows exceptions

Although all tests seem to run just fine, some warnings are logged while running them. They all take the following form:

Jan 11, 2018 4:38:19 PM org.junit.platform.commons.util.ClasspathScanner logMalformedClassName
WARNING: The java.lang.Class loaded from path [C:\src\bhrp.studyprotocol.domain\out\test\bhrp.studyprotocol.domain\bhrp\studyprotocol\domain\devices\DeviceConfigurationTest$can't addConnectedDevice for non-existing master device$1.class] has a malformed class name [bhrp.studyprotocol.domain.devices.DeviceConfigurationTest$can't addConnectedDevice for non-existing master device$1].
java.lang.InternalError: Malformed class name
at java.lang.Class.getSimpleName(Class.java:1330)
at java.lang.Class.isAnonymousClass(Class.java:1411)
at java.lang.Class.isLocalClass(Class.java:1422)
...

It seems like a warning like this is triggered for each unit test defined in an interface (for now, TaskConfigurationTest and DeviceConfigurationTest) which contains an assertFailsWith check.

Regardless, the tests deriving from these interfaces (e.g., EmptyDeviceConfigurationTest and EmptyTaskConfigurationTest) run just fine, thus for now there seems to be no major problem.

Reduce duplicate code in `Mock` and `ServiceInvoker`

Due to limited reflection on Kotlin JS targets, the quickly created makeshift Mock and ServiceInvoker classes have quite a bit of redundant code.

Mock keeps track of which methods are called. However, JS targets only have access to the method name, and no further parameters. This causes problems for overloaded methods, since they cannot be discerned from one another. Therefore, for each track call, I included an additional trackOverloaded method which you can pass an 'overload identifier' to in order to identify the specific overloaded method.

Similarly, ServiceInvoker includes overloaded methods which take an 'overload identifier', so that they can be unit tested using Mock.

Can this be done in a cleaner way? We might have to wait until full reflection is supported for JS targets. At that point, it is also likely the mockk library will be upgraded to support JS targets, in which case our makeshift Mock class should simply be removed.
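
For reference, a minimal sketch of the tracking workaround described above (simplified; the real Mock class tracks more than call names):

```kotlin
// On JS targets only the method name is available, so overloads are
// disambiguated with an explicit, manually passed identifier.
open class MockSketch {
    private val calls = mutableListOf<String>()

    /** Track a call to a method which is not overloaded. */
    fun track( methodName: String ) { calls.add( methodName ) }

    /** Track a call to an overloaded method, using an extra identifier to tell overloads apart. */
    fun trackOverloaded( methodName: String, overloadIdentifier: String ) {
        calls.add( "$methodName-$overloadIdentifier" )
    }

    fun wasCalled( trackedName: String ): Boolean = trackedName in calls
}
```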

Ambiguity between `DeviceRegistration` used by `DeploymentService` and client 'configuration'

There is a difference between 'registering' a device and 'configuring' it.

Registration of master devices is part of ensuring that the server-side and other participating master devices in a deployment can all communicate with one another.

However, configuring a connected device involves specifying the necessary information so that a master device can connect to it. Such configuration thus does not need to happen through the deployment service (server-side), but could potentially happen solely on the client (smartphone). Still, connected devices can be preconfigured on the deployment service, so that the client already has a suitable configuration once it retrieves its MasterDeviceDeployment.

Currently, this is very ambiguous and we need to make this distinction clearer. In addition, is it possible to register a master device from a different device than the master device for which the MasterDeviceDeployment is intended?
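
To illustrate the distinction the text above draws (all names hypothetical, not the actual CARP types):

```kotlin
// Registration: identifies a device to the deployment service so participating
// devices and the server can communicate.
data class DeviceRegistrationSketch( val deviceId: String )

// Configuration: tells a master device how to connect to a connected device;
// this could live purely client-side.
data class ConnectionConfigurationSketch(
    val roleName: String,
    val connectionDetails: Map<String, String> )

data class MasterDeviceDeploymentSketch(
    val registration: DeviceRegistrationSketch,
    /** Optional preconfigured connected devices; the client may also configure these itself. */
    val preconfiguredConnections: List<ConnectionConfigurationSketch> = emptyList() )
```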

Allow triggers to stop tasks

Right now, it is presumed that triggered tasks are always started. We need to also model the ability to let triggers stop currently running tasks.

Probably, this should not be modeled in Trigger, but in StudyProtocol. This information could be stored in TriggeredTask. We might have to rename addTriggeredTask to startTaskOnTrigger, and add a stopTaskOnTrigger.
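
A sketch of the suggested API (names taken from the suggestion above; the final design may differ):

```kotlin
// TriggeredTask now records whether the trigger starts or stops the task.
enum class TriggerAction { Start, Stop }

data class TriggeredTaskSketch(
    val triggerId: Int,
    val taskName: String,
    val action: TriggerAction )

class StudyProtocolSketch {
    private val triggeredTasks = mutableListOf<TriggeredTaskSketch>()

    // The proposed rename of addTriggeredTask, plus its new counterpart.
    fun startTaskOnTrigger( triggerId: Int, taskName: String ) =
        triggeredTasks.add( TriggeredTaskSketch( triggerId, taskName, TriggerAction.Start ) )

    fun stopTaskOnTrigger( triggerId: Int, taskName: String ) =
        triggeredTasks.add( TriggeredTaskSketch( triggerId, taskName, TriggerAction.Stop ) )

    fun getTriggeredTasks(): List<TriggeredTaskSketch> = triggeredTasks.toList()
}
```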

Accessing `ServiceInvoker` of deserialized request objects requires `copy()`

Currently, kotlinx.serialization does not take into account class delegation as part of (de)serialization. The synthetic field which holds the object which the interface calls are delegated to is not initialized upon deserialization, resulting in a NullPointerException when trying to access members defined on the delegated interface: Kotlin/kotlinx.serialization#241 (comment)

This is used as part of the ServiceInvoker interface on request objects (e.g., ProtocolServiceRequests).

As a current workaround, @ltj discovered that copy() can be used to properly initialize this synthetic backing field. Since all request objects are data classes (which introduce copy()), this works for now.

However, we should keep an eye on this in case we can ever find a cleaner solution. A unit test is added to verify the current NullPointerException and workaround in ProtocolServiceRequestsTest.

Transition to kotlinx.serialization runtime 0.14

The sealed class serializer and its implicit polymorphic counterpart are desired downstream, but carp currently blocks this through its implicit dependency on the now-deprecated UnionKind (sealed and polymorphic kinds now use PolymorphicKind).

Ktor serialization also uses the v0.14 runtime now, so this also blocks an upgrade to Ktor 1.2.6.

Support triggers based on data streams originating from multiple devices.

Right now, a Trigger is tied to exactly one device, specified through sourceDeviceRoleName: "The device role name from which the trigger originates."

However, this neglects use cases where something might need to be triggered based on data collected from multiple, separate, devices. For example: once two devices are within close proximity of a beacon, a survey needs to be sent out to both devices.

What is the best way to ensure the domain model supports expressing such protocols? Rather than storing a single sourceDeviceRoleName, should we store a set?
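
A sketch of the suggested change (type names hypothetical):

```kotlin
// A set of source device role names instead of a single sourceDeviceRoleName,
// so a trigger condition can depend on data from several devices at once.
data class MultiSourceTriggerSketch( val sourceDeviceRoleNames: Set<String> )

fun involvesDevice( trigger: MultiSourceTriggerSketch, roleName: String ): Boolean =
    roleName in trigger.sourceDeviceRoleNames
```

For the beacon example, such a trigger would list both phones' role names as sources.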

Immutable base class does not enforce immutable implementation in JS

Kotlin currently does not provide reflection for JS targets. Therefore, the Immutable base class could not be implemented as it is for the JVM target.

This is only a minor problem. The immutable implementation of classes will still be guaranteed by running JVM tests. However, if Kotlin ever supports reflection for JS targets, the expected Immutable class can probably be moved entirely to the common submodule.

Should we prevent multiple deployments of a master device?

Currently, the study deployment API can return the same MasterDeviceDeployment multiple times. It could thus also be initiated on multiple devices.

Should we somehow disallow this? On the other hand, we probably want to be able to 'redeploy' (uninstall, reinstall), or even switch from one device to another midway through.

UPDATE: Some of this has been addressed since; see comment below.

Move common test resources to a central location

While adding tests for the new carp.deployment submodule, I had to create test classes and functions to initialize common study protocols.

This is very similar to the test classes and objects in carp.protocols. Can these be moved to common test resources somehow? This might not be as straightforward due to the multiplatform setup (of course, creating yet another library would be an option, but this seems a bit excessive). Moving them to the existing carp.test library would not be possible since then a dependency on carp.protocols would need to be added to carp.test, resulting in a two-way dependency.

How to deal with failed device deployments?

What if registration succeeds for a given device in a study deployment, but deployment fails since not all required plugins are available? The user might want to update the client application, or possibly even use a different phone so that device deployment might succeed.

Currently, we only allow registering a device once. Thus, the registration will succeed and be stored on the server side, and any subsequent registration for the same device in the study deployment will fail.

This is by design, since 'reregistering' devices might cause discrepancies between the 'contact information' sent to a dependent device and the one that is eventually registered.

Should we think of incorporating some 'reset' logic for study deployments?

Clean up duplication in `build.gradle` files

At the moment there is a fair amount of duplication between build.gradle files of individual JS and JVM submodules of the different subsystems (carp.protocols, carp.deployment, carp.studies).

This can most likely be centralized somehow, but my knowledge of Gradle is lacking on how to do this at the moment.

Use microsecond-level accuracy for timestamps rather than 100-nanoseconds

I previously borrowed the precision of .NET's TimeSpan data type for CARP's TimeSpan: https://github.com/cph-cachet/carp.core-kotlin/blob/develop/carp.common/src/commonMain/kotlin/dk/cachet/carp/common/TimeSpan.kt

However, we will not need this type of precision any time soon. On the other hand, we also do not need the greater range that choosing microseconds (or something less granular) would provide.

But, there is something to be said for picking a 'round' unit as opposed to a non-default '100 nanoseconds' for the sake of legibility. Therefore, we have decided to use microsecond-level timestamping. Modern sensors like BioPack can sample at up to 2000 Hz, so millisecond-level precision is not sufficient to upload raw data for all sensors currently in use.
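
A quick sanity check of the numbers above:

```kotlin
// At 2000 Hz, samples are 500 µs apart: exactly representable in microseconds,
// but not as a whole number of milliseconds.
val MICROSECONDS_PER_SECOND = 1_000_000L

fun sampleIntervalMicroseconds( samplingRateHz: Long ): Long =
    MICROSECONDS_PER_SECOND / samplingRateHz
```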

Release using `nexus-publish-plugin` rather than `maven-publish`

The nexus-publish-plugin provides a better integration with the Sonatype Nexus Repository to release packages to Maven.

It is a Gradle plugin that explicitly creates a staging repository before publishing to Nexus. This solves the problem that frequently occurs when uploading to Nexus from Travis, namely split staging repositories.

Currently, the gradle-nexus-staging-plugin is used to close and release staging repositories after having successfully published to them using maven-publish. Although this succeeds, it gives a deprecation warning that the staging repository ID has not been provided, which is the recommended way moving forward. Currently, it uses some heuristics to decide which repository to close (I'm guessing it tries to close the latest one).

Support JS compilation

In the future, there will be an opportunity to compile this library to JavaScript. This would allow it to be used in React and React Native.

This issue tries to keep track of the current known hurdles which will prevent us from targeting JS at the moment:

Remove npm lodash installation after mocha ensures a version >=4.17.13

There is a security vulnerability in lodash which has been addressed starting from version 4.17.13.

This dependency is pulled in through mocha, which depends on yargs-unparser, which depends on lodash.

Related ticket on yargs-unparser: yargs/yargs-unparser#35
Related ticket on mocha: mochajs/mocha#3965

This thus only affects dependencies during testing, and not of the actual library, and is therefore not a major issue. Furthermore, a fix for this should arrive soon enough as it simply implies upgrading a version dependency.

Store trigger information as separate data stream

Currently, triggerId is included in the header information of data points.

However, after some brainstorming, @ltj and I concluded it might be best to upload such information as a separate data stream. Data points would then only denote a certain type of data collected from a specific device for a specific study deployment, but would no longer contain the context/reason for data collection. That is stored separately.

Considering 'trigger' information a disjoint data stream gives us more flexibility in terms of task-trigger-measure multiplicities (e.g., the same data point could be requested by two concurrent tasks). There would be a TriggerDataPoint with:

  • triggerId (the trigger which was triggered at the time specified in the header)
  • taskName (the task to start)
  • targetDevice (the device the task has to start on)
  • state (start or stop)

Default queries for data would thus simply be queries for a certain data type, but additional queries can be added which pull in data from the trigger data stream to query by task name and trigger.
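
A sketch of the proposed TriggerDataPoint and a query over the trigger data stream (type names hypothetical):

```kotlin
// The separate trigger data stream: each point records which trigger fired,
// which task it starts or stops, and on which device.
enum class TriggerState { Start, Stop }

data class TriggerDataPointSketch(
    val triggerId: Int,
    val taskName: String,
    val targetDevice: String,
    val state: TriggerState )

/** An additional query pulling trigger context in by task name. */
fun triggersForTask( triggerStream: List<TriggerDataPointSketch>, taskName: String ): List<TriggerDataPointSketch> =
    triggerStream.filter { it.taskName == taskName }
```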

Support other formats than JSON serialization

This has no priority at all, but, currently the core library presumes the infrastructure layer will rely on JSON serialization to pass data in between submodules (snapshot classes, such as StudyProtocolSnapshot). That is not an unreasonable constraint, given the benefit of not having to worry about serialization at all when using the core libraries which compile to multiple platforms.

However, kotlinx.serialization by itself does not presume any one particular serialization format. The only reason we are currently constrained to using JSON is because of UnknownPolymorphicSerializer, which relies on JSON serialization to work. This serializer is used to be able to deserialize types which are unknown at runtime, thus enabling client-side extensions to be dealt with completely transparently on a server-side that does not have client extensions available at runtime.

Could this implementation become agnostic of the serialization format used? If so, this could make an interesting addition to the kotlinx.serialization library.

Serializable classes defined in external libraries require the serializer to be specified explicitly

After upgrading from kotlinx.serialization v0.5.0 to v0.5.1, part of StudyProtocolSnapshot serialization broke. Measure was no longer serialized properly.

This is due to a newly introduced bug in kotlinx.serialization: Incorrect serialization of class defined in an external library after upgrade to 0.5.1.

For now, I managed to work around this by specifying the serializer to be used explicitly. Concretely, I introduced MeasuresSerializer, which can be removed (as well as its usages) once this bug is fixed (planned for the next release).

Add data streams to device descriptors

Devices should specify which data streams are available on them. The study protocol can verify whether DataStreamMeasures specified as part of a TaskDescriptor are supported on the device they are triggered for.

It is important to keep in mind that 'secondary' data streams which process other data streams will later be added to MasterDeviceDescriptor as well.

Once `kotlinx.serialization` stabilizes, choose a fixed formatting scheme

At the moment, kotlinx.serialization is still in active development. Therefore, the way they serialize objects to JSON is not fixed/stable yet.

To still give consumers of the library a stable serialization format, they added a JsonConfiguration.STABLE configuration option when initializing Json. Once the library reaches 1.0.0, we should probably switch this to JsonConfiguration.DEFAULT in https://github.com/cph-cachet/carp.core-kotlin/blob/upgrade-serialization-0.11.0/carp.common/src/commonMain/kotlin/dk/cachet/carp/common/serialization/Serialization.kt

Alternatively, we create a fixed custom configuration.

Multiplatform documentation configuration

Currently, generating documentation only works for the JVM target (Javadoc). The plugin used to generate this documentation does not support JavaScript targets yet, but it seems like this will be supported in the future: https://discuss.kotlinlang.org/t/how-to-configure-dokka-for-kotlin-multiplatform/9834

Since Dokka currently does not support JS yet, a few hacks are included in the codebase:

  • The main build file overrides default (failing) dokka behavior ('HACK' section).
  • Individual project build files (e.g., 'carp.deployment.core') need to include sources of the other projects they depend on. Otherwise, these types are not included in the generated documentation.

This can likely be removed once Dokka supports multiplatform targets.

Reintroduce `DeviceDescriptor` type aliases once kotlinx.serialization bug is resolved

Due to a bug in kotlinx.serialization which was introduced after Kotlin 1.3.41 and kotlinx.serialization 0.11.1, we could not upgrade to later versions. I reported this bug here: Kotlin/kotlinx.serialization#563

Since this inhibited all consuming libraries from using more recent Kotlin versions, we decided to remove all type aliases for the time being.

This was done in this commit, which hopefully can easily be rolled back once the previously mentioned bug is fixed: ff76cb3

Rename `Deployment` to `StudyDeployment`?

Currently Deployment in the Deployment subsystem is "[a] single instantiation of a StudyProtocol, taking care of common concerns when 'running' a study. I.e., a Deployment is responsible for registering the physical devices described in the StudyProtocol, enabling a connection between them, tracking device connection issues, assessing data quality, and registering participant consent."

Now, we need a DeviceDeployment, which contains all the necessary information for a study to run on one particular device which is part of the overall study setup. I have referred to this previously as 'device configuration', I believe, among other names.

Perhaps it is a good idea to better disambiguate between the two by calling the former StudyDeployment?

Unknown connected master devices in StudyProtocolSnapshot are deserialized as normal devices

I currently fixed this by adding an internal isMasterDevice property to MasterDeviceDescriptor. When the connected devices in the snapshot are deserialized, either CustomDeviceDescriptor or CustomMasterDeviceDescriptor is loaded by looking up this property in the JSON.

However, currently this fix does not yet work (a failing unit test is available: "unknown connected master device is deserialized as a master device") since isMasterDevice is not included in the JSON output when serializing extending classes of MasterDeviceDescriptor. This is due to a bug in kotlinx.serialization which causes base properties of classes defined in external libraries not to be included in serialization: Kotlin/kotlinx.serialization#153 (comment)

Once this is fixed (announced in next release of kotlinx.serialization) the failing unit test should pass.
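
For illustration, the lookup described above amounts to something like the following (simplified to a substring check; real code would parse the JSON):

```kotlin
// Decide which wrapper to load for an unknown device based on the serialized
// isMasterDevice property. Wrapper names match the ones described above.
fun wrapperTypeFor( deviceJson: String ): String =
    if ( "\"isMasterDevice\":true" in deviceJson ) "CustomMasterDeviceDescriptor"
    else "CustomDeviceDescriptor"
```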
