
Comments (11)

austinbhale commented on May 23, 2024

Hi Dan, thanks for your feedback. To be clear, my examples are about an AR application using Psi. I've also noticed in our AR applications that pipeline construction takes about a second. This application uses pipelines to fuse sensor data streams together with other Psi components, save streams to a store, and retrieve streams from a store for rendering. The pipeline construction is similar to the HoloLensCaptureApp in Psi's samples repository.

Thanks for the tips about multiple Psi stores and clearing up our understanding of how PsiStudio works (we thought it runs a pipeline instead of just advancing the cursor).

To demonstrate the proposed pause functionality (Option 2), we have set up an example project based on the wiki (https://github.com/microsoft/psi/wiki/Brief-Introduction#3-saving-data):

void RecordPipeline()
{
    var p = Pipeline.Create();
    // Create a store to write data to (change this path as you wish - the data will be stored there)
    var store = PsiStore.Create(p, "demo", "c:\\recordings");

    var sequence = Generators.Sequence(p, 0d, x => x + 0.1, TimeSpan.FromMilliseconds(100));

    var sin = sequence.Select(t => Math.Sin(t));
    var cos = sequence.Select(t => Math.Cos(t));

    // Write the sequence, sin, and cos streams to the store
    sequence.Write("Sequence", store);
    sin.Write("Sin", store);
    cos.Write("Cos", store);

    // total_time = time taken since the initial pipeline run (ms)
    // orig_time = originating time in the current pipeline run (ms)
    p.RunAsync();

    // do nothing for 3 seconds while still recording
    Task.Delay(3000).Wait();

    // stop writing to store (e.g., total_time = 3000, orig_time = 3000)
    p.Stop();

    // do nothing for 4 seconds
    Task.Delay(4000).Wait();

    // run the same pipeline as before with new messages' originating
    // times showing +4 seconds 
    // (e.g., total_time = 7000, orig_time = 3000)
    // this should run within 50 ms, not the ~1s pipeline construction
    p.RunAsync();

    // do nothing for 2 seconds while still recording
    Task.Delay(2000).Wait();

    // destroy the pipeline, total recording length will be 3 + 2 = 5 seconds
    p.Dispose();
}

void ReplayPipeline()
{
    var p = Pipeline.Create();

    // Open the store
    var store = PsiStore.Open(p, "demo", "c:\\recordings");

    // Open the Sequence stream
    var sequence = store.OpenStream<double>("Sequence");

    // Compute derived streams
    var sin = sequence.Select(Math.Sin).Do((t, e) => Console.WriteLine($"Sin: {t} at time {e.OriginatingTime}"));
    var cos = sequence.Select(Math.Cos);

    // run the pipeline
    p.RunAsync();

    // do nothing while running the pipeline for 2 seconds
    Task.Delay(2000).Wait();

    // stop the pipeline without destroying its connections
    p.Stop();

    // after stop, set the replay interval to [4000ms, end]
    ReplayDescriptor newReplayDescriptor = new(p.ReplayDescriptor.Start.AddMilliseconds(4000), p.ReplayDescriptor.End);

    // run the pipeline with the same construction as before.
    // again, taking less than 50 ms
    p.Run(newReplayDescriptor);
}

In RecordPipeline, the user starts a pipeline, records for 3 seconds, stops the recording for 4 seconds, and then resumes recording for another 2 seconds before stopping the pipeline. This is similar to taking a video on an Android phone (https://youtu.be/ax7iL5jLM9M) where you can pause and resume your recording without any noticeable delay, creating one single video file in the process.

In ReplayPipeline, the user starts to play the pipeline, pauses the playback after 2 seconds, forwards the recording to the 4-sec mark, and without a noticeable delay, resumes the recording from there till the end (similar to a normal video player like YouTube).

Solution 1: Single psi store

If we place everything in a single psi store, we would need to handle the difference in the timestamps of the streams (total_time and orig_time in the code above). It is also helpful for us if we can “replay” the streams by constructing a playback pipeline instead of just setting the cursor like in PsiStudio. This allows us to take advantage of pipelines' stream fusion and lossy delivery policies to maintain the synchronization of reading, processing, and rendering the captured sensor data streams.

Solution 2: Multiple psi stores

Creating a new psi store each time we hit stop would also be fine, provided the pipeline construction takes less than ~50 ms and doesn't lead to a noticeable delay when stopping and starting the pipeline. For our scenarios, a user needs to play and pause both recording and playback quite often (like when playing a YouTube video), so even a 500 ms delay quickly becomes very noticeable. For that solution, we also have a question: when combining the different stores in a dataset, how does the developer handle the time discrepancies between stores? We're not sure at the moment how we'd do this; would we adjust the originating times so that "unpaused" streams subtract the amount of time that had been paused?
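To sketch what that adjustment could look like, here is a small illustrative example in Python (all names are hypothetical; this is not \psi code): each message's originating time is shifted left by the total paused duration that precedes it, producing a gapless timeline.

```python
from datetime import datetime, timedelta

def shift_originating_times(messages, pause_intervals):
    """Shift message originating times left by the total paused duration
    preceding each message, producing a gapless ("stitched") timeline.

    messages: sorted list of (originating_time: datetime, payload) tuples.
    pause_intervals: list of (pause_start, pause_end) datetime pairs.
    """
    shifted = []
    for t, payload in messages:
        # Sum the lengths of all pauses that finished at or before this message.
        offset = sum(
            ((end - start) for start, end in pause_intervals if end <= t),
            timedelta(0),
        )
        shifted.append((t - offset, payload))
    return shifted
```

For example, messages at seconds 0, 1, 2 followed by a 4-second pause and then messages at seconds 7 and 8 would come out at seconds 0, 1, 2, 3, 4.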

I hope the code example makes our use case a bit clearer, sorry if I missed addressing any of your questions in the reply above, happy to elaborate further.

from psi.

danbohus commented on May 23, 2024

Thanks @austinbhale for all the explanations. I think I now understand better what you are trying to accomplish.

It seems to me though that Option 3 might better model what you are describing (though it is still problematic). Unfortunately, while doable, implementing both Option 2 and Option 3 at the runtime level are quite complex tasks with many downstream implications. Given that, and considering the semantics of what you are accomplishing vs. the semantics of originating times in the \psi runtime, my sense is that an alternative solution based on store concatenation might fit better.

I will explain each of the three statements above in more detail below. (My comments are written from the perspective of aiming for a pause/resume type feature in the runtime that would work structurally in a more general case, beyond the specific use case you have presented.)

1. Option 2, Option 3 and Stream Semantics

For the sake of discussion, suppose you start the first run of the pipeline at 1pm and stop it at 2pm, then start at 3pm and stop it at 4pm. At the end of this you want to obtain a single 2-hour store. The question is what should the originating times in those messages be? It sounds like you want the originating times to start from 1pm and end at 3pm, i.e., you would want those messages to be back-to-back, as you want to create a "stitching" effect.

However, this is at odds with the notion of originating times in \psi. Originating times in psi streams are meant to semantically capture the real time in the world corresponding to an event happening, like when a video frame happened, or when a piece of audio happened. In a natural implementation of Option 2, therefore, when the second RunAsync() is called the runtime would normally respond with actual wall-clock times, e.g., starting at 3pm as the camera queries the runtime for timestamps to assign to the frames. As a result, you would have something in the store with originating times between 1 and 2pm and then again between 3 and 4pm, and it doesn't sound like that's what you want for your case.

Now, one could change how RunAsync() works to somehow remember the end-time from the previous call to RunAsync(), and shift the virtual clock that way, but the next question then becomes: is that really catering to this particular scenario, or is that a general solution, i.e., is that something everyone else would expect/want from that API? Keep in mind that RunAsync() is a simplified form of that call which implies a ReplayDescriptor.ReplayAllRealTime, but what should happen for instance if on the second RunAsync() the user specifies an actual specific replay descriptor? How would that interact with the "previous end time computation/shift"? It seems like shifting the time this way would create an API that works in a very specific way, which does not look natural to me at least for Option 2. For option 2, I think one would expect a store to contain 1 to 2pm and 3 to 4pm (which, if I understand correctly is not what you're looking for).

As I mentioned in my post above, the envisioned Option 2 should behave identically to an implementation that disposes and then reconstructs and re-runs the pipeline (it would just save you the time of wiring things up and initializing components, and somehow it would use the same set of log files for export).

Now, with Option 3, if we do a RunAsync(), Pause(), Resume(), one could argue that the expected semantics for the Pause() API is to freeze the universe, i.e., freeze the passing of time. There is only the initial replay descriptor, which specifies how time runs, and Resume() would not provide an opportunity to specify a new replay descriptor. Time would resume from wherever it left off. This seems to me to more closely match the semantics of what you're looking for, but I still have a problem with it, in that the resulting streams still violate the originating time semantics I described above, which the \psi world generally assumes. So, if you are capturing images from a camera that's looking at a clock, you would have a video frame in this store with the originating time of 12:01 where the image contains the clock showing 12:01, but also a video frame with the originating time of 13:01 where the image contains the clock showing 14:01. I realize that this is not a problem for you (it is irrelevant for your scenario), but we need to consider the broader implications of such a change.

Basically, we would no longer maintain the assumption that originating times refer to an actual time when something happened in the world. My concern is that this moves the design more towards the space of powerful but also dangerous (one can shoot themselves in the foot by not realizing the underlying assumptions). Perhaps there's an Option 3 implementation where Pause()/Resume() are only available under certain replay descriptors (e.g., when replaying data, not when running live, etc.), but that distinction is not super well formalized currently either.

2. Option 2 and 3 imply big structural runtime changes, with many downstream implications

Leaving aside for a moment the originating time semantics issue I mentioned above, there are multiple reasons why implementing either option 2 or option 3 in a robust/general way is tricky. I haven't thought through all the implications, but right off the bat some things that come to mind are:

  • As @sandrist has already mentioned, it would require a fundamental change in the ISourceComponent interface, which would have to propagate through the entire ecosystem of components; i.e., other folks who have their own psi repos with components would need to update all their source components for compatibility. A migration path would perhaps be to add a second IPausableSourceComponent interface, but that also comes at the cost of complicating APIs and multiplying the number of concepts component writers now need to think about.

  • Figuring out how to keep the Exporter open while the pipeline is stopped (for Option 2), and what all the implications of that are, is not immediately clear. For instance, PsiStudio can read and render data from live stores that are actively being written to. Should PsiStudio know somehow from the store that the pipeline is stopped/paused and will start again? How would that be signaled? If it doesn't know, will that cause problems?

  • In the most general case, shutting down a pipeline is a complex process as the runtime has to flush the pipeline, make sure no new messages get generated, interrupt message loops, etc., etc. Pausing (Option 3) would need to accomplish something similar, but it's not clear off the bat (would need to be proven) whether we could always guarantee a normal resume. For instance, in shutting down, we might actually have messages that were already emitted before the shutdown was in effect, but with an originating time after the shutdown time. What does that imply during resume, and how might that impact things if a resume happens right away, etc.

There are probably other considerations that would need to be taken into account, so overall accomplishing a general and robust solution to this problem is quite a sizable task.

At the end of the day, the originating time semantics problem I flagged above would still, I think, be an issue for Options 2 and 3, regardless of implementation. One could imagine an Option 4, where Pause() does not stop the clock; it just pauses the source components, i.e., they don't emit more messages. That would keep the originating time semantics, and the messages in the store would be from 1 to 2pm and 3 to 4pm, but it wouldn't solve your problem (though it might be helpful to others; for instance, I can imagine scenarios where the app at some point does not want to consume resources for a while).

3. An alternative solution based on store concatenation

Given the originating time issue above, I wonder if a more appropriate solution that keeps the originating time semantics at least at the runtime level (i.e., while the pipeline is running) is one where you do collect multiple stores and have a way to concatenate them while shifting the time. PsiStoreTool already implements a store concatenate function, and it might be possible to add a time-shift parameter per store (we can discuss this more if it's of interest; it may have some gotchas of its own), or some flag that sets a glue-style concatenate where the time-shift is automatically determined from the start/stop of the stores. Then you'd have a store from 1 to 2pm, and another store from 3 to 4pm. You could concatenate them to get a store from 1 to 3pm. Of course, this new store would still violate the originating time semantics described above, but at least it's something you've done via quite an advanced tool, not something a naive user might stumble upon while running a simple pipeline and then get confused about the results (which I feel would be more the danger with the Option 3 APIs).
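To illustrate the glue-style concatenate idea, here is a small Python sketch (purely illustrative, with hypothetical names; the actual PsiStoreTool concatenate verb does not currently take such a time-shift): each store's shift is computed so that it starts exactly where the previous store ended.

```python
from datetime import datetime, timedelta

def glue_concatenate(stores):
    """Glue-style concatenation: compute a per-store time shift so that
    each store begins where the previous one ended, then emit all messages
    with shifted originating times.

    stores: list of message lists; each message is (originating_time, payload)
    and each list is sorted by time.
    """
    merged = []
    cursor = None  # end time of the previously glued store, post-shift
    for store in stores:
        if not store:
            continue
        start, end = store[0][0], store[-1][0]
        # First store keeps its times; later stores shift so start == cursor.
        shift = timedelta(0) if cursor is None else cursor - start
        merged.extend((t + shift, payload) for t, payload in store)
        cursor = end + shift
    return merged
```

So a 1-2pm store and a 3-4pm store would merge into a single 1-3pm timeline, with the second store shifted back by one hour.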

p.s. as for pipeline construction times, it would be interesting to perf analyze that a bit to see where most of the time goes and whether that could be shortened significantly ...


sandrist commented on May 23, 2024

We've actually discussed as a team the possibility of adding "pause" functionality a few times in the past, but have always concluded that the idea would be far trickier to implement than it seems. Simply stopping a pipeline is already a complicated procedure: deactivating components, disabling source components from generating new messages, pausing for quiescence, etc.

It's also not entirely clear what behavior would be expected when pausing and resuming a pipeline in different scenarios, e.g., a live pipeline vs a pipeline that is replaying from a store. What are the semantics of "pausing" a pipeline? Should it "freeze the universe", halting the (virtual) clock entirely? What happens to messages that are in flight?

These are questions that might be answerable, but it would require a very careful design. And it might be overkill for what you're really trying to achieve. What is the concrete use case that you have in mind exactly? There might be other, more targeted ways of achieving what you're looking for, without implementing "pause" at the pipeline level. For example, if it's primarily a replay scenario, perhaps the PsiStoreReader could be extended to allow for pausing in playback?


austinbhale commented on May 23, 2024

After diving into the code, my initial question should have been aimed toward exposing the pipeline's Stop functionality. My intention is to stop a pipeline without losing its pipeline configuration. As it is already documented: "The pipeline configuration is not changed and the pipeline can be restarted later." Exposing this functionality was not so straightforward: it required carefully resetting variables and the ability to finalize a component without losing its connections.

To address your questions:

What are the semantics of "pausing" a pipeline?

To stop a pipeline and finalize its components without completely closing them off (emitters, receivers) from future use.

Should it "freeze the universe", halting the (virtual) clock entirely?

The clock will be in the same state as it is when a pipeline completes, essentially terminated.

What happens to messages that are in flight?

A new sequence id is specified to close emitters without resetting them as connected components in the current pipeline configuration. You are then able to finalize all nodes while keeping the current pipeline's connections.

Major changes

fix: defaultHandler

Captures the initial PipelineRun events so that you can still nullify any recursive invocations.

feat: Close(..., shouldResetComponent)

In addition to the closing sequence id, I specify a stopping sequence id for when you finalize a component without losing its subscribers and without raising events on close. This prevents the production and receipt of messages without fully closing the emitter, so it can be used again later.

feat: OnPipelineDisposed

A new handler that makes it easy to distinguish whether Stop or Dispose was called on the pipeline. If a pipeline has already been stopped (completed), we need to know when the disposal happens, since the completed handler will not fire again.

For a targeted use case of these additions, I made simple changes to StereoKitComponent => 6084520. I only add the stepper for the first run of a pipeline and only remove the stepper when disposing the pipeline. Thus, calls to Stop simply stop the pipeline without removing its configuration and connected components, basically mimicking a "Pause" functionality, since no lengthy construction or disposal is required.

The changes can be found in this forked repository: master...austinbhale:psi:master. Would you all be interested in collaborating on a pull request for these changes? I welcome any of your expert feedback as well :)


sandrist commented on May 23, 2024

I'm glad you were able to come up with a solution that works for your scenario! But let's hold off on potentially fleshing out and integrating these changes and features until we push out our next release (hopefully coming very soon). We have some changes in flight that might intersect with this in tricky ways, around resolving some remaining ambiguity and incorrect semantics we have for our pipeline start and stop times, and an individual stream's open and close times. Let us resolve those existing issues, and then come back to this.

One thing I'm concerned about is how this "stop and resume" behavior would affect existing components, not just ours, but any components that others have written in their own repos. Unfortunately, the existing implicit contract between the pipeline and a component was basically that once "stop" was called, the component should not post any more messages ever, and can go ahead and start tearing down and disposing internal resources. So there are many components that would need to be rewritten and extended to allow for resuming. Perhaps we would need something like an "IResumeable" interface that allows a component to mark that it is safe to be resumed. And if a component does not implement that interface, then all bets are off. Not sure if that's the best approach, just one idea.


sandrist commented on May 23, 2024

Thanks also for pointing out the wrong and misleading comment in our documentation for Stop! At the very least, we will edit and clarify that comment to reflect the current reality.


danbohus commented on May 23, 2024

To make sure we're on the same page, before we dive deeper into implications and design constraints for this feature, can you first clarify which of the following three options you need for your scenario?

Option 1: Separate Stop from Dispose

pipeline.RunAsync(replayDescriptor)

// at some later point
pipeline.Stop()

// some more code here, for instance reading some final state off of some components

// at some later point
pipeline.Dispose()

Decoupling Stop from Dispose would presumably enable you to read some final state off of some components (after Stop but before Dispose), and maybe organize your code in certain ways that might not be easy under the current implementation (where Stop and Dispose are entangled). But in this option, once Stop is called, you wouldn't be able to call Run() again (let's say Run would throw if you tried it).

Option 2: Stop a pipeline and Run it again (without disposing and reconstructing it)

pipeline.RunAsync(replayDescriptor)

// at some later point
pipeline.Stop()

// at some later point
pipeline.RunAsync(newReplayDescriptor)

// at some later point
pipeline.Stop()
pipeline.Dispose()

This would allow you to stop a pipeline and then later call Run again on it. However, this would be an entirely new run, behaving the same way as if you had reconstructed the whole pipeline you had before and run the new pipeline again. It would basically save you from having to construct the pipeline all over again.

Option 3: Pause a pipeline and Resume it

pipeline.RunAsync(replayDescriptor)

// at some later point
pipeline.Pause()    

// at some later point
pipeline.Resume()

// at some later point
pipeline.Dispose()

This would in essence somehow stop messages from flowing through the pipeline after Pause is called, and then messages would resume flowing through the pipeline after Resume is called. Dispose would stop and dispose the pipeline like before.

I know in the long run all of the above might be desirable, but the implementation requirements for these three options are different and it would be good to first understand which one of them you need right now.

In addition, it would be great to understand why you need this functionality. Can you say a bit more about why this is needed in your specific use case? (Overall, we want to be cautious about adding runtime features that increase complexity and the number of failure points/modes and would like to understand in which way the current implementation is not sufficient and whether there are any alternative solutions to the problem you have under the current implementation.)


austinbhale commented on May 23, 2024

Thank you both for getting back on this @sandrist and @danbohus!

The specific use cases for this functionality would be the following:


1. Record

Imagine you are writing streams to a psi store to be played back later. Now, during this recording, the user might want to temporarily stop recording to, for example, arrange the next task and continue where the recording previously paused. Thus, they wouldn't have to create a new psi store for the next task by constructing the pipeline again. It'd be unnecessary extra work to reconstruct the same pipeline and import the new streams into the initial psi store.

2. Playback

The user plays a visualization of a psi store and wants to be able to pause the cursor at its current point in time while reading from the store. The playback functionality shown in PsiStudio, for example, requires users to restart a store whenever they click the "stop" and "play" buttons. Additionally, most video playback software includes options to "seek" to different times in a video without reloading the entire video. If the pipeline does not require complete disposal, we can simply run the pipeline again without our time-consuming pipeline creation, which ruins the immersion of our AR application.

3. Stream

Similar to recording, except you're stopping the current visualization of the application for it to be resumed later!


Option 2 is ideal for giving us flexibility. My fork of psi attempts to implement this, but as @sandrist mentions, most components would have to be rewritten to adopt this behavior. Still, I hope it can help spark an idea of how it can be approached.

Option 3 would also satisfy our needs if it happens to be much easier to implement in the framework's current state. If we can change the cursor's time when reading from a data store, that would be just as effective as Option 2. Then, most of the components wouldn't have to be rewritten :)

Let me know if I should clarify further! Very happy to help on this issue, as I think it will be valuable for all.


danbohus commented on May 23, 2024

Thanks @austinbhale. This helps, but I'm still not exactly on the same page I think. Here are some follow-up clarification questions and observations:

Re 1. Record

"It'd be unnecessary extra work to reconstruct the same pipeline and import the new streams into the initial psi store."

I'm trying to understand why reconstructing the pipeline is much work (you seem to hint at that in 2. Playback as well). Typically, constructing a psi pipeline should be very short, on the order of 1 second or so. In our own work we have done AR apps where at the top level we have a StereoKit menu; the user pushes a Run button, which constructs and runs a psi pipeline, which runs until the user pushes a "Stop" button. At that point the pipeline is disposed and the "Run" button shows up again in the mixed reality view. The experience is pretty straightforward. Is there something in your case that precludes this? Does your pipeline construction take a long time for some reason?

Re: "import new streams into the initial store". Can you say a bit more about why this is a requirement? Why having separate stores is not sufficient? On a related note, I assume you are familiar with how multiple successive sessions can be combined in a dataset in psi, with the ability to run batch tasks over all of them or visualize in PsiStudio? Would those facilities help in your case, or is there a reason why these runs of the pipeline need to be in the same store?

Re 2. Playback

"The playback functionality shown in PsiStudio, for example, requires users to restart a store whenever they click the "stop" and "play" buttons." I'm not sure I understand what you mean by "requires users to restart a store". PsiStudio does not run pipelines or play stores. When you click Play in psistudio the cursor is simply advanced and the corresponding data is shown (PsiStudio does random access to the data).

"Additionally, most video playback software includes options to "seek" different times in a video without reloading the entire video". In playback mode, PsiStudio does indeed also seek to different times as it shows the data. The time we seek is the time of the cursor (which can be driven by the mouse, or by a timer that advances it when the user hits the "Play" button)

"If the pipeline does not require complete disposal, we can simply run the pipeline again without our time-consuming pipeline creation, which ruins the immersion of our AR application." Can you explain more what is ruined in the immersion of the AR app? Is the problem that the pipeline creation takes a long time, or are you somehow wanting to show some parts in AR that are shown by components in the pipeline (rather than by code outside the pipeline) and when the pipeline goes down those things go down?

Re 3. Stream

I'm sorry but I didn't quite follow this. Can you explain more? What is the current visualization? Are we talking about a (live) PsiStudio visualization, or about various AR objects rendered by the pipeline components?


cwule commented on May 23, 2024

This would be a great feature. Are there any updates on this?


austinbhale commented on May 23, 2024

Thank you for the detailed and well-thought-out response. It's important to have a generalized solution, so the ambiguities of Options 2 and 3 would be too confusing and time-consuming for developers.

By staying true to the \psi principle of capturing events in real time, it seems a feasible runtime feature presented here is something like an option 4, where Pause() would prevent messages from being sent from the source. The implication is that, if you have a thread that starts a camera and continuously receives buffer info, this data would still be available in that thread (i.e. the camera is still on) but the "posting" of the buffer info would have no effect, saving some resources. Or, is it possible to handle the pausing event in the \psi component, so that the cameras can be stopped as well? For example, MediaFrameReader has stopping functionality that could be called when the pause event is raised. Then, the resume event would start it again.
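As a rough sketch of that Option 4 behavior (illustrative Python with hypothetical names, not \psi code), a source could keep its capture thread alive while paused, but simply drop its posts instead of emitting them downstream:

```python
import threading

class PausableSource:
    """Sketch of an "Option 4" source: the capture thread keeps running
    (i.e., the camera stays on), but while paused, post() discards the
    buffer instead of delivering it downstream."""

    def __init__(self, emit):
        self._emit = emit                 # downstream delivery callback
        self._paused = threading.Event()  # set => paused

    def pause(self):
        self._paused.set()

    def resume(self):
        self._paused.clear()

    def post(self, data, originating_time):
        # Called from the capture thread for every buffer; while paused,
        # the buffer is dropped, saving downstream resources.
        if not self._paused.is_set():
            self._emit((originating_time, data))
```

A resume then requires no reconstruction: the source just starts delivering again, and originating times remain real wall-clock times.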

Correct me if I'm wrong, but the current behavior of closing an Emitter is that it can never be reopened in the pipeline, as it removes all subscribers. However, a call to Pause() should temporarily "close" the emitter with the possibility to reopen it. So would this initial call to Pause() essentially send a special pausing id and not remove any subscribers? I implemented something similar, though this solution may be narrow in that it was trying to fit our specific use case:

/// <inheritdoc />
public void Close(DateTime originatingTime, bool shouldResetComponent = true)
{
    if (shouldResetComponent)
    {
        if (this.lastEnvelope.SequenceId != this.closingSeqId)
        {
            var e = this.CreateEnvelope(originatingTime);
            e.SequenceId = this.closingSeqId; // special "closing" ID
            this.Deliver(new Message<T>(default, e));

            lock (this.receiversLock)
            {
                this.receivers = new Receiver<T>[0];
            }

            foreach (var handler in this.closedHandlers)
            {
                PipelineElement.TrackStateObjectOnContext(() => handler(originatingTime), this.Owner, this.pipeline).Invoke();
            }
        }
    }
    else
    {
        if (this.lastEnvelope.SequenceId != this.stoppingSeqId)
        {
            var e = this.CreateEnvelope(originatingTime);
            e.SequenceId = this.stoppingSeqId; // special "stopping" ID
            this.Deliver(new Message<T>(default, e));
        }
    }
}

In this scenario, a call to Pause() could set shouldResetComponent to false, in that it doesn't remove the subscribers but tells the emitter to stop delivering messages and tells its receivers to ignore messages while in this state. After our discussion, I'd be curious about implementing a different, more robust approach to Option 4 and attaching it here, if it helps. Adding this feature may also need to be further discussed and expanded on with your team, so I understand if this is best left to you all.

I like the sound of a tool that concatenates stores, especially if it can perform the time stitching and saving of the new store in the background. If option 4 gets implemented, I'm imagining a more flexible tool that can handle either the time stitching of separate stores or the time gaps in a single store. For the single store, a flag would be needed to indicate when the pause occurs. However, for the separate stores, no flag is needed, and one can look at the start and stop times of the two stores and see if a time stitch is possible.

For example, a store that goes from 1 pm start_time_A to 3 pm end_time_A cannot be merged with a store going from 2 pm start_time_B to 3 pm end_time_B. I think a safe assumption would be to adhere to the principle of start_time_B >= end_time_A. Thus, we would ignore cases where someone tries to merge two stores going from 1-3 and 2-4.

Once the two stores are validated to merge, we take the first store's pipeline completion time and use that as our originating time for the second store's pipeline starting time. Here, we see two separate stores shifting their originating times to merge into a single store:

Store 1: Pipeline Starts -> Pipeline Ends
Store 2:                    Pipeline Starts -> Pipeline Ends
Store M: Pipeline Starts --------------------> Pipeline Ends

As you say, this changing of originating times would violate the originating time's meaning, but the expectation would be there if the developer decided to use this tool. Do you think it is enough to simply use the starting and ending of pipelines as the basis for time-stitching multiple stores into a new store (as seen in the multiple store scenario above)?

This implementation could also look inside individual stores for a stream of flags that indicates messages were being sent in that section of the pipeline. If a stream of pause flags does not exist, we assume it as one complete run. Otherwise, we translate the flag cycles (i.e., 1->0, 0->1) as indicators to shift the times, similar to the merging of end_time_A with start_time_B. Here, we see a single store with a stream of pause flags and then the tool treating the flag cycles like separate stores:

Single Store: Pipeline Starts -----> Pause() ------> Play() ------> Pipeline Ends
Paused Flag:                  1---->0-------------->1------------->

The single-store scenario can be translated as separate stores, for every flag cycle.
Store 1:      Pipeline Starts 1----> Pipeline Ends
                                   |
Store 2:                           1-------------> Pipeline Ends
Store M:      Pipeline Starts -------------------> Pipeline Ends
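The flag-cycle segmentation described above could be sketched as follows (illustrative Python with hypothetical names, not an actual PsiStoreTool feature): the paused-flag stream is walked alongside the messages, and only messages that fall inside a recording interval are kept, grouped into segments ready for glue-style stitching.

```python
def split_on_pause_flags(messages, flags):
    """Split a single store's messages into recorded segments using a
    pause-flag stream: flag 1 = recording, flag 0 = paused.

    messages, flags: sorted lists of (time, value) pairs. Recording is
    assumed to be on before the first flag arrives. Returns a list of
    segments (each a list of messages) inside recording intervals.
    """
    segments, current = [], []
    fi = 0
    recording = True
    for t, payload in messages:
        # Advance to the latest flag at or before this message's time.
        while fi < len(flags) and flags[fi][0] <= t:
            recording = bool(flags[fi][1])
            fi += 1
        if recording:
            current.append((t, payload))
        elif current:
            # A pause began: close off the current segment.
            segments.append(current)
            current = []
    if current:
        segments.append(current)
    return segments
```

Each resulting segment then plays the role of one of the "separate stores" in the diagram above, and the same end_time_A / start_time_B stitching applies.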

Also, thanks for suggesting to analyze the pipeline construction times. I've optimized our construction time, which takes a second or so to initially start cameras on the first run, but then takes less than 80 ms to record and under 20 ms for playback on successive runs! So that should be suitable for our playback needs, e.g., pause/play or moving the cursor in time. I figured a builder design pattern would be suitable for the construction of the mixed reality renderers, since you create a new instance for every pipeline. With this approach, you only need to construct the renderers once, so on every new instance of a renderer, it is simply connecting pipeline components together.
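The caching idea behind that builder approach might be sketched like this (illustrative Python, not the actual StereoKit/\psi renderer code): heavy resources are created once and reused, so only the first pipeline run pays the construction cost.

```python
class RendererBuilder:
    """Caches heavy renderer resources so that only the first pipeline run
    pays the construction cost; subsequent runs just rewire components."""

    def __init__(self):
        self._cache = {}

    def _create_expensive(self, name):
        # Stand-in for slow work such as shader compilation or model loading.
        return {"name": name, "initialized": True}

    def get(self, name):
        # Construct on first request, then hand back the cached instance.
        if name not in self._cache:
            self._cache[name] = self._create_expensive(name)
        return self._cache[name]
```

With this in place, each new pipeline asks the builder for its renderers and gets the already-initialized instances back, which is consistent with the successive-run timings reported above.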

