
bluesky-pods's Introduction


Bluesky – An Experiment Specification & Orchestration Engine

Source: https://github.com/bluesky/bluesky
PyPI: pip install bluesky
Documentation: https://bluesky.github.io/bluesky
Releases: https://github.com/bluesky/bluesky/releases

Bluesky is a library for experiment control and collection of scientific data and metadata. It emphasizes the following virtues:

  • Live, Streaming Data: Available for inline visualization and processing.
  • Rich Metadata: Captured and organized to facilitate reproducibility and searchability.
  • Experiment Generality: Seamlessly reuse a procedure on completely different hardware.
  • Interruption Recovery: Experiments are "rewindable," recovering cleanly from interruptions.
  • Automated Suspend/Resume: Experiments can be run unattended, automatically suspending and resuming if needed.
  • Pluggable I/O: Export data (live) into any desired format or database.
  • Customizability: Integrate custom experimental procedures and commands, and get the I/O and interruption features for free.
  • Integration with Scientific Python: Interface naturally with numpy and the rest of the Python scientific stack.

The Bluesky Project enables experimental science at the lab-bench or facility scale. It is a collection of Python libraries that are co-developed but independently useful and may be adopted a la carte.

See https://bluesky.github.io/bluesky for more detailed documentation.

bluesky-pods's People

Contributors

cjtitus, cryos, danielballan, dmgav, gwbischof, jacobfilik, junaishima, klauer, maffettone, stuartcampbell, tacaswell


bluesky-pods's Issues

Pull low-level steps into bash scripts separate from buildah

Notes on a conversation with @MikeHart85:

It would be nice if these components were usable as:

  1. Bash scripts that you can run directly inside a VM, such as a CI VM, or even on a local machine if you want.
  2. Containers
  3. Pods

In particular, encoding the low-level steps as

buildah run ...
buildah run ...
buildah run ...

does not seem to add anything compared to a single

buildah run ... bash_script_with_the_actual_steps.sh
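
A minimal sketch of that refactor, assuming a hypothetical install_bluesky.sh that holds the actual steps; the package list, base image, and names below are illustrative, not the repo's real build recipe:

#!/bin/bash
# install_bluesky.sh -- hypothetical script holding the actual steps, runnable
# directly on a VM or local machine as well as inside a container build
set -euo pipefail
dnf install -y python3-pip
pip3 install bluesky ophyd caproto

The buildah recipe then shrinks to a single run of that script:

container=$(buildah from fedora)
buildah copy "$container" install_bluesky.sh /tmp/install_bluesky.sh
buildah run "$container" -- bash /tmp/install_bluesky.sh
buildah commit "$container" bluesky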

Error when running launch_bluesky_headless.sh

I followed all of the instructions to build and launch everything. When running bash launch_bluesky_headless.sh I see the following error:

mhanwell@unobtanium ~/src/bluesky-pods (main) $ bash launch_bluesky_headless.sh
+ '[' '' '!=' '' ']'
+ imagename=bluesky
++ pwd
+ podman run --pod acquisition -ti --rm -v /home/mhanwell/src/bluesky-pods:/app -w /app -v ./bluesky_config/ipython:/usr/local/share/ipython -v ./bluesky_config/databroker:/usr/local/share/intake -v ./bluesky_config/happi:/usr/local/share/happi -e EPICS_CA_ADDR_LIST=10.0.2.255 -e EPICS_CA_AUTO_ADDR_LIST=no bluesky ipython3 --ipython-dir=/usr/local/share/ipython
Python 3.8.5 (default, Aug 12 2020, 00:00:00) 
Type 'copyright', 'credits' or 'license' for more information
IPython 7.12.0 -- An enhanced Interactive Python. Type '?' for help.
[TerminalIPythonApp] WARNING | Unknown error in handling startup files:
---------------------------------------------------------------------------
ModuleNotFoundError                       Traceback (most recent call last)
/usr/lib/python3.8/site-packages/IPython/core/shellapp.py in _exec_file(self, fname, shell_futures)
    335                     else:
    336                         # default to python, even without extension
--> 337                         self.shell.safe_execfile(full_filename,
    338                                                  self.shell.user_ns,
    339                                                  shell_futures=shell_futures,

/usr/lib/python3.8/site-packages/IPython/core/interactiveshell.py in safe_execfile(self, fname, exit_ignore, raise_exceptions, shell_futures, *where)
   2718             try:
   2719                 glob, loc = (where + (None, ))[:2]
-> 2720                 py3compat.execfile(
   2721                     fname, glob, loc,
   2722                     self.compile if shell_futures else None)

/usr/lib/python3.8/site-packages/IPython/utils/py3compat.py in execfile(fname, glob, loc, compiler)
    166     with open(fname, 'rb') as f:
    167         compiler = compiler or compile
--> 168         exec(compiler(f.read(), fname, 'exec'), glob, loc)
    169 
    170 # Refactor print statements in doctests.

/usr/local/share/ipython/profile_default/startup/00-base.py in <module>
     19 from bluesky_adaptive.per_start import adaptive_plan
     20 
---> 21 from bluesky_queueserver.plan import configure_plan
     22 
     23 import databroker

ModuleNotFoundError: No module named 'bluesky_queueserver.plan'
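
This looks like a mismatch between the bluesky-queueserver release baked into the image and the import the startup file expects. A quick diagnostic sketch, assuming pip is available inside the bluesky image from the log above:

podman run --rm bluesky python3 -m pip show bluesky-queueserver
podman run --rm bluesky python3 -c 'from bluesky_queueserver.plan import configure_plan'

If the first command reports a version whose API does not provide bluesky_queueserver.plan, rebuilding the image with a pinned, matching version of bluesky-queueserver should resolve the startup error.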

Documentation should detail contents more clearly

We should at least add a table to the readme of what containers are expected to be spun up, how they interact with each other, and what images they depend on. I imagine the majority of users will just execute the bash script and poke around, but a few may want to do some repurposing or extension. Even walking through the compose file, it took a bit of time for me to understand what was getting spun up, and from where.
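
Until such a table exists, one way to see what actually got spun up is to ask podman once the pods are running (a quick sketch; the output columns vary a bit between podman versions):

podman pod ps
podman ps --pod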

These pods are too big and should be split up

This is a longer-term concern and is not on the critical path for our MVP.

Just to record some thoughts from separate conversations I've had with @stuartcampbell and @tacaswell:

We currently have a couple of very large pods. This meets our needs at present and works quite well, so I'm in no great hurry to change it, but I think we should consider restructuring it in the future.

  1. We like to use tools the way they are meant to be used. We are abusing the pod abstraction here by stuffing so many services into one pod. To zeroth order, a pod should have one container. To first order, a pod can have a primary container plus additional containers running support services whose data stays local to the pod (e.g. worker processes, an nginx proxy). Large services like MongoDB or Kafka, whose data is of interest to multiple other services, should in general get their own pods. Some external validation of that opinion:

    The primary purpose of a multi-container Pod is to support co-located, co-managed helper processes for a primary application.

    Source: https://www.mirantis.com/blog/multi-container-pods-and-container-communication-in-kubernetes/

    You might say our current structure is using "pod" to mean "private network" rather than "pod".

  2. When pods are small, restarting one is less disruptive to the rest of the system.

  3. If we ever want to build something comparable in production (which is still very much an open question) we will definitely not want large pods because we'll want the large services on dedicated nodes. To maintain correspondence between dev and production, we should use a more idiomatic pod structure.

One possible pod grouping (a podman sketch follows the list):

  • Beamline Pod: {IOCs + queue server parts + local redis}
  • Message Broker Pod: {zookeeper + kafka}
  • Database Pod: {mongo + mongoconsumer}
  • Data Broker Server Pod: {databroker server + nginx}
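
A rough sketch of what creating that grouping with podman could look like; the pod names, images, and omitted port/volume flags are illustrative placeholders, not the repo's current recipe:

# one pod per large shared service, instead of everything in one pod
podman pod create --name message-broker
podman run -d --pod message-broker docker.io/library/zookeeper
podman run -d --pod message-broker docker.io/bitnami/kafka

podman pod create --name database
podman run -d --pod database docker.io/library/mongo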

Mount git repositories to enable development using pods

In previous work I have used docker-compose with git repositories mounted in to facilitate development, and I imagine a similar workflow could be achieved with podman. For development of either the bluesky or databroker stacks I imagine the need for:

  • Server running in one container, with reload enabled for development
  • Client running in another container with reload enabled for development
  • A front-end server (probably NGINX) to combine the server and client behind a single port

There would be a (probably default) production version that does not enable reload and simply builds the web application into an optimized static bundle, using everything within the container/image.
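
A sketch of what the podman side of that could look like today, reusing the existing acquisition pod and bluesky image; the host path and editable install are illustrative, and a real setup would also start the server process with its reload flag:

podman run --pod acquisition -ti --rm \
    -v $HOME/src/databroker:/src/databroker \
    bluesky \
    bash -c "python3 -m pip install -e /src/databroker && ipython3"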

Split up the compose file and add variations

There appear to be include (https://docs.docker.com/compose/multiple-compose-files/include/) and extends (https://docs.docker.com/compose/multiple-compose-files/extends/) directives in the vocabulary, and a way to merge (https://docs.docker.com/compose/multiple-compose-files/merge/) compose files when invoking up (or automagically by putting semantics in filenames 🤯).

This needs a bit of investigation, but my current thinking is that we want to use include so that we can have shared compose files for:

  • core data services (mongo, postgres, kafka, tiled, jlab)
  • EPICS services (archiver, saverestore, ...)
  • set(s) of IOCs
  • bluesky / qs / CSS configuration

That way we can have a couple of variations {just ophyd.sim, a bunch of mock caproto IOCs, ADsim + motorsim, beamline-analog, blackhole-IOC, ...} without having to copy-paste a lot of yaml.
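
Even before adopting include, the merge behavior works today by passing several -f files to a single up, so the split could look something like this (the file names are illustrative placeholders for the groups above):

docker compose \
    -f compose.core-data.yaml \
    -f compose.epics.yaml \
    -f compose.iocs-sim.yaml \
    up -d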
