micro-manager / mmCoreAndDevices
Micro-Manager's device control layer, written in C++
We have a number of applications where we would like to change the state of the microscope after the camera exposure in a sequenced acquisition. We would set up the microscope in the Channel0 state before the acquisition begins, and set up sequencing on the falling edge of the camera exposure signal to cycle through Channel1, Channel2, etc. We set the acquisition frame interval to the exposure time plus the time needed to change the hardware to the new state.
This is nearly possible using the TriggerScope with sequencing on the Falling edge of the exposure signal. The problem is that for a 3-channel acquisition the acquired images correspond to Channel0-Channel0-Channel1, instead of Channel0-Channel1-Channel2. The feature that is missing is shifting the sequence of states by 1 position when triggering on the falling edge, as implemented for example in the MCL NanoDrive device adapter.
There may be an argument for always shifting the sequence position by 1 when triggering on the falling edge. Going a step further, the microscope could be set to Channel0 through software commands before the sequenced acquisition starts, in case the user did not do that. Are there applications where that's not desirable?
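To make the off-by-one concrete, here is a small simulation (a sketch, not Micro-Manager code; the function and its semantics are illustrative assumptions) of which hardware state is active during each exposure when the controller advances the sequence on the camera's falling edge:

```python
def exposed_states(preset, sequence, n_frames, start_pos=0):
    """Return the state active during each exposure.

    `preset` is the state set up before the acquisition begins. On each
    falling edge (i.e. after each exposure ends) the controller applies
    sequence[pos] and advances; `start_pos` models the proposed shift.
    """
    state = preset
    pos = start_pos
    out = []
    for _ in range(n_frames):
        out.append(state)                      # state during this exposure
        state = sequence[pos % len(sequence)]  # falling edge applies next entry
        pos += 1
    return out

seq = ["Channel0", "Channel1", "Channel2"]

# Current behavior (sequence starts at position 0): the symptom reported above.
assert exposed_states("Channel0", seq, 3, start_pos=0) == \
    ["Channel0", "Channel0", "Channel1"]

# With the sequence shifted by one position, as in the MCL NanoDrive adapter:
assert exposed_states("Channel0", seq, 3, start_pos=1) == \
    ["Channel0", "Channel1", "Channel2"]
```

The first falling edge only occurs after frame 0 has already been exposed, which is why the unshifted sequence repeats Channel0.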
The majority of the methods in MMCoreJ are declared with throws Exception, when really there is probably only a narrow set of exception types that each method can throw. It is generally agreed that try-catch statements should be kept as narrow as possible, to avoid inadvertently catching exceptions that you would really like to allow to propagate. An example is InterruptedException, which is used to cancel an operation running on a separate thread. For example, imagine you want to set a property and log any error that occurs. The obvious thing to write is:
try {
    doStuff();
    core.setProperty(devName, propName, propVal);
} catch (Exception e) {
    ReportingUtils.logError(e);
}
but if this code is run on a separate thread and the thread is interrupted, you will find that this broad catch statement will catch the InterruptedException and log it, rather than allowing it to actually cancel the thread.
This can be solved by specifically catching the exceptions that you expect might crop up and rethrowing them:
try {
    doStuff();
    core.setProperty(devName, propName, propVal);
} catch (InterruptedException e) {
    Thread.currentThread().interrupt();
    throw e;
} catch (Exception e) {
    ReportingUtils.logError(e);
}
However, it would be much better if exceptions from MMCoreJ could be narrowly caught, allowing other exceptions to propagate automatically:
try {
    doStuff();
    core.setProperty(devName, propName, propVal);
} catch (CMMException e) {
    ReportingUtils.logError(e);
}
While the main purpose of caching property values is to improve performance, if the cache is refreshed too frequently it can do more harm than good. On systems with many connected devices refreshing the cache can result in noticeable freezing of the GUI.
The onPropertiesChanged handler of CoreEventCallback.java involves a full update of the property cache. This can be particularly problematic with certain device adapters that make frequent calls to OnPropertiesChanged(); the PVCam adapter is one that comes to mind. Even just changing the exposure seems to cause the whole cache to be refreshed.
Even though the Java CoreEventCallback::onPropertiesChanged method refreshes the whole cache, it is worth noting that the C++ function that triggers the Java event callback, CoreCallback::OnPropertiesChanged(MM::Device* caller), contains the following comment:
// TODO It is inconsistent that we do not update the system state cache in
// this case. However, doing so would be time-consuming (if not unsafe).
I propose that CoreCallback::OnPropertiesChanged should have proper cache updating added to its implementation, such that only the properties of the MM::Device that called it are updated. Then CoreEventCallback.java will no longer need to perform a full update of the property cache. Is there a reason that this change should not be made?
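The proposed change can be sketched in miniature (hypothetical classes in Python, not the actual MMCore C++ code) to show the difference between a full refresh and a per-device refresh:

```python
# Sketch: a property cache that can be refreshed either for every device
# on the system (expensive) or only for the device that reported a change.
class PropertyCache:
    def __init__(self):
        self._cache = {}  # (device, property) -> value

    def refresh_all(self, devices):
        # Full refresh: touches every device; with many connected devices
        # this is what causes the GUI to freeze.
        for dev, props in devices.items():
            for prop, val in props.items():
                self._cache[(dev, prop)] = val

    def refresh_device(self, name, props):
        # Per-device refresh: only the device that fired
        # OnPropertiesChanged is re-read; everything else is untouched.
        for prop, val in props.items():
            self._cache[(name, prop)] = val

    def get(self, dev, prop):
        return self._cache[(dev, prop)]

cache = PropertyCache()
cache.refresh_all({"Camera": {"Exposure": "10"}, "Stage": {"Position": "0"}})
cache.refresh_device("Camera", {"Exposure": "20"})
assert cache.get("Camera", "Exposure") == "20"  # updated
assert cache.get("Stage", "Position") == "0"    # untouched, no extra I/O
```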
Providing some way for device adapters to determine how many bytes are in the RX serial buffer would be very useful. Currently, as far as I'm aware, the only way to determine whether a command may be waiting in the buffer is to try to read; if nothing is there, you end up waiting for the function to time out before it returns.
Other useful functionality would be peek, allowing you to examine the contents of the serial buffer without removing them from the buffer.
Other examples of useful general purpose serial functionality may be found here: https://www.arduino.cc/reference/en/language/functions/communication/serial/
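The proposed semantics can be sketched with an in-memory buffer (hypothetical API, modeled loosely on Arduino's Serial.available()/Serial.peek(); none of these names exist in MMDevice today):

```python
# Sketch: the available()/peek()/read() semantics being requested,
# demonstrated on a plain in-memory receive buffer.
class RxBuffer:
    def __init__(self):
        self._buf = bytearray()

    def feed(self, data: bytes):
        """Model bytes arriving from the device."""
        self._buf.extend(data)

    def available(self) -> int:
        """Number of bytes waiting, returned without blocking."""
        return len(self._buf)

    def peek(self) -> int:
        """Next byte, left in the buffer; -1 if empty."""
        return self._buf[0] if self._buf else -1

    def read(self) -> int:
        """Next byte, removed from the buffer; -1 if empty."""
        return self._buf.pop(0) if self._buf else -1

rx = RxBuffer()
rx.feed(b"OK")
assert rx.available() == 2    # check for data without a read-and-timeout
assert rx.peek() == ord("O")  # inspect without consuming
assert rx.read() == ord("O")
assert rx.available() == 1
```

With available(), an adapter could skip the read entirely when the buffer is empty instead of blocking until the timeout.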
Posted by Lukas Hille on the mailing list:
This is an experience report about getting the GigE camera adapter to work on Windows 10 with GigE mono cameras.
The GigE camera adapter (https://micro-manager.org/wiki/GigECamera) is compiled against JAI SDK and Control Tool 1.4.1 (https://www.jai.com/support-software/jai-software).
I had trouble getting the drivers to work on Windows 10, and the new JAI SDK didn't work with the adapter. Therefore I compiled the camera adapter against the latest SDK tools: JAI SDK and Control Tool (64-bit) 3.0.7 (Windows, 111 MB).
The compile worked without any warning or error message, and the drivers and the Control Tools in version 3.0.7 work on Windows 10. Unfortunately, I didn't get any images. The relevant debugging message was: J_Image_MallocEx() failed.
In my case I only use Mono8 / Mono16 cameras in the lab.
I figured out that in GigECameraAcqu.cpp the function int CGigECamera::aquireImage(J_tIMAGE_INFO* imageInfo, uint8_t *buffer) checks the pixel format:
if (BufferInfo.iPixelType == J_GVSP_PIX_MONO8 || BufferInfo.iPixelType == J_GVSP_PIX_MONO16)
in order to copy the pixel data directly, rather than converting them with J_Image_MallocEx(), when they are already Mono8 or Mono16. (Converting pixel data that are already in the right format causes the MallocEx() function to fail.)
With the new version of the JAI SDK I get a "custom" pixel type back (the same on 3 different types of GigE cameras): BufferInfo.iPixelType == 0x81080001 for 8 bit, and BufferInfo.iPixelType == 0x81100007 for 16 bit. To overcome this I just added these two pixel types to the if statement, and the adapter now works fine for me on Windows 10.
I wasn't able to work out why the pixel type now includes the "custom" tag.
The definitions in Jai_Factory.h are:
// Indicate if pixel is monochrome or RGB
#define J_GVSP_PIX_MONO                       0x01000000
#define J_GVSP_PIX_RGB                        0x02000000
#define J_GVSP_PIX_COLOR                      0x02000000
#define J_GVSP_PIX_CUSTOM                     0x80000000
#define J_GVSP_PIX_COLOR_MASK                 0xFF000000
// Indicate effective number of bits occupied by the pixel (including padding).
// This can be used to compute amount of memory required to store an image.
#define J_GVSP_PIX_OCCUPY8BIT                 0x00080000
#define J_GVSP_PIX_OCCUPY12BIT                0x000C0000
#define J_GVSP_PIX_OCCUPY16BIT                0x00100000
#define J_GVSP_PIX_OCCUPY24BIT                0x00180000
#define J_GVSP_PIX_OCCUPY32BIT                0x00200000
#define J_GVSP_PIX_OCCUPY36BIT                0x00240000
#define J_GVSP_PIX_OCCUPY48BIT                0x00300000
#define J_GVSP_PIX_EFFECTIVE_PIXEL_SIZE_MASK  0x00FF0000
#define J_GVSP_PIX_EFFECTIVE_PIXEL_SIZE_SHIFT 16
// Pixel ID: lower 16-bit of the pixel type
#define J_GVSP_PIX_ID_MASK                    0x0000FFFF
// 26.1 Mono buffer format defines
#define J_GVSP_PIX_MONO8         (J_GVSP_PIX_MONO | J_GVSP_PIX_OCCUPY8BIT  | 0x0001) ///< 8-bit Monochrome pixel format (Mono8=0x01080001)
#define J_GVSP_PIX_MONO8_SIGNED  (J_GVSP_PIX_MONO | J_GVSP_PIX_OCCUPY8BIT  | 0x0002) ///< 8-bit Monochrome Signed pixel format (Mono8Signed=0x01080002)
#define J_GVSP_PIX_MONO10        (J_GVSP_PIX_MONO | J_GVSP_PIX_OCCUPY16BIT | 0x0003) ///< 10-bit Monochrome pixel format (Mono10=0x01100003)
#define J_GVSP_PIX_MONO10_PACKED (J_GVSP_PIX_MONO | J_GVSP_PIX_OCCUPY12BIT | 0x0004) ///< 10-bit Monochrome Packed pixel format (Mono10Packed=0x010C0004)
#define J_GVSP_PIX_MONO12        (J_GVSP_PIX_MONO | J_GVSP_PIX_OCCUPY16BIT | 0x0005) ///< 12-bit Monochrome pixel format (Mono12=0x01100005)
#define J_GVSP_PIX_MONO12_PACKED (J_GVSP_PIX_MONO | J_GVSP_PIX_OCCUPY12BIT | 0x0006) ///< 12-bit Monochrome Packed pixel format (Mono12Packed=0x010C0006)
#define J_GVSP_PIX_MONO14        (J_GVSP_PIX_MONO | J_GVSP_PIX_OCCUPY16BIT | 0x0025) ///< 14-bit Monochrome pixel format (Mono14=0x01100025)
#define J_GVSP_PIX_MONO16        (J_GVSP_PIX_MONO | J_GVSP_PIX_OCCUPY16BIT | 0x0007) ///< 16-bit Monochrome pixel format (Mono16=0x01100007)
Because of the expired Windows 7 support, this could become relevant for some other users.
Note: without proper configuration of the network adapter I got some strange results. Be sure to follow the instructions for packet size and lost-packet troubleshooting. What I needed to get everything to work:
Flow Control: Disabled
Jumbo Packet: 9014 --> camera packet size: 8192 (without Jumbo Packet --> camera packet size: 1476)
Receive Buffers: maximum value
Interrupt Moderation Rate: low for high frame rates (small ROI), high for other cases?
The DA Z Stage does not work well with TriggerScope DAC channels (using TriggerScopeMM device adapter).
The DAC output can be changed using the DA Z Stage-Position slider in the Device Property Browser, however, the new stage position is not updated and stays at 0.
Controlling the stage via the Stage Control GUI also does not work well. I can move the stage up once from the zero position, but not a second time. Moving the stage down throws an "Out of range" error.
Probably a straightforward bug, would appreciate help with it. Thanks!
Would it be possible to allow Blanking and Strobe in the NIMultiAnalogAdapter? Some other controllers used for driving lasers (e.g. Arduino) allow blanking (turning off lasers when the camera is not exposing) and strobing (turning the laser on for a period shorter than the exposure time). If possible, it would be nice to have these options in the NIMultiAnalog Adapter.
1.4.x and 2.x.
When MMCore opens the shutter via autoshutter, it does not update the shutter's State property in the system state cache, unlike when explicitly opening/closing the shutter via setShutterOpen().
This means that if any action (e.g. clicking the Refresh button) causes a system state cache update during a Live or sequence acquisition, then the system state cache will remain incorrect after stopping the acquisition (it will record the shutter as being open when it has in fact closed).
Because MDA cleanup uses the system state cache to recover hardware state (which might be problematic in other ways as well), there can be cases where the shutter unexpectedly opens after an MDA.
It might be just a matter of updating the system state cache when opening/closing the shutter at the start or finish of sequence acquisitions (and possibly snap acquisitions) - hopefully this can be done without deadlock since only the system state cache lock needs to be acquired.
See: https://forum.image.sc/t/ti2-e-xy-stage-x-directionality-incorrect/56849
It is possible that the "correct" direction would have been to always flip X, but for backward compatibility we should default to the current orientation, with options to flip.
I was writing an algorithm that automatically calculates and sets the focus position, and a bug in that algorithm produced a value of Double.NaN. core.setPosition() accepted this as a valid argument and ended up setting the focus position to an unexpected value, which I assume was the C++ interpretation of the Java NaN byte pattern. Perhaps Double.NaN, as well as Double.POSITIVE_INFINITY and Double.NEGATIVE_INFINITY, should be explicitly checked for and lead to an Exception, so as to prevent unintended and unexpected hardware movements?
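The proposed check is simple to sketch (a hypothetical guard in Python, not current MMCore/MMCoreJ behavior; the function name is made up for illustration):

```python
import math

def checked_position(pos_um: float) -> float:
    """Reject non-finite positions before they reach the hardware.

    math.isfinite() is False for NaN, +inf, and -inf, which covers all
    three problem values mentioned above.
    """
    if not math.isfinite(pos_um):
        raise ValueError(
            f"Refusing to move stage to non-finite position: {pos_um}")
    return pos_um

# A finite position passes through unchanged.
assert checked_position(12.5) == 12.5

# NaN and the infinities are all rejected instead of reaching the device.
for bad in (float("nan"), float("inf"), float("-inf")):
    try:
        checked_position(bad)
    except ValueError:
        pass
    else:
        raise AssertionError("non-finite value was accepted")
```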
For both normal and pixel size config groups, we do not allow more than one preset to have the same combination of property values. However, this is currently enforced in MMStudio GUI code. It should be (also?) enforced by MMCore.
See micro-manager/micro-manager#818.
The tricky first problem to solve is how to make this work with the MMCore API, where each property-value pair is set by a separate function call.
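A sketch of the uniqueness check itself (hypothetical data layout in Python; how MMCore would accumulate the per-call property-value pairs before validating is exactly the tricky part noted above):

```python
# Sketch: detect presets in a config group that resolve to the same set of
# property values. Each preset maps (device, property) -> value.
def find_duplicate_presets(group: dict) -> list:
    seen = {}        # frozen settings -> first preset with those settings
    duplicates = []  # (existing preset, conflicting preset) pairs
    for name, settings in group.items():
        key = frozenset(settings.items())  # order-independent comparison
        if key in seen:
            duplicates.append((seen[key], name))
        else:
            seen[key] = name
    return duplicates

channel = {
    "DAPI": {("Dichroic", "Label"): "400DCLP"},
    "FITC": {("Dichroic", "Label"): "Q505LP"},
    "Dup":  {("Dichroic", "Label"): "Q505LP"},  # same values as FITC
}
assert find_duplicate_presets(channel) == [("FITC", "Dup")]
```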
The hidapi API is C, so HIDManager will likely work as is, but we should upgrade so as not to require the Visual C++ 2010 Redistributable.
mmgr_dal_HIDManager.dll
  hidapi.dll (bundled)
    MSVCR100.dll
Migrate the Windows build from Visual Studio 2010 SP1 and WindowsSDK7.1 to Visual Studio 2015 (v140) or 2019 (v142), with C++ language standard set to C++14 (the VS2019 default) for the time being (because C++17 introduces incompatible changes and should be handled as a second migration). VS2015 through 2019 are very compatible with each other.
The challenge is mostly not technical, nor about the code itself, but about migrating the automated builds and taking care of the handful of modules that need special consideration due to version compatibility issues.
Until we address this, we welcome contributions (pull requests) that address any compiler errors when building individual device adapters with VS2019 (or 2015, 2017). These will be easiest for us to accept and merge if they do not prevent compilation with VS2010 and if they are limited to a single module or topic (see micro-manager/micro-manager#701 for an example of the latter). Changes that replace lines scattered across many parts of many modules may be challenging to immediately merge, due to our ongoing phase out of Subversion.
I would expect the MMEventCallback's onConfigGroupChanged method to always be called whenever MMCore::setConfig has been called. However, this is not the case, as the only place that callback is called from is here:
mmCoreAndDevices/MMCore/CoreCallback.cpp, lines 496 to 506 in 0e5ba3c
This means that if you follow the demo config and have a channel group containing only a filter cube, then any callbacks you have set up will not fire.
Here is an example using pymmcore
:
# from pymmcore_plus import CMMCorePlus
import pymmcore
# core = CMMCorePlus()
core = pymmcore.CMMCore()
mm_path = .... #"/usr/local/lib/micro-manager"
core.setDeviceAdapterSearchPaths([mm_path])
cb = pymmcore.MMEventCallback()
core.registerCallback(cb)
core.loadSystemConfiguration('demo_config.cfg')
# switch twice to make sure we get at least one switch
core.setConfig('Channel', "DAPI")
core.setConfig('Channel', "FITC")
# you will not see any printouts relating to config changes
A use case for the callback always firing is in napari-micromanager where the core state may also be updated by a different python process and it's important for the napari gui to stay up to date.
Either of, or some combination of:
1. Change the check to < 1 to allow for notifying when the group only has one property.
2. Emit the event from the SetConfig method, and then silence/remove the subsequent emission from SetProperty.
3. Add a new callback, onConfigSet, which fires whenever SetConfig is called.
For my money the best option is to combine 1 and 3. The onSet is different from onChanged in that the former would only be sent by SetConfig, while the latter can be triggered by any property change, so it makes sense to have both. As for option 1: it seems that this check was added to support micromanager, but is there any reason that that logic couldn't live in micromanager?
hey guys,
I was trying to build Micro-Manager with the following instructions: https://micro-manager.org/Linux_installation_from_source_MM2.
For the most part everything worked, but I wanted to point out a small tweak that I had to make in order to fully build. In the file micro-manager/mmCoreAndDevices/DeviceAdapters/WieneckeSinske/ZPiezoCanDevice.cpp, line 42 says '#include "ZPiezoCANDevice.h"', and I kept getting an error saying the file doesn't exist.
Upon inspection, I found that the file is actually named "ZPiezoCanDevice.h", so I changed the line of code to match that name. The build was successful and I now have it working on my Ubuntu 20 machine.
This seems like a simple fix, but I don't know if it would break anything else?
thanks,
Connor
The up-to-date way to use MMCorePy.i (with Python >= 3.6) is to run pymmcore (which now has mmCoreAndDevices as a subproject, but does not use the latter's build system, only its source code).
The MMCorePy build that remains in this repository is for Python 2.7, but Python 2.7 has been end-of-life for over a year.
Distributing MMCorePy or pymmcore with the Micro-Manager installer is also problematic, because installation via pip is much more convenient and because it is hard to correctly follow Python's build requirements (e.g. compiler version) as part of a larger build.
So I propose that we remove MMCorePy from all parts of the build and distribution, and move MMCorePy.i to the pymmcore repo. It will be one fewer thing to maintain in this repo's build system. After the change, pymmcore will still have mmCoreAndDevices as a subproject, but will only use the MMDevice and MMCore sources.
If there are no objections, I'll create a PR (here and in micro-manager). Cc: @nanthony21 @nicost.
The TriggerScope Volts property can be sequenced, and the user can specify whether the DAC state changes on the Rising or Falling edge of the input TTL signal. Sequencing on the Rising edge works well. Changing Sequence Trigger Edge to Falling has no effect - the triggering still happens on the positive edge.
@nicost could you please look into that too? I think this will be our main mode of operation. I'm happy to test and confirm that everything works well afterwards. Thanks!
The Rapp device adapter is currently built against a vendor SDK that depends on the Visual Studio 2010 C++ runtime. Since the interface is C++ (not C), we need a new SDK for VS2019.
mmgr_dal_Rapp.dll
  obsROE_Device.dll (bundled)
    MSVCP100.dll
    MSVCR100.dll
    ROEobsTools.dll (bundled)
      MSVCP100.dll
      MSVCR100.dll
  ROEobsTools.dll (bundled)
    MSVCP100.dll
    MSVCR100.dll
Dear all,
when I move Git Bash into the mmCoreAndDevices submodule (cd mmCoreAndDevices) and change to the "privateMain" branch (git checkout privateMain), then git submodule update --init --recursive --remote rejects me. Can anybody help me, please?
What's more, when I build micro-manager there are many C++ headers missing, such as BFApi.h, LightEngineAPI.h, pylon/PylonIncludes.h, PvInterface.h, mvIMPACT_CPP/mvIMPACT_acquire.h, FlyCapture2.h, toupcam.h, NIDAQmx.h, pdl2000.h, ximc.h, USMCDLL.h, TMCLWrapperRS232.h, oasis4i.h, APTAPI.h, PiperApiErrors.h, biostep\EI_SDK 1.0\EagleIceSDK.h, cbw.h, libfli.h, tl_camera_sdk.h, obsROE_Device.h, ShamrockCIF.h, ITC18.h, flexmotn.h, Jai_Factory.h, master.h, ALC_REV.h, MexExl.h, olmem.h, sencam.h, atmcd32d.h, ...
Thank you very much for helping me.
I recently got an old Leica DMSTC XY stage. Unfortunately, the micrometer-to-step conversion is totally broken (tested on MM 1.4). This is caused by the LeicaDMSTC XYStage not querying the step-to-micrometer conversion factor, even though there is support for this in the code. Instead, it falls back to a default of 10 um/step in the constructor:
Since there is actually code to query this, a simple call to either GetStepSizeXUm or GetStepSizeYUm in XYStage::Initialize would directly set the internal conversion value properly.
Many microscope Z stages that include a hardware-based continuous-focus option are treated in Micro-Manager as multiple separate devices. For example, the Nikon TI device adapter has a ZStage device which handles operations when continuous focus (Nikon PFS) is disabled, and separate PFS Offset and PFS Status devices that can be used to adjust focus when continuous focus is enabled. Having multiple logical devices represent different modes of a single physical device leads to confusing behavior, where changing the setting of one device may disable other devices. It can also be difficult to use continuous focus for many experiments, since its motion is expressed in arbitrary units rather than microns and is not even linear with actual physical motion.
By adding the following methods to the API Micro-Manager applications could have a better standard interface for dealing with continuous focus devices.
boolean supportsContinuousFocus()
void setContinuousFocusEnabled(boolean enable)
boolean isContinuousFocusEnabled()
boolean isContinuousFocusLocked()
//Search for a zStage position where the continuous focus can be locked.
//Returns the position (microns) where lock is achievable. Throws an exception
//if no lock is possible.
double runFullFocus()
Additionally, it would be helpful to have the API support the escape and refocus functionality provided on many microscopes, where the objective retracts completely for safe switching of samples.
boolean supportsEscape()
void setEscaped(boolean escaped)
boolean isEscaped()
I have WIP implementations of these improvements in a Java plugin; however, they would be much more useful if they were directly in the C++ layer of Micro-Manager:
This allows position to be set in terms of microns even when PFS is being used. However, there are drawbacks to the current implementation.
With the latest nightly build of MM sequencing works when:
but not when acquiring XYZT datasets - sequences are broken up between time points, such that individual z-stacks are acquired in a sequenced acquisition, but the sequence does not span multiple time points.
I'm not sure if this is intentional (XYCT works, but XYZT does not), but if so it would be good to have the option to allow sequencing of XYZT dataset through a flag in the Core.
Currently most of it is in the implementation (.cpp) file. If the comments are in the header, newer versions of SWIG can pick them up and produce Javadoc and Python docstrings (http://www.swig.org/Doc4.0/Doxygen.html). (SWIG never sees the .cpp file.)
(Note that currently MMCoreJ Javadoc is generated from Doxygen HTML output by swig-doc-converter in the micro-manager repo.)
Came up in micro-manager/pymmcore#46.
It would be great if this repo was available via conda-forge. This would make a linux computer a viable option for controlling a microscope.
I think that this would likely be a follow up to #86
But opening now to discuss:
SecretDeviceAdapters.
I think an example recipe would be libtiff: https://github.com/conda-forge/libtiff-feedstock/blob/master/recipe/build.sh
Some other relevant links:
https://docs.conda.io/projects/conda-build/en/latest/concepts/recipe.html
https://conda-forge.org/docs/maintainer/adding_pkgs.html
https://conda.io/projects/conda-build/en/latest/user-guide/tutorials/build-pkgs.html
Also curious whether @tlambert03 has thoughts, as you're the only person in the microscopy world who I know has made conda-forge packages.
This is a pretty unlikely scenario but I just ran into it so I thought I might as well report it.
If MMCoreJ::getAvailableConfigs is called with a null argument, i.e. core.getAvailableConfigs(null), then the program will immediately crash.
Stack: [0x00000000279e0000,0x0000000027ae0000], sp=0x0000000027ad94b0, free space=997k
Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
C [KERNELBASE.dll+0x43b29]
C [msvcr100.dll+0x614f1]
C [MMCoreJ_wrap.dll+0x37231]
C [MMCoreJ_wrap.dll+0x45e88]
C [MMCoreJ_wrap.dll+0x2381c]
C 0x0000000002f78ce7
Java frames: (J=compiled Java code, j=interpreted, Vv=VM code)
j mmcorej.MMCoreJJNI.CMMCore_getAvailableConfigs(JLmmcorej/CMMCore;Ljava/lang/String;)J+0
j mmcorej.CMMCore.getAvailableConfigs(Ljava/lang/String;)Lmmcorej/StrVector;+10
j edu.bpl.pwsplugin.UI.settings.ImagingConfigUI.lambda$new$1(Ljava/awt/event/ItemEvent;)V+13
j edu.bpl.pwsplugin.UI.settings.ImagingConfigUI$$Lambda$160.itemStateChanged(Ljava/awt/event/ItemEvent;)V+5
j edu.bpl.pwsplugin.UI.settings.ImagingConfigUI.<init>()V+262
I wrote a device adapter for the Daheng camera, and when I test it I get this error: "Cannot verify interface compatibility of device adapter" "Line 2: run-time error : Failed to load device "DCam" from adapter module "Dahengcameratest" [ Failed to load device adapter "Dahengcameratest" from "C:\Program Files\Micro-Manager-2.0\mmgr_dal_Dahengcameratest.dll" [ Cannot verify interface compatibility of device adapter [ Cannot find function GetModuleVersion() in module "C:\Program Files\Micro-Manager-2.0\mmgr_dal_Dahengcameratest.dll" [ 找不到指定的程序。 (The specified procedure could not be found.)"
Do you know what it means?
Thank you very much.
The Stage Control of Micro-Manager seems to round to integer microns; steps of 0.1-0.9 micron do not seem to be possible.
I am using a Prior stage that has 0.1 um resolution. It works fine at 0.1 um with their control software, but not in MM's Stage Control.
If I try setting a 0.1 step size for X or Y and press the Up/Down/Left/Right buttons, I see the correct position displayed, but only very momentarily (less than half a second?); then the updated X/Y position is shown in microns, rounded(?) to integer values.
I believe it is setting the rounded position to the device, because
I want to help debug this, and would like to know how micromanager/ImageJ is currently built from an IDE (IntelliJ?). Could you please point me to instructions on how you develop/compile it?
Click Device.
Click Hardware Configuration Wizard.
Add BaumerOptronic.
Then the program will hang.
The idea is to create a mechanism by which device adapters can register alternative names for themselves (probably from code, or DLL metadata, which has the advantage of being readable without loading the DLL, but the disadvantage that it is OS-dependent).
This will allow renaming (in the user's view) device adapters without breaking everybody's config files. It will also allow presenting a single device adapter under multiple names.
This will help in cases such as:
It might also make sense to do the same with individual device names (within device adapter modules).
When I change the tube lens on the Ti2, uM2 gamma doesn't update the pixel size, unless I press the "refresh" button. In uM1.4, the pixel size changes instantaneously. When using our old Ti scope with an Arduino- and magnet-based tube lens sensor, the pixel size updates immediately in uM2 gamma. For the Ti2, the tube lens property is not included in the pixel size calibration.
After setting a new PFS offset, it can take some time before the Z stage arrives at the correct focused position. Since there is no "Status" property like there is on the TI1, this cannot be used to determine when focusing is complete. I have also found that the Busy method for the zDrive, PFS, and PFSOffset appears to always return false.
The only way I have found to determine when focusing is complete is to poll the z-stage position (which only updates at ~1 Hz when PFS is active) and try to determine when it has stabilized. This is very slow and prone to errors; my current best effort looks like:
public boolean zStageBusy() throws Exception {
    double origZ = mmc.getPosition();
    Thread.sleep(1000); // Wait for the next ~1 Hz update of the z position
    double currentZ = mmc.getPosition();
    // If delta z is less than 0.1 microns, consider the position stable.
    return !(Math.abs(origZ - currentZ) < 0.1);
}
It would be great if we could have a device adapter that provided a more accurate experience of using a microscope than the DemoCamera adapter does. A hypothetical device adapter that could be fed images to display, and had a mechanism for communicating with an external process (e.g. over a socket) to exchange information would be great. Potential features:
The separate program should be responsible for tracking microscope state (e.g. whether or not shutters are open) so the device adapter should just be a thin wrapper around its communication protocol.
I wasn't able to find a device adapter for SMC Pollux positioner controllers, so I wrote one; it's posted here. We have been using this interface to control two DC-motor positioners from Micronix for about a year now, so it's pretty thoroughly tested. I only wrote an XYStage device adapter because that was our application, but I don't see any difficulty in writing a similar single-axis adapter.
Let me know if there is interest and I can write up some documentation about it. Basically, the devices communicate over serial with PI's 'Venus2' protocol. Finding documentation of this protocol was actually not that easy for me, so I'm sharing a copy here. Thanks.
MMCore has a couple of functions for working with SLM (spatial light modulator) devices. These are basically treated as a display with a rectangular coordinate system that images can be written to.
The api contains a number of functions to change the output of these devices:
setSLMImage() doc: Write an 8-bit monochrome image to the SLM. (There is also a 32-bit RGB version.)
setSLMPixelsTo() doc: Set all SLM pixels to a single 8-bit intensity. (There is also an r, g, b version.)
setSLMExposure() doc: For SLM devices with a built-in light source (such as projectors) this will set the exposure time, but not (yet) start the illumination.
displaySLMImage() doc: Display the waiting image on the SLM.
The GenericSLM implementation (which I would consider the "reference" implementation) will display immediately when the setSLMPixelsTo function is called, but not when calling the setSLMImage function. The commands in the SLM API are consistent with this behavior (SetImage: "Load the image into the SLM device adapter.", SetPixelsTo: "Command the SLM to display one 8-bit intensity."). I am quite sure that several SLM device adapters will display the image immediately after calling setSLMImage(), i.e. without the need to call displaySLMImage().
I guess that the idea behind the current design is that it can be time consuming to load the image into the device, hence separating out loading and displaying can be beneficial. However, this is currently not obvious from the documentation, and took me more than an hour to figure out.
Probably the easiest solution is to update the documentation in the Core to warn the user that the image loaded with setSLMImage() is only guaranteed to be displayed after calling displaySLMImage().
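The load-then-display contract described above can be modeled with a toy object (a sketch in Python, not GenericSLM or MMCore code; all names here are illustrative):

```python
# Toy model of the two-phase SLM contract: set_image only loads,
# display_image makes the loaded image visible, set_pixels_to is immediate.
class SketchSLM:
    def __init__(self):
        self._loaded = None    # image staged in the adapter
        self.displayed = None  # what is actually on the SLM

    def set_image(self, image):
        # Like setSLMImage(): loading may be slow, so it is separated
        # from display; nothing is guaranteed to appear yet.
        self._loaded = image

    def set_pixels_to(self, intensity):
        # Like setSLMPixelsTo() in the reference implementation:
        # takes effect immediately.
        self.displayed = intensity

    def display_image(self):
        # Like displaySLMImage(): only now is the loaded image
        # guaranteed to be shown.
        self.displayed = self._loaded

slm = SketchSLM()
slm.set_image("pattern")
assert slm.displayed is None       # setSLMImage alone may show nothing
slm.display_image()
assert slm.displayed == "pattern"  # guaranteed only after displaySLMImage
```

Separating the slow load from the fast display also makes it possible to pre-load the next image while the current one is still being shown.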
The PointGrey device adapter uses the C++ interface of the (legacy) FlyCapture SDK, which requires a version matched to the compiler.
(I believe the vendor was providing specific downloads for Micro-Manager, which might need to be updated. See https://micro-manager.org/Point_Grey_Research)
mmgr_dal_PointGrey.dll
FlyCapture2_v100.dll
Cameras need a trigger to start exposing (or to start a series of exposures). Triggers can be provided through software, by a clock internal to the camera (internal trigger), or by an external trigger source. Each of these triggers can take various forms: software triggers can start a single exposure or a sequence of exposures (driven by the internal clock), external triggers can fire on the rising edge of the signal or the falling edge, and sometimes exposure continues as long as the trigger is active (sometimes called "bulb" mode). It would be wonderful to provide an API so that the same call can be used for cameras from different vendors.
In addition, there is some interplay with the SnapImage and StartSequence API calls that the MMDevice interface mandates. For instance, it does not make sense in the sequence functions to send a software trigger for each image in the sequence; internal triggering (or software-started internal triggering) is much more useful to the end user. On the other hand, in the SnapImage function, software triggering is almost always preferred over internal triggering, since response times using software triggering are lower and much more predictable. We should offer some kind of guidelines to device adapter authors about which choices work best.
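The guideline in the preceding paragraph can be sketched as a simple lookup (hypothetical names in Python, not the MMDevice C++ API):

```python
# Sketch: which trigger mode a device adapter should prefer for each
# MMDevice entry point, per the guideline above.
PREFERRED_TRIGGER = {
    "SnapImage": "software",      # low, predictable per-frame latency
    "StartSequence": "internal",  # one start, camera clock drives frames
}

def choose_trigger(call: str, supported: set) -> str:
    """Pick the preferred trigger for `call`, falling back to whatever
    the camera actually supports if the preference is unavailable."""
    preferred = PREFERRED_TRIGGER[call]
    return preferred if preferred in supported else next(iter(supported))

assert choose_trigger("SnapImage", {"software", "internal"}) == "software"
assert choose_trigger("StartSequence", {"software", "internal"}) == "internal"
assert choose_trigger("SnapImage", {"internal"}) == "internal"  # fallback
```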
Possibly related to micro-manager/micro-manager#676. When using the Nikon TI2, the stage position displayed in the GUI is not updated. I have traced this back to CoreEventCallback::onStagePositionChanged not getting fired. I'm not sure what the cause is, though.
These two device adapters are still using OpenCV 2.4.8 built by Visual Studio 2010.
I don't remember why OpenCV is shipped as DLLs (instead of being built into the device adapters as static libraries), but there was probably some technical reason. Currently the binaries are included in the source tree; it might make sense to put new builds in 3rdpartypublic if there is no reason not to do so.
mmgr_dal_FakeCamera.dll
opencv_core248.dll (bundled)
MSVCP100.dll
MSVCR100.dll
opencv_highgui248.dll (bundled)
MSVCP100.dll
MSVCR100.dll
opencv_core248.dll (bundled)
MSVCP100.dll
MSVCR100.dll
mmgr_dal_OpenCVgrabber.dll
opencv_core248.dll (bundled)
MSVCP100.dll
MSVCR100.dll
opencv_highgui248.dll (bundled)
MSVCP100.dll
MSVCR100.dll
opencv_core248.dll (bundled)
MSVCP100.dll
MSVCR100.dll
The device adapter "Olympus/Objective: objective turret" is not detected when using a BX51WI microscope and a BX-UCB control unit. Since the rest of the devices work well and the control unit recognizes the objective turret, I assume the issue is related to the commands the device adapter code uses to detect the device.
This is a continuation of micro-manager/micro-manager#838.
Quoting from the Swig Changelog, in 4.0.0 there is the following entry:
2019-02-28: wsfulton
[Java] std::vector improvements for types that do not have a default constructor.
The std::vector wrappers have been changed to work by default for elements that are
not default insertable, i.e. have no default constructor. This has been achieved by
not wrapping:
vector(size_type n);
Previously the above had to be ignored via %ignore.
If the above constructor is still required it can be added back in again via %extend:
%extend std::vector {
vector(size_type count) { return new std::vector< T >(count); }
}
Alternatively, the following wrapped constructor could be used as it provides near-enough
equivalent functionality:
vector(jint count, const value_type& value);
*** POTENTIAL INCOMPATIBILITY ***
This suggests that it might be possible to have code that works across Swig versions.
The TIScam device adapter currently fails to build with msvc v142 due to the following lines in the SDK that it includes from "3rdParty":
#elif _MSC_VER > 1800
#error This compiler was not tested with this library.
#else
#error Wrong Compiler. This library does only run with Visual C++ 7.1, 8.0, 9.0 and 10.0.
// other maybe newer compiler ...
#endif
If the #error is commented out then everything builds fine. I can commit this change to the SVN repo, but wanted to use this issue as a place to review the change before committing.
If possible, the DLL should be upgraded, but I don't think it is supported any more.
We can leave this as is (it should still work even if the device adapter is built with VS2019), but maybe stop shipping the VC++ 2008 Redistributable as part of our installer, because this is the only DLL we ship that depends on it. The documentation can be updated to direct users to install VCRedist 2008.
The DLL is dynamically loaded (LoadLibrary()):
AB_ALC_REV64.dll (bundled)
MSVCP90.dll
MSVCR90.dll
Hi all,
This problem has been mentioned a couple times already:
I wanted to collect information here so we can drive this towards a resolution.
Steps to reproduce (on latest build of Windows 10 64-bit):
Ver2.20 and Ver2.10 crash, Ver2.00 works, the MicroManager wiki link above implies that Ver1.20 works as well.
I'd be happy to submit any log files or run any debug builds if it would help. I don't have Windows development experience, so I'm hoping others can tell me what would be most helpful.
Camera device adapters are currently required to implement the "Binning" property in addition to GetBinning() and SetBinning(). Every device adapter is forced to reinvent the wheel to keep the two interfaces in sync (a similar problem exists for "Exposure").
However, the MMCore API does not expose any getBinning() or setBinning() methods, so applications only have access to the "Binning" property.
MMCore does call GetBinning() in the context of computing pixel size (or affine transforms). However, SetBinning() is never called (and therefore some device adapters may have buggy or missing implementations).
There is a further annoyance caused by the (very small number of) device adapters that use non-integer values (i.e., strings, such as "2x2"). Applications currently need to parse these themselves.
Enforcing (in the Core) standard integer values for the "Binning" property might be a useful partial solution. This can be done either by rejecting non-integer values as an error, or by converting known formats (the former would be better in the long term for almost everybody, but the latter could be a transitional solution if necessary).
At the MMDevice level, we could further remove the GetBinning() and SetBinning() functions, keeping only the property, so that new device adapters do not need to duplicate code. I think this is better than unifying around the member functions, because there might be cases where the property has change-notification callbacks, which cannot trivially be converted to work with the functions. We can do better at documenting standard properties and enforcing their presence and/or behavior in code.
MMCore could expose getBinning() and setBinning(), possibly as wrappers around the property. On the other hand, it might be better to have just one way to do things.
[Thinking about the binning API reminds me of the issue that we do not support unequal vertical and horizontal binning (such as 2x4). However, support for that should be added (if ever) via an entirely new mechanism and would require extensive support in MMStudio, so it probably should be ignored in solving the present issue.]
See micro-manager/micro-manager#933 for related discussion.
Some .m4 files for prebuilt Windows Boost libraries are missing, namely:
ax_boost_filesystem.m4
ax_boost_log.m4
ax_boost_log_setup.m4
ax_boost_regex.m4
ax_boost_timer.m4
On the other hand, there is ax_boost_asio.m4, but the prebuilt library is missing.
On Linux, I use $(BOOST_SYSTEM_LIB) and $(BOOST_THREAD_LIB) in my Makefile, but I have to use -lboost_filesystem because the $(BOOST_FILESYSTEM_LIB) variable expands to an empty string.
So which Boost libraries are officially allowed for use in the Micro-Manager project?
From a small group email thread May 2020. Suggestions from Jon Daniels that make a lot of sense to me:
I like the idea of breaking the interface/device into 2 separate device types, one for analog input and the other for analog output. In the ASITiger device adapter we have both input and output devices.
Instead of simply converting the existing SignalIO interface to output-only, maybe it would be better to create two new device types and keep the existing one as-is for backward compatibility? Then device adapters which are actively maintained could be converted to use the new device types and the old device type marked as deprecated.
I also concur with Nico's suggestion of renaming things, but I suggest slightly different names. How about AOSignal for analog output and AISignal for analog input? That matches the NI convention of AO and AI, and also specifies that it's an analog signal. I think it would be best to avoid "DA" in the method names; people with an electronics background immediately recognize "DA" as "digital to analog", meaning it's an analog output, but that is far from obvious to everyone.
So I am suggesting API methods could be
void setAOSignal(const char* signalILabel, double volt) throw (CMMError);
double getAOSignal(const char* signalILabel); // returns the voltage that was actually set on the device, which could differ from the previously set voltage
double getAOLowerLimit(const char* signalILabel) throw (CMMError);
double getAOUpperLimit(const char* signalILabel) throw (CMMError);
The sequence functions could look like:
long getAOSequenceMaxLength(const char* signalILabel) throw (CMMError);
void startAOSequence(const char* signalILabel) throw (CMMError);
void stopAOSequence(const char* signalILabel) throw (CMMError);
void loadAOSequence(const char* signalILabel, std::vector<double> voltSequence) throw (CMMError);
Then with the analog input there would be at minimum the API method (and maybe others aren't needed)
double getAISignal(const char* signalILabel); // returns the voltage read from the hardware
The Spinnaker camera device adapter does not work with Spinnaker SDK version 2.3.0.77 or with the latest version of the SDK. The device adapter works well if we compile it with VS2015, SDK v2.3.0.77, and Spinnakerd_v140.lib dependencies. How is the .dll distributed with the MM nightly builds compiled?
Do this at a carefully planned moment....
Hello,
I am trying to compile this on Windows 10, following the instructions listed here: https://micro-manager.org/Building_MM_on_Windows.
The issue I am running into is that many of the tools listed don't seem to exist anymore (e.g. Visual C++ 2010 Express, which I can't find on the website). When I try to run msbuild using Visual Studio 2019, I get errors related to needing the Windows 7 SDK, etc., which look like they come from not having the exact build tools listed in the link above.
I would be interested in using the meson build system for compiling this, especially as I am more of a command-line kind of guy. What is the timeline for integrating the meson branch into main?