I'm trying out the implementation of the 12-bit data unpacking:

    for (i=0; i<numPixels/2; i++) {
        *output++ = (*input << 8) | ((*(input+1) & 0x0f) << 4);
        *output++ = (*(input+1) & 0xf0) | (*(input+2) << 8);
        input += 3;
    }

It seems that the result is 12 bits of data occupying bits 15:4 of the uint16 data type, with bits 3:0 zeroed out. This is comparable to the Mono16 data that these cameras can also produce. My user, working with a FLIR BFS-PGE-70S7M, would prefer the unpacked 12 bits to occupy bits 11:0 of the uint16 data type instead, with bits 15:12 zeroed out. On our side we assume that the pixel data should always be non-negative; is this appropriate to expect?
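To make the difference concrete, here is a small stand-alone sketch (my own illustration, not driver code; the byte values are made up and the byte layout is the one implied by the loop above) that unpacks one Mono12Packed 3-byte group both ways:

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        /* One 3-byte group carries two 12-bit pixels (layout as the loop above assumes). */
        uint8_t input[3] = {0xAB, 0xCD, 0xEF};   /* arbitrary example bytes */

        /* Current code: the 12 bits land in bits 15:4, bits 3:0 stay zero. */
        uint16_t msb0 = (input[0] << 8) | ((input[1] & 0x0f) << 4);
        uint16_t msb1 = (input[1] & 0xf0) | (input[2] << 8);

        /* What my user would prefer: the 12 bits land in bits 11:0, bits 15:12 stay zero. */
        uint16_t lsb0 = ((input[0] << 4) | (input[1] & 0x0f)) & 0x0FFF;
        uint16_t lsb1 = (((input[1] & 0xf0) >> 4) | (input[2] << 4)) & 0x0FFF;

        printf("bits 15:4 aligned: 0x%04X 0x%04X\n", msb0, msb1);   /* 0xABD0 0xEFC0 */
        printf("bits 11:0 aligned: 0x%04X 0x%04X\n", lsb0, lsb1);   /* 0x0ABD 0x0EFC */
        return 0;
    }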
I'm using this IOC startup directive:
dbLoadRecords("NDStdArrays.template", "P=$(PREFIX),R=image1:,PORT=Image1,ADDR=0,TIMEOUT=1,NDARRAY_PORT=$(PORT),TYPE=Int16,FTVL=SHORT,NELEMENTS=$(NELEMENTS)")
One thing that bothered my user was having bit 15 used for pixel data when the relevant payload is only 12 bits. This results in EPICS clients getting a waveform of signed 16-bit values, where a set bit 15 flips the sign and Matlab interprets the pixel value as negative. His Matlab code has to treat the arriving data as int16, then shift the pixel value 4 bits to the right to get back into the 12-bit range and avoid 'negative' pixel values. Maybe I'm doing something wrong, but I can not get the waveform to hold unsigned 16-bit values; it seems that the asyn layer with the asynInt16 data type is the reason. IOW, changing FTVL=SHORT to FTVL=USHORT has no effect. Is this expected? I know that going 32 bits from the IOC -> DB -> client makes this a non-issue, but it feels like overkill in data overhead, for my taste at least.
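The sign issue itself is easy to reproduce outside EPICS; a minimal sketch (my own, just to show the reinterpretation my user sees on the client side) with a made-up saturated pixel:

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        /* A saturated 12-bit pixel (0xFFF) unpacked into bits 15:4 sets bit 15. */
        uint16_t raw = 0xFFF << 4;                /* 0xFFF0, as the current unpacking produces */

        int16_t as_signed = (int16_t)raw;         /* what an FTVL=SHORT waveform client sees */
        printf("as uint16: %d  as int16: %d\n", raw, as_signed);   /* 65520 vs -16 */

        /* One client-side workaround: reinterpret as unsigned, then shift back down by 4. */
        uint16_t recovered = (uint16_t)as_signed >> 4;
        printf("recovered 12-bit value: %d\n", recovered);          /* 4095 */
        return 0;
    }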
Is the use of higher bits a standard way of transporting the pixels (i.e. do other devices do this)?
With the current implementation, how does using the IOC "shift left" PV with a 12-bit pixel format result in valid pixel values? I guess it should always be set to 'no shift', but then again, the shift could be performed only when the pixel format is not 8 or 16 bits.
For reference, here is the change that makes my user happy:
diff --git a/GenICamApp/src/ADGenICam.cpp b/GenICamApp/src/ADGenICam.cpp
index 5f84c7d..870b9fe 100755
--- a/GenICamApp/src/ADGenICam.cpp
+++ b/GenICamApp/src/ADGenICam.cpp
@@ -553,8 +553,10 @@ void ADGenICam::decompressMono12Packed(int numPixels, epicsUInt8 *input, epicsUI
int i;
for (i=0; i<numPixels/2; i++) {
- *output++ = (*input << 8) | ((*(input+1) & 0x0f) << 4);
- *output++ = (*(input+1) & 0xf0) | (*(input+2) << 8);
+ *output++ = ((epicsUInt16)((*input << 4) | (*(input+1) & 0x0f))) & 0x0FFF;
+ *output++ = ((epicsUInt16)(((*(input+1) & 0xf0) >> 4) | (*(input+2) << 4))) & 0x0FFF;
input += 3;
}
}
With this, the EPICS clients see the lowest 12 bits of the 16-bit data type occupied with pixel data. No more negative pixel values. A (nice) side effect is that, for example, the statistics plugin now shows proper pixel values in the 12-bit range. Also, the Matlab code does not have to do any data manipulation either, since pixels can no longer appear negative. An OPI with an intensity XY plot also looks better now.
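In case it helps review, here is a quick stand-alone round-trip sketch (assumptions: made-up pixel values and the Mono12Packed byte layout implied by the original code) showing that the patched expressions give back the original 12-bit values in bits 11:0:

    #include <stdio.h>
    #include <stdint.h>

    /* Pack two 12-bit pixels into 3 bytes, mirroring the layout the unpacking assumes:
     * byte0 = p0 bits 11:4, byte1 = (p1 bits 3:0)<<4 | (p0 bits 3:0), byte2 = p1 bits 11:4. */
    static void pack12(uint16_t p0, uint16_t p1, uint8_t *out)
    {
        out[0] = (uint8_t)(p0 >> 4);
        out[1] = (uint8_t)(((p1 & 0x0f) << 4) | (p0 & 0x0f));
        out[2] = (uint8_t)(p1 >> 4);
    }

    int main(void)
    {
        uint8_t buf[3];
        uint16_t out0, out1;

        pack12(0x0123, 0x0FED, buf);   /* arbitrary 12-bit test values */

        /* The patched unpacking: results occupy bits 11:0 only. */
        out0 = ((uint16_t)((buf[0] << 4) | (buf[1] & 0x0f))) & 0x0FFF;
        out1 = ((uint16_t)(((buf[1] & 0xf0) >> 4) | (buf[2] << 4))) & 0x0FFF;

        printf("expected 0x0123 0x0FED, got 0x%04X 0x%04X\n", out0, out1);
        return 0;
    }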
What do you think?