matlab_owc's Introduction

The Matlab OWC toolbox

This is a package of MATLAB routines for calibrating profiling float conductivity sensor drift. A description of the algorithms can be found in "An improved calibration method for the drift of the conductivity sensor on autonomous CTD profiling floats by θ-S climatology", by W.B. Owens and A.P.S. Wong, Deep-Sea Research Part I: Oceanographic Research Papers, 56(3), 450-457, 2009. More recently, modifications suggested in "Improvement of bias detection in Argo float conductivity sensors and its application in the North Atlantic", by C. Cabanes, V. Thierry and C. Lagadec, Deep-Sea Research Part I, 114, 128-136, 2016, have also been taken into account.

How to install the toolbox?

Either clone the latest version of the git repository:

git clone https://github.com/ArgoDMQC/matlabow.git

or download and unzip the zip file (via the "Clone or download" button).

You can also access the different releases here: https://github.com/ArgoDMQC/matlabow/releases

How to run the analysis?

Here is a summary of what should be done to run the analysis; please read the ./doc/README.doc file for more details.

  1. All files are to be used in MATLAB. The full package was tested with MATLAB R2014a. In addition, you will need:
  • the MATLAB Optimization Toolbox;
  • the ITS-90 version of the CSIRO SEAWATER library. The version 3_3.0 of this library can be found in ./lib/seawater_330_its90; please update if necessary;
  • the M_MAP toolbox. The version 1.4.c of this library can be found in ./lib/m_map1.4; please update if necessary.

  2. Add the necessary paths to your MATLAB path: addpath('./lib/seawater_330_its90', './lib/m_map1.4', './matlab_codes/')

  3. Put your reference data in ./data/climatology/historical_ctd, /historical_bot, /argo_profiles.

REFERENCE DATA can be obtained at ftp.ifremer.fr, cd /coriolis/data/DMQC-ARGO/ (if you need a login/password, ask [email protected])

Then, create/update your ./data/constants/wmo_boxes.mat file (more details in ./doc/README.doc, p3)

  4. After you have decided where you want to install the package on your computer, edit ow_config.txt at the following lines so the correct pathways are specified:
  • HISTORICAL_DIRECTORY =
  • FLOAT_SOURCE_DIRECTORY =
  • FLOAT_MAPPED_DIRECTORY =
  • FLOAT_CALIB_DIRECTORY =
  • FLOAT_PLOTS_DIRECTORY =
  • CONFIG_DIRECTORY =
  5. The last section of ow_config.txt, below the heading "Objective Mapping Parameters", is where you set the various parameters (more details in ./doc/README.doc, p4-6).

  6. If this is the first time you are using this system, then the 4 directories /data/float_source, /float_mapped, /float_calib, and /float_plots should be empty. Decide how you want to organise your floats, e.g. under different project names or different investigator names. Then make identical subdirectories under each of these 4 directories. For example:

/data/float_source/project_xx
/data/float_mapped/project_xx
/data/float_calib/project_xx
/data/float_plots/project_xx

  7. Create the float source file (./data/float_source/project_xx/$flt_name$.mat) from the original netcdf files (more details in ./doc/README.doc, p6).

  8. Open MATLAB in the top directory. List all the float files in a cell array "float_names", with the corresponding subdirectories in another cell array "float_dirs". For example, float_dirs = { 'project_xx/'; 'project_xx/'; 'jones/'; 'jones/' }; float_names = { 'float0001'; 'float0002'; 'myfloat_a'; 'myfloat_b' };

Tip: if the files are not saved under a subdirectory and are only saved under ./float_source/, specify float_dirs = { ''; ''; ''; '' }, etc.

  9. Run ow_calibration.m.

matlab_owc's People

Contributors

cabanesc, gmaze, mscanderbeg


matlab_owc's Issues

Figures 6 and 8 when the time series is split

As discussed at the 9th DMQC discussion meeting, when the time series is split (by modifying calseries in set_calseries.m), the OWC software selects a different set of the most stable theta levels for each part of the time series to estimate the salinity adjustment.
However, in Figures 6 and 8, the set of selected theta levels is exactly the same regardless of whether the time series is split. This is because the plot_diagnostics_ow.m function recalculates the 10 most stable theta levels using the entire time series.

Here is an example of Figure 8, obtained both when the time series is not split and when it is split at cycle 32 (see the attached image).

The function plot_diagnostics_ow.m has been modified (in the "enhancements" branch of the matlab_owc repository) so that when the time series is split, Figure 6 and Figure 8 are displayed for each part of the time series, with the corresponding set of the 10 most stable theta levels.

Here is an example of the two Figures 8 obtained when the time series is split into two parts at cycle 32 (see the attached image).

In this case two versions of Figure 6 are also drawn. The first is for the two most stable theta levels in the first part of the time series (cycles 0-32), the second is for the two most stable theta levels in the second part of the time series (cycles 33-44).


The modified plot_diagnostics_ow.m can be downloaded here.

Any comments welcome!

update_salinity_mapping large number of reference data for cycle1

We were surprised that the number of reference data points selected for the first cycle is very much larger than for the rest. That appears to be true for the majority of mapped floats; we double-checked that this is also the case with the latest release of the repository. Did anyone look into this before?

Matlab2016b

Using the OW software in Matlab2016b, the function plot_diagnostics_ow.m encounters an error when plotting the 'profile locations with historical data' figure. The error is:


Conversion to cell from char is not possible.

Error in plot_diagnostics_ow (line 143)
xticklabels(ii(j),:)=c(1,:);

Error in ow_calibration (line 31)
plot_diagnostics_ow( flt_dir, flt_name, lo_system_configuration );


The correction for Matlab2016b is to change line 143 from:
xticklabels(ii(j),:)=c(1,:);

To:
xticklabels(ii(j),:)=cellstr(c(1,:));

I haven't tested this in earlier versions of MATLAB.

update_salinity_mapping crash when insufficient unique CTD data

OWC crashed in noise.m, called from update_salinity_mapping, when using CTD reference data for a float in the Southern Ocean (1901751). It turned out to be a problem in screening the selection of data passed to noise.m: the test for sufficient reference data was not specific enough, due to probable duplicate CTD references at the same location. The CTD reference data being used is CTD_for_DMQC_2021V02.

We implemented a fix by testing for uniqueness in location but this is probably not the ideal solution.
This is the code segment inserted in update_salinity_mapping near line 260:

--
% PO: System will crash calling noise, below, if all profiles are at the same location
number_of_unique_latitudes = length(unique(la_hist_lat));
number_of_unique_longitudes = length(unique(la_hist_long));
number_of_unique_profiles = number_of_unique_latitudes * number_of_unique_longitudes;
if length(la_hist_sal)>5 & number_of_unique_profiles==1
    display('PO: Climatology includes all duplicates');
end
% map historical data to float profiles -------------------------
%if( length(la_hist_sal)>1 ) % only proceed with mapping if there is more than one data point
% change config 129 -> at least 5 points are required
%
% PO: This test should be changed to test how many unique profiles are available
%
%PO if( length(la_hist_sal)>5 ) % only proceed with mapping if there are more than five data points
if( length(la_hist_sal)>5 & number_of_unique_profiles>1 ) % only proceed with mapping if there are
    % more than five data points ...
    % (PO) and if there is more than one unique location

@cabanesc and Annie Wong (@apswong) joined the discussion, and Annie suggested changing:

if( length(la_hist_sal)>5 )

to something like:

if( length(unique(la_hist_long))> 5 & length(unique(la_hist_lat))>5 & length(unique(la_hist_dates))>5 )

--
I implemented this and the code was able to run, but the mapped salinity is different from the previous method.
An image of la_mapped_salinity for each of the two methods is attached (OWC_mappedSal_compare).

There is less data mapped in the 'Annie' version than in the 'Peter' version. [Note also that there seems to be some strange (reversed-order?) data in cycle 15.]
The simple uniqueness test above is flawed in logic, since it treats lon, lat and date independently when they should actually be treated as 'triplets'. However, I would have thought that in practice it would work. This needs some more thought and discussion.
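The difference between the two tests can be sketched as follows (a minimal Python illustration of the logic, not the toolbox code; the counting functions and example values are made up for the demonstration). Testing lon, lat and date independently can reject a set of profiles that are in fact distinct, whereas counting unique (lon, lat, date) triplets handles them correctly:

```python
def count_unique_independent(lon, lat, dates):
    # flawed screening: each coordinate is tested separately,
    # so a regular grid of distinct profiles looks like few profiles
    return min(len(set(lon)), len(set(lat)), len(set(dates)))

def count_unique_triplets(lon, lat, dates):
    # count distinct (lon, lat, date) triplets instead
    return len(set(zip(lon, lat, dates)))

# six distinct profiles on a 2 x 3 grid of positions, three sampling dates
lon   = [10, 10, 10, 20, 20, 20]
lat   = [50, 55, 60, 50, 55, 60]
dates = [1,  2,  3,  1,  2,  3]
print(count_unique_independent(lon, lat, dates))  # 2 -> would fail a ">5" test
print(count_unique_triplets(lon, lat, dates))     # 6 distinct profiles
```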

Error in find_10thetas

From Antonella Gallo :

During DMQC analysis of float WMO 6901513, OWC gave me this error:

Error using horzcat
Dimensions of arrays being concatenated are not consistent.

Error in find_10thetas (line 222)
       S_temp(i,j) = interp1( [PTMP(ti,j), PTMP(ki,j)], [SAL(ti,j), SAL(ki,j)], Thetalevels(i) );

The problem is ki.
For one profile, the difference a = PTMP(ti,j) - PTMP(interval,j) takes the same value at two indices. For example, with ti=74, j=26 and interval = 73,74,75:
a =
0.5226
0
0.5226

So ki has two elements, [73 75], corresponding to the two identical values 0.5226, and [PTMP(ti,j), PTMP(ki,j)] then tries to concatenate a scalar with a two-element vector, which produces the horzcat error.

To solve the problem I added an if statement in find_10thetas, as you can see in the attached screenshot (Screenshot_OWC).
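The idea of the guard can be sketched in Python (an illustration of the fix idea only, not the actual patch to find_10thetas.m; the function name and arguments are hypothetical): if several candidate indices tie, keep one, and skip the interpolation when the two bracketing theta values are identical.

```python
import math

def interp_sal_on_theta(ptmp, sal, ti, ki_candidates, theta):
    # if several candidate indices tie (the [73 75] case above), keep the first
    ki = ki_candidates[0]
    t0, t1 = ptmp[ti], ptmp[ki]
    if t0 == t1:
        return math.nan  # degenerate bracket: linear interpolation is undefined
    # linear interpolation of salinity onto the requested theta level
    return sal[ti] + (theta - t0) * (sal[ki] - sal[ti]) / (t1 - t0)
```

For example, with ptmp = [5.0, 4.0] and sal = [35.0, 34.0], interpolating onto theta = 4.5 gives 34.5, while a tied bracket (equal theta values) returns NaN instead of crashing.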

remove "cal_COND" and "cal_COND_err" from the cal*.mat.

From A. Wong:

The two output variables in the cal*.mat files, "cal_COND" and "cal_COND_err", are actually "potential conductivity", and not the measured conductivity from the floats. If a float returns CNDC and there is salinity drift, they should not be used to fill CNDC_ADJUSTED. The correct way is to apply the multiplicative factor directly to CNDC:

CNDC_ADJUSTED = CNDC * pcond_factor;
CNDC_ADJUSTED_ERROR = CNDC * pcond_factor_err;
CNDC_ADJUSTED_QC = PSAL_ADJUSTED_QC;

To avoid confusion, "cal_COND" and "cal_COND_err" should be removed from the cal*.mat.
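The recommended adjustment can be sketched as follows (a Python illustration; the variable names follow the quoted Argo fields, but the helper function itself is hypothetical):

```python
def adjust_cndc(cndc, pcond_factor, pcond_factor_err):
    # apply the multiplicative drift correction directly to each
    # measured conductivity value, per the recommendation above
    adjusted = [c * pcond_factor for c in cndc]
    errors   = [c * pcond_factor_err for c in cndc]
    return adjusted, errors
```

For instance, a measured conductivity of 40.0 with pcond_factor = 1.001 and pcond_factor_err = 0.0002 gives CNDC_ADJUSTED = 40.04 and CNDC_ADJUSTED_ERROR = 0.008.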

Issue in interpolation of data using the SAF

In running the matlab_owc software for float WMO 1901877, I have encountered an error with the interpolation of data in the update_salinity_mapping block of calculations, where the software uses the SAF in the analysis. The error message refers to non-unique sample points used in the analysis.

Is there any way to avoid this error in the software other than disabling SAF in the config file?

I am using, combined, the most recent reference data: CTD_for_DMQC_2024V01 and ARGO_for_DMQC_2023V03.

The error message is as follows:

UPDATE_SALINITY_MAPPING: Working on profile 121
Error using matlab.internal.math.interp1
Sample points must be unique.

Error in interp1 (line 188)
VqLite = matlab.internal.math.interp1(X,V,method,method,Xqcol);

Error in frontalConstraintSAF (line 155)
T300=interp1(grid_pres(isok,i),temp,300);

Error in update_salinity_mapping (line 201)
frontalConstraintSAF(la_bhist_sal, la_bhist_ptmp, la_bhist_pres, la_bhist_lat, la_bhist_long, la_bhist_dates, la_bhist_Z, LAT, LONG, PRES, PTMP, SAL, po_system_configuration);

Error in ow_calibration_kamwal_v2 (line 39)
update_salinity_mapping( flt_dir, flt_name, lo_system_configuration )
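One possible workaround for the "Sample points must be unique" failure, sketched in Python (an illustration only, not the toolbox's fix; the helper name is hypothetical): drop duplicate pressure sample points, keeping the first value at each pressure, before interpolating.

```python
def interp_unique(pres, temp, target):
    # keep the first sample at each pressure value, then sort the
    # de-duplicated points and interpolate linearly between brackets
    first = {}
    for p, t in zip(pres, temp):
        first.setdefault(p, t)
    pts = sorted(first.items())
    for (p0, t0), (p1, t1) in zip(pts, pts[1:]):
        if p0 <= target <= p1:
            return t0 + (target - p0) * (t1 - t0) / (p1 - p0)
    raise ValueError("target outside sample range")

# duplicate sample at 100 dbar no longer breaks the interpolation
print(interp_unique([0, 100, 100, 300, 500], [10.0, 8.0, 8.0, 5.0, 2.0], 300))  # 5.0
```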

coord_floats.m missing?

Dear developers and colleagues in Argo DMQC.

I am fairly new to this, and I am currently trying to do a new round of the NorARGO floats, for which I am doing the salinity calibrations. After updating the OWC toolbox recently, I find that one of this summer's changes to the toolbox causes this error when running ow_calibration:

Undefined function or variable 'coord_float'.

Error in calculate_piecewisefit (line 164)
unique_coord_float = coord_float(calindex,:); % ccabanes 01/06/2020

Error in ow_calibration (line 29)
calculate_piecewisefit( flt_dir, flt_name, lo_system_configuration );

It turns out that the function 'coord_float' is not provided in the repository.

If I'm not mistaken, the reason this has gone undetected since the summer is that calculate_piecewisefit only calls coord_float when processing floats with "No good climatological data found" (we currently have a float that has gone into the Barents Sea), or perhaps when the position is missing for any reason.

Best regards. Even, IMR, Norway.

build_cov.m improvement and renaming

build_ptmp_cov.m calculates the vertical covariance matrix between the theta levels. This covariance matrix was used in OW to calculate the number of independent observations (NDF) and the error on the fit. In OW, NDF was probably overestimated, leading to errors on the fit that were too small.
In OWC, build_ptmp_cov.m was replaced by build_cov.m. To compute the covariance matrix, build_cov.m assumes, in addition, a profile-to-profile covariance, using the small spatial scales as the covariance scale. As a result, NDF can be significantly reduced compared to OW, depending on the float path and the small scales defined by the operator. However, NDF is now probably underestimated, particularly if the float does not move over a large distance.

It is suggested here to also introduce a covariance timescale (MAPSCALE_AGE_SMALL) when constructing the covariance matrix. The covariance matrix would then take into account the vertical, horizontal and time scales. This means that two profiles of a float that does not move over a long distance (compared to the small spatial scales) would no longer be correlated if sufficiently distant in time. In this case, this will increase the NDF, giving a value between those of OW and OWC.

To make it easier to distinguish the various versions of the codes, Annie suggested the following for OWC:

  • rename "build_cov.m" to "build_ptmp_xy_cov.m", pull "covarxy_pv.m" out into the main package (instead of hidden in build_cov.m);
  • create "build_ptmp_xyt_cov.m" to include age_small in the off-diagonal tiles, and use the existing "covarxyt_pv.m".
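The suggested space-time covariance can be sketched as a Gaussian that decays with both horizontal and temporal separation (a minimal Python illustration, not the toolbox's covarxy_pv.m/covarxyt_pv.m; the function and scale names are placeholders standing in for the configured MAPSCALE_* parameters):

```python
import math

def covar_xyt(dx, dy, dt, lx, ly, tau):
    # Gaussian covariance decaying with horizontal separation (scales lx, ly)
    # and, additionally, with time separation (scale tau, an "age" scale)
    return math.exp(-(dx / lx) ** 2 - (dy / ly) ** 2 - (dt / tau) ** 2)

# colocated profiles are fully correlated at zero time lag...
print(covar_xyt(0.0, 0.0, 0.0, 300.0, 300.0, 2.0))  # 1.0
# ...but decorrelate once far apart in time, which increases the NDF
print(covar_xyt(0.0, 0.0, 6.0, 300.0, 300.0, 2.0))  # exp(-9), about 1.2e-4
```

With the time term included, two profiles of a float that barely moves are no longer fully correlated once their time separation is large compared to tau, which is exactly the mechanism proposed above for bringing NDF between the OW and OWC values.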

"age_scale" variable legacy issue

Issue

Discussion around this issue was instigated HERE

Summary

During the initial development of the matlab OW code, only one age scale was used for finding the best historical data. Since then a second age scale, called scale_age_large, has been added. However, to allow old mapped files to continue being used, the original variable was not renamed to scale_age_small (to match the other variables, which come in large and small versions).

We are not planning to allow this problem to propagate into the new Python version; the variable name in the matlab version should therefore be changed to match the Python version (scale_age_small).

inconsistencies in find_10thetas

In the summer of 2020 (23-27 Jul) we (Birgit, Cécile and Kamila) had an e-mail exchange about some inconsistencies found in the find_10thetas.m script. I am opening this issue here so the discussion does not get lost and some day we fix this :)

There are two main problems:

  1. Sometimes the selected theta levels are not present in the entire time series, and therefore the diagnostic plots have empty spaces. Even if the operator selects time-series breaks, the function does not take them into account.
  2. In frontal regions the selected theta levels are found at very different depths, which also leads to misleading plots.
    An example (shared by Birgit) can be found here: 7900496_test_OWCall.docx

Moreover, it is unclear to us how these issues affect the further calculations of salinity differences.

@cabanesc @kamwal @BSHbirgit @apswong @gmaze
P.S. Should we have this discussion in our public QC forum, or leave it here?
