
Comments (8)

kurtisanstey commented on August 21, 2024
  • Hump gets worse above 900 m; appendix figure showing every 40 m?
  • Based on continuum amplitude and slope, the hump only affects the continuum range above -680 m.
  • Hump is present throughout the depth range at frequencies above 1.2e-4 Hz.

Annual PSD for appendix - may affect continuum range above -680 m:





kurtisanstey commented on August 21, 2024

Raw vs. 15-minute data, Oct 10, 2013

u: [raw and 15-minute images]

Mean Backscatter: [raw and 15-minute images]

Beam intensity: [raw and 15-minute images]

Beam correlations: [raw and 15-minute images]


kurtisanstey commented on August 21, 2024
  • Smooth one day (Oct 10) of raw data to 15 minutes using both np.convolve (450-point boxcar filter divided by 450 so the weights sum to 1) and ds.coarsen, and compare the results with each other and with the ONC 15-minute data (a minimal sketch follows this list).
  • Using xarray's coarsen reproduces the ONC 15-minute data exactly (they probably used this function).
  • Using numpy convolution reproduces the ONC 15-minute data, but with gaps wherever NaNs were present in the raw data.
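A minimal sketch of the two smoothing approaches, assuming the raw 2-second data sit in an xarray Dataset `ds` with a velocity variable `u` and `time`/`depth` dimensions (these names are assumptions, not taken from the actual notebook):

```python
import numpy as np

window = 450  # 15 min / 2 s = 450 samples per averaging window

# xarray coarsen: block-average non-overlapping 450-sample windows
# (skipna=True by default, so isolated NaNs within a window are ignored)
ds_15min = ds.coarsen(time=window, boundary="trim").mean()

# numpy convolution on a single depth bin: 450-point boxcar normalized to
# sum to 1, then subsample every 450th point; NaNs propagate through windows
u_raw = ds["u"].isel(depth=0).values
box = np.ones(window) / window
u_smooth = np.convolve(u_raw, box, mode="same")
u_15min = u_smooth[::window]
```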

Raw (2-second, ONC): [image]
Averaged (15-minute, ONC): [image]
Coarsened (15-minute, xarray from raw): [image]
Convolved (15-minute, numpy from raw): [image]


  • Compute the PSD of the raw data to see if the hump is present (a minimal sketch follows this list).
  • PSDs of two months of raw data show that the hump and spikes are present; they are not a result of averaging.
  • As with the 15-minute data, there is little effect on the specific continuum range below the threshold depths (see the end of this comment).
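A minimal sketch of the raw-data PSD for one depth bin, assuming a 1-D velocity series `u_raw` sampled every 2 seconds (the variable name and the Welch segment length are assumptions):

```python
import numpy as np
from scipy import signal

fs = 1.0 / 2.0  # sampling frequency in Hz for the 2-second raw data

# NaNs must be filled or removed before an FFT-based estimate
u_clean = np.nan_to_num(u_raw, nan=np.nanmean(u_raw))

# Welch periodogram; the hump should show up above ~1.2e-4 Hz if it is
# in the raw record and not an artifact of the 15-minute averaging
freq, psd = signal.welch(u_clean, fs=fs, nperseg=2**14)
```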

Raw PSD for appendix:





  • The hump only 'goes away' at lower depths because the continuum amplitude rises to conceal it, as seen in the 'zoomed out' PSD.
  • There is also a very notable spike at 3e-3 Hz that diminishes with depth.

'Zoomed out' raw PSD:


  • Evaluate the 15-minute data for correlation and intensity thresholds using ds.where, and see whether a significant portion of the deep data is affected (a minimal sketch follows this list).
  • Echo intensity is best for evaluating noise, as the ds.where screen begins eliminating data with just a slight threshold increase. Raw data colour bars were set to 55-120 counts, and significant data is eliminated at a threshold of just 65.
  • Correlation is more difficult to assess, as significant data is not eliminated until a very high threshold of 120 counts, where the raw data colour bars were set to 75-130.
  • From both variables it is clear that most of the 'bad' data is above -700 m and rarely dips below this level.
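A minimal sketch of the ds.where screening, assuming the 15-minute Dataset `ds` carries echo intensity and beam correlation variables (the names `intensity` and `corr` and the keep/reject direction are assumptions):

```python
# keep samples at or above the 65-count echo intensity threshold;
# everything below it becomes NaN
ds_int = ds.where(ds["intensity"] >= 65)

# same idea for beam correlation, using the higher threshold noted above
ds_corr = ds.where(ds["corr"] >= 120)
```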



kurtisanstey commented on August 21, 2024

@jklymak

See comment directly above this one for updates on data quality. If there are too many plots or the comments are tough to follow, we can discuss on Wednesday.


jklymak commented on August 21, 2024

@kurtisanstey That looks fine. What happens when you combine these two into a flag on the velocity? It looks like not a lot is gained from screening on full resolution versus 15-minute averages, which is good.


kurtisanstey commented on August 21, 2024
  • Check long-term raw data for threshold consistency.

2 months screened intensity (raw): [image]


  • Combine the correlation and intensity threshold data to make a 'flag' for the velocity data, and see how it looks.
  • Need to check long-term threshold values for better screening.
  • A preliminary test on one day of 15-minute data produces a velocity screen (values of 1 or NaN only) that could be applied as shown below (a minimal code sketch follows the images):

Screen: [image]
Velocity: [image]
Velocity after applying screen: [image]
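A minimal sketch of combining the two screens into a single velocity flag (1 or NaN) and applying it, using the same hypothetical variable names as above (the exact threshold values here are assumptions taken from the surrounding discussion):

```python
import numpy as np
import xarray as xr

# True where both thresholds pass, False otherwise
good = (ds["intensity"] >= 65) & (ds["corr"] >= 115)

# screen of 1s and NaNs, as described in the bullets above
screen = xr.where(good, 1.0, np.nan)

# multiplying by the screen masks the velocity where either test fails
u_screened = ds["u"] * screen
```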


kurtisanstey commented on August 21, 2024
  • Check ds.coarsen for the number of good data points per window.
  • The coarsen function does not return the number of good data points per window (checked the xarray documentation and source code). It uses the mean() function with skipna=True to determine each window value. If set to skipna=False, it averages the data exactly as np.convolve() does, with NaN for any window containing a NaN. Neither skipna nor np.convolve() provides a count of good data points per window (a minimal sketch follows this list).
  • Check the long-term 15-minute data for threshold consistency and decide how to cut the data (mask or cut by depth). Use percent good vs depth to determine a good cut-off depth.
  • Percent good vs depth for two years of 15-minute data suggests that an echo intensity threshold of 65 and/or a correlation threshold of 115 is ideal for screening the data, and that the threshold depth is fairly consistent through time.
  • For these threshold values, there is a steep, step-like jump from 20% to 90% good data between -600 and -700 m.
  • Propose cutting plots at -600 m, with analysis only below -700 m.
  • Some seasonal variation in good data in fall (more biological scatterers higher in the water column, so better reflection).
  • Cut plots at -600 m with an analysis line at -700 m? Why not just cut at -700 m?
  • Update writing with details.
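A minimal sketch of the skipna behaviour and a percent-good-vs-depth calculation, using the same hypothetical Dataset layout as above (dimension names `time` and `depth` are assumptions):

```python
# default skipna=True: NaNs within a 450-sample window are ignored
mean_skip = ds.coarsen(time=450, boundary="trim").mean()

# skipna=False: any NaN in a window makes the whole window NaN,
# matching the behaviour of the normalized np.convolve() boxcar
mean_strict = ds.coarsen(time=450, boundary="trim").mean(skipna=False)

# percent good vs depth: fraction of screened velocity samples (from the
# earlier flag sketch) that survive the thresholds, per depth bin
pct_good = u_screened.notnull().mean(dim="time") * 100
```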



kurtisanstey commented on August 21, 2024

Updated in writing. Archiving for reference.

