Comments (3)

Arthanis58 commented on September 5, 2024

Hello,
I would also welcome HDExaminer support; however, as I am not good with Python I am sticking to the web GUI, and it would be very helpful if I could input the exposure times in seconds. Is there any way to do this now with the batch .yaml file definitions?
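
A minimal workaround sketch, assuming the GUI expects a DynamX-style CSV whose 'Exposure' column is in minutes (the column name and unit convention are assumptions, not the pyHDX API), would be to convert the seconds values before loading the file:

```python
import pandas as pd

# Sketch of a possible workaround: convert exposure times recorded in seconds
# to minutes before loading the table. The 'Exposure' column name, the minutes
# convention, and the file names are assumptions for illustration only.
df = pd.read_csv("peptides_seconds.csv")   # hypothetical input file
df["Exposure"] = df["Exposure"] / 60.0     # seconds -> minutes
df.to_csv("peptides_minutes.csv", index=False)
```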

tuttlelm commented on September 5, 2024

I have created a pull request that includes the models.py changes and an additional script convert_data.py for the HDExaminer conversion.
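
For context, a rough sketch of what such a conversion can look like is below; the HDExaminer column names are assumptions, and the actual convert_data.py in the pull request may differ:

```python
import pandas as pd

# Hypothetical sketch of an HDExaminer -> DynamX-style conversion. The
# HDExaminer export column names below are assumptions, not convert_data.py.
def convert_hdexaminer(path: str) -> pd.DataFrame:
    """Map an HDExaminer peptide export onto DynamX-like columns."""
    df = pd.read_csv(path)

    rename = {
        "Protein State": "state",
        "Sequence": "sequence",
        "Start": "start",
        "End": "end",
        "Deut Time (sec)": "exposure",  # seconds in this assumed export
        "#D": "uptake",
    }
    out = df.rename(columns=rename)[list(rename.values())]

    # DynamX reports exposure in minutes; convert if the source was in seconds
    out["exposure"] = out["exposure"] / 60.0
    return out

# Usage (hypothetical file name):
# peptides = convert_hdexaminer("hdexaminer_pool_results.csv")
```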

One main reason for keeping replicates is that we use that type of data for other HDX-MS statistical analysis packages, so it is nice to be able to work with the same original data file for different applications. Leaving the data as replicates does tend to inflate the coverage plots, but I appreciate being able to see the replicate-to-replicate variability there (obviously there are other ways to do this as well). My preference is to keep the replicates within a single HDXMeasurement object; the per-residue calculations take care of the replicate averaging.
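
A minimal sketch of that kind of downstream replicate averaging (column names are assumptions, not the actual HDXMeasurement internals):

```python
import pandas as pd

# Sketch: average d-uptake over replicates per peptide and exposure, keeping
# the spread so replicate-to-replicate variability is not lost. The column
# names are assumptions for illustration only.
def average_replicates(peptides: pd.DataFrame) -> pd.DataFrame:
    grouped = peptides.groupby(["start", "end", "exposure"])["uptake"]
    return grouped.agg(
        uptake_mean="mean", uptake_sd="std", n_replicates="count"
    ).reset_index()
```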

Is the hdxms-datasets project for the raw data or just the analysis outputs? I'd be very interested in something that can translate between different raw data formats and metadata specifications. I have the opposite problem to yours in my pyHXExpress project, in that I have access only to HDExaminer outputs and not so much DynamX-type data and outputs.

Currently all of the HDExaminer outputs I am working with are for unpublished projects, but I'll see if I can track down something I am able to share.

Jhsmit commented on September 5, 2024

With respect to hdxms-datasets, at the moment the scope would be output in the form of peptide d-uptake tables. Currently this is a format where replicates are averaged together, but preferably the format would support keeping the replicates and let downstream software decide how to treat them. This way statistical testing can still be done on the datasets.
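
As an illustration of why keeping replicates helps, a sketch of a per-peptide comparison between two states (hypothetical column names; Welch's t-test chosen only as an example of the statistical testing that becomes possible):

```python
import pandas as pd
from scipy import stats

# Sketch: with per-replicate uptake values available, a simple per-peptide
# Welch's t-test between two states is possible; with pre-averaged tables it
# is not. Column names are assumptions for illustration only.
def compare_states(df: pd.DataFrame, state_a: str, state_b: str) -> pd.DataFrame:
    rows = []
    for (start, end, exposure), grp in df.groupby(["start", "end", "exposure"]):
        a = grp.loc[grp["state"] == state_a, "uptake"]
        b = grp.loc[grp["state"] == state_b, "uptake"]
        if len(a) > 1 and len(b) > 1:
            t, p = stats.ttest_ind(a, b, equal_var=False)
            rows.append(
                {"start": start, "end": end, "exposure": exposure, "t": t, "p": p}
            )
    return pd.DataFrame(rows)
```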

The format doesn't have to be the same for every dataset, so there can be DynamX-formatted peptide output data files or HDExaminer-formatted output files, as long as the metadata specifies which format it is; a reader function can then take that metadata and read the tables according to the format that was used.
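
A sketch of such a metadata-driven reader (all names here are hypothetical, not an existing hdxms-datasets API):

```python
import pandas as pd

# Hypothetical dispatch: the dataset metadata declares the table format and a
# matching parser is selected. The "format" key and read_* functions are
# assumptions for illustration only.

def read_dynamx(path: str) -> pd.DataFrame:
    return pd.read_csv(path)  # DynamX state data exports are plain CSV

def read_hdexaminer(path: str) -> pd.DataFrame:
    return pd.read_csv(path)  # a real parser would also remap columns/units

READERS = {"dynamx": read_dynamx, "hdexaminer": read_hdexaminer}

def read_peptide_table(metadata: dict, path: str) -> pd.DataFrame:
    fmt = metadata["format"].lower()
    if fmt not in READERS:
        raise ValueError(f"Unknown peptide table format: {fmt!r}")
    return READERS[fmt](path)
```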

Ideally, there should also be some agreement between users on which fields the returned dataframes contain: e.g. is it 'time', 'exposure', or 'exposure_time' (and in which units); 'd_uptake' or 'uptake'; should there be an m0 field; etc.
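
One way to make that agreement concrete, sketched with purely illustrative canonical names (not an agreed standard):

```python
import pandas as pd

# Illustrative canonical schema: every reader maps its source columns onto one
# agreed set of names and units. The names and units chosen here are
# assumptions for illustration only.
CANONICAL = {
    "state": None,
    "sequence": None,
    "start": None,
    "end": None,
    "exposure": "s",    # exposure time in seconds
    "d_uptake": "Da",   # deuterium uptake
}

def validate_columns(df: pd.DataFrame) -> pd.DataFrame:
    missing = set(CANONICAL) - set(df.columns)
    if missing:
        raise ValueError(f"Peptide table is missing canonical columns: {sorted(missing)}")
    return df
```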
