biopeaks's Issues

Making community and contributing guidelines more visible

related to JOSS review (openjournals/joss-reviews#2621)

At the moment you have a contributor's guide in the documentation. That's cool and I have some suggestions on how to make this more visible:

New sections / re-ordering of README that is displayed on GitHub.

  • most users will see this README as the first thing
  • a new section # Documentation would be nice --> the heading will make it immediately obvious where people have to click
    • currently the link to the docs is buried in "general info"
    • perhaps also worth thinking about which type of documentation you publish --> is it the "stable" version or "latest" (i.e., unreleased changes on master) or both?
  • another new section # Contributing would be good with a brief sentence that contributions (of kind XYZ) are welcome ... and then provide a direct link to the contributors guide in the docs

Code of Conduct

Add code coverage to tests

related to JOSS review (openjournals/joss-reviews#2621)

It would be great to add a tool that measures what proportion of your code is actually covered by the tests.

It's helpful to:

  • give users a better impression of how far they can "trust" the tests
  • give the developers a better idea of what aspects of the code are tested, and what aspects are not tested

This would be quite easy to implement, e.g., using pytest extensions and codecov as in this repository: https://github.com/sappelhoff/pyprep
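As a minimal sketch of what this could look like (assuming the pytest-cov plugin, one of the pytest extensions mentioned above, is added as a test dependency):

    # Equivalent to running "pytest --cov=biopeaks --cov-report=term-missing"
    # on the command line; requires pytest and the pytest-cov plugin.
    import sys
    import pytest

    if __name__ == "__main__":
        sys.exit(pytest.main(["--cov=biopeaks", "--cov-report=term-missing"]))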

Finally, you could add a coverage badge next to your tests:

[screenshot: example coverage badge]

custom zoom gets undone during manual peak editing + other issues

related to JOSS review (openjournals/joss-reviews#2621)

some notes while trying the new getting started section from #12:

  • after editing some peaks, and specifically while adding a new one manually (click and press "a"), my currently "zoomed in" window is resized to the full window again and I have to zoom back in ... is this an error you also observe? Or is it Linux (Ubuntu 18.04) specific?
  • manually edited peaks are indistinguishable from the automatically selected peaks --> both show up as purple dots for me. It'd be nice to be able to distinguish them
  • manually edited peaks that I put into the "wrong" place are not "autocorrected" when pressing the peak autocorrection button ... is that intended?

Less code to maintain?

I'm quite impressed by your work on this GUI, which got me thinking: couldn't you use NK as a dependency here (since it basically doesn't add any dependencies beyond what you already have) to have less code to maintain (since there's probably some duplication)? So NK could focus on the algorithms and biopeaks on the GUI?

Suggestion: Add classifiers, long_description, and more links to setup.py

related to JOSS review (openjournals/joss-reviews#2621)

Looking at your package on pypi, I see that a description is missing there:

[screenshot: the biopeaks page on pypi, with the description missing]

Personally, I prefer seeing a description on pypi -> and usually it's easy to re-use the README from GitHub and have it rendered on pypi as well. For example, see these lines in another package. You need to populate long_description in setup.py.

Next to the description, I would advise you to add classifiers to your setup.py (https://pypi.org/classifiers/) --> metadata makes it easier for users (and other software) to find your package, or to filter packages according to their needs.
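To make the long_description and classifiers points concrete, here is a minimal sketch of the relevant setup.py pieces (the README filename, the content type, and the specific classifiers are my assumptions and would need to be adapted):

    from setuptools import setup

    # Sketch only: assumes the GitHub README is a Markdown file.
    with open("README.md", encoding="utf-8") as f:
        long_description = f.read()

    setup(
        name="biopeaks",
        # ... existing arguments ...
        long_description=long_description,
        long_description_content_type="text/markdown",
        classifiers=[  # pick applicable entries from https://pypi.org/classifiers/
            "Programming Language :: Python :: 3",
            "Intended Audience :: Science/Research",
            "Topic :: Scientific/Engineering",
            "Operating System :: OS Independent",
        ],
    )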

Finally, I want to suggest adding more links to your setup.py. You can use the following syntax (link to example):

          project_urls={
              'Documentation': 'https://mne.tools/mne-bids',
              'Bug Reports': 'https://github.com/mne-tools/mne-bids/issues',
              'Source': 'https://github.com/mne-tools/mne-bids',
          },

which then renders on pypi:

[screenshot: the project links as rendered on pypi]

PS: You can also edit the GitHub "About" section to add the link to your documentation:

[screenshot: the GitHub "About" section with a documentation link]

then users won't have to go through the README to find the link to the docs.

Discussion: Adopting a consistent style for docstrings

related to JOSS review (openjournals/joss-reviews#2621)

Looking at the code base, I find functions / methods / classes that are sometimes not documented, and sometimes extensively documented but in a seemingly unstructured way. Sometimes I also see documentation of function parameters, such as here:

Parameters
----------
detector : function

but more often than not, the parameters are not documented.

I would strongly recommend adopting a consistent docstring style (such as numpydoc) and applying it to the codebase. The style can also be enforced with flake8 extensions such as flake8-docstrings.
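For illustration, here is what a numpydoc-style docstring could look like on a hypothetical function (the name and parameters are made up for the example; only detector : function is taken from the snippet above):

    def find_peaks(signal, sfreq, detector):
        """Find peaks in a physiological signal.

        Parameters
        ----------
        signal : ndarray
            The raw signal.
        sfreq : int
            Sampling frequency of ``signal`` in Hz.
        detector : function
            Peak detection function applied to ``signal``.

        Returns
        -------
        peaks : ndarray
            Indices of the detected peaks in ``signal``.
        """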

Consistent, extensive (but concise) docstrings make life easier for contributors, reviewers, and your future self ... and perhaps also for users --> but for that last part I am not entirely sure, because this is a GUI package after all, and users are not expected to work with an API.

That latter sentence is the reason why I label this issue as "Discussion" ... I can imagine that other people have different opinions about this point.

Adding .zenodo.json?

related to JOSS review (openjournals/joss-reviews#2621)

EDIT: I just saw that you already added the zenodo badge. I probably overlooked it because I was expecting it at the top of the README next to the testing badge. Feel free to ignore my comment on that badge; however, my comment on the .zenodo.json is still valid.


Using a Google search I saw that you have stored a record of biopeaks on Zenodo -> https://zenodo.org/record/3700054

(probably via the GitHub integration, but I cannot check right now because Zenodo seems to be down)

I think that's great. Here are two suggestions:

  1. add the zenodo badge to your repository README (alternatively, don't add it, and add the JOSS badge instead) --> the reasoning is to point users to an archived version of the code that will persist even if GitHub is deleted someday. Once (if) your paper is published in JOSS, the JOSS badge would be sufficient, because the JOSS paper will also contain a link to the Zenodo archive.
  2. (only if you are in fact using the Zenodo-GitHub integration, which I couldn't verify at the moment of writing) --> add a .zenodo.json file to the root of your project. These files are not very well documented (unfortunately ... see zenodo/zenodo#1606), but they allow you to specify metadata for the Zenodo record. As an example, see: https://github.com/mne-tools/mne-bids/blob/master/.zenodo.json

Small suggested paper edits

As part of the JOSS review (openjournals/joss-reviews#2621), I've been reading the paper, with some small notes.

Quick note:

  • it seems a bit surprising to me that mean precision and sensitivity values are exactly the same (in the table). This could of course be correct, but it might be worth double checking there wasn't a copy/paste error here (?)

Small notes for the references:

  • I don't think the Elgendi paper needs to reference the editor (or, at least, it is inconsistent, since for most papers you don't do this). The McKinney paper also lists editors, but from the PDF I'm not sure the listed editors are accurate, so I might check and remove them there as well.
  • the 'Howell' paper is the only one that indicates a month as part of the date, and I think it probably doesn't need to do so
  • I believe the numpy citation should have the first author's last name as van der Walt (all part of the last name). It seems to have been parsed to put the 'van der' as a middle name

potentially add .mailmap for consistent committer identities

Hi @JanCBrammer, I am one of the reviewers of your submission at JOSS openjournals/joss-reviews#2621 and over the next weeks I'll dive into the repo and ... review it :-)

I'll try to make small and targeted issues whenever I can; this is the first one:

When trying to find which people contributed to this code, I ran git shortlog --summary --email and got the following output:

     7  Jan C. Brammer <[email protected]>
     1  Jan C. Brammer <[email protected]>
   242  JohnDoenut <[email protected]>
   137  JohnDoenut <[email protected]>

I can see that it's all you, but you might want to consider adding a .mailmap file:

it is used to map author and committer names and email addresses to canonical real names and email addresses

so that you can merge your 4 identities into one reasonable identity (of your choice).

For example, see this .mailmap --> https://github.com/mne-tools/mne-bids/blob/master/.mailmap

Edit: To clarify the goal: if there is only one author, then running git shortlog --summary --email should also yield only one entry (not 4 as in this case) ... and that goal can be achieved with .mailmap

Question/suggestions about benchmark files

related to JOSS review (openjournals/joss-reviews#2621)

In the tests section of the docs you explain nicely how one benchmark from biopeaks/benchmarks can be run.

For the other benchmarks I feel like some documentation would come in handy (related to #11 (comment))

For example, I think that I, as a user, should set these two lines from benchmark_PPG.py to the paths where I downloaded the Capnobase IEEE TBME benchmark dataset?

record_dir = r"C:\Users\JohnDoe\surfdrive\Beta\example_data\PPG\signal"
annotation_dir = r"C:\Users\JohnDoe\surfdrive\Beta\example_data\PPG\annotations"
records = os.listdir(record_dir)

and I assume that benchmark_ECG_local.py can be used when the GUDB data has been downloaded, instead of streaming it via benchmark_ECG_stream.py? --> all of this should be cleared up using module-level docstrings (see link to doc comment above) and smaller inline comments.
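As a purely illustrative sketch (the wording is mine, not taken from the repository), a module-level docstring at the top of benchmark_PPG.py along these lines would already answer most of the questions above:

    """Benchmark PPG peak detection against the CapnoBase IEEE TBME dataset.

    Before running this script, download the CapnoBase IEEE TBME benchmark
    dataset and point `record_dir` and `annotation_dir` below to the local
    folders containing the signal files and the annotation files, respectively.
    """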

Another note: commented-out code in scripts like here ...

# plt.figure()
# plt.plot(ecg)
# plt.scatter(manupeaks, ecg[manupeaks], c="m")
# plt.scatter(algopeaks, ecg[algopeaks], c='g', marker='X', s=150)

... could be improved with an if / else clause and a simple boolean flag set at the top of the script. That way of coding is often easier to understand and leads to fewer errors.
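For illustration, a sketch of that pattern, re-using the variable names from the snippet above (the SHOW_PLOTS flag name is just an assumption of mine):

    import matplotlib.pyplot as plt

    SHOW_PLOTS = False  # flip to True to visualize the detections while benchmarking

    # ... benchmark code that defines ecg, manupeaks, and algopeaks ...

    if SHOW_PLOTS:
        plt.figure()
        plt.plot(ecg)
        plt.scatter(manupeaks, ecg[manupeaks], c="m")
        plt.scatter(algopeaks, ecg[algopeaks], c="g", marker="X", s=150)
        plt.show()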


Suggestion in case you didn't know: Documenting code with sphinx and sphinx-gallery allows you to integrate such "benchmarks" as examples that get built automatically upon building your documentation --> that way users can inspect the code and all of its results online from their browser, instead of having to run it themselves.

You can see that in action here: https://mne.tools/mne-bids/stable/auto_examples/convert_eeg_to_bids.html#sphx-glr-auto-examples-convert-eeg-to-bids-py

Recommended Code CleanUps

As part of the JOSS review (openjournals/joss-reviews#2621) I was scrolling through the code, and I'm collecting some code suggestions. Broadly, I don't consider these strictly necessary for the paper to proceed, but I thought I would collect some notes on things that might be useful for the project.

Code Suggestions:

  • Check for API consistency. I suggest that any parameter referring to the same thing should have the same name across the module (see the sketch after this list).
    • For example, 'sampling rate' is sfreq in biopeaks/resp but fs in other parts of the code.
    • Just as a note, in my modules, for argument names that are likely to recur a lot, I just keep a list of what they are all called. This can even be collected and made part of contributor materials, so that new code can be written to follow naming standards.
  • Relatedly, consider variable naming. The module mostly uses snake_case, though with some exceptions, such as currentFile and statusBar in view.py. It is also a little inconsistent on if and when words are separated out (with underscores). Labels like prev_peaks and next_peaks are easier to read than labels like amppeaks and periodcintp. Some internal names might be able to be safely updated, some might be grandfathered in - but it might be worth trying to set a consistent style from now forward, for more consistent and easily readable names.
  • Maybe run a linter?
    • There are a few really long lines here and there (like in test_heart), which are annoying to scroll / read
    • I consistently run pylint on my modules, checking for any aspects I care about, and it's a nice way to keep code quality good & consistent
  • Check for and remove commented out code
    • There is quite a lot of commented-out code, including lots of things that I think relate to debugging, or are maybe old versions. Old versions of things can typically be removed (it is version controlled), or if things are kept for a reason, it might be worth adding a comment annotating why.
    • If you want to keep debug-related code, I might suggest having a DEBUG mode: set a global flag somewhere, and then these lines can look like if DEBUG: print('doing thing'). This, I think, would be cleaner in the code, and also much easier to use.
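To make the naming-consistency point from the first bullet concrete, a tiny illustrative sketch (the function and parameter choices are hypothetical, not taken from biopeaks):

    # The point is only that "sampling rate" is always called sfreq and that
    # names stay in snake_case, whichever module they live in.
    def find_resp_extrema(signal, sfreq):
        ...

    def find_ecg_peaks(signal, sfreq):
        ...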

I do get that, as a GUI, perhaps many users are less likely to interact directly with the code. But I would still recommend adopting and continuing with good code quality & consistency: it makes it easy for interested users to check the code, easier to onboard people who might want to contribute, and helps general maintenance (including letting you yourself interact with the code more quickly).

GUI Questions

As I was going through the user guide (https://jancbrammer.github.io/biopeaks/user_guide.html), there were a couple things that either I couldn't figure out how to do (could be better documented), or that don't seem to be available (might want to be added).

  • is there a way to re-open the configuration box, once it has been closed?
    • at some point I closed it, and it wasn't clear how to get it back
  • is there a way to unload a currently loaded signal?
    • I didn't see a clear way to do this, so I would just restart the program when I wanted to load a new signal

Make use of a dedicated example file in "getting-started" section in docs

related to JOSS review (openjournals/joss-reviews#2621)

I was just trying to follow the getting started section in the docs, and noticed that the "exemplary workflow on a single file" stays relatively abstract.

I think it'd be great to add a dedicated example file to the repository and make use of that file in that section of the docs: Instead of describing the functions as currently done, describe single steps and expect users to follow along.

For that to work, example files should ideally be shipped with the installation and/or the docs should explain how to easily get these files.

There should also be a practical example of reading a file in "custom" format, like here but with a real file that users can open and look at.

Suggestion: Add more detailed list of features to the documentation site

Typically, a documentation site would have an API listing (for example: https://neurodsp-tools.github.io/neurodsp/api.html)

Since biopeaks is a GUI, this isn't necessarily needed - however, it strikes me that there is not an easy way, from the documentation site, to see a list of available features. The landing page has a high-level version, but it's not so specific. The paper mentions some specific algorithms available, but this is not obviously available to the user who peruses the site. I think it might make sense to add something akin to an API listing, maybe a new page on the docsite, which lists out the details of what is available. This doesn't have to relate to each function, but it would be nice to list out what's in the module (listing out, for example, available segmenting algorithms, peak detectors, etc.). An alternative is to add more detail to the list on the landing page (though it might be useful to have a separate broad overview, to capture the general stuff, and a more detailed list that can expand as things are added).

Idea: Adding pictures to the User Guide Walk-Throughs

As a possible suggestion, one thing that might be useful in the user guide, specifically in the guided walkthroughs for ECG and RESP data, would be to add some pictures of "this is what the screen should look like right now". This should hopefully be relatively easy, since you use a standardized piece of data, and I think it would help the new user, especially right at the beginning (at moments, for example, I wasn't sure if I had done things right or if I could see the expected display).

Also, it might be useful to add a note about any steps in the process that may take some time. Not sure if this is expected, but when I was running things, it seemed to take a while to preview a segment before it could be confirmed. If this is expected, I would just note in the tutorial "this step might take a few seconds", etc.

Questions about the default method of ecg_peaks() in neurokit2

Hi, I am a university student at the University of Manchester working on my dissertation. I have used a library that you participate in (neurokit2).

Dominique told me that you wrote the default method of ecg_peaks(). Could you tell me the general idea behind your method (a paper would be best), so I can describe it in the methods part of my dissertation? (It's your method; I will mention that and cite both biopeaks and neurokit2.)

However, if you don't plan to make your default method public, please let me know so that I can switch to another method.

English is not my mother tongue, so I hope you don't mind if anything comes across the wrong way. Thank you for your contribution to open source.
