neuroneural / brainchop

Brainchop: In-browser 3D MRI rendering and segmentation

Home Page: https://neuroneural.github.io/brainchop/

License: MIT License

HTML 1.30% JavaScript 73.60% Python 0.55% CSS 14.83% Less 1.66% SCSS 1.68% Jupyter Notebook 6.38%
deep-learning 3d-segmentation frontend-app javascript neuroimaging pyodide tensorflowjs three-js medical-imaging mri

Contributors

farfallahu, hanayik, mmasoud1, neurolabusc, sergeyplis

brainchop's Issues

Smoothing in the label viewer

Label viewer labels should not be smoothed by default, only when a user specifically requests it.

[Documentation]: Add author contributions

In paper.md, could you please add an author contributions statement detailing each author's role in the software? For example, you could categorize contributions according to the CRediT taxonomy by including a section:

## Author contributions

We describe contributions to this paper using
the CRediT taxonomy [@credit].
Writing – Original Draft: <<insert appropriate author initials here>>;
Writing – Review & Editing: <<insert appropriate author initials here>>;
Conceptualization and methodology: <<insert appropriate author initials here>>;
Software and data curation: <<insert appropriate author initials here>>;
Validation: <<insert appropriate author initials here>>;
Resources: <<insert appropriate author initials here>>;
Visualization: <<insert appropriate author initials here>>;
Supervision: <<insert appropriate author initials here>>;
Project Administration: <<insert appropriate author initials here>>;
Funding Acquisition: <<insert appropriate author initials here>>;

and then adding a credit citation to paper.bib:

@article{credit,
  author  = {Brand, Amy and Allen, Liz and Altman, Micah and Hlava, Marjorie and Scott, Jo},
  title   = {Beyond authorship: attribution, contribution, collaboration, and credit},
  journal = {Learned Publishing},
  volume  = {28},
  number  = {2},
  pages   = {151-155},
  doi     = {10.1087/20150211},
  url     = {https://onlinelibrary.wiley.com/doi/abs/10.1087/20150211},
  year    = {2015}
}

ref: JOSS Review

[Bug]: Segmentation model info popup cuts off content

The info popup that appears when you click the info button next to the segmentation model selection dropdown has a couple of issues:

  • It cuts off the content when the information text exceeds a certain length. For me, this happened with the "FS aparc+aseg Atlas 104 (failsafe)" model.
  • If I click the info button for one model and then click on the info popup, the popup box stays open. If I then select another model, sometimes the popup content does not change, and other times it shows information different from what I saw when I first clicked the info button for that model, instead saying something like "This model needs dedicated graphics card" along with a checkbox that doesn't do anything.

ref: JOSS Review

A confusing menu of available models

The current list of models looks cluttered and does not convey what is unique and important about brainchop.
Minimally, I suggest the following modification, which keeps the content as is but makes clear, through the ordering, which models are unique and powerful:

  1. Full Brain Single Pass GWM
  2. 50-ROI Atlas
    • low mem, slower
    • high mem, faster (may not run)
  3. 104-ROI FS aparc+aseg Atlas
    • low mem, slower
    • high mem, faster (may not run)
  4. Gray-White Matter (large)
    • low mem, slower
    • high mem, faster (may not run)

Leave only the above under models: just four models and two ways to run each. Eventually we may choose the way to run a model automatically, based on the user's browser capabilities.
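The automatic choice between the two run modes could be a small capability check. A hypothetical sketch in Python (brainchop itself is JavaScript, and the inputs `webgl2` and `device_memory_gb`, as well as the 8 GB threshold, are assumptions for illustration, not real brainchop APIs):

```python
def pick_run_mode(webgl2: bool, device_memory_gb: float) -> str:
    """Pick the run mode for a model from rough browser capabilities.

    Hypothetical heuristic: the faster full-volume pass needs WebGL2 and
    plenty of memory; otherwise fall back to the slower low-memory mode.
    """
    if webgl2 and device_memory_gb >= 8:
        return "high mem, faster"
    return "low mem, slower"
```

For example, `pick_run_mode(True, 16)` selects the fast mode, while `pick_run_mode(True, 4)` falls back to the low-memory one.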

If we still want to let the user perform the two operations that currently clutter the menu without adding clarity or much functionality, let's add a divider line labeled "Operations" inside the "Models" menu, followed by:

  1. Extract the Brain (large)
  2. Extract the Brain (light)
  3. Compute Brain Mask (light)
  4. Compute Brain Mask (large)

Ideally, the above menu would also contain only two operations: extract and compute.

Change the overlay range and colormap

The results look much more attractive if the default label range in the overlay is complete (0–49 for the 50-class model), the colormap is spectrum, and the transparency is 0.5.

[Bug]: Sidebar does not scroll on home page

On the home page of brainchop.org, the sidebar (the one with options "Open Brain T1 MRI," "Segmentation Options," etc.) does not scroll and is cut off at the bottom on my MacBook. Would it be possible to make the sidebar scrollable?

Also, perhaps you could make each of the sidebar cards collapsible, so that users can hide cards they are not using.

ref: JOSS Review

Meshnet to tfjs conversion, memory problem

I would like to use MeshNet for knee MRI bone segmentation. The training results of MeshNet using Catalyst are excellent, but I am facing challenges in implementing an efficient model for use in Brainchop. Additionally, I plan to deploy Brainchop not only on machines dedicated to deep learning but also on regular computers with integrated GPUs, 16 GB RAM, etc. A three-hour training example is shown below.
To address this, I trained the model on subvolumes. However, the PyTorch to TensorFlow.js conversion overloads 30 GB of memory at the ONNX export stage. As a workaround, I split the export into chunks. Unfortunately, this led to memory overloads in Brainchop, causing crashes. Consequently, I opted to process each subvolume separately and accumulate the results. Despite these efforts, I still cannot achieve results comparable to the example models in Brainchop. Being relatively new to this topic, I believe I may be overlooking something important; any guidance would be appreciated.

import os

import onnx
import tensorflowjs as tfjs
import torch
from onnx2keras import onnx_to_keras

# MeshNet, fuse_bn_recursively, and fixjson_file come from the training code
# (not shown here).

def process_subvolumes(model, subvolume, device):
    with torch.no_grad():
        subvolume = subvolume.to(device)
        output = model(subvolume)
    return output

volume_shape = [256, 256, 256]
subvolume_shape = [128, 128, 128]
n_subvolumes = 1024
n_classes = 3
atlas_classes = 104
scube = 64  # 64

model_path = '/kaggle/working/logs/best_full.pth'

device_name = "cuda" if torch.cuda.is_available() else "cpu"
device = torch.device(device_name)

meshnet_model = MeshNet(n_channels=1, n_classes=n_classes)
meshnet_model.load_state_dict(torch.load(model_path, map_location=device)['model_state_dict'])
meshnet_model.to(device)
mnm = fuse_bn_recursively(meshnet_model)
mnm.model.eval()

# Generate random input volume
x = torch.randn(1, 1, volume_shape[0], volume_shape[1], volume_shape[2], requires_grad=True)

# Split the input volume into subvolumes
subvolumes = x.unfold(2, subvolume_shape[0], subvolume_shape[0]).unfold(3, subvolume_shape[1], subvolume_shape[1]).unfold(4, subvolume_shape[2], subvolume_shape[2])

# Initialize the result tensor
result = torch.zeros(1, n_classes, subvolume_shape[0], subvolume_shape[1], subvolume_shape[2])

# Process each subvolume and accumulate the results
for i in range(subvolumes.size(2)):
    for j in range(subvolumes.size(3)):
        for k in range(subvolumes.size(4)):
            subvolume = subvolumes[:, :, i, j, k, :, :, :]
            output = process_subvolumes(mnm, subvolume, device)
            result = result + output[:, :, :subvolume_shape[0], :subvolume_shape[1], :subvolume_shape[2]]

# Export the model to ONNX in smaller chunks
chunk_size = 32  # 32=6/14GB RAM, 64=12GB RAM # Adjust this value based on your available memory
num_chunks = volume_shape[0] // chunk_size

onnx_file_paths = []

for i in range(num_chunks):
    start_index = i * chunk_size
    end_index = min((i + 1) * chunk_size, volume_shape[0])
    
    subvolume = x[:, :, start_index:end_index, :, :]
    
    onnx_file_path = f'/kaggle/working/logs/tmp/mnm_model_chunk_{i}.onnx'
    onnx_file_paths.append(onnx_file_path)
    
    torch.onnx.export(mnm, subvolume.to(device), onnx_file_path, export_params=True,
                      opset_version=13, do_constant_folding=True,
                      input_names=['input'], output_names=['output'],
                      dynamic_axes={'input': {0: 'batch_size'}, 'output': {0: 'batch_size'}})

# Merge exported chunks into a single ONNX file
merged_onnx_file_path = '/kaggle/working/logs/tmp/mnm_model_merged.onnx'

# Create a list to store individual onnx models
onnx_models = [onnx.load(onnx_file_path) for onnx_file_path in onnx_file_paths]

# Concatenate onnx models
onnx_model = onnx_models[0]
for i in range(1, num_chunks):
    onnx_model.graph.node.extend(onnx_models[i].graph.node)
    onnx_model.graph.output.extend(onnx_models[i].graph.output)

# Save the merged onnx model
onnx.save(onnx_model, merged_onnx_file_path)

# Load the merged ONNX model and convert to Keras
loaded_onnx_model = onnx.load(merged_onnx_file_path)
k_model = onnx_to_keras(loaded_onnx_model, ['input'], name_policy='renumerate')

# Save Keras model for TensorFlow.js
tfjs.converters.save_keras_model(k_model, '/kaggle/working')
fixjson_file('/kaggle/working/model.json', scube=scube)

# Remove temporary chunk files
for onnx_file_path in onnx_file_paths:
    os.remove(onnx_file_path)

print("Done")
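One likely source of the mismatch: the accumulation loop above sums every subvolume's output into a single 128³ tensor, rather than writing each output back to its grid position in the full 256³ volume. A minimal stitching sketch (NumPy, with made-up small shapes; `stitch_subvolumes` is a hypothetical helper, and the grid indexing must be adapted to the `unfold` layout above):

```python
import numpy as np

def stitch_subvolumes(outputs, volume_shape, sub_shape):
    """Write each subvolume's class scores back to its grid position.

    outputs: dict mapping grid index (i, j, k) -> array of shape
             (n_classes, *sub_shape), for non-overlapping subvolumes.
    """
    n_classes = next(iter(outputs.values())).shape[0]
    full = np.zeros((n_classes, *volume_shape), dtype=np.float32)
    for (i, j, k), out in outputs.items():
        x, y, z = i * sub_shape[0], j * sub_shape[1], k * sub_shape[2]
        full[:, x:x + sub_shape[0], y:y + sub_shape[1], z:z + sub_shape[2]] = out
    return full
```

For overlapping subvolumes you would instead accumulate scores and counts per voxel and divide at the end, but for the non-overlapping `unfold` grid above, direct placement is enough.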

What dataset was used for training?

Could you kindly share which dataset was used for training, especially for the segmentation models? We are working on CNN-based segmentation and are looking for brain MRI segmentations.

Label overlay starts at higher labels

When the 18-ROI model runs, the label overlay on the input image does not start at 0, but it should. Note that the highest label shown is also not the highest provided by the model.
