flowtorch's Issues

Weight Key in new PSP data changed

In the new PSP datasets (MK2) there are two weight keys available denoted Mask1 and Mask2. This is currently incompatible with the PSP dataloader which expects the keyword Mask to find the weights. I would propose to default to the masking that is comparable to MK1 data and give the user an option to select the less conservative masking if desired.
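
A possible shape for the proposed option (the class name, keyword, and default below are only suggestions, not the current loader API):

from flowtorch.data import PSPDataloader  # import path is an assumption

# hypothetical keyword: default to whichever of Mask1/Mask2 matches the
# conservative MK1-style masking; the other key can be selected explicitly
loader = PSPDataloader("iPSP_sample.hdf5", weight_key="Mask1")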

Clarifications on the "state of the field"

In the paper, please clarify how this software compares to other post-processing packages. To my knowledge, there is no mention of similar software packages. For instance, the VisIt package (https://visit-dav.github.io/visit-website/index.html) also supports vtk, hdf5, and netcdf file formats and fluid-flow visualizations. Other examples include Paraview and TecPlot. Please cite and compare with these existing tools. I think this will be a favorable comparison because personally I find VisIt and these other tools excruciating to use.

Implement CGNSDataloader class

The CGNSDataloader class allows project partners using the Flexi solver to process their data with flowTorch. Flexi stores the mesh in a separate file. Test data can be downloaded here

@MehdiMak96,
as a first step, I would like you to explore and report the structure of Flexi output and mesh files. Let's extract the following information first:

  • datasets available in the mesh file
  • datasets available in the solution files
  • list of write times (extracted from file or filename)
  • list of available fields

You can open the files with h5py. Each file is organized as a set of dictionaries. Each dictionary may contain several other dictionaries. To fully investigate the file content, one needs to go recursively through the dictionaries down to the lowest level. A minimal code example to access the top-level dictionary would look as follows:

from h5py import File

mesh = File("Cylinder_Re200_mesh.h5", mode="r")
data = File("Cylinder_Re200_RP_0000005.000000000.h5", mode="r")

print(mesh.keys())
print(data["RP_Data"])
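
A small sketch to walk the full hierarchy recursively, continuing from the snippet above and using h5py's visititems (which calls the given function once for every group and dataset):

def describe(name, obj):
    # datasets have a shape attribute; groups do not
    if hasattr(obj, "shape"):
        print(f"{name}: shape={obj.shape}, dtype={obj.dtype}")
    else:
        print(name)

mesh.visititems(describe)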

Check out the h5py documentation to learn how to access different kinds of data. This document explains what the mesh file contains.

Best, Andre

FoamDataloader

Hello:
I ran my example on Linux this time, but it still reported the following error. I tried other examples and got the same error. However, your SVD decomposition code for the flow past a cylinder worked perfectly, so I checked the blockMesh file of the cylinder dataset you provided. The grid settings are as follows. Is this error related to this setting?
(screenshots of the blockMesh settings and the error message attached)

Installation of flowtorch on Ubuntu 20.04

Hi Mahdi,
to get started, follow the installation instructions below. Please, comment on missing pieces in the described installation process.

Installation on Ubuntu 20.04 via pip:

  1. install pip
sudo apt update
sudo apt install python3-pip pandoc
  2. install basic Python packages to run flowTorch
pip3 install matplotlib mpi4py h5py jupyterlab
  3. install PyTorch (CPU version)
pip3 install torch==1.6.0+cpu torchvision==0.7.0+cpu -f https://download.pytorch.org/whl/torch_stable.html
  4. for documentation and testing, additionally install
pip3 install sphinx sphinx_rtd_theme nbsphinx pytest recommonmark
  5. to export the documentation as a PDF file (make latexpdf), install
sudo apt install latexmk texlive texlive-science texlive-formats-extra

Miscellaneous (plotly)

pip3 install plotly
sudo snap install node --classic --channel=14
jupyter labextension install jupyterlab-plotly

Let me know if this works for you.
Best, Andre

CSVDataloader problem

Hi there, I am trying to follow a DMD example:
https://github.com/AndreWeiner/ofw2022_dmd_training/blob/main/dmd_flowtorch.ipynb
However, the CSVDataloader is never able to load any dataset successfully as its documentation suggests; it fails with the error below. Thanks for helping.


ValueError Traceback (most recent call last)
Cell In[36], line 2
1 path = r'datasets\csv_naca0012_alpha4_surface'
----> 2 loader = CSVDataloader.from_foam_surface(path, "total(p)_coeff_airfoil.raw")
3 times = loader.write_times

File E:\softwares\Anaconda3\envs\pyflowt\lib\site-packages\flowtorch\data\csv_dataloader.py:200, in CSVDataloader.from_foam_surface(cls, path, file_name, header, dtype)
180 """Create CSVDataloader instance to load OpenFOAM surface sample data.
181
182 The class method simplifies to load data generated by OpenFOAM's
(...)
197
198 """
199 folders = glob(f"{path}/*")
--> 200 times = sorted(
201 [folder.split("/")[-1] for folder in folders], key=float
202 )
203 filepath = join(path, times[0], file_name)
204 with open(filepath, "r") as surface:

ValueError: could not convert string to float: 'datasets\csv_naca0012_alpha4_surface\0.001'
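
The traceback suggests the time folders are extracted with a POSIX-style split on "/", which leaves the full Windows path intact and then fails in float(). A minimal sketch of an OS-agnostic extraction of the time-folder names (illustration only, not the library's actual fix):

from glob import glob
from pathlib import Path

path = r"datasets\csv_naca0012_alpha4_surface"
folders = glob(f"{path}/*")
# Path(...).name yields the last path component on Windows and Linux alike
times = sorted((Path(folder).name for folder in folders), key=float)
print(times[:3])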

Create notebook with linear algebra basics

The idea is to create a notebook that briefly introduces important concepts from linear algebra that are essential to understand dimensionality reduction algorithms like proper orthogonal decomposition (POD) or dynamic mode decomposition (DMD). Potential topics are (list will be updated):

  • tensors (n-dimensional arrays)
  • dot product
  • cross product
  • outer product
  • matrix multiplication
  • identity matrix
  • orthogonal matrix
  • orthonormal matrix
  • Hermitian matrix
  • Unitary matrix
  • Eigenvalue decomposition
  • Singular value decomposition

Each description should contain the following three elements:

  1. mathematical notation
  2. implementation using PyTorch
  3. visualization (using Matplotlib)
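
As an illustration of the intended structure, here is a minimal sketch of what the singular value decomposition entry could look like (assuming PyTorch and Matplotlib are available):

import torch
import matplotlib.pyplot as plt

# SVD: A = U S V^T with orthonormal columns in U and V
A = torch.randn(100, 20)
U, S, Vh = torch.linalg.svd(A, full_matrices=False)

# orthonormality check: U^T U should equal the identity matrix
print(torch.allclose(U.T @ U, torch.eye(U.shape[1]), atol=1e-5))

# visualization: plot the singular value spectrum
plt.plot(S.numpy(), marker="o")
plt.xlabel("index")
plt.ylabel("singular value")
plt.show()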

Add requirements.txt file or equivalent

Hello!

First of all, great job with this repository -- the functionality looks very useful. I'm going to be opening a set of new (mostly minor) issues directly in your GitHub repo, unless you prefer that I do so directly in the JOSS review.

First thing I noticed: installation went well with either pip or directly downloading the GitHub repo, but it appears that if the GitHub repo is directly downloaded, there is no requirements.txt or equivalent file that tells the user which python package dependencies are required to run your package. This should be a straightforward fix.

I believe the pip download correctly installs all the dependencies for the package.

Best,
Alan

Functionality of POD

Hi Andre,

I was using the flowTorch library to perform snapshot POD. After loading the initial data and creating an instance of the POD class, the compute_decomposition method throws an error when I call it.

` 17 def compute_decomposition(self, mode="snapshot", solver="svd", device="cpu", precision="single"):
18 if mode == "snapshot":
---> 19 data_matrix = self._data.get_data_matrix()
20 if not data_matrix.device == device:
21 data_matrix.to(device=device)

AttributeError: 'Tensor' object has no attribute 'get_data_matrix'`

The data was passed as a tensor to the POD class, as expected by its constructor.

I was also unable to find any implementation of the get_data_matrix method. Is this functionality available? Can POD be performed using the given script?
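
Until the expected input type is clarified, snapshot POD can also be computed directly from the data matrix with PyTorch (a sketch, not the flowtorch API; snapshots are assumed to be stored column-wise):

import torch

data_matrix = torch.randn(1000, 50)               # [n_points, n_snapshots]
mean = data_matrix.mean(dim=1, keepdim=True)      # temporal mean
U, S, Vh = torch.linalg.svd(data_matrix - mean, full_matrices=False)
modes = U                                         # spatial POD modes
coeffs = torch.diag(S) @ Vh                       # temporal coefficients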

flowtorch HDF5 writer

  • write HDF5 files with internal flowtorch data;
  • simple file structure: variable (time-dependent - fields) and constant data (geometry data, weights)
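
A minimal sketch of how the proposed layout could be written with h5py (group and dataset names are only an assumption about the intended structure):

import numpy as np
from h5py import File

with File("flowtorch_output.h5", "w") as hdf5_file:
    # constant data: geometry and weights, written once
    const = hdf5_file.create_group("constant")
    const.create_dataset("vertices", data=np.random.rand(100, 3))
    const.create_dataset("weights", data=np.ones(100))
    # variable data: one group per write time holding the field snapshots
    var = hdf5_file.create_group("variable")
    var.create_group("0.01").create_dataset("p", data=np.random.rand(100))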

The issue of DATASETS

Hello,

First of all, thanks for the interesting project!
I followed the tutorial: https://flowmodelingcontrol.github.io/flowtorch-docs/1.0/notebooks/svd_cylinder.html
and tried to reproduce the result in my own notebook. The statement below throws an error:

----> 1 path = DATASETS["of_cylinder2D_binary"]
      2 loader = FOAMDataloader(path)
      3 times = loader.write_times

KeyError: 'of_cylinder2D_binary'

I also printed DATASETS and it was an empty dict. Where can I download the test dataset? Thanks!

data matrix

Hello,
I see that when constructing the data matrix, each snapshot is placed in a column of the data matrix before the SVD, so the left singular vectors are the POD modes. Can I instead place each snapshot in a row of the data matrix and perform the SVD, so that the right singular vectors are the POD modes? In other words, the number of rows of the data matrix would be the number of snapshots and the number of columns the number of grid points. Is this supported by flowTorch?

(screenshot attached)
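
Mathematically, transposing the data matrix simply swaps the roles of the left and right singular vectors, so the POD modes would then sit in the right singular vectors; a quick check with PyTorch (sign ambiguity handled via absolute values):

import torch

X = torch.randn(1000, 20, dtype=torch.float64)            # snapshots in columns
U, S, Vh = torch.linalg.svd(X, full_matrices=False)
Ut, St, Vth = torch.linalg.svd(X.T, full_matrices=False)   # snapshots in rows

# up to sign, the left singular vectors of X are the right singular vectors of X.T
print(torch.allclose(U.abs(), Vth.T.abs(), atol=1e-8))

Whether the flowtorch classes accept a transposed matrix directly is a separate question; the check above only shows the algebraic equivalence.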

First modification and commit

Hi @MehdiMak96 ,
let's make the first change on the linear_algebra_basics.ipynb notebook.

  1. make sure that you are working on the correct branch (not master): git status; you can change branches using git checkout branch_name (always use the tab-key for autocompletion)
  2. open the linear algebra notebook and make a modification
cd notebooks
jupyter lab
# now you are working in the browser
# open linear_algebra_basics.ipynb ( left side menu; double click)
# add some markdown or code to one of the cells
# close the notebook: File -> Shut Down
  3. go back to the top-level of the repository and run git status again; there should be exactly one modified file, namely linear_algebra_basics.ipynb
  4. stage the modification using
git add notebooks/linear_algebra_basics.ipynb
  5. check again that the modification is staged: git status (the color should have changed from red to orange)
  6. commit the change; mention this issue in the commit message using the syntax #issue_number; the number of this issue should be 6
git commit -m "Make first modification according to issue #6."
  7. push the changes on your branch to the central repository
git push origin name_of_your_branch

Best, Andre

Improvements to the GitHub front page (README file)

While the paper clearly states the problems that the software is designed to solve, and the target audience, it is less clear on the Github repository page. I recommend editing your README file to make it very clear that the software is for post-processing any kind of fluid flow data from a wide range of file formats, and the goal is to help researchers with fluid data have easy and flexible post-processing, verification, and analysis pipelines.

Test flowTorch wheel package

Dear @MehdiMak96 and @Tushargh29,

please have a look if the following flowTorch package installation works for you.

  1. download this wheel package
  2. open a terminal and navigate to the folder in which the .whl file is located
  3. run pip install flowTorch-0.1-py3-none-any.whl
  4. running pip list | grep flowTorch should now display something like flowTorch 0.1
  5. start python3 by typing python3 in the terminal; then run
>>> import flowtorch
>>> flowtorch.__version__
# expected output
'0.1'

If the above steps work, I'll add some additional tests hereafter.

Best, Andre

DMD for batched trajectories/snapshots

Thanks for the great library !
In the documentation An introduction to DMD, I was wondering if there is a batched equivalent of the given DMD example?

In general deep learning settings we would, more often than not, have a batch of trajectories.

dmd = DMD(data_matrix) # here data_matrix is of shape [Nt, #features]

The only way I currently see to do this, and a slow one, would be a for loop?
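
As far as I can tell, a loop is indeed the straightforward way, since each trajectory yields its own operator; a rough sketch (the DMD constructor call mirrors the example above, and the flowtorch import path is an assumption):

import torch
from flowtorch.analysis import DMD  # import path is an assumption

batch = torch.randn(8, 200, 64)     # [n_trajectories, Nt, n_features]

# one DMD per trajectory; transpose so that snapshots end up in columns
dmds = [DMD(trajectory.T) for trajectory in batch]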

Parametric ROM ?

Hello,

Is it possible to build a parametric ROM using CNM? In my case, the prediction depends not only on init_state but also on other_params. The code would look something like this:

cnm.predict(data_matrix[:, :5], other_params, end_time=1.0, step_size=0.1)
cnm.predict_reduced(reduced_state[:, :5], other_params, 1.0, 0.1)

My current idea is to create a new class, say ParametricCNM, which inherits from CNM, and then customize transition_prob so that other_params enter the transition probabilities. Is that the right direction?
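
As a sanity check of the direction (not a working implementation), the subclassing idea could look roughly like this; the overridden method name is purely hypothetical and would have to match whatever CNM actually uses internally:

from flowtorch.rom import CNM  # import path is an assumption

class ParametricCNM(CNM):
    """CNM variant whose transition probabilities also depend on extra parameters."""

    def __init__(self, *args, other_params=None, **kwargs):
        super().__init__(*args, **kwargs)
        self.other_params = other_params

    def _transition_prob(self, *args, **kwargs):
        # hypothetical override: condition the transition probabilities
        # on self.other_params before or after delegating to the base class
        probs = super()._transition_prob(*args, **kwargs)
        return probs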

various minor code documentation changes

Okay, I am done with my review. I will finalize it on the review GitHub page once we resolve the issues I submitted. The code and code documentation look excellent. Good use of comments and class inheritance. Here are some last optional comments about minor code functionality and documentation:

  1. (Optional) It might be nice to add some documentation in sub functions of the various classes, for instance in functions like "interact" in psp_explorer.py. It looks like you already do this here and there in the code.
  2. (Optional) The code is meant to streamline the post-processing + analysis of fluid flows and does a great job of combining fluid flow data from disparate sources. But other than the large set of file readers, the code and examples have somewhat limited functionality in terms of producing ROMs or performing statistical analysis -- only the traditional SVD, DMD, and CNM are implemented/illustrated in the examples. My suggestion for future work is to either (1) expand the code with enhanced capabilities, or (2) link to pre-existing packages for advanced analyses, since SVD, DMD, CNM, and other methods all have specific Python packages.

Documentation and API Examples down

Hey, I want to test the POD implementation on my TomoPIV datasets and noticed that the documentation is down. It would be great to see some implementation examples! Kind regards

Minor edits in the paper.md file

First of all, the paper is well-written, flows well, and clearly states the purpose of the software. Great job.

Some minor edits I'm combining into a single issue:

  1. The DaVis file format is mentioned in the abstract, but not defined in the remainder of the paper. It might be good to include this with the discussion in paragraph 1 of the Statement of need Section.

  2. Clarify in paragraph 1 what is meant by "to improve complex technical application". This is pretty vague. For instance, you could improve this by saying something like "to improve the control of industrial processes..."

  3. Clarify in paragraph 1 why "gaining insights from the data is becoming increasingly challenging." Is this simply because of the size of the data? If so, you could combine this sentence with the next one, saying something like "As modern datasets continue to grow, post-processing pipelines will be increasingly important for synthesizing different data formats and facilitating complex data analysis."

  4. The sentence "When confronted with such ideas..." sounds awkward, I recommend changing it to something like "Often these obstacles require significant research time to be spent accessing, converting, ..."

  5. Third paragraph in the Statement of Need section: It would be good to define/give examples for "computing statistics, performing modal analysis, or building reduced-order models". It would be even better to add some references about these things.

  6. I think Figure 1 could be improved in a few ways. For instance, it would be good if all the package dependencies were all the same color, to indicate they are the same type of thing. It might be nice to expand this figure, showing some of the sub-libraries like the DMD class, the FileReaders, etc. but this might be too much.

  7. Before the "DMD analysis of airfoil surface data" section, you say "Reduced-order-model (ROM)" but reduced order models have been introduced earlier in the document. It would be good to move the (ROM) abbreviation to the earliest use.

  8. If there is a reference for the airfoil dataset, please add it.

  9. Figure 2:
    (a) Please clarify whether the images are in the (x, z) plane, and whether m. 8, m. 6, etc. refer to modes 8, 6, and so on.
    (b) Other than the shock front, are there physical interpretations of the DMD modes or their characteristic frequencies (for instance, the top and bottom modes seem to be harmonics)?
    (c) Please reduce the number of significant digits shown on the colorbars, and it might be good to use the same colorbar for all three modes.
    (d) Maybe I missed it, but what is the significance of the "cp" in the title?

  10. In the first paragraph of the "CNM of the flow past a circular cylinder" Section, it should be "always consists of three steps". It would also be good to add some description and a reference on reduced-order models of the cylinder flow, like Noack et al. (2003). Figure 3 shows that the first two SVD modes form a harmonic oscillator, but I am not entirely sure what the data clustering dots are showing.

  11. In general, this paper would benefit from some more references.

Projection error for TLSQ DMD

To compute the projection error, self._dm should be used rather than self._X and self._Y since those are projected already onto the right singular vectors.

Projection error in HODMD

Bug in the computation of the projection error in HODMD. Presumably, only the first n_points entries must be used for projection.

csvdataloader

Hello
Can the CSVDataloader load data like the FOAMDataloader? If so, should the data format be .csv or .dat? And how can I run OpenFOAM, compute results, and obtain CSV data? I used ParaView to extract data at a certain section and exported it in CSV format, but when I imported it with the CSVDataloader, the following error occurred:
(screenshot of the error attached)
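
For data written by OpenFOAM's surface sampling in raw format, the CSVDataloader provides the from_foam_surface class method (signature as shown in its docstring); a minimal sketch, with path and file name as placeholders for your own case:

from flowtorch.data import CSVDataloader

loader = CSVDataloader.from_foam_surface(
    "postProcessing/sampled_surface", "total(p)_coeff_airfoil.raw"
)
times = loader.write_times

Note that files exported from ParaView may have a different layout than the raw surface samples, which could explain the error above.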

Bug in parsing faces from binary format

The cylinder and airfoil test cases throw an exception when computing cell vertices. The reason is most likely in the face parsing: the point labels take unreasonably large values. The error occurs for both serial and decomposed cases. Parsing the ASCII format is not affected.

CNM CODE

Is the principle of the CNM algorithm to perform prediction/interpolation on the POD coefficients? I don't quite understand why this code needs the data from the first 5 columns as input.
(Screenshot 2024-03-22 152216 attached)

Implement OpenFOAM unit tests

Idea:
create basic OpenFOAM simulation setups to test the OpenFOAM reader implementation.

  • create an OpenFOAM Singularity container
  • create cavity-like test cases
    • serial, ascii
    • serial, binary
    • parallel, ascii
    • parallel, binary
  • create unit tests with pytest
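
A minimal sketch of what one of these pytest unit tests could look like (the case path and the assertions are placeholders):

from flowtorch.data import FOAMDataloader  # import path is an assumption

# hypothetical location of the serial/ascii cavity test case
CASE = "test/test_data/cavity_ascii_serial"

def test_write_times():
    loader = FOAMDataloader(CASE)
    times = loader.write_times
    assert len(times) > 0
    # write times should be sorted numerically
    assert times == sorted(times, key=float)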

SVD code

I'm sorry to bother you. Why is the z-component chosen to build the data matrix?
(screenshot attached; see also Doc1.pdf)

FOAMDataloader

I am sorry to bother you again. When I used flowTorch to import data I computed myself with OpenFOAM, the following error was reported. May I ask what went wrong?
(screenshot of the error attached)

potential pytest issues

Hello again. I am now basically done reviewing the pytest tests + non-code documentation + paper, and only the documentation within the code and reproducing/suggesting improvements for the examples remains. I will open an issue for these separately. I am happy to see some good pytest tests. These mostly work for me after downloading the big dataset. However, I still have the following issues:

  1. Failure in analysis/test_psp_explorer.py because it looks like the test file is hard-coded on your system, "FileNotFoundError: Could not find file /home/andre/Downloads/input/FOR/iPSP_reference_may_2021/0226.hdf5"

  2. I get a mysterious segmentation fault when using pytest on flowtorch/rom/test_cnm.py, although it appears the tests are sometimes completing correctly. The cnm_cylinder.ipynb jupyter notebook also crashes for me when the CNM function is called. See below:

flowtorch/rom/test_cnm.py: 16 tests with warnings
  /Users/alankaptanoglu/flowtorch/flowtorch/rom/utils.py:72: DeprecationWarning: `np.bool` is a deprecated alias for the builtin `bool`. Use `bool` by itself, which is identical in behavior, to silence this warning. If you specifically wanted the numpy scalar type, use `np.bool_` here.
    is_different = np.diff(sequence).astype(np.bool)

-- Docs: https://docs.pytest.org/en/latest/warnings.html
================================================== 10 passed, 18 warnings in 5.54s ==================================================
Fatal Python error: Segmentation fault: 11 
  3. (Optional) Ideally, it would be nice if the pytest tests were not dependent on first downloading a big 2.5 GB file. I think many of these tests should still work on synthetic datasets of smaller size.
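
Regarding the deprecation warning quoted under item 2, the fix NumPy itself suggests is a one-line change in flowtorch/rom/utils.py:

is_different = np.diff(sequence).astype(bool)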

XDMF file writer

  • creates XDMF files for post-processing in ParaView
  • links to datasets in flowTorch HDF5 files

Implement FOAMDataloader class

Implement the FOAMDataloader class. The class should take a path to an OpenFOAM case as input and provide functions to load fields and the mesh sequentially. Moreover, the class should be able to handle:

  • serial and parallel cases
  • ascii and binary format

Additionally, there should be a

  • unit test
  • an example in the documentation on how to use the class
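
A rough skeleton of the interface described above (everything beyond the class name, the constructor argument, and write_times is a hypothetical naming choice):

from glob import glob

import torch

class FOAMDataloader:
    """Load fields and mesh of an OpenFOAM case (serial/parallel, ascii/binary)."""

    def __init__(self, path: str):
        self._path = path
        # a decomposed (parallel) case contains processor* directories
        self._distributed = len(glob(f"{path}/processor*")) > 0

    @property
    def write_times(self) -> list:
        # return the available write times, e.g. parsed from the time directories
        raise NotImplementedError

    def load_snapshot(self, field_name: str, time: str) -> torch.Tensor:
        # sequentially load a single field at a single write time
        raise NotImplementedError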
