flowmodelingcontrol / flowtorch
flowTorch - a Python library for analysis and reduced-order modeling of fluid flows
License: GNU General Public License v3.0
In the new PSP datasets (MK2), there are two weight keys available, denoted Mask1 and Mask2. This is currently incompatible with the PSP dataloader, which expects the keyword Mask to find the weights. I propose to default to the masking that is comparable to the MK1 data and to give the user an option to select the less conservative masking if desired.
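One possible shape for this fallback (a sketch only; the function name is hypothetical, and it is an assumption that Mask1 is the masking comparable to MK1 data and Mask2 the less conservative one):

```python
def select_weight_key(available_keys, conservative=True):
    """Pick the weight key from a PSP dataset.

    Sketch of the proposed behavior: old MK1 files expose a single
    "Mask" key; for MK2 files we assume "Mask1" is the masking
    comparable to MK1 data and "Mask2" the less conservative one.
    """
    keys = set(available_keys)
    if "Mask" in keys:  # MK1 datasets
        return "Mask"
    return "Mask1" if conservative else "Mask2"

print(select_weight_key(["Mask"]))             # MK1 file -> Mask
print(select_weight_key(["Mask1", "Mask2"]))   # MK2 default -> Mask1
print(select_weight_key(["Mask1", "Mask2"], conservative=False))  # Mask2
```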
In the paper, please clarify how this software compares to other post-processing packages. To my knowledge, there is no mention of similar software packages. For instance, the VisIt package (https://visit-dav.github.io/visit-website/index.html) also supports VTK, HDF5, and NetCDF file formats and fluid-flow visualizations. Other examples include ParaView and Tecplot. Please cite and compare with these existing tools. I think this will be a favorable comparison because, personally, I find VisIt and these other tools excruciating to use.
The CGNSDataloader class allows project partners to use the Flexi solver to process their data. Flexi stores the mesh in a separate file. Test data can be downloaded here
@MehdiMak96,
as a first step, I would like you to explore and report the structure of Flexi output and mesh files. Let's extract the following information first:
You can open the files with h5py. Each file is organized as a set of dictionaries. Each dictionary may contain several other dictionaries. To fully investigate the file content, one needs to go recursively through the dictionaries down to the lowest level. A minimal code example to access the top-level dictionary would look as follows:
from h5py import File

# open both files in read-only mode
mesh = File("Cylinder_Re200_mesh.h5", "r")
data = File("Cylinder_Re200_RP_0000005.000000000.h5", "r")
# list the top-level groups of the mesh file
print(mesh.keys())
# access a specific group of the output file
print(data["RP_Data"])
Check out the h5py documentation to learn how to access different kinds of data. This document explains what the mesh file contains.
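To go through the nested groups recursively, a small helper can collect all paths. This is only a sketch, relying on the fact that h5py File and Group objects expose a dict-like items() interface; the demonstration below therefore runs on a plain dict with illustrative names:

```python
def walk(group, prefix=""):
    """Recursively collect the paths of all entries in a dict-like,
    nested structure such as an h5py File or Group."""
    paths = []
    for name, item in group.items():
        path = f"{prefix}/{name}"
        paths.append(path)
        if hasattr(item, "items"):  # nested group (or nested dict)
            paths.extend(walk(item, path))
    return paths

# demonstration on a plain nested dict with illustrative names;
# for the real files, e.g.: walk(File("Cylinder_Re200_mesh.h5", "r"))
print(walk({"group_a": {"dataset_1": 1}, "dataset_2": 2}))
# → ['/group_a', '/group_a/dataset_1', '/dataset_2']
```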
Best, Andre
Hello
I want to use flowtorch on my Windows system. Can it be installed via WSL? (I have tried to do so, but failed.)
Hello,
I ran the CNM demo (https://flowmodelingcontrol.github.io/flowtorch-docs/1.1/notebooks/cnm_cylinder.html) and a segmentation fault error came up:
[1] 14357 segmentation fault python3 03-cnm.py
The problem may be the one recorded in #26. I am using the minimal dataset.
Hello:
I ran my example on Linux this time, but it still reported the following error. I tried other examples, and they reported the same error. However, your SVD decomposition code for the flow past a cylinder worked perfectly. So I checked the blockMesh file of the cylinder-flow dataset you provided; the grid settings are as follows. Is this error related to this setting?
Hi Mahdi,
to get started, follow the installation instructions below. Please, comment on missing pieces in the described installation process.
Installation on Ubuntu 20.04 via pip:
sudo apt update
sudo apt install python3-pip pandoc
pip3 install matplotlib mpi4py h5py jupyterlab
pip3 install torch==1.6.0+cpu torchvision==0.7.0+cpu -f https://download.pytorch.org/whl/torch_stable.html
pip3 install sphinx sphinx_rtd_theme nbsphinx pytest recommonmark
To build the documentation as PDF (make latexpdf), install:
sudo apt install latexmk texlive texlive-science texlive-formats-extra
Miscellaneous (plotly)
pip3 install plotly
sudo snap install node --classic --channel=14
jupyter labextension install jupyterlab-plotly
Let me know if this works for you.
Best, Andre
Hi there, I am trying to follow a DMD example: https://github.com/AndreWeiner/ofw2022_dmd_training/blob/main/dmd_flowtorch.ipynb. However, the CSVDataloader is never able to load any dataset successfully, despite following the CSVDataloader documentation; the error is as follows. Thanks for helping.
ValueError Traceback (most recent call last)
Cell In[36], line 2
1 path = r'datasets\csv_naca0012_alpha4_surface'
----> 2 loader = CSVDataloader.from_foam_surface(path, "total(p)_coeff_airfoil.raw")
3 times = loader.write_times
File E:\softwares\Anaconda3\envs\pyflowt\lib\site-packages\flowtorch\data\csv_dataloader.py:200, in CSVDataloader.from_foam_surface(cls, path, file_name, header, dtype)
180 """Create CSVDataloader instance to load OpenFOAM surface sample data.
181
182 The class method simplifies to load data generated by OpenFOAM's
(...)
197
198 """
199 folders = glob(f"{path}/*")
--> 200 times = sorted(
201 [folder.split("/")[-1] for folder in folders], key=float
202 )
203 filepath = join(path, times[0], file_name)
204 with open(filepath, "r") as surface:
ValueError: could not convert string to float: 'datasets\csv_naca0012_alpha4_surface\0.001'
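The traceback suggests the cause: from_foam_surface extracts the time-folder name with folder.split("/")[-1], which leaves a Windows path with backslash separators unchanged, so float() receives the whole path. A sketch of the difference (not flowtorch's actual fix; a platform-aware basename would be one way to address it):

```python
import ntpath
import posixpath

win_folder = r"datasets\csv_naca0012_alpha4_surface\0.001"

# splitting on "/" leaves the Windows path untouched -> float() raises
print(win_folder.split("/")[-1])    # datasets\csv_naca0012_alpha4_surface\0.001

# os.path.basename resolves to ntpath.basename on Windows and handles "\"
print(ntpath.basename(win_folder))  # 0.001
print(posixpath.basename("datasets/csv_naca0012_alpha4_surface/0.001"))  # 0.001
```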
The idea is to create a notebook that briefly introduces important concepts from linear algebra that are essential to understand dimensionality reduction algorithms like proper orthogonal decomposition (POD) or dynamic mode decomposition (DMD). Potential topics are (list will be updated):
Each description should contain the following three elements:
Hello!
First of all, great job with this repository -- the functionality looks very useful. I'm going to be opening a set of new (mostly minor) issues directly in your GitHub repo, unless you prefer that I do so directly in the JOSS review.
First thing I noticed: installation went well with either pip or directly downloading the GitHub repo, but it appears that if the GitHub repo is directly downloaded, there is no requirements.txt or equivalent file that tells the user which Python package dependencies are required to run your package. This should be a straightforward fix.
I believe the pip download correctly installs all the dependencies for the package.
Best,
Alan
Hi Andre,
I was using the flowTorch library to perform snapshot POD. After loading the initial data and creating an instance of the POD class, calling the compute_decomposition method throws an error.
17 def compute_decomposition(self, mode="snapshot", solver="svd", device="cpu", precision="single"):
18 if mode == "snapshot":
---> 19 data_matrix = self._data.get_data_matrix()
20 if not data_matrix.device == device:
21 data_matrix.to(device=device)
AttributeError: 'Tensor' object has no attribute 'get_data_matrix'
The data was passed as a tensor to the POD class, as expected by its constructor class.
I was also unable to find any implementation of the get_data_matrix method. Is this functionality available? Can POD be performed using the given script?
Hi,
I am trying to use this package. When a quadrilateral mesh is imported, the map is not exactly right.
Thanks
Hello,
First of all, thanks for the interesting project!
I followed the tutorial: https://flowmodelingcontrol.github.io/flowtorch-docs/1.0/notebooks/svd_cylinder.html
I tried to reproduce the code result in my notebook. The statement below throws an error:
----> [1]path = DATASETS["of_cylinder2D_binary"]
[2]loader = FOAMDataloader(path)
[3]times = loader.write_times
KeyError: 'of_cylinder2D_binary'
I printed DATASETS and it was an empty dict. Where can I download the test dataset? Thanks!
Implement parallel OpenFOAM case reader against unit tests in #3
Hello,
I see that when constructing the data matrix, you place each snapshot in a column of the data matrix before the SVD, so the left singular vectors are the POD modes. Can I instead place each snapshot in a row of the data matrix and perform the SVD, so that the right singular vectors are the POD modes? For example, the number of rows in the data matrix would be the number of snapshots, and the number of columns the number of grid points. Does flowTorch support this?
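Mathematically, yes: transposing the data matrix swaps the roles of the left and right singular vectors, since X = U S Vᵀ implies Xᵀ = V S Uᵀ. A quick check in plain torch (not flowTorch's SVD class; vectors match only up to sign):

```python
import torch

torch.manual_seed(0)
# rows: grid points, columns: snapshots
X = torch.randn(100, 10, dtype=torch.float64)

U, S, Vh = torch.linalg.svd(X, full_matrices=False)       # POD modes in U
Ut, St, Vht = torch.linalg.svd(X.T, full_matrices=False)  # POD modes in Vht.T

# singular vectors are unique only up to sign, hence the abs()
print(torch.allclose(U.abs(), Vht.T.abs(), atol=1e-8))  # True
print(torch.allclose(S, St))                            # True
```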
Hi @MehdiMak96 ,
let's make the first change on the linear_algebra_basics.ipynb notebook.
Check your current branch with git status; you can change branches using git checkout branch_name (always use the tab key for autocompletion). Then open the notebook:
cd notebooks
jupyter lab
# now you are working in the browser
# open linear_algebra_basics.ipynb (left side menu; double click)
# add some markdown or code to one of the cells
# close the notebook: File -> Shut Down
Run git status again; there should be exactly one modified file, namely linear_algebra_basics.ipynb. Stage it and check the status:
git add notebooks/linear_algebra_basics.ipynb
git status
(the color should have changed from red to orange)
Reference this issue in the commit message using #issue_number; the number of this issue should be 6:
git commit -m "Make first modification according to issue #6."
git push origin name_of_your_branch
Best, Andre
While the paper clearly states the problems that the software is designed to solve, and the target audience, it is less clear on the Github repository page. I recommend editing your README file to make it very clear that the software is for post-processing any kind of fluid flow data from a wide range of file formats, and the goal is to help researchers with fluid data have easy and flexible post-processing, verification, and analysis pipelines.
Dear @MehdiMak96 and @Tushargh29,
please have a look if the following flowTorch package installation works for you.
pip install flowTorch-0.1-py3-none-any.whl
pip list | grep flowTorch
should now display something like flowTorch 0.1. Start python3 in the terminal; then run:
>>> import flowtorch
>>> flowtorch.__version__
'0.1'
If the above steps work, I'll add some additional tests hereafter.
Best, Andre
The methods named in the title should be properties rather than methods.
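For illustration (class and attribute names are hypothetical), the change amounts to adding the @property decorator so the value is accessed without parentheses, as the examples elsewhere in the docs already do (times = loader.write_times):

```python
class Dataloader:
    """Hypothetical sketch of a loader exposing write_times as a property."""

    def __init__(self):
        self._write_times = ["0.1", "0.2"]

    @property
    def write_times(self):
        # accessible as loader.write_times instead of loader.write_times()
        return self._write_times

loader = Dataloader()
print(loader.write_times)  # ['0.1', '0.2']
```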
Thanks for the great library!
In the documentation "An introduction to DMD", I was wondering if there is a batched equivalent of the DMD example given?
In general deep learning, we would more often than not have a batch of trajectories.
dmd = DMD(data_matrix) # here data_matrix is of shape [Nt, #features]
The only way I see this happening, and a slow one at that, would be a for loop?
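As far as I can tell, a loop over the batch is indeed the straightforward route. A minimal sketch with a self-contained exact-DMD helper (plain torch, not flowTorch's DMD class, whose signature may differ; note the helper uses [features, Nt] trajectories):

```python
import torch

def dmd_eigvals(X, rank):
    """Exact-DMD eigenvalues of one trajectory X with shape [features, Nt]."""
    X1, X2 = X[:, :-1], X[:, 1:]
    U, S, Vh = torch.linalg.svd(X1, full_matrices=False)
    U, S, V = U[:, :rank], S[:rank], Vh[:rank].T
    A_tilde = U.T @ X2 @ V / S  # reduced linear operator
    return torch.linalg.eigvals(A_tilde)

torch.manual_seed(0)
# batch of trajectories: [batch, features, Nt]
batch = torch.randn(4, 32, 20, dtype=torch.float64)
eigs = torch.stack([dmd_eigvals(traj, rank=5) for traj in batch])
print(eigs.shape)  # torch.Size([4, 5])
```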
Hello,
Is it possible to build a parametric ROM using CNM? In my case, the prediction depends not only on the init_state but also on other_params. The code would look like below:
cnm.predict(data_matrix[:, :5], other_params, end_time=1.0, step_size=0.1)
cnm.predict_reduced(reduced_state[:, :5], other_params, 1.0, 0.1)
My current idea is to create a new class, say ParametricCNM, which inherits from CNM, and then customize a new transition_prob that takes the other_params into account. Is that the right direction?
Okay, I am done with my review. I will finalize it on the review GitHub page when we resolve the issues I submitted. The code and code documentation look excellent. Good use of comments and class inheritance. Last, optional comments about minor code functionality and documentation:
Hey, I want to test the POD implementation on my TomoPIV datasets and noticed that the documentation is down. It would be great to see some implementation examples! Kind regards
First of all, the paper is well-written, flows well, and clearly states the purpose of the software. Great job.
Some minor edits I'm combining into a single issue:
The DaVis file format is mentioned in the abstract, but not defined in the remainder of the paper. It might be good to include this with the discussion in paragraph 1 of the Statement of need Section.
Clarify in paragraph 1 what is meant by "to improve complex technical application". This is pretty vague. For instance, you could improve this by saying something like "to improve the control of industrial processes..."
Clarify in paragraph 1 why "gaining insights from the data is becoming increasingly challenging." Is this simply because of the size of the data? If so, you could combine this sentence with the next one, saying something like "As modern datasets continue to grow, post-processing pipelines will be increasingly important for synthesizing different data formats and facilitating complex data analysis."
The sentence "When confronted with such ideas..." sounds awkward, I recommend changing it to something like "Often these obstacles require significant research time to be spent accessing, converting, ..."
Third paragraph in the Statement of Need section: It would be good to define/give examples for "computing statistics, performing modal analysis, or building reduced-order models". It would be even better to add some references about these things.
I think Figure 1 could be improved in a few ways. For instance, it would be good if the package dependencies were all the same color, to indicate they are the same type of thing. It might be nice to expand this figure, showing some of the sub-libraries like the DMD class, the FileReaders, etc., but this might be too much.
Before the "DMD analysis of airfoil surface data" section, you say "Reduced-order-model (ROM)" but reduced order models have been introduced earlier in the document. It would be good to move the (ROM) abbreviation to the earliest use.
If there is a reference for the airfoil dataset, please add it.
Figure 2:
(a) Please clarify if the images are in the (x, z) space, and m. 8, m. 6, etc. refer to modes 8, modes 6, and so on.
(b) Other than the shock front, is there some physical interpretations of the DMD modes or their characteristic frequencies (for instance, the top and bottom modes seem to be harmonics)?
(c) Please reduce the number of significant digits shown on the colorbars, and it might be good to use the same colorbar for all three modes.
(d) Maybe I missed it, but what is the significance of the "cp" in the title?
In the first paragraph of the "CNM of the flow past a circular cylinder" Section, it should be "always consists of three steps". It would also be good to add some description and a reference on reduced-order models of the cylinder flow, like Noack et al. (2003). Figure 3 shows that the first two SVD modes form a harmonic oscillator, but I am not entirely sure what the data clustering dots are showing.
In general, this paper would benefit from some more references.
To compute the projection error, self._dm should be used rather than self._X and self._Y, since those are already projected onto the right singular vectors.
Bug in the computation of the projection error in HODMD. Presumably, only the first n_points entries must be used for projection.
Hello
Can the CSVDataloader load data like the FOAMDataloader? If so, should the data format be .csv or .dat? How can I run a computation with OpenFOAM and obtain CSV data? I used ParaView to extract data at a certain section and exported it in CSV format. However, when I imported it using the CSVDataloader, the following error occurred:
The cylinder and airfoil test cases throw an exception when computing cell vertices. The reason is most likely in the face parsing: the point labels take unreasonably large values. The error occurs for serial and decomposed cases. Parsing the ASCII format is not affected.
Idea:
create basic OpenFOAM simulation setups to test the OpenFOAM reader implementation.
I'm sorry to bother you. Why choose the z-component to build the data matrix?
Doc1.pdf
Hello again. I am now basically done reviewing the pytest tests + non-code documentation + paper, and only the documentation within the code and reproducing/suggesting improvements for the examples remains. I will open an issue for these separately. I am happy to see some good pytest tests. These mostly work for me after downloading the big dataset. However, I still have the following issues:
Failure in analysis/test_psp_explorer.py because it looks like the test file is hard-coded on your system, "FileNotFoundError: Could not find file /home/andre/Downloads/input/FOR/iPSP_reference_may_2021/0226.hdf5"
I get a mysterious segmentation fault when using pytest on flowtorch/rom/test_cnm.py, although it appears the tests are sometimes completing correctly. The cnm_cylinder.ipynb jupyter notebook also crashes for me when the CNM function is called. See below:
flowtorch/rom/test_cnm.py: 16 tests with warnings
/Users/alankaptanoglu/flowtorch/flowtorch/rom/utils.py:72: DeprecationWarning: `np.bool` is a deprecated alias for the builtin `bool`. Use `bool` by itself, which is identical in behavior, to silence this warning. If you specifically wanted the numpy scalar type, use `np.bool_` here.
is_different = np.diff(sequence).astype(np.bool)
-- Docs: https://docs.pytest.org/en/latest/warnings.html
================================================== 10 passed, 18 warnings in 5.54s ==================================================
Fatal Python error: Segmentation fault: 11
Implement the FOAMDataloader class. The class should take a path to an OpenFOAM case as input and provide functions to load fields and the mesh sequentially. Moreover, the class should be able to handle:
Additionally, there should be a