ansys / pytwin
Ansys Digital Twin repository
Home Page: https://twin.docs.pyansys.com
License: MIT License
PyTwin does not support being imported by multiple Python processes. This is due to the way the logging and working directory are set up by `pytwin.settings` when `pytwin` is imported in a given Python process.
The feature would be to let users import `pytwin` in multiple Python processes running at the same time and call any public method from the `pytwin.settings` module in that context.
One possible quick solution would be to add the Python process ID to the working directory path (see `pytwin.settings.PYTWIN_SETTINGS.working_dir` and how `pytwin.evaluate.model.model_dir` works with it). One difficulty is handling the working directory migration when `pytwin.settings.modify_pytwin_working_dir` is called.
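The process-ID suggestion could be sketched as follows (the function name and directory layout are illustrative assumptions, not pytwin's actual API):

```python
import os
import tempfile
from pathlib import Path

def per_process_working_dir(base=None):
    # Hypothetical helper: append the Python process ID to the working
    # directory path so concurrent processes do not collide.
    base = Path(base or tempfile.gettempdir())
    return base / f"pytwin_{os.getpid()}"

wd = per_process_working_dir()
```

Each process would then log and write evaluation artifacts under its own directory, leaving the migration on `modify_pytwin_working_dir` as the remaining open question.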
This should be refined based on the contexts where this issue has been encountered:
- To be described (@ansJBoly could you please do so? Thanks!)
- To be described (@EDCarman could you please do so? Thanks!)
The `named_selection` argument is the first argument for the function and is not optional. This is inconsistent with other similar functions, e.g. `generate_snapshot()`.
Additionally, the docstring lists `named_selection` as the third argument and optional.
```python
from pytwin import TwinModel

model1 = TwinModel('model.twin')
model1.initialize_evaluation()
romname = model1.tbrom_names[0]
nslist = model1.get_named_selections(romname)
# Generate a snapshot
fieldresults = model1.generate_snapshot(romname, False, nslist[0])
# Try to generate points with the same arguments
points = model1.generate_points(romname, False, nslist[0])
```
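A signature consistent with `generate_snapshot()` might look like the following sketch (the argument names and order here are illustrative assumptions, not pytwin's actual API):

```python
# Illustrative sketch only: named_selection comes last and is optional,
# mirroring the generate_snapshot() calling convention.
def generate_points(rom_name, on_disk, named_selection=None):
    return (rom_name, on_disk, named_selection)

# Callable with or without a named selection:
whole_field = generate_points("rom0", False)
one_ns = generate_points("rom0", False, "ns1")
```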
Windows
3.9
colorama==0.4.6
contourpy==1.1.0
coverage==7.2.7
cycler==0.11.0
exceptiongroup==1.1.2
fonttools==4.41.0
importlib-metadata==6.8.0
importlib-resources==6.0.0
iniconfig==2.0.0
kiwisolver==1.4.4
matplotlib==3.7.2
numpy==1.25.1
packaging==23.1
pandas==2.0.3
Pillow==10.0.0
pluggy==1.2.0
pyparsing==3.0.9
pytest==7.4.0
pytest-cov==4.1.0
python-dateutil==2.8.2
-e git+https://github.com/pyansys/pytwin@ec3594c2f4f343d36dfa12bb35e6e6e78fffc638#egg=pytwin
pytz==2023.3
pywin32==306
six==1.16.0
tomli==2.0.1
tzdata==2023.3
zipp==3.16.1
When instantiating a TwinModel in a loop, the allocated memory keeps increasing.
```python
import os
import tracemalloc

from pytwin import TwinModel

tracemalloc.start()
snapshot = tracemalloc.take_snapshot()
twin_file = os.path.join("path_to_a_twin_model_with_tbrom.twin")
for i in range(1, 100):
    twin_model = TwinModel(model_filepath=twin_file)
snapshot2 = tracemalloc.take_snapshot()
top_stats = snapshot2.compare_to(snapshot, 'lineno')
stat_tbrom_read_basis = top_stats[0]
print(stat_tbrom_read_basis)
```
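The same `tracemalloc` comparison pattern can be reproduced without pytwin; in this stand-alone sketch, a growing list of `bytearray` objects stands in for the repeated `TwinModel` allocation:

```python
import tracemalloc

tracemalloc.start()
before = tracemalloc.take_snapshot()

leaked = []
for _ in range(100):
    # Stand-in for the repeated TwinModel instantiation that keeps memory alive.
    leaked.append(bytearray(10_000))

after = tracemalloc.take_snapshot()
# Statistics are sorted by the magnitude of the allocation difference,
# so the leaking source line appears first.
top_stats = after.compare_to(before, 'lineno')
print(top_stats[0])
```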
Windows
NA
3.9
certifi==2023.7.22
charset-normalizer==3.3.2
colorama==0.4.6
coverage==7.3.2
cycler==0.11.0
exceptiongroup==1.1.3
fonttools==4.38.0
idna==3.4
imageio==2.31.2
iniconfig==2.0.0
kiwisolver==1.4.5
matplotlib==3.5.3
numpy==1.21.1
packaging==23.2
pandas==1.3.5
Pillow==9.5.0
platformdirs==4.0.0
pluggy==1.3.0
pooch==1.8.0
pyparsing==3.1.1
pytest==7.4.3
pytest-cov==4.1.0
python-dateutil==2.8.2
-e git+https://github.com/ansys/pytwin@16cf78a84ed27a2881d0c7781a8afe66750a76e9#egg=pytwin
pytz==2023.3.post1
pyvista==0.38.6
pywin32==306
requests==2.31.0
scooby==0.7.3
six==1.16.0
tomli==2.0.1
tqdm==4.66.1
urllib3==2.0.7
vtk==9.3.0
Add option to read input snapshots from memory, rather than disk
For much the same reasons why we may prefer output snapshots in memory, rather than disk, we should consider providing input snapshots from memory. This will reduce disk I/O in cases where we may connect multiple models, all with vector inputs and outputs.
One potential implementation would be to modify the `TbRom._reduce_field_input()` method and replace the `snapshot_filepath` argument with a general `snapshot` argument that could be either a string or `Path`, to read from disk, or an array or list of vector values.
Optionally, add an `on_disk: bool` argument to indicate explicitly whether to treat the input as a `Path` or as a vector. If `on_disk` is true, read the binary file as before; if not, skip reading and use the vector directly (converting a list to `np.ndarray` if needed).
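The path-or-vector dispatch could be sketched as below (the helper name and the assumed flat-float64 on-disk layout are illustrative, not pytwin's actual snapshot format):

```python
from pathlib import Path

import numpy as np

def load_field_input(snapshot):
    """Hypothetical helper: accept a snapshot as a file path or in memory.

    The binary layout assumed here (a flat float64 vector) is illustrative
    only; pytwin's real snapshot format may differ.
    """
    if isinstance(snapshot, (str, Path)):
        return np.fromfile(snapshot, dtype=np.float64)
    # In-memory case: no disk I/O, just normalize to an ndarray.
    return np.asarray(snapshot, dtype=np.float64)

vec = load_field_input([1.0, 2.0, 3.0])  # in-memory input, no disk round trip
```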
Currently, when a twin model is instantiated, it takes the global PyTwin log level: the twin runtime is instantiated with `log_level=self._get_runtime_log_level()`.
It would be desirable to have the option to assign log levels on a per-instance basis. This is already possible at the core runtime level.
The current behaviour is problematic in the following case:
Certain twin models can produce extremely verbose logs at levels of INFO or higher, for example containing detailed solver convergence logs for every time step. These messages quickly overwhelm any log messages from other parts of the code.
One workaround is to use a less verbose log level such as WARNING, but then we lose INFO and DEBUG messages from other parts of the code.
Alternatively, we can use `modify_pytwin_logging()` to temporarily change the global log level to the desired level before instantiating the twin model, e.g.:

```python
current_level = pytwin.settings.get_pytwin_log_level()
pytwin.modify_pytwin_logging(new_option=None, new_level='INFO')
twin_model = pytwin.TwinModel(model_path)
pytwin.modify_pytwin_logging(new_option=None, new_level=current_level)
```
We could add an optional `log_level` argument to the `TwinModel` `__init__()` method. This can then be propagated down to `_instantiate_twin_model()` and used when instantiating `TwinRuntime`.
The default value for the argument could be set to retain the current behaviour.
Something along the lines of:
```python
class TwinModel(Model):
    ...
    def __init__(self, model_filepath: str, log_level=None):
        ...
        self._instantiate_twin_model(log_level)
        ...

    def _instantiate_twin_model(self, log_level=None):
        ...
        try:
            if log_level is None:
                log_level = self._get_runtime_log_level()
            ...
            # Instantiate twin runtime
            self._twin_runtime = TwinRuntime(
                model_path=self._model_filepath,
                load_model=True,
                log_path=self.model_log,
                log_level=log_level,
            )
```
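The per-instance idea can be illustrated without pytwin using the standard library `logging` module, where each logger's level can differ from the global one (logger names here are illustrative):

```python
import logging

global_level = logging.INFO

# A chatty model gets its own, stricter level...
verbose_model_log = logging.getLogger("twin.chatty_model")
verbose_model_log.setLevel(logging.WARNING)  # drops per-time-step INFO spam

# ...while the rest of the code keeps the global level.
other_log = logging.getLogger("twin.other_code")
other_log.setLevel(global_level)
```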
Hi all!
As per @MaxJPRey's request, I tried to add `pytwin` to the `pyansys` metapackage repository, so that it is distributed as part of our bundle package.
However, we encountered some issues in the process. It seems that `pytwin` does not support Python 3.7. This is a requirement for the metapackage, since we ensure that all currently maintained Python versions keep being supported, which means Python 3.7 to 3.10. In fact, we are in the process of supporting Python 3.11 as well, but this may take a while.
Would it be possible to make `pytwin` support Python 3.7? I can help you out implementing the needed changes. Thanks in advance!
https://nvd.nist.gov/vuln/detail/CVE-2022-37434
pip install pytwin
Scan the resulting venv. The following vulnerability is found from the use of madler-zlib 1.2.12:
https://nvd.nist.gov/vuln/detail/CVE-2022-37434
Windows
N/A
3.10
numpy==1.26.1
pandas==2.1.1
python-dateutil==2.8.2
pytwin==0.5.0
pytz==2023.3.post1
pywin32==306
six==1.16.0
tzdata==2023.3
Running the 3D Field ROM example, I am facing some errors related to the point-cloud-based ROM viewer embedded in the twin runtime.
When the model is initialised and the first output is updated with the following code:

```python
twin_model = TwinModel(twin_file)
twin_model.initialize_evaluation(inputs=rom_inputs, parameters=rom_parameters)
rom_name = twin_model.tbrom_names[0]
rom_directory = twin_model.get_rom_directory(rom_name)
snapshot = twin_model.get_snapshot_filepath(rom_name)
geometry = twin_model.get_geometry_filepath(rom_name)
temperature_file = snapshot_to_cfd(snapshot, geometry, "temperature", os.path.join(rom_directory, "cfd_file.ip"))
```
the following errors are logged:
```
dlopen1:libEGL.so.1: cannot open shared object file: No such file or directory
dlopen2:./RomViewerSharedLib.so: cannot open shared object file: No such file or directory
```

I can also find the following lines logged in the model log file:

```
[INFO] [ThermalTBROM_23R1SP1] [EXTME2][ThermalROM23R1_1][ThermalROM23R1_1][ThermalROM23R1_1] SetupExperiment [t = 0s] [Aug 16, 2023 02:50:04 PM]
[INFO] [ThermalTBROM_23R1SP1] [ME2FMU][ThermalROM23R1_1][ThermalROM23R1_1] fmi2EnterInitializationMode [t = 0s] [Aug 16, 2023 02:50:04 PM]
[INFO] [ThermalTBROM_23R1SP1] [ME2FMU][ThermalROM23R1_1][ThermalROM23R1_1] RomViewerSharedLib instance:00000000 [t = 0s] [Aug 16, 2023 02:50:04 PM]
[WARNING] [ThermalTBROM_23R1SP1] [ME2FMU][ThermalROM23R1_1][ThermalROM23R1_1] Cannot Load RomViewer Library => unable to render figures... [t = 0s] [Aug 16, 2023 02:50:04 PM]
```
As a consequence, no image file is generated in the ROM directory and the following cell fails to display the image:
```python
view_name = twin_model.get_available_view_names(rom_name)[0]
image_filepath = twin_model.get_image_filepath(rom_name, view_name)
plt.imshow(img.imread(image_filepath))
plt.show()
```

I suppose that some `.so` libraries are missing, and I was expecting them to be installed with the `pytwin` package, as this ROM viewer should be embedded in the runtime.
In a fresh environment, i.e. without any specific library added to the classpath, install pytwin and then run the 3D Field ROM example.
Linux
0.3.0
3.10
absl-py==1.0.0
accelerate==0.19.0
aiohttp==3.8.4
aiosignal==1.3.1
appdirs==1.4.4
argon2-cffi==21.3.0
argon2-cffi-bindings==21.2.0
astor==0.8.1
asttokens==2.2.1
astunparse==1.6.3
async-timeout==4.0.2
attrs==21.4.0
audioread==3.0.0
azure-core==1.27.1
azure-cosmos==4.3.1b1
azure-storage-blob==12.17.0b1
azure-storage-file-datalake==12.11.0
backcall==0.2.0
bcrypt==3.2.0
beautifulsoup4==4.11.1
black==22.6.0
bleach==4.1.0
blinker==1.4
blis==0.7.9
boto3==1.24.28
botocore==1.27.28
cachetools==4.2.4
catalogue==2.0.8
category-encoders==2.6.0
certifi==2022.9.14
cffi==1.15.1
chardet==4.0.0
charset-normalizer==2.0.4
click==8.0.4
cloudpickle==2.0.0
cmdstanpy==1.1.0
confection==0.0.4
configparser==5.2.0
convertdate==2.4.0
cryptography==37.0.1
cycler==0.11.0
cymem==2.0.7
Cython==0.29.32
dacite==1.8.1
databricks-automl-runtime==0.2.16
databricks-cli==0.17.7
databricks-feature-store==0.13.5
databricks-sdk==0.1.6
dataclasses-json==0.5.8
datasets==2.12.0
dbl-tempo==0.1.23
dbus-python==1.2.18
debugpy==1.5.1
decorator==5.1.1
defusedxml==0.7.1
dill==0.3.4
diskcache==5.6.1
distlib==0.3.6
distro==1.7.0
distro-info===1.1build1
docstring-to-markdown==0.12
entrypoints==0.4
ephem==4.1.4
evaluate==0.4.0
executing==1.2.0
facets-overview==1.0.3
fastjsonschema==2.17.1
fasttext==0.9.2
filelock==3.6.0
Flask @ https://databricks-build-artifacts-manual-staging.s3.amazonaws.com/flask/Flask-1.1.2%2Bdb1-py2.py3-none-any.whl?AWSAccessKeyId=AKIAX7HWM34HCSVHYQ7M&Expires=2001354391&Signature=bztIumr2jXFbisF0QicZvqbvT9s%3D
flatbuffers==23.5.26
fonttools==4.25.0
frozenlist==1.3.3
fsspec==2022.7.1
future==0.18.2
gast==0.4.0
gitdb==4.0.10
GitPython==3.1.27
google-api-core==2.8.2
google-auth==1.33.0
google-auth-oauthlib==0.4.6
google-cloud-core==2.3.2
google-cloud-storage==2.9.0
google-crc32c==1.5.0
google-pasta==0.2.0
google-resumable-media==2.5.0
googleapis-common-protos==1.56.4
greenlet==1.1.1
grpcio==1.48.1
grpcio-status==1.48.1
gunicorn==20.1.0
gviz-api==1.10.0
h5py==3.7.0
holidays==0.25
horovod==0.28.0
htmlmin==0.1.12
httplib2==0.20.2
huggingface-hub==0.15.1
idna==3.3
ImageHash==4.3.1
imbalanced-learn==0.8.1
importlib-metadata==4.11.3
importlib-resources==5.12.0
ipykernel==6.17.1
ipython==8.10.0
ipython-genutils==0.2.0
ipywidgets==7.7.2
isodate==0.6.1
itsdangerous==2.0.1
jedi==0.18.1
jeepney==0.7.1
Jinja2==2.11.3
jmespath==0.10.0
joblib==1.2.0
joblibspark==0.5.1
jsonschema==4.16.0
jupyter-client==7.3.4
jupyter_core==4.11.2
jupyterlab-pygments==0.1.2
jupyterlab-widgets==1.0.0
keras==2.11.0
keyring==23.5.0
kiwisolver==1.4.2
korean-lunar-calendar==0.3.1
langchain==0.0.181
langcodes==3.3.0
launchpadlib==1.10.16
lazr.restfulclient==0.14.4
lazr.uri==1.0.6
lazy_loader==0.2
libclang==15.0.6.1
librosa==0.10.0
lightgbm==3.3.5
llvmlite==0.38.0
LunarCalendar==0.0.9
Mako==1.2.0
Markdown==3.3.4
MarkupSafe==2.0.1
marshmallow==3.19.0
marshmallow-enum==1.5.1
matplotlib==3.5.2
matplotlib-inline==0.1.6
mccabe==0.7.0
mistune==0.8.4
mleap==0.20.0
mlflow-skinny==2.4.2
more-itertools==8.10.0
msgpack==1.0.5
multidict==6.0.4
multimethod==1.9.1
multiprocess==0.70.12.2
murmurhash==1.0.9
mypy-extensions==0.4.3
nbclient==0.5.13
nbconvert==6.4.4
nbformat==5.5.0
nest-asyncio==1.5.5
networkx==2.8.4
ninja==1.11.1
nltk==3.7
nodeenv==1.8.0
notebook==6.4.12
numba==0.55.1
numexpr==2.8.4
numpy==1.21.5
oauthlib==3.2.0
openai==0.27.7
openapi-schema-pydantic==1.2.4
opt-einsum==3.3.0
packaging==21.3
pandas==1.4.4
pandocfilters==1.5.0
paramiko==2.9.2
parso==0.8.3
pathspec==0.9.0
pathy==0.10.1
patsy==0.5.2
petastorm==0.12.1
pexpect==4.8.0
phik==0.12.3
pickleshare==0.7.5
Pillow==9.2.0
platformdirs==2.5.2
plotly==5.9.0
pluggy==1.0.0
pmdarima==2.0.3
pooch==1.7.0
preshed==3.0.8
prometheus-client==0.14.1
prompt-toolkit==3.0.36
prophet==1.1.3
protobuf==3.19.4
psutil==5.9.0
psycopg2==2.9.3
ptyprocess==0.7.0
pure-eval==0.2.2
pyarrow==8.0.0
pyasn1==0.4.8
pyasn1-modules==0.2.8
pybind11==2.10.4
pycparser==2.21
pydantic==1.10.6
pyflakes==3.0.1
Pygments==2.11.2
PyGObject==3.42.1
PyJWT==2.3.0
PyMeeus==0.5.12
PyNaCl==1.5.0
pyodbc==4.0.32
pyparsing==3.0.9
pyright==1.1.294
pyrsistent==0.18.0
pytesseract==0.3.10
python-apt==2.4.0+ubuntu1
python-dateutil==2.8.2
python-editor==1.0.4
python-lsp-jsonrpc==1.0.0
python-lsp-server==1.7.1
pytoolconfig==1.2.2
pytwin==0.3.0
pytz==2022.1
PyWavelets==1.3.0
PyYAML==6.0
pyzmq==23.2.0
regex==2022.7.9
requests==2.28.1
requests-oauthlib==1.3.1
responses==0.18.0
rope==1.7.0
rsa==4.9
s3transfer==0.6.0
scikit-learn==1.1.1
scipy==1.9.1
seaborn==0.11.2
SecretStorage==3.3.1
Send2Trash==1.8.0
sentence-transformers==2.2.2
sentencepiece==0.1.99
shap==0.41.0
simplejson==3.17.6
six==1.16.0
slicer==0.0.7
smart-open==5.2.1
smmap==5.0.0
soundfile==0.12.1
soupsieve==2.3.1
soxr==0.3.5
spacy==3.5.3
spacy-legacy==3.0.12
spacy-loggers==1.0.4
spark-tensorflow-distributor==1.0.0
SQLAlchemy==1.4.39
sqlparse==0.4.2
srsly==2.4.6
ssh-import-id==5.11
stack-data==0.6.2
statsmodels==0.13.2
tabulate==0.8.10
tangled-up-in-unicode==0.2.0
tenacity==8.1.0
tensorboard==2.11.0
tensorboard-data-server==0.6.1
tensorboard-plugin-profile==2.11.2
tensorboard-plugin-wit==1.8.1
tensorflow-cpu==2.11.1
tensorflow-estimator==2.11.0
tensorflow-io-gcs-filesystem==0.32.0
termcolor==2.3.0
terminado==0.13.1
testpath==0.6.0
thinc==8.1.10
threadpoolctl==2.2.0
tiktoken==0.4.0
tokenize-rt==4.2.1
tokenizers==0.13.3
tomli==2.0.1
torch==1.13.1+cpu
torchvision==0.14.1+cpu
tornado==6.1
tqdm==4.64.1
traitlets==5.1.1
transformers==4.29.2
typeguard==2.13.3
typer==0.7.0
typing-inspect==0.9.0
typing_extensions==4.3.0
ujson==5.4.0
unattended-upgrades==0.1
urllib3==1.26.11
virtualenv==20.16.3
visions==0.7.5
wadllib==1.3.6
wasabi==1.1.2
wcwidth==0.2.5
webencodings==0.5.1
websocket-client==0.58.0
Werkzeug==2.0.3
whatthepatch==1.0.2
widgetsnbextension==3.6.1
wordcloud==1.9.2
wrapt==1.14.1
xgboost==1.7.5
xxhash==3.2.0
yapf==0.31.0
yarl==1.9.2
ydata-profiling==4.2.0
zipp==3.8.0
Add option to import/export snapshot as matrix
An obstacle to quick adoption of TBROMs in a workflow is the need for the user to create additional functions to convert data in tabular format (generally the most common import/export format for other 3D tools) to snapshot vectors and vice versa.
On the input side, there is the potential error of using column-major, rather than row-major, vectorisation, while for outputs, the dimensionality is not directly inferable from the vector and needs to be retrieved from the TBROM settings anyway.
E.g. an n x 3 matrix must be flattened to a 3n vector on input. On output, the dimensionality must be retrieved from `tbrom.field_output_dim` and then used to reshape the vector:
| x | y | z |
|---|---|---|
| x_1 | y_1 | z_1 |
| x_2 | y_2 | z_2 |
| ... | ... | ... |
| x_n | y_n | z_n |

flattens to

| x_1 |
|---|
| y_1 |
| ... |
| z_n |
Incorporating these functions simplifies user work and abstracts the snapshot format.
A possible option is to provide an additional boolean argument to `TbRom.generate_snapshot` to export as a matrix, e.g. `as_matrix`.
Input flattening could use `ndarray.flatten(order='C')` directly, explicitly using the row-major style. The boolean argument is not strictly necessary, since the structure can be inferred from the input shape. This would also require the ability to pass inputs in memory (see Issue #107).
Output reshaping can use the `numpy.reshape(vec, newshape, order='C')` function. The new shape's column count is retrieved from the TBROM `field_output_dim`, and the row count is calculated as `len(vec) / dim`.
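The flatten/reshape round trip can be sketched directly in NumPy (here `dim` stands in for the value of `tbrom.field_output_dim`):

```python
import numpy as np

mat = np.array([[1.0, 2.0, 3.0],
                [4.0, 5.0, 6.0]])  # n x 3 field matrix (n = 2 here)

# Input side: row-major flatten gives x_1, y_1, z_1, x_2, ...
vec = mat.flatten(order='C')

# Output side: column count from the field dimensionality,
# row count derived from the vector length.
dim = 3  # stand-in for tbrom.field_output_dim
restored = vec.reshape((len(vec) // dim, dim), order='C')
```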
In both cases, reading/writing matrices to disk need not necessarily be supported. The specific formats required may vary with upstream and downstream programs, so writing can be left to the user.
Input matrices would require implementation of Issue #107
It looks like, with the latest Python 3.7 support changes, Python 3.11 has been removed from the supported versions. Could we please add it back? Thanks in advance!
https://nvd.nist.gov/vuln/detail/CVE-2020-1971
pip install pytwin
Run vulnerability scan on resulting venv
Windows
none
3.10
numpy==1.26.1
pandas==2.1.1
python-dateutil==2.8.2
pytwin==0.5.0
pytz==2023.3.post1
pywin32==306
six==1.16.0
tzdata==2023.3
Include TBROM info in `TwinRuntime.print_model_info()` and expose it in `TwinModel` so it can be accessed from there.
`print_model_info()` provides a useful summary, in one place, of almost all twin model information, except for the TBROM information. It would be useful to have the TBROM info here too.
In addition, the information is still useful at the higher `TwinModel` level, but is currently only accessible through `self._twin_runtime`.
Exposure at the higher level could be as simple as adding an extra method to `TwinModel`:
```python
def print_model_info(self, max_var_to_print=np.inf):
    self._twin_runtime.print_model_info(max_var_to_print)
```
The TBROM info printing could be done within `TwinRuntime` or within `TwinModel`.