
sciann-applications's Issues

AttributeError: module 'keras.backend' has no attribute 'get_graph'

Hello, I'm testing out SciANN. I'm running my code on Google Colab, which uses Python 3.10.12.

Error:
AttributeError: module 'keras.backend' has no attribute 'get_graph'

What's the workaround for this?

I've seen some responses asking me to downgrade my Python version; is that the only fix?
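
For what it's worth, the workaround most often suggested in these threads is pinning an older TensorFlow/Keras rather than downgrading Python. A minimal sketch for a Colab cell follows; the version number is an assumption, not an official compatibility statement, so check the SciANN README for the releases it was actually tested against:

!pip install -q "tensorflow==2.10.*" sciann   # Colab cell magic; the pinned version is a guess
import tensorflow as tf
import sciann as sn
print(tf.__version__)   # confirm which version was actually picked up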

Running Error, tf v2.4.0

Since the introduction of the new version of TensorFlow (2.4.0) almost 16 days ago, there has been a problem with running SciANN:
AttributeError: module 'tensorflow' has no attribute 'python'

I have searched for a solution; some advise using a previous version of TensorFlow, and others advise removing the 'python' part of the import, changing
from
import tensorflow.python.keras.backend as K
to
import tensorflow.keras.backend as K

but even this solution brings up an error:
module 'tensorflow.keras.backend' has no attribute 'get_graph'

AttributeError: Can't set the attribute "name", likely because it conflicts with an existing read-only @property of the object. Please choose a different name

Python version: 3.6.8
TensorFlow version: 2.3
Keras version: 2.4.3

I was trying to run a few examples and faced a number of errors. The first one was:

File "C:\Users\AppData\Local\Programs\Python\Python36\lib\site-packages\sciann_init.py", line 6, in
from . import constraints
File "C:\Users\AppData\Local\Programs\Python\Python36\lib\site-packages\sciann\constraints_init_.py", line 7, in
from . import constraint
File "C:\Users\AppData\Local\Programs\Python\Python36\lib\site-packages\sciann\constraints\constraint.py", line 8, in
from ..utils import is_tensor
File "C:\Users\AppData\Local\Programs\Python\Python36\lib\site-packages\sciann\utils_init_.py", line 7, in
from . import math
File "C:\Users\AppData\Local\Programs\Python\Python36\lib\site-packages\sciann\utils\math.py", line 14, in
from .utilities import *
File "C:\Users\AppData\Local\Programs\Python\Python36\lib\site-packages\sciann\utils\utilities.py", line 16, in
from keras.backend import is_tensor
ImportError: cannot import name 'is_tensor'_

I bypassed this error by checking the TensorFlow documentation, which uses is_keras_tensor instead of is_tensor (a sketch of that substitution is given after the traceback below). Later I faced the following error:

"C:\Users\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\keras\engine\base_layer.py", line 2767, in setattr
'different name.').format(name))
AttributeError: Can't set the attribute "name", likely because it conflicts with an existing read-only @Property of the object. Please choose a different name.
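
For reference, a minimal sketch of the substitution mentioned above (this edits the installed package locally and is a stopgap under the assumption that is_keras_tensor is an acceptable stand-in, not an official fix): in sciann/utils/utilities.py, replace the failing import:

# old: from keras.backend import is_tensor
from tensorflow.keras.backend import is_keras_tensor as is_tensor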

sciann_datagenerator.py generates more samples than asked?

Dear Ehsan,

Thanks a lot for your fantastic work!

Just a small remark: I didn't really understand why sciann_datagenerator.py generates more samples than requested when I call the DataGeneratorXY or DataGeneratorXYT class.
I think there is a problem in the generate_data function, specifically with the generated data for the last boundary condition (top edge), and I would replace

x_bc_top = np.random.uniform(self.Xdomain[0], self.Xdomain[1], num_sample-num_sample_per_edge)
y_bc_top = np.full(num_sample-num_sample_per_edge, self.Ydomain[1])

by
x_bc_top = np.random.uniform(self.Xdomain[0], self.Xdomain[1], num_sample_per_edge)
y_bc_top = np.full(num_sample_per_edge, self.Ydomain[1])

for both classes.

Thanks

PS: Has anyone already tried to add a parametric variable as an input of the PINN (material properties, for example, in order to create "a parametric solution")? If so, I would be interested in discussing it!

Florian

"tuple" object does not support assignment

Dear Dr. @ehsanhaghighat,
Thank you for sharing the DataGenerator class. I have a question about a one-dimensional time-dependent problem with Dirichlet and Neumann boundary conditions. As I understand the DataGenerator class, by default all of these boundary and initial conditions are set to zero. If we have nonzero boundary or initial conditions (e.g., equal to a function of time or space), how can we change the associated target data from zero to function(input_data)?
I tried to index into the desired tuple and assign this function to it, but tuples are immutable, and I don't have the foggiest idea how to get around that.
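
For reference, one way around the immutability is to rebuild the target entry rather than assign into it. The sketch below rests on assumptions, not on documented API: it assumes the generator returns a list of (indices, 'zeros') target entries, as sciann_datagenerator.py appears to, and that SciModel.train accepts (indices, values) tuples; dg, k and g_bc are illustrative names.

import numpy as np

input_data, target_data = dg.get_data()          # dg: a DataGenerator instance
x_all, t_all = input_data[0], input_data[1]

ids_bc, _ = target_data[k]                       # k: position of the Dirichlet-BC target
g_vals = g_bc(x_all[ids_bc], t_all[ids_bc])      # g_bc(x, t): the nonzero boundary function

target_data = list(target_data)                  # make the container mutable
target_data[k] = (ids_bc, g_vals.reshape(-1, 1)) # build a new tuple instead of mutating the old one

m.train(input_data, target_data, epochs=5000)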

Diffusion problem

Dear Ehsan,
Is SciANN capable of solving a system of coupled partial differential equations for diffusion problems in which there are third-order derivatives?
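
For reference, a minimal sketch of forming a third-order derivative with SciANN's diff, assuming the order argument behaves for order=3 as it does for order=2 in the documented examples; the residual below is purely illustrative, not a specific diffusion model:

import sciann as sn
from sciann.utils.math import diff

x = sn.Variable('x')
t = sn.Variable('t')
u = sn.Functional('u', [x, t], 4*[20], 'tanh')

u_t = diff(u, t)
u_xxx = diff(u, x, order=3)        # third-order spatial derivative
L = u_t + u*diff(u, x) + u_xxx     # illustrative KdV-like residual
m = sn.SciModel([x, t], [L])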

Elastoplasticity - type error in training section

Firstly, thank you for sharing this comprehensive work on PINNs. A type error occurs when executing the following script in the training section of the elastoplasticity model:

fig, ax= plt.subplots(1,2, figsize=(8, 3))
loss_val = history.history["loss"]/history.history["loss"][0]
ax[0].semilogy(loss_val)
ax[0].set_xlabel('epochs')
ax[0].set_ylabel('$\\mathcal{L}/\\mathcal{L}_0$')
ax[1].semilogy(np.linspace(0, t, loss_val.size), loss_val)
ax[1].set_xlabel('time (s)')
plt.show()

TypeError: unsupported operand type(s) for /: 'list' and 'float'
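
For reference, the division fails because history.history["loss"] is a plain Python list; a minimal sketch of the usual fix is to convert it to a NumPy array first:

import numpy as np

loss = np.array(history.history["loss"])   # list -> array
loss_val = loss / loss[0]                  # element-wise normalization now works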

I am sharing the screenshot of the error below:

[screenshot of the error]

I hope you can help me out with this error.
Thanks for your time and consideration.

Problems running SciANN-SolidMechanics.py & SciANN-SolidMechanics-BCs.py

First of all, thank you for the comprehensive work on physics-informed neural networks. When executing the above-mentioned scripts, the following error occurs:

Traceback (most recent call last):
  File "C:\Users\neuma\Documents\neumann\2_Forschung\KRITIS\Projektideen\5 PINN für hybride digitale Zwillinge\Bearbeitung\sciann-applications-master\venv\lib\site-packages\tensorflow\python\ops\math_ops.py", line 1378, in binary_op_wrapper
    out = r_op(x)
  File "C:\Users\neuma\Documents\neumann\2_Forschung\KRITIS\Projektideen\5 PINN für hybride digitale Zwillinge\Bearbeitung\sciann-applications-master\venv\lib\site-packages\tensorflow\python\ops\variables.py", line 1076, in _run_op
    return tensor_oper(a.value(), *args, **kwargs)
  File "C:\Users\neuma\Documents\neumann\2_Forschung\KRITIS\Projektideen\5 PINN für hybride digitale Zwillinge\Bearbeitung\sciann-applications-master\venv\lib\site-packages\tensorflow\python\ops\math_ops.py", line 1399, in r_binary_op_wrapper
    y, x = maybe_promote_tensors(y, x)
  File "C:\Users\neuma\Documents\neumann\2_Forschung\KRITIS\Projektideen\5 PINN für hybride digitale Zwillinge\Bearbeitung\sciann-applications-master\venv\lib\site-packages\tensorflow\python\ops\math_ops.py", line 1335, in maybe_promote_tensors
    ops.convert_to_tensor(tensor, dtype, name="x"))
  File "C:\Users\neuma\Documents\neumann\2_Forschung\KRITIS\Projektideen\5 PINN für hybride digitale Zwillinge\Bearbeitung\sciann-applications-master\venv\lib\site-packages\tensorflow\python\profiler\trace.py", line 163, in wrapped
    return func(*args, **kwargs)
  File "C:\Users\neuma\Documents\neumann\2_Forschung\KRITIS\Projektideen\5 PINN für hybride digitale Zwillinge\Bearbeitung\sciann-applications-master\venv\lib\site-packages\tensorflow\python\framework\ops.py", line 1535, in convert_to_tensor
    (dtype.name, value.dtype.name, value))
ValueError: Tensor conversion requested dtype float32 for Tensor with dtype float64: <tf.Tensor 'add_1/Cast:0' shape=(1,) dtype=float64>

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:/Users/neuma/Documents/neumann/2_Forschung/KRITIS/Projektideen/5 PINN für hybride digitale Zwillinge/Bearbeitung/sciann-applications-master/SciANN-SolidMechanics/SciANN-SolidMechanics.py", line 333, in <module>
    train()
  File "C:/Users/neuma/Documents/neumann/2_Forschung/KRITIS/Projektideen/5 PINN für hybride digitale Zwillinge/Bearbeitung/sciann-applications-master/SciANN-SolidMechanics/SciANN-SolidMechanics.py", line 166, in train
    C11 = (2*lame2 + lame1)
  File "C:\Users\neuma\Documents\neumann\2_Forschung\KRITIS\Projektideen\5 PINN für hybride digitale Zwillinge\Bearbeitung\sciann-applications-master\venv\lib\site-packages\sciann\functionals\mlp_functional.py", line 358, in __add__
    return math.add(self, other)
  File "C:\Users\neuma\Documents\neumann\2_Forschung\KRITIS\Projektideen\5 PINN für hybride digitale Zwillinge\Bearbeitung\sciann-applications-master\venv\lib\site-packages\sciann\utils\math.py", line 227, in add
    outputs = _apply_operation(lmbd, f, other),
  File "C:\Users\neuma\Documents\neumann\2_Forschung\KRITIS\Projektideen\5 PINN für hybride digitale Zwillinge\Bearbeitung\sciann-applications-master\venv\lib\site-packages\sciann\utils\math.py", line 520, in _apply_operation
    outputs = [l([x, y]) for l, x, y in zip(lambda_layer, lhs.outputs, rhs.outputs)]
  File "C:\Users\neuma\Documents\neumann\2_Forschung\KRITIS\Projektideen\5 PINN für hybride digitale Zwillinge\Bearbeitung\sciann-applications-master\venv\lib\site-packages\sciann\utils\math.py", line 520, in <listcomp>
    outputs = [l([x, y]) for l, x, y in zip(lambda_layer, lhs.outputs, rhs.outputs)]
  File "C:\Users\neuma\Documents\neumann\2_Forschung\KRITIS\Projektideen\5 PINN für hybride digitale Zwillinge\Bearbeitung\sciann-applications-master\venv\lib\site-packages\keras\engine\base_layer_v1.py", line 765, in __call__
    outputs = call_fn(cast_inputs, *args, **kwargs)
  File "C:\Users\neuma\Documents\neumann\2_Forschung\KRITIS\Projektideen\5 PINN für hybride digitale Zwillinge\Bearbeitung\sciann-applications-master\venv\lib\site-packages\keras\layers\core.py", line 903, in call
    result = self.function(inputs, **kwargs)
  File "C:\Users\neuma\Documents\neumann\2_Forschung\KRITIS\Projektideen\5 PINN für hybride digitale Zwillinge\Bearbeitung\sciann-applications-master\venv\lib\site-packages\sciann\utils\math.py", line 219, in <lambda>
    lmbd = [Lambda(lambda x: x[0]+x[1], name=graph_unique_name("add")) for X in f.outputs]
  File "C:\Users\neuma\Documents\neumann\2_Forschung\KRITIS\Projektideen\5 PINN für hybride digitale Zwillinge\Bearbeitung\sciann-applications-master\venv\lib\site-packages\tensorflow\python\ops\math_ops.py", line 1383, in binary_op_wrapper
    raise e
  File "C:\Users\neuma\Documents\neumann\2_Forschung\KRITIS\Projektideen\5 PINN für hybride digitale Zwillinge\Bearbeitung\sciann-applications-master\venv\lib\site-packages\tensorflow\python\ops\math_ops.py", line 1367, in binary_op_wrapper
    return func(x, y, name=name)
  File "C:\Users\neuma\Documents\neumann\2_Forschung\KRITIS\Projektideen\5 PINN für hybride digitale Zwillinge\Bearbeitung\sciann-applications-master\venv\lib\site-packages\tensorflow\python\util\dispatch.py", line 206, in wrapper
    return target(*args, **kwargs)
  File "C:\Users\neuma\Documents\neumann\2_Forschung\KRITIS\Projektideen\5 PINN für hybride digitale Zwillinge\Bearbeitung\sciann-applications-master\venv\lib\site-packages\tensorflow\python\ops\math_ops.py", line 1700, in _add_dispatch
    return gen_math_ops.add_v2(x, y, name=name)
  File "C:\Users\neuma\Documents\neumann\2_Forschung\KRITIS\Projektideen\5 PINN für hybride digitale Zwillinge\Bearbeitung\sciann-applications-master\venv\lib\site-packages\tensorflow\python\ops\gen_math_ops.py", line 464, in add_v2
    "AddV2", x=x, y=y, name=name)
  File "C:\Users\neuma\Documents\neumann\2_Forschung\KRITIS\Projektideen\5 PINN für hybride digitale Zwillinge\Bearbeitung\sciann-applications-master\venv\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 558, in _apply_op_helper
    inferred_from[input_arg.type_attr]))
TypeError: Input 'y' of 'AddV2' Op has type float32 that does not match type float64 of argument 'x'.

It seems that there are some problems due to package dependencies and version incompatibilities. I am working on Windows with TensorFlow 2.6.2 and Python 3.6.8.

Do you have a suggestion on how to solve this problem? Let me know if you need any further information.
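
For reference, a commonly suggested mitigation for float32/float64 mismatches like the one above; this is an assumption, not a confirmed fix for these specific scripts: force a single floating-point type everywhere so that Keras layers, SciANN variables and constants agree (variable names below are illustrative).

import tensorflow as tf
import sciann as sn

tf.keras.backend.set_floatx('float64')   # make Keras layers default to float64
x = sn.Variable('x', dtype='float64')    # keep SciANN variables on the same dtype
y = sn.Variable('y', dtype='float64')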

Kind regards

Deep energy method

Hello Ehsan,

Are there any deep energy method examples in SciANN?

Best,
Libo

Difficulty in Setting Initial Conditions

Hello,

I have been trying to write a code to solve a reaction-diffusion type problem. I wanted to compare deepXDE and SciANN but was having difficulty getting my initial conditions set up in SciANN.

In deepXDE, I can use

rho0*tf.cast(tf.math.greater(((x-x_center_scaled)**2)/(x_axis_scaled**2),1),tf.float64)

to set a circle (a line segment in 1D for now) in the center of the domain to zero while keeping everything else at a constant value. The result is shown below (sorry for the unlabeled axes; x runs from 0 to 75 and t from 0 to 10).

[plot of the resulting solution]

In SciANN, this doesn't seem to work, since Functionals cannot be acted on by TF operations. Following the Burgers equation example, I tried using

0.25*(1 - sign(tt - TOL)) * ((1 - sign(x - (left_boundary+TOL))) + (1 + sign(x - (right_boundary-TOL)))) * ( rho0)

as well as

0.5*(1 - sign(t - TOL)) * rho0 * sign(((x-x_center)**2)/(x_axis**2))

In NumPy, both of these give the correct shape:

[plot of the NumPy evaluation]

I also tried using

0.5*(1 - sign(t - TOL)) * rho0 * tanh(((x-x_center)**2)/(x_axis**2))

which should give a smoother boundary. In all cases the whole domain appears as solid rho0, with no region of 0 in the center, as shown below.

[plot showing the whole domain at rho0]

I also note that the initial condition loss seems to immediately get stuck at a particular value.

[loss history plot]

So I am wondering: is there a problem with the way I have set up my initial/boundary conditions, is there a way to use TF operations or a "greater than" function in SciANN, or is there another way to set up these conditions?
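
One observation, offered as a hedged sketch rather than a confirmed diagnosis: sign(((x-x_center)**2)/(x_axis**2)) is never negative, so it cannot carve out a zero region. A "greater than" indicator can instead be emulated with sign() as 0.5*(1 + sign(a - b)), mirroring the DeepXDE expression above; rho below is a placeholder for the field Functional, and x_center, x_axis, rho0 and TOL are the names used in the question.

from sciann.utils.math import sign

# 1 where (x - x_center)^2 / x_axis^2 > 1 (outside the segment), 0 inside
outside = 0.5 * (1 + sign(((x - x_center)**2) / (x_axis**2) - 1.0))

# initial-condition constraint sketch: at t ~ 0, push rho toward rho0 outside and 0 inside
C_ic = (1 - sign(t - TOL)) * (rho - rho0 * outside)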

Thanks,
David

The eval in SciModel and Functional in the Burgers equation example

In this example, the evaluation section says:

"
There are two ways to evaluate the functionals.
1- eval without passing the model:
Not suggested - the data should be provided in the same order as it was defined in the Functional.
u.eval(m, [t_data, x_data])

2- eval with model (suggested):
the data should be provided in the same order as it was defined in the SciModel.
u.eval(m, [x_data, t_data])
"

I think that in the first item (the Functional), the code should be u.eval(m, [x_data, t_data]), with x first and t second, because when we construct the model as m = sn.SciModel([x, t], [L1, C1, C2, C3]), the order is [x, t].

And then, for the second item (the SciModel), why is the code the same as the previous one? Why not u_pred = m.predict([x_test_line, t_test_line]) or u_pred = m.eval([x_test, t_test])? Since it says to use the eval in the SciModel, why still call u.eval (the eval in the Functional)?

AttributeError: 'GeneratorWrapper' object has no attribute 'shape'

Dear Ehsan Haghighat,

Thanks for your contribution towards making SciANN understandable. I am new to neural networks (and to computer science itself!). I have tried implementing the code given in the example to understand the results and get the gist of the work. Unfortunately, I keep getting an attribute error while trying to train the model. My code is implemented in Spyder and is as follows:

import numpy as np
import matplotlib.pyplot as plt
import sciann as sn
from numpy import pi
from sciann.utils.math import diff, sign, sin

x = sn.Variable('x')
t = sn.Variable('t')
u = sn.Functional('u', [x, t], 8*[20], 'tanh')

L1 = diff(u, t) + u*diff(u, x) - (0.01/pi)*diff(u, x, order=2)

TOL = 0.001
C1 = (1 - sign(t - TOL)) * (u + sin(pi*x))
C2 = (1 - sign(x - (-1 + TOL))) * (u)
C3 = (1 + sign(x - ( 1 - TOL))) * (u)

m = sn.SciModel([x, t], [L1, C1, C2, C3])
x_data, t_data = np.meshgrid(np.linspace(-1, 1, 100), np.linspace(0, 1, 100))
h = m.train([x_data, t_data], 4*['zero'], learning_rate=0.002, epochs=5000, verbose=0)

x_test, t_test = np.meshgrid(np.linspace(-1, 1, 200), np.linspace(0, 1, 200))
u_pred = u.eval(m, [x_test, t_test])

fig = plt.figure(figsize=(3, 4))
plt.pcolor(x_test, t_test, u_pred, cmap='seismic')
plt.xlabel('x')
plt.ylabel('t')
plt.colorbar()

I am sharing the screenshot of the error below:
[screenshots of the error]

I hope you can help me out with this.

Thanks for your time and consideration.

Burger's Inversion not working properly

Motivated by Raissi's PINN model and the provided data for the exact solution of the Burgers equation, I want to implement the inversion/identification problem in SciANN. Following and adapting the provided SciANN example for Navier-Stokes inversion, I came up with the attached code for Burgers. However, the network yields terrible results: lambda1 = 0.03 and lambda2 = 4e-05 (using the Adam optimizer, 10000 epochs, and lr = 0.001), which is far from the exact lambda1 = 1 and lambda2 = 0.01/pi.

Aside from the difference that Raissi uses an L-BFGS-B optimizer, there must be some problem with my code or data reshaping that I do not see.

Here's my code:

import numpy as np
import sciann as sn
import matplotlib.pyplot as plt
import scipy.io

def prepData(n):
    # Import data from Raissi
    data = scipy.io.loadmat('burgers_shock.mat')
    U_star = data['usol']
    t_star = data['t']
    X_star = data['x']

    # Dimensions
    N = X_star.shape[0]
    T = t_star.shape[0]

    # Reshape
    xx = np.tile(X_star[:, 0:1], (1, T))  # N x T
    tt = np.tile(t_star, (1, N))          # N x T

    # Randomly pick n exact-solution samples out of 256x100 = 25600
    idx = np.random.choice(N*T, n, replace=False)

    x = xx.flatten()
    t = tt.flatten()
    u = U_star.flatten()
    return (x, t, u, idx)

#Generate Data
x_train, t_train, u_train, ids = prepData(2000)
input_data = [x_train[ids], t_train[ids]]

sample_u_ex = u_train[ids]
sample_u_ex = sample_u_ex.reshape(-1,1)

u_train.reshape(-1,1)
x = sn.Variable("x", dtype='float64')
t = sn.Variable("t", dtype='float64')

u = sn.Functional("u", [x,t], 8*[20], 'tanh')

lambda1 = sn.Parameter(val = 0, inputs=[x,t], name="lambda1")
lambda2 = sn.Parameter(val = -6.0, inputs=[x,t], name="lambda2")

#Gradient Layer
u_t = sn.utils.grad(u,t)
u_x = sn.utils.grad(u,x)
u_xx = sn.utils.grad(u,x, order=2)

#PINN
Loss_f = u_t + lambda1* u * u_x - lambda2*u_xx
pinn = sn.SciModel(inputs=[x,t], targets=[u,Loss_f], loss_func="mse", optimizer="Adam")

pinn.train(
    input_data,
    [sample_u_ex, 'zeros'],
    learning_rate=0.001,
    epochs=10000
)

print("lambda1: {}, lambda2: {}".format(lambda1.value, lambda2.value))

Any help is very much appreciated!

Run error

I tried to run the cases you provided, but all of them encountered errors, as shown below. I don't know where the problem is; can you help me?
[screenshot of the error]

The meaning of the data returned by "get_weights"

Dear community,

I am happy to have found SciANN, such an amazing tool. I have a question that needs your patient help.

When I run the following code:

f=sn.Functional(['f'],[u],[1],'linear')
weight_a=f.get_weights()
print(np.array(weight_a))

I get the results below, which confuse me.

[[[-0.85555252525252]
   [0.00018264590427]
[[[-0.80343422342870]
   [0.00342918346589]]]

I think Python should send back 2 numbers, as I set 1 input, 1 output, and used 1 neuron, yet it sent back 4 numbers.
So what do these numbers mean? And if I want to set specific weights and biases for a neuron, how should I do that?
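
For reference, a likely reading, offered as an assumption about how the Functional is built rather than a documented fact: one hidden layer of width 1 plus a linear output layer gives Keras-style weights of four arrays in total (hidden kernel, hidden bias, output kernel, output bias), which is why four numbers come back. A sketch of writing values back, assuming a matching set_weights() is exposed:

import numpy as np

w = f.get_weights()
print(w)   # hidden kernel, hidden bias, output kernel, output bias

# Rebuild the same structure with new numbers and write it back. set_weights()
# is an assumption mirroring get_weights(); the helper just fills every array
# with a constant, purely for illustration.
def fill(ws, value):
    return [fill(a, value) if isinstance(a, list)
            else np.full_like(np.asarray(a, dtype=float), value)
            for a in ws]

f.set_weights(fill(w, 0.5))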

Thanks a lot
