
pinns's People

Contributors

maziarraissi


pinns's Issues

Burgers equation PINN idea

In the Burgers equation, are we taking the data from the boundary to train the neural network? From what I understood in the paper, we use information from the boundary as training data, but in the code we are selecting random values from the whole domain. Am I correct, or did I read the code wrong?

IRK weights txt documents

Hi,
I wonder where the documents in "PINNs-master\Utilities\IRK_weights" and the data come from.

Thanks

Loading the Data

I am having trouble loading the data. How am I supposed to save it in order to load it?

Walltime for the Navier-Stokes identification code

Hello,

Thank you very much for sharing your code. I have read both parts of the Physics Informed Deep Learning work and I have tried to run the Navier-Stokes identification code, but more than 12 hours have passed and the training procedure has not finished. I would like to know the approximate wall time for running this code. Thank you very much.

Best regards,
Saddam

Darcy's flow in multiscale porous media

Hi everyone.
It seems that the classical PINN framework does not work for multiscale (heterogeneous) porous media. I have a 2D Darcy problem (xy Cartesian coordinates, time independent) with the permeability field as the input and the pressure distribution as the output. Any ideas? Here is a sample (image attachment not reproduced).
Thanks so much.

Faster response

How can I get a faster result from the code? I don't need high accuracy, so 200,000 iterations on the Navier-Stokes PDE is too much for me. I changed it to 10,000, but the code keeps running without showing the iteration count, which is strange.

Error on tf.contrib.opt.ScipyOptimizerInterface

Hi,
This code is based on TF1.x, and there is no way to get TF1.x anymore on either Google Colab or Jupyter. Most of the TF1.x parts can be updated for TF2.x with just the two following lines:

```python
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
```

But I still get an error on `self.optimizer = tf.contrib.opt.ScipyOptimizerInterface(...)`:

AttributeError: module 'tensorflow.compat.v1' has no attribute 'contrib'

The reason is that tf.contrib was not migrated; it was removed from TensorFlow entirely! Can anyone help solve this problem?
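
For anyone hitting the same wall, here is a minimal sketch of one option: since tf.contrib is gone, drive SciPy's L-BFGS-B directly. This is not the repository's code; the helper name `lbfgs_via_scipy` and its arguments (`sess`, `loss`, `variables`, `feed_dict`) are assumptions standing in for whatever the script defines.

```python
# Hypothetical replacement sketch for tf.contrib.opt.ScipyOptimizerInterface:
# evaluate the loss and gradients through the session, hand them to SciPy.
import numpy as np
import scipy.optimize
import tensorflow.compat.v1 as tf

tf.disable_v2_behavior()

def lbfgs_via_scipy(sess, loss, variables, feed_dict, maxiter=50000):
    grads = tf.gradients(loss, variables)
    shapes = [v.get_shape().as_list() for v in variables]
    sizes = [int(np.prod(s)) for s in shapes]

    def get_flat():
        # Current variable values as one flat float64 vector
        return np.concatenate([v.ravel() for v in sess.run(variables)]).astype(np.float64)

    def set_flat(flat):
        # Write a flat parameter vector back into the graph variables
        idx = 0
        for v, shape, size in zip(variables, shapes, sizes):
            v.load(flat[idx:idx + size].reshape(shape), sess)
            idx += size

    def value_and_grad(flat):
        set_flat(flat)
        loss_val, grad_vals = sess.run([loss, grads], feed_dict)
        flat_grad = np.concatenate([g.ravel() for g in grad_vals]).astype(np.float64)
        return np.float64(loss_val), flat_grad

    result = scipy.optimize.minimize(value_and_grad, get_flat(), jac=True,
                                     method='L-BFGS-B',
                                     options={'maxiter': maxiter, 'maxfun': maxiter})
    set_flat(result.x)
    return result
```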

tf.exp() for lambda_2 in KdV function

This question has been asked before by someone else, but it remained unanswered. I am wondering about the answer myself, as I was reusing the KdV.py code with a different equation and not recovering the parameters.

The KdV.py example estimates two parameters: lambda_1 and lambda_2. Both are coefficients that go into the differential operator (F) as:

F = -lambda_1 U U_x - lambda_2 U_xxx

that is, lambda_1 multiplies the product of the solution (U) and its first derivative (U_x), and lambda_2 multiplies the third derivative (U_xxx). However, within the functions net_U0 and net_U1, the exp operation is applied to the coefficient lambda_2.

That is, lambda_1 = self.lambda_1, but lambda_2 = tf.exp(self.lambda_2). Perhaps I am missing something. Shouldn't it be lambda_2 = self.lambda_2 instead?

Navier-Stokes Inference

Hello and thank you for sharing this code!

I am trying to recreate the data used for Navier-Stokes Inference:
/main/Data/cylinder_nektar_wake.mat
/main/Data/cylinder_nektar_t0_vorticity.mat

I have read your publications, but I am having a hard time recreating the exact data using the Nektar++ framework. Could you please share your XML files for reproduction purposes?

Thanks again!

Optimisation in TF2

I am trying the L-BFGS function optimization in TF2 for the Burgers continuous-time equation.
In the new version, we have to specify an initial position for the optimization, but TensorFlow version 1 didn't require that. What can be given for the initial position?
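
Not an official answer, but here is a minimal sketch of how an initial position can be built for tfp.optimizer.lbfgs_minimize by flattening the network's current weights; the model, the stand-in data, and the helper names here are my assumptions, not the repository's code.

```python
# Sketch: use the model's current (randomly initialized) weights,
# flattened into one vector, as the required initial_position.
import numpy as np
import tensorflow as tf
import tensorflow_probability as tfp

model = tf.keras.Sequential([
    tf.keras.layers.Dense(20, activation="tanh", input_shape=(2,)),
    tf.keras.layers.Dense(1),
])
x = tf.random.uniform((100, 2))
y = tf.zeros((100, 1))  # placeholder targets for the sketch

sizes = [int(np.prod(v.shape)) for v in model.trainable_variables]

def flatten_weights():
    return tf.concat([tf.reshape(v, [-1]) for v in model.trainable_variables], axis=0)

def assign_weights(flat):
    idx = 0
    for v, size in zip(model.trainable_variables, sizes):
        v.assign(tf.reshape(flat[idx:idx + size], v.shape))
        idx += size

def value_and_gradients(flat):
    assign_weights(flat)
    with tf.GradientTape() as tape:
        loss = tf.reduce_mean(tf.square(model(x) - y))
    grads = tape.gradient(loss, model.trainable_variables)
    return loss, tf.concat([tf.reshape(g, [-1]) for g in grads], axis=0)

results = tfp.optimizer.lbfgs_minimize(
    value_and_gradients_function=value_and_gradients,
    initial_position=flatten_weights(),  # the "initial position" TF1 never asked for
    max_iterations=500)
assign_weights(results.position)
```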

Schrödinger's equation

The solution of the Schrödinger equation seems symmetric in both the spatial and time coordinates, but it only has a periodic condition in the spatial coordinate, not in time. Why is it also symmetric in time?

I can't find the pyODE module.

In the continuous_time_inference (Schrodinger) script from this paper, you do `from pyODE import lhs`, but there is no module named pyODE. Can you help me with this problem? Thank you.
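
In case it helps later readers, my reading (an assumption, not a confirmed answer) is that the intended import is from the pyDOE package, a Latin hypercube sampler:

```python
# Assumes the intended module is pyDOE (pip install pyDOE), not pyODE;
# pyDOE provides the lhs Latin hypercube sampling function.
from pyDOE import lhs

X = lhs(2, samples=100)  # 100 points in the unit square, shape (100, 2)
```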

more than two snapshots

The code for the Korteweg–de Vries problem (KdV.py) provides an example using only two time snapshots. I am wondering which parts need to be edited/added in order to use three or more time snapshots.

Unable to see figure after running Burgers' equation code

After I run continuous_time_identification (Burgers) or the inference version, I see the loss value outputs but am unable to see the figure plots. Does anyone else have this problem? I suspect it is due to my matplotlib version being 3.1.3.

Failed to process string with tex because latex could not be found

**I would like to fix this error; please assign it to me.**

RuntimeError: Failed to process string with tex because latex could not be found

The error occurs in the following code: https://github.com/maziarraissi/PINNs/blob/master/appendix/continuous_time_identification%20(Burgers)/Burgers.py#L224-L288

```python
######################################################################
############################# Plotting ###############################
######################################################################

fig, ax = newfig(1.0, 1.4)
ax.axis('off')

####### Row 0: u(t,x) ##################
gs0 = gridspec.GridSpec(1, 2)
gs0.update(top=1-0.06, bottom=1-1.0/3.0+0.06, left=0.15, right=0.85, wspace=0)
ax = plt.subplot(gs0[:, :])

h = ax.imshow(U_pred.T, interpolation='nearest', cmap='rainbow',
              extent=[t.min(), t.max(), x.min(), x.max()],
              origin='lower', aspect='auto')
divider = make_axes_locatable(ax)
cax = divider.append_axes("right", size="5%", pad=0.05)
fig.colorbar(h, cax=cax)

ax.plot(X_u_train[:,1], X_u_train[:,0], 'kx', label='Data (%d points)' % (u_train.shape[0]), markersize=2, clip_on=False)

line = np.linspace(x.min(), x.max(), 2)[:,None]
ax.plot(t[25]*np.ones((2,1)), line, 'w-', linewidth=1)
ax.plot(t[50]*np.ones((2,1)), line, 'w-', linewidth=1)
ax.plot(t[75]*np.ones((2,1)), line, 'w-', linewidth=1)

ax.set_xlabel('$t$')
ax.set_ylabel('$x$')
ax.legend(loc='upper center', bbox_to_anchor=(1.0, -0.125), ncol=5, frameon=False)
ax.set_title('$u(t,x)$', fontsize=10)

####### Row 1: u(t,x) slices ##################
gs1 = gridspec.GridSpec(1, 3)
gs1.update(top=1-1.0/3.0-0.1, bottom=1.0-2.0/3.0, left=0.1, right=0.9, wspace=0.5)

ax = plt.subplot(gs1[0, 0])
ax.plot(x, Exact[25,:], 'b-', linewidth=2, label='Exact')
ax.plot(x, U_pred[25,:], 'r--', linewidth=2, label='Prediction')
ax.set_xlabel('$x$')
ax.set_ylabel('$u(t,x)$')
ax.set_title('$t = 0.25$', fontsize=10)
ax.axis('square')
ax.set_xlim([-1.1, 1.1])
ax.set_ylim([-1.1, 1.1])

ax = plt.subplot(gs1[0, 1])
ax.plot(x, Exact[50,:], 'b-', linewidth=2, label='Exact')
ax.plot(x, U_pred[50,:], 'r--', linewidth=2, label='Prediction')
ax.set_xlabel('$x$')
ax.set_ylabel('$u(t,x)$')
ax.axis('square')
ax.set_xlim([-1.1, 1.1])
ax.set_ylim([-1.1, 1.1])
ax.set_title('$t = 0.50$', fontsize=10)
ax.legend(loc='upper center', bbox_to_anchor=(0.5, -0.35), ncol=5, frameon=False)

ax = plt.subplot(gs1[0, 2])
ax.plot(x, Exact[75,:], 'b-', linewidth=2, label='Exact')
ax.plot(x, U_pred[75,:], 'r--', linewidth=2, label='Prediction')
ax.set_xlabel('$x$')
ax.set_ylabel('$u(t,x)$')
ax.axis('square')
ax.set_xlim([-1.1, 1.1])
ax.set_ylim([-1.1, 1.1])
ax.set_title('$t = 0.75$', fontsize=10)
```

I am getting the error:

```
Text(0.5, 1.0, '$t = 0.75$')

FileNotFoundError                         Traceback (most recent call last)
~\anaconda3\envs\xyz\lib\site-packages\matplotlib\texmanager.py in _run_checked_subprocess(self, command, tex)
    276                 cwd=self.texcache,
--> 277                 stderr=subprocess.STDOUT)
    278         except FileNotFoundError as exc:

... (subprocess.py frames: check_output -> run -> Popen -> _execute_child) ...

FileNotFoundError: [WinError 2] The system cannot find the file specified

The above exception was the direct cause of the following exception:

RuntimeError                              Traceback (most recent call last)

... (IPython and matplotlib drawing frames: print_figure -> figure.draw ->
     _update_title_position -> Text.get_window_extent -> texmanager.make_dvi) ...

~\anaconda3\envs\xyz\lib\site-packages\matplotlib\texmanager.py in _run_checked_subprocess(self, command, tex)
    279             raise RuntimeError(
    280                 'Failed to process string with tex because {} could not be '
--> 281                 'found'.format(command[0])) from exc

RuntimeError: Failed to process string with tex because latex could not be found
```

This error can be fixed by changing the code in Utilities: https://github.com/maziarraissi/PINNs/tree/master/Utilities
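
A minimal workaround sketch, assuming the TeX rendering is switched on by the rcParams in Utilities/plotting.py: either install a LaTeX distribution on the system, or disable usetex so matplotlib falls back to its built-in mathtext.

```python
# Hedged workaround: stop matplotlib from shelling out to an external
# latex binary and use its built-in mathtext renderer instead.
import matplotlib
matplotlib.rcParams['text.usetex'] = False
```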

Data in the vorticity file

Does anyone know what the data in the cylinder_nektar_t0_vorticity file mean? Are there a total of 412 grid divisions, and what do the 10 modes represent? I want to replace the author's data with results from an ANSYS simulation; is it possible to use the vorticity from each node individually?

How to solve equations with a second- or higher-order derivative in time, like the wave equation?

I'm a physicist, and I have been working with PINNs and other similar algorithms for the last three years to numerically solve differential equations.

Some of the differential equations of interest to me have high-order derivatives in time, like the wave equation, which has a second-order derivative in time. Aiming to solve this type of equation, I developed a method/algorithm based on Raissi's scripts that can solve some equations with specific boundary conditions. But my method has some problems and can't solve all the equations I want; it is also computationally expensive.

So I'm here to ask you: did you manage to solve differential equations with high-order derivatives in time? What method did you use? How did you implement it, and what changes did you make to Raissi's scripts?
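
For what it's worth, higher-order time derivatives can be built by nesting tf.gradients, in the TF1 style these scripts use. A minimal sketch of a wave-equation residual u_tt - c^2 u_xx = 0 follows; the function name, the `net_u` argument, and `c` are my assumptions, not Raissi's code.

```python
# Sketch of a PINN residual for the wave equation: the second time
# derivative comes from applying tf.gradients twice.
import tensorflow.compat.v1 as tf

def wave_residual(net_u, x, t, c=1.0):
    """PDE residual u_tt - c^2 u_xx for a network u = net_u(x, t)."""
    u = net_u(x, t)
    u_t = tf.gradients(u, t)[0]
    u_tt = tf.gradients(u_t, t)[0]   # second-order time derivative via nesting
    u_x = tf.gradients(u, x)[0]
    u_xx = tf.gradients(u_x, x)[0]
    return u_tt - c**2 * u_xx        # residual to penalize in the loss
```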

Saving the Trained models

@maziarraissi

First off, thank you for sharing the code.

I have run the code for Navier-Stokes. It took a long time to train, and afterwards there was nothing to show for it. So I request that you add a save method to your PhysicsInformedNN class.
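
A minimal sketch of what such methods could look like (shown out of class context), using the standard TF1 checkpointing API; the method names and path are assumptions, not part of the repository:

```python
# Sketch: checkpointing methods that could be added to PhysicsInformedNN,
# relying on tf.compat.v1.train.Saver to persist all graph variables.
def save(self, path='./checkpoints/model.ckpt'):
    saver = tf.compat.v1.train.Saver()
    saver.save(self.sess, path)

def restore(self, path='./checkpoints/model.ckpt'):
    saver = tf.compat.v1.train.Saver()
    saver.restore(self.sess, path)
```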

Euler 1d shock

Dear Maziar,
thanks for your work, it is extremely inspiring for us all!

I'm trying to replicate the results published in https://doi.org/10.1016/j.cma.2019.112789 (Example 1) regarding the compressible 1d Euler equations for a shock flow.

Learning from your examples, I wrote the file Euler1dShock.txt (to be renamed Euler1dShock.py), together with the dataset datashock1d001.txt and plotting.txt (to be renamed /utilities/plotting.py).

I have the feeling that something is not correct or could be improved, as always. Would you mind taking a look at my script, please?

Once it is OK, I'll be glad to share it as another example.
Kind regards,
Lorenzo Campoli

Some improvements needed to run on TensorFlow v2

```python
import time

import numpy as np
import tensorflow as tf

# Placeholders and tf.gradients only work in graph mode, so eager
# execution must be disabled when running this TF1-style code on TF2.
tf.compat.v1.disable_eager_execution()


class PhysicsInformedNN:
    def __init__(self, x, y, t, u, v, layers):

        X = np.concatenate([x, y, t], 1)

        self.lb = X.min(0)
        self.ub = X.max(0)

        self.X = X

        self.x = X[:, 0:1]
        self.y = X[:, 1:2]
        self.t = X[:, 2:3]

        self.u = u
        self.v = v

        self.layers = layers

        # Initialize NN
        self.weights, self.biases = self.initialize_NN(layers)

        # Initialize the unknown PDE parameters
        self.lambda_1 = tf.Variable([0.0], dtype=tf.float32)
        self.lambda_2 = tf.Variable([0.0], dtype=tf.float32)

        # tf placeholders and graph
        self.sess = tf.compat.v1.Session()

        self.x_tf = tf.compat.v1.placeholder(tf.float32, shape=[None, self.x.shape[1]])
        self.y_tf = tf.compat.v1.placeholder(tf.float32, shape=[None, self.y.shape[1]])
        self.t_tf = tf.compat.v1.placeholder(tf.float32, shape=[None, self.t.shape[1]])

        self.u_tf = tf.compat.v1.placeholder(tf.float32, shape=[None, self.u.shape[1]])
        self.v_tf = tf.compat.v1.placeholder(tf.float32, shape=[None, self.v.shape[1]])

        self.u_pred, self.v_pred, self.p_pred, self.f_u_pred, self.f_v_pred = \
            self.net_NS(self.x_tf, self.y_tf, self.t_tf)

        self.loss = tf.reduce_sum(tf.square(self.u_tf - self.u_pred)) + \
                    tf.reduce_sum(tf.square(self.v_tf - self.v_pred)) + \
                    tf.reduce_sum(tf.square(self.f_u_pred)) + \
                    tf.reduce_sum(tf.square(self.f_v_pred))

        self.adam = tf.compat.v1.train.AdamOptimizer(learning_rate=1e-3)
        self.train_op_adam = self.adam.minimize(self.loss)

        init = tf.compat.v1.global_variables_initializer()
        self.sess.run(init)

    def initialize_NN(self, layers):
        weights = []
        biases = []
        num_layers = len(layers)
        for l in range(0, num_layers - 1):
            W = self.xavier_init(size=[layers[l], layers[l + 1]])
            b = tf.Variable(tf.zeros([1, layers[l + 1]], dtype=tf.float32), dtype=tf.float32)
            weights.append(W)
            biases.append(b)
        return weights, biases

    def xavier_init(self, size):
        in_dim = size[0]
        out_dim = size[1]
        xavier_stddev = np.sqrt(2 / (in_dim + out_dim))
        return tf.Variable(tf.random.truncated_normal([in_dim, out_dim], stddev=xavier_stddev),
                           dtype=tf.float32)

    def neural_net(self, X, weights, biases):
        num_layers = len(weights) + 1

        # Scale inputs to [-1, 1] before the tanh layers
        H = 2.0 * (X - self.lb) / (self.ub - self.lb) - 1.0
        for l in range(0, num_layers - 2):
            W = weights[l]
            b = biases[l]
            H = tf.tanh(tf.add(tf.matmul(H, W), b))
        W = weights[-1]
        b = biases[-1]
        Y = tf.add(tf.matmul(H, W), b)
        return Y

    def net_NS(self, x, y, t):
        lambda_1 = self.lambda_1
        lambda_2 = self.lambda_2

        psi_and_p = self.neural_net(tf.concat([x, y, t], 1), self.weights, self.biases)
        psi = psi_and_p[:, 0:1]
        p = psi_and_p[:, 1:2]

        # Velocities from the stream function, so incompressibility holds
        u = tf.gradients(psi, y)[0]
        v = -tf.gradients(psi, x)[0]

        u_t = tf.gradients(u, t)[0]
        u_x = tf.gradients(u, x)[0]
        u_y = tf.gradients(u, y)[0]
        u_xx = tf.gradients(u_x, x)[0]
        u_yy = tf.gradients(u_y, y)[0]

        v_t = tf.gradients(v, t)[0]
        v_x = tf.gradients(v, x)[0]
        v_y = tf.gradients(v, y)[0]
        v_xx = tf.gradients(v_x, x)[0]
        v_yy = tf.gradients(v_y, y)[0]

        p_x = tf.gradients(p, x)[0]
        p_y = tf.gradients(p, y)[0]

        # Navier-Stokes residuals
        f_u = u_t + lambda_1 * (u * u_x + v * u_y) + p_x - lambda_2 * (u_xx + u_yy)
        f_v = v_t + lambda_1 * (u * v_x + v * v_y) + p_y - lambda_2 * (v_xx + v_yy)

        return u, v, p, f_u, f_v

    def callback(self, loss, lambda_1, lambda_2):
        print('Loss: %.3e, l1: %.3f, l2: %.5f' % (loss, lambda_1, lambda_2))

    def train(self, nIter):

        tf_dict = {self.x_tf: self.x, self.y_tf: self.y, self.t_tf: self.t,
                   self.u_tf: self.u, self.v_tf: self.v}

        start_time = time.time()
        for it in range(nIter):
            self.sess.run(self.train_op_adam, tf_dict)

            # Print
            if it % 10 == 0:
                elapsed = time.time() - start_time
                loss_value = self.sess.run(self.loss, tf_dict)
                lambda_1_value = self.sess.run(self.lambda_1)
                lambda_2_value = self.sess.run(self.lambda_2)
                print('It: %d, Loss: %.3e, l1: %.3f, l2: %.5f, Time: %.2f' %
                      (it, loss_value, lambda_1_value, lambda_2_value, elapsed))
                start_time = time.time()

    def predict(self, x_star, y_star, t_star):

        tf_dict = {self.x_tf: x_star, self.y_tf: y_star, self.t_tf: t_star}

        u_star = self.sess.run(self.u_pred, tf_dict)
        v_star = self.sess.run(self.v_pred, tf_dict)
        p_star = self.sess.run(self.p_pred, tf_dict)

        return u_star, v_star, p_star
```
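
A hypothetical usage sketch for the class above (assuming the block above has been run); the layer sizes and the randomly generated stand-in data are my assumptions, not values from the repository:

```python
# Sketch: (x, y, t) inputs -> (psi, p) outputs, trained on stand-in data.
layers = [3, 20, 20, 20, 20, 2]
N = 5000
x = np.random.uniform(1.0, 8.0, (N, 1)).astype(np.float32)
y = np.random.uniform(-2.0, 2.0, (N, 1)).astype(np.float32)
t = np.random.uniform(0.0, 20.0, (N, 1)).astype(np.float32)
u = np.zeros((N, 1), dtype=np.float32)  # placeholder velocity data
v = np.zeros((N, 1), dtype=np.float32)

model = PhysicsInformedNN(x, y, t, u, v, layers)
model.train(nIter=1000)
u_pred, v_pred, p_pred = model.predict(x, y, t)
```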

Burgers' Equation

Hello,

I was trying to run the continuous Burgers' equation file, but I do not get the figure shown in the folder; instead, I keep getting a figure where the trained solution is a horizontal red dashed line, which is nowhere close to the true solution. I checked the training-step prints, and the loss value does not change over time. Does anyone else get the same thing?

Thanks.

cannot open the code

Every time I try to open the code, a "sorry" message pops up stating that the code is too long to be displayed.

Tried TensorFlow 2.6 by myself, but had some problems. Hope to get some help!

When I try to define a custom layer as a loss by myself and use the add_weight() function to declare the trainable back-propagation variables, an error is thrown:

ValueError: Variable <tf.Variable 'eqn1_1/constant1:0' shape=(1,) dtype=float32> has None for gradient. Please make sure that all of your ops have a gradient defined (i.e. are differentiable). Common ops without gradient: K.argmax, K.round, K.eval.

My code is as follows:

```python
import tensorflow as tf
import tensorflow.keras.layers as KL


class WbceLoss(KL.Layer):

    def __init__(self, **kwargs):
        super(WbceLoss, self).__init__(**kwargs)

    def build(self, input_shape):
        # add_weight takes the shape as a keyword argument: shape=(1,)
        self.constant1 = self.add_weight(name="constant1", shape=(1,),
                                         initializer='random_normal', trainable=True)
        self.constant2 = self.add_weight(name="constant2", shape=(1,),
                                         initializer='random_normal', trainable=True)

    def call(self, inputs, **kwargs):

        tf.compat.v1.disable_eager_execution()
        out1, out2, out3, cur_time, cur_x_input, cur_y_input, cur_z_input, perm_input = inputs

        x_input = cur_x_input
        y_input = cur_y_input
        z_input = cur_z_input

        constant1 = self.constant1
        constant2 = self.constant2
        print(constant1)
        print(constant2)

        # Time derivative of the first network output
        gradient_with_time = tf.keras.backend.gradients(out1, cur_time)[0]
        constant1 = tf.convert_to_tensor(constant1)
        constant2 = tf.convert_to_tensor(constant2)
        a = tf.zeros((1,), dtype=tf.float32)
        bias = tf.convert_to_tensor([a, a, constant1])
        # bias = tf.expand_dims([0., 0., constant1], 0)
        bias = tf.expand_dims(bias, 2)

        # Spatial pressure gradients
        pressure_grad_x = tf.keras.backend.gradients(out2, cur_x_input)[0]
        pressure_grad_y = tf.keras.backend.gradients(out2, cur_y_input)[0]
        pressure_grad_z = tf.keras.backend.gradients(out2, cur_z_input)[0]

        pressure_grad = tf.convert_to_tensor([pressure_grad_x, pressure_grad_y, pressure_grad_z])
        pressure_grad = tf.keras.backend.permute_dimensions(pressure_grad, (1, 0, 2))
        coeff = (1 - out1) / constant2

        m = tf.matmul(perm_input, (pressure_grad - bias))
        m_grad_x = tf.keras.backend.gradients(m, cur_x_input)[0]
        m_grad_y = tf.keras.backend.gradients(m, cur_y_input)[0]
        m_grad_z = tf.keras.backend.gradients(m, cur_z_input)[0]
        m_grad_1 = tf.add(m_grad_x, m_grad_y)
        m_grad = tf.add(m_grad_1, m_grad_z)

        # Assemble the PDE residual used as the layer's loss
        m_final = tf.multiply(coeff, m_grad)
        eqn_1 = tf.add(gradient_with_time, m_final)
        eqn_2 = tf.add(eqn_1, out3)
        eqn = tf.negative(eqn_2)

        eqn = tf.compat.v1.to_float(eqn)

        self.add_loss(eqn, inputs=True)
        self.add_metric(eqn, aggregation="mean", name="eqn1")

        return eqn
```

The whole error when I train the model is as follows:

```
ValueError                                Traceback (most recent call last)
<ipython-input> in <module>()
     12             batch_size=241,
     13             shuffle=True,
---> 14             verbose=1)

... (keras frames: training_v1.fit -> training_arrays_v1.fit -> model_iteration
     -> _make_execution_function -> _make_train_function -> optimizer_v2.get_updates) ...

~\AppData\Roaming\Python\Python36\site-packages\keras\optimizer_v2\optimizer_v2.py in get_gradients(self, loss, params)
    753                              "gradient defined (i.e. are differentiable). "
    754                              "Common ops without gradient: "
--> 755                              "K.argmax, K.round, K.eval.".format(param))

ValueError: Variable <tf.Variable 'constant1_6:0' shape=(1,) dtype=float32> has None for gradient. Please make sure that all of your ops have a gradient defined (i.e. are differentiable). Common ops without gradient: K.argmax, K.round, K.eval.
```

Hope to get some help. Thank you!

Optimizer

This is a two-fold question.
Why are two optimizers used? One is used every iteration and the other is used after the 5,000 iterations. Secondly, does the AdamOptimizer need to be defined in the __init__ method? I am interested in implementing a learning-rate schedule and in initializing a new RMSProp optimizer every iteration, and I was wondering how I would do that with their setup.
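
One possible direction (a sketch under my own assumptions, not the authors' setup): in the TF1-style code a decaying learning rate can be defined once, rather than re-creating an optimizer each iteration; the stand-in `loss` here replaces whatever the class defines.

```python
# Sketch: exponential learning-rate decay driven by a global step,
# defined once and reused by a single optimizer.
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

# Stand-in loss for the sketch; in the real class this is self.loss.
w = tf.Variable(1.0)
loss = tf.square(w - 3.0)

global_step = tf.Variable(0, trainable=False)
learning_rate = tf.train.exponential_decay(
    learning_rate=1e-3, global_step=global_step,
    decay_steps=1000, decay_rate=0.9, staircase=True)
optimizer = tf.train.AdamOptimizer(learning_rate)
train_op = optimizer.minimize(loss, global_step=global_step)
```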

continuous_time_identification (Burgers)

Why do you use exp for lambda_2?

```python
def net_f(self, x, t):
    lambda_1 = self.lambda_1
    lambda_2 = tf.exp(self.lambda_2)
    u = self.net_u(x, t)
    u_t = tf.gradients(u, t)[0]
    u_x = tf.gradients(u, x)[0]
    u_xx = tf.gradients(u_x, x)[0]
    f = u_t + lambda_1*u*u_x - lambda_2*u_xx
```

Usage of two optimizers in the fit function

Can you explain why you use the L-BFGS optimizer and the Adam optimizer in each training step?

```python
for it in range(nIter):
    self.sess.run(self.train_op_Adam, tf_dict)

    # Print
    if it % 10 == 0:
        elapsed = time.time() - start_time
        loss_value = self.sess.run(self.loss, tf_dict)
        print('It: %d, Loss: %.3e, Time: %.2f' %
              (it, loss_value, elapsed))
        start_time = time.time()

    self.optimizer.minimize(self.sess,
                            feed_dict=tf_dict,
                            fetches=[self.loss],
                            loss_callback=self.callback)
```
