hplgit / fenics-tutorial
Source files and published documents for the FEniCS tutorial.
Home Page: http://hplgit.github.com/fenics-tutorial/doc/web/index.html
First of all, thank you for writing FEniCS and its documentation. It's a great tool, and the documentation is very good! That said, I have a few comments/questions on the tutorial:
Thank you in advance! Best regards, Johannes
Hi
I'm trying to run ft06_elasticity.py and it throws this error:
NameError: name 'nabla_div' is not defined
I've tried to import it manually, but that doesn't work.
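A common workaround for this NameError (a hedged sketch, assuming FEniCS/UFL 2019.1, where `nabla_div` and `nabla_grad` are UFL operators that may not be pulled in by `from fenics import *`) is to import the operators from `ufl` directly:

```python
# Workaround sketch for: NameError: name 'nabla_div' is not defined
# (assumption: FEniCS/UFL 2019.1, where the operators live in the ufl module)
try:
    from ufl import nabla_div, nabla_grad  # import directly from UFL
    HAVE_UFL = True
except ImportError:
    # No UFL available in this environment; see the note below.
    HAVE_UFL = False

# Equivalent fix without the extra import: for a vector field u, div(u)
# computes the same scalar as nabla_div(u), so replacing the call also
# makes the elasticity demo run.
```

The `div`/`nabla_div` equivalence holds for vector arguments, which is all the elasticity demo uses it for.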
I use FEniCS installed in WSL on Windows 10; the version is 2019.1.
I updated the demo files in the tutorial, changing the plotting code because WSL cannot display a GUI.
"""
FEniCS tutorial demo program: Diffusion of a Gaussian hill.
u'= Laplace(u) + f in a square domain
u = u_D on the boundary
u = u_0 at t = 0
u_D = f = 0
The initial condition u_0 is chosen as a Gaussian hill.
"""
from __future__ import print_function
from dolfin import *
import time
import matplotlib.pyplot as plt
T = 2.0 # final time
num_steps = 50 # number of time steps
dt = T / num_steps # time step size
# Create mesh and define function space
nx = ny = 30
mesh = RectangleMesh(Point(-2, -2), Point(2, 2), nx, ny)
V = FunctionSpace(mesh, 'P', 1)
# Define boundary condition
def boundary(x, on_boundary):
    return on_boundary
bc = DirichletBC(V, Constant(0), boundary)
# Define initial value
u_0 = Expression('exp(-a*pow(x[0], 2) - a*pow(x[1], 2))',
                 degree=2, a=5)
u_n = interpolate(u_0, V)
# Define variational problem
u = TrialFunction(V)
v = TestFunction(V)
f = Constant(0)
F = u*v*dx + dt*dot(grad(u), grad(v))*dx - (u_n + dt*f)*v*dx
a, L = lhs(F), rhs(F)
# Create VTK file for saving solution
vtkfile = File('heat_gaussian/solution.pvd')
# Time-stepping
u = Function(V)
t = 0
for n in range(num_steps):

    # Update current time
    t += dt

    # Compute solution
    solve(a == L, u, bc)

    # Save to file and plot solution
    vtkfile << (u, t)
    plot(u)
    plt.savefig("u" + str(n) + ".png")
    plt.cla()

    # Update previous solution
    u_n.assign(u)
# Hold plot
# interactive()
Hi,
I'm trying to do the ft06 elasticity demo, but I am getting the following error when I run the program:
*** Warning: Matplotlib plotting backend does not support displacement for 3 in 3. Continuing without plotting...
Furthermore, when I open the image file for "displacement" it seems to be empty. The other two files (magnitude and Von Mises) seem to work fine. Any thoughts on how I can fix this?
I am using Debian 10 and python 3.
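Matplotlib cannot draw 3-D vector fields, which is why `plot(u)` warns and the saved displacement image comes out empty; the magnitude and von Mises plots are scalar fields, so they work. A hedged sketch of the usual route (the dolfin lines are commented out because they need a FEniCS install; `output_path` is a hypothetical file name):

```python
# Write the 3-D displacement field to a VTK file and inspect it in
# ParaView instead of matplotlib (assumption: FEniCS 2019.1 File API):
output_path = 'elasticity/displacement.pvd'  # hypothetical output name
# from dolfin import File
# File(output_path) << u

# Alternatively, plot a derived scalar field that matplotlib can render:
# plot(sqrt(dot(u, u)), title='Displacement magnitude')
```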
Thanks!
I use FEniCS installed in WSL on Windows 10; the version is 2019.1.
I updated the demo files in the tutorial, deleting the plotting code because WSL cannot display a GUI.
"""
FEniCS tutorial demo program: Magnetic field generated by a copper
wire wound around an iron cylinder. The solution is computed by
solving the Poisson equation for the z-component of the magnetic
vector potential.
-Laplace(A_z) = mu_0 * J_z
"""
from __future__ import print_function
from dolfin import *
from mshr import *
from math import sin, cos, pi
a = 1.0 # inner radius of iron cylinder
b = 1.2 # outer radius of iron cylinder
c_1 = 0.8 # radius for inner circle of copper wires
c_2 = 1.4 # radius for outer circle of copper wires
r = 0.1 # radius of copper wires
R = 5.0 # radius of domain
n = 10 # number of windings
# Define geometry for background
domain = Circle(Point(0, 0), R)
# Define geometry for iron cylinder
cylinder = Circle(Point(0, 0), b) - Circle(Point(0, 0), a)
# Define geometry for wires (N = North (up), S = South (down))
angles_N = [i*2*pi/n for i in range(n)]
angles_S = [(i + 0.5)*2*pi/n for i in range(n)]
wires_N = [Circle(Point(c_1*cos(v), c_1*sin(v)), r) for v in angles_N]
wires_S = [Circle(Point(c_2*cos(v), c_2*sin(v)), r) for v in angles_S]
# Set subdomain for iron cylinder
domain.set_subdomain(1, cylinder)
# Set subdomains for wires
for (i, wire) in enumerate(wires_N):
    domain.set_subdomain(2 + i, wire)
for (i, wire) in enumerate(wires_S):
    domain.set_subdomain(2 + n + i, wire)
# Create mesh
mesh = generate_mesh(domain, 128)
# Define function space
V = FunctionSpace(mesh, 'P', 1)
# Define boundary condition
bc = DirichletBC(V, Constant(0), 'on_boundary')
# Define subdomain markers and integration measure
markers = MeshFunction('size_t', mesh, 2, mesh.domains())
dx = Measure('dx', domain=mesh, subdomain_data=markers)
# Define current densities
J_N = Constant(1.0)
J_S = Constant(-1.0)
# Define magnetic permeability
# class Permeability(Expression):
#     def __init__(self, markers, **kwargs):
#         self.markers = markers
#     def eval_cell(self, values, x, cell):
#         if self.markers[cell.index] == 0:
#             values[0] = 4*pi*1e-7   # vacuum
#         elif self.markers[cell.index] == 1:
#             values[0] = 1e-5        # iron (should really be 6.3e-3)
#         else:
#             values[0] = 1.26e-6     # copper
class Permeability(UserExpression):  # UserExpression instead of Expression
    def __init__(self, markers, **kwargs):
        super().__init__(**kwargs)  # This part is new!
        self.markers = markers
    def eval_cell(self, values, x, cell):
        if self.markers[cell.index] == 0:
            values[0] = 4*pi*1e-7   # vacuum
        elif self.markers[cell.index] == 1:
            values[0] = 1e-5        # iron (should really be 6.3e-3)
        else:
            values[0] = 1.26e-6     # copper
mu = Permeability(markers, degree=1)
# Define variational problem
A_z = TrialFunction(V)
v = TestFunction(V)
a = (1 / mu)*dot(grad(A_z), grad(v))*dx
L_N = sum(J_N*v*dx(i) for i in range(2, 2 + n))
L_S = sum(J_S*v*dx(i) for i in range(2 + n, 2 + 2*n))
L = L_N + L_S
# Solve variational problem
A_z = Function(V)
solve(a == L, A_z, bc)
# Compute magnetic field (B = curl A)
W = VectorFunctionSpace(mesh, 'P', 1)
B = project(as_vector((A_z.dx(1), -A_z.dx(0))), W)
# Plot solution
# plot(A_z)
# plot(B)
# Save solution to file
vtkfile_A_z = File('magnetostatics/potential.pvd')
vtkfile_B = File('magnetostatics/field.pvd')
vtkfile_A_z << A_z
vtkfile_B << B
# Hold plot
# interactive()
Please, delete this. I have found the mistake. Sorry for the trouble.
I use FEniCS installed in WSL on Windows 10; the version is 2019.1.
I updated the demo files in the tutorial, changing the plotting code to save figures, since WSL cannot display a GUI but can save to file.
"""
FEniCS tutorial demo program: Deflection of a membrane.
-Laplace(w) = p in the unit circle
w = 0 on the boundary
The load p is a Gaussian function centered at (0, 0.6).
"""
from __future__ import print_function
from dolfin import *
from mshr import *
import numpy as np
# Create mesh and define function space
domain = Circle(Point(0, 0), 1)
mesh = generate_mesh(domain, 64)
V = FunctionSpace(mesh, 'P', 2)
# Define boundary condition
w_D = Constant(0)
def boundary(x, on_boundary):
    return on_boundary
bc = DirichletBC(V, w_D, boundary)
# Define load
beta = 8
R0 = 0.6
p = Expression('4*exp(-pow(beta, 2)*(pow(x[0], 2) + pow(x[1] - R0, 2)))',
               degree=1, beta=beta, R0=R0)
# Define variational problem
w = TrialFunction(V)
v = TestFunction(V)
a = dot(grad(w), grad(v))*dx
L = p*v*dx
# Compute solution
w = Function(V)
solve(a == L, w, bc)
# Plot solution
p = interpolate(p, V)
# plot(w, title='Deflection')
# plot(p, title='Load')
import matplotlib.pyplot as plt
plot(w, title='Deflection')
plt.savefig('w.pdf')
plt.cla()
plot(p, title='Load')
plt.savefig('p.pdf')
plt.cla()
# Save solution to file in VTK format
vtkfile_w = File('poisson_membrane/deflection.pvd')
vtkfile_w << w
vtkfile_p = File('poisson_membrane/load.pvd')
vtkfile_p << p
# Curve plot along x = 0 comparing p and w
import numpy as np
tol = 0.001 # avoid hitting points outside the domain
y = np.linspace(-1 + tol, 1 - tol, 101)
points = [(0, y_) for y_ in y] # 2D points
w_line = np.array([w(point) for point in points])
p_line = np.array([p(point) for point in points])
plt.plot(y, 50*w_line, 'k', linewidth=2) # magnify w
plt.plot(y, p_line, 'b--', linewidth=2)
plt.grid(True)
plt.xlabel('$y$')
plt.legend(['Deflection ($\\times 50$)', 'Load'], loc='upper left')
plt.savefig('poisson_membrane/curves.pdf')
plt.savefig('poisson_membrane/curves.png')
# Hold plots
# interactive()
The log function is no longer functional in FEniCS.
For instance, f = Expression("log(x[0])", degree=2) yields the following error:
Moving new file over differing existing file:
src: /Users/jaddoghman/PycharmProjects/pythonProject/jitfailure-dolfin_expression_f60265c862d87700218c841a812dafcb/error.log.6ea03de6066b4b9d83ac2b8ded7b64d0
dst: /Users/jaddoghman/PycharmProjects/pythonProject/jitfailure-dolfin_expression_f60265c862d87700218c841a812dafcb/error.log
backup: /Users/jaddoghman/PycharmProjects/pythonProject/jitfailure-dolfin_expression_f60265c862d87700218c841a812dafcb/error.log.old
------------------- Start compiler output ------------------------
/var/folders/ss/zp8nt0fj6sq7cbyhdz9w52q40000gp/T/tmpi7ash_kk/dolfin_expression_f60265c862d87700218c841a812dafcb.cpp:61:23: error: no matching function for call to 'log'
values[0] = log(x[0]);
^~~
/Users/jaddoghman/opt/anaconda3/envs/fenics2018/include/dolfin/log/log.h:103:8: note: candidate function not viable: requires at least 2 arguments, but 1 was provided
void log(int debug_level, std::string msg, ...);
^
1 error generated.
------------------- End compiler output ------------------------
Compilation failed! Sources, command, and errors have been written to: /Users/jaddoghman/PycharmProjects/pythonProject/jitfailure-dolfin_expression_f60265c862d87700218c841a812dafcb
Traceback (most recent call last):
  File "/Users/jaddoghman/opt/anaconda3/envs/fenics2018/lib/python3.7/site-packages/dolfin/jit/jit.py", line 167, in compile_class
    mpi_comm=mpi_comm)
  File "/Users/jaddoghman/opt/anaconda3/envs/fenics2018/lib/python3.7/site-packages/dolfin/jit/jit.py", line 47, in mpi_jit
    return local_jit(*args, **kwargs)
  File "/Users/jaddoghman/opt/anaconda3/envs/fenics2018/lib/python3.7/site-packages/dolfin/jit/jit.py", line 103, in dijitso_jit
    return dijitso.jit(*args, **kwargs)
  File "/Users/jaddoghman/opt/anaconda3/envs/fenics2018/lib/python3.7/site-packages/dijitso/jit.py", line 217, in jit
    % err_info['fail_dir'], err_info)
dijitso.jit.DijitsoError: Dijitso JIT compilation failed, see '/Users/jaddoghman/PycharmProjects/pythonProject/jitfailure-dolfin_expression_f60265c862d87700218c841a812dafcb' for details
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydev_bundle/pydev_umd.py", line 198, in runfile
    pydev_imports.execfile(filename, global_vars, local_vars)  # execute the script
  File "/Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
    exec(compile(contents+"\n", file, 'exec'), glob, loc)
  File "/Users/jaddoghman/PycharmProjects/pythonProject/Scratch2.py", line 8, in <module>
    f = Expression("log(x[0])", degree=2)
  File "/Users/jaddoghman/opt/anaconda3/envs/fenics2018/lib/python3.7/site-packages/dolfin/function/expression.py", line 376, in __init__
    self._cpp_object = jit.compile_expression(cpp_code, params)
  File "/Users/jaddoghman/opt/anaconda3/envs/fenics2018/lib/python3.7/site-packages/dolfin/function/jit.py", line 158, in compile_expression
    expression = compile_class(cpp_data, mpi_comm=mpi_comm)
  File "/Users/jaddoghman/opt/anaconda3/envs/fenics2018/lib/python3.7/site-packages/dolfin/jit/jit.py", line 170, in compile_class
    raise RuntimeError("Unable to compile C++ code with dijitso")
RuntimeError: Unable to compile C++ code with dijitso
Is there any alternative?
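The compiler output above points to a C++ name clash: DOLFIN's log.h declares `void log(int, std::string, ...)`, which shadows the `<cmath>` `log` in the generated Expression code. Qualifying the call with `std::` is a common workaround (a hedged sketch; the `Expression` call is commented out because it needs a FEniCS install):

```python
# Use std::log in the C++ expression string so the JIT-compiled code picks
# the <cmath> overload instead of dolfin's logging function (assumption:
# DOLFIN 2018/2019 JIT behavior, as suggested by the compiler output above).
cpp_code = "std::log(x[0])"   # instead of "log(x[0])"
# from dolfin import Expression
# f = Expression(cpp_code, degree=2)
```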
Thanks!
I use FEniCS installed in WSL on Windows 10; the version is 2019.1.
I updated the demo files in the tutorial, deleting the plotting code because WSL cannot display a GUI.
"""
FEniCS tutorial demo program: Incompressible Navier-Stokes equations
for channel flow (Poisseuille) on the unit square using the
Incremental Pressure Correction Scheme (IPCS).
u' + u . nabla(u)) - div(sigma(u, p)) = f
div(u) = 0
"""
from __future__ import print_function
from dolfin import *
import numpy as np
T = 10.0 # final time
num_steps = 500 # number of time steps
dt = T / num_steps # time step size
mu = 1 # kinematic viscosity
rho = 1 # density
# Create mesh and define function spaces
mesh = UnitSquareMesh(16, 16)
V = VectorFunctionSpace(mesh, 'P', 2)
Q = FunctionSpace(mesh, 'P', 1)
# Define boundaries
inflow = 'near(x[0], 0)'
outflow = 'near(x[0], 1)'
walls = 'near(x[1], 0) || near(x[1], 1)'
# Define boundary conditions
bcu_noslip = DirichletBC(V, Constant((0, 0)), walls)
bcp_inflow = DirichletBC(Q, Constant(8), inflow)
bcp_outflow = DirichletBC(Q, Constant(0), outflow)
bcu = [bcu_noslip]
bcp = [bcp_inflow, bcp_outflow]
# Define trial and test functions
u = TrialFunction(V)
v = TestFunction(V)
p = TrialFunction(Q)
q = TestFunction(Q)
# Define functions for solutions at previous and current time steps
u_n = Function(V)
u_ = Function(V)
p_n = Function(Q)
p_ = Function(Q)
# Define expressions used in variational forms
U = 0.5*(u_n + u)
n = FacetNormal(mesh)
f = Constant((0, 0))
k = Constant(dt)
mu = Constant(mu)
rho = Constant(rho)
# Define strain-rate tensor
def epsilon(u):
    return sym(nabla_grad(u))

# Define stress tensor
def sigma(u, p):
    return 2*mu*epsilon(u) - p*Identity(len(u))
# Define variational problem for step 1
F1 = rho*dot((u - u_n) / k, v)*dx + \
     rho*dot(dot(u_n, nabla_grad(u_n)), v)*dx \
     + inner(sigma(U, p_n), epsilon(v))*dx \
     + dot(p_n*n, v)*ds - dot(mu*nabla_grad(U)*n, v)*ds \
     - dot(f, v)*dx
a1 = lhs(F1)
L1 = rhs(F1)
# Define variational problem for step 2
a2 = dot(nabla_grad(p), nabla_grad(q))*dx
L2 = dot(nabla_grad(p_n), nabla_grad(q))*dx - (1/k)*div(u_)*q*dx
# Define variational problem for step 3
a3 = dot(u, v)*dx
L3 = dot(u_, v)*dx - k*dot(nabla_grad(p_ - p_n), v)*dx
# Assemble matrices
A1 = assemble(a1)
A2 = assemble(a2)
A3 = assemble(a3)
# Apply boundary conditions to matrices
[bc.apply(A1) for bc in bcu]
[bc.apply(A2) for bc in bcp]
# Time-stepping
t = 0
for n in range(num_steps):

    # Update current time
    t += dt

    # Step 1: Tentative velocity step
    b1 = assemble(L1)
    [bc.apply(b1) for bc in bcu]
    solve(A1, u_.vector(), b1)

    # Step 2: Pressure correction step
    b2 = assemble(L2)
    [bc.apply(b2) for bc in bcp]
    solve(A2, p_.vector(), b2)

    # Step 3: Velocity correction step
    b3 = assemble(L3)
    solve(A3, u_.vector(), b3)

    # Plot solution
    # plot(u_)

    # Compute error
    u_e = Expression(('4*x[1]*(1.0 - x[1])', '0'), degree=2)
    u_e = interpolate(u_e, V)
    error = np.abs(u_e.vector().get_local() - u_.vector().get_local()).max()
    print('t = %.2f: error = %.3g' % (t, error))
    print('max u:', u_.vector().get_local().max())

    # Update previous solution
    u_n.assign(u_)
    p_n.assign(p_)
# Hold plot
# interactive()
On page 136, the boxfield module can't be imported in Python. Moreover, the corresponding page https://fenicsproject.org/pub/tutorial/python/vol1/boxfield.py can't be found.
I use FEniCS installed in WSL on Windows 10; the version is 2019.1.
I updated the demo files in the tutorial, changing the plotting code because WSL cannot display a GUI.
"""
FEniCS tutorial demo program: Heat equation with Dirichlet conditions.
Test problem is chosen to give an exact solution at all nodes of the mesh.
u'= Laplace(u) + f in the unit square
u = u_D on the boundary
u = u_0 at t = 0
u = 1 + x^2 + alpha*y^2 + \beta*t
f = beta - 2 - 2*alpha
"""
from __future__ import print_function
from dolfin import *
import numpy as np
import matplotlib.pyplot as plt
T = 2.0 # final time
num_steps = 10 # number of time steps
dt = T / num_steps # time step size
alpha = 3 # parameter alpha
beta = 1.2 # parameter beta
# Create mesh and define function space
nx = ny = 8
mesh = UnitSquareMesh(nx, ny)
V = FunctionSpace(mesh, 'P', 1)
# Define boundary condition
u_D = Expression('1 + x[0]*x[0] + alpha*x[1]*x[1] + beta*t',
                 degree=2, alpha=alpha, beta=beta, t=0)
def boundary(x, on_boundary):
    return on_boundary
bc = DirichletBC(V, u_D, boundary)
# Define initial value
u_n = interpolate(u_D, V)
#u_n = project(u_D, V)
# Define variational problem
u = TrialFunction(V)
v = TestFunction(V)
f = Constant(beta - 2 - 2*alpha)
F = u*v*dx + dt*dot(grad(u), grad(v))*dx - (u_n + dt*f)*v*dx
a, L = lhs(F), rhs(F)
# Time-stepping
u = Function(V)
t = 0
for n in range(num_steps):

    # Update current time
    t += dt
    u_D.t = t

    # Compute solution
    solve(a == L, u, bc)

    # Plot solution
    plot(u)
    plt.savefig('u' + str(n) + '.png')
    plt.cla()

    # Compute error at vertices
    u_e = interpolate(u_D, V)
    error = np.abs(u_e.vector().get_local() - u.vector().get_local()).max()
    print('t = %.2f: error = %.3g' % (t, error))

    # Update previous solution
    u_n.assign(u)
# Hold plot
# interactive()
I am trying to execute ft01_poisson.py and I get the following error:
fenics> python ./ft01_poisson.py
Traceback (most recent call last):
  File "./ft01_poisson.py", line 11, in <module>
    from fenics import *
  File "/usr/lib/python2.7/dist-packages/fenics/__init__.py", line 7, in <module>
    from dolfin import *
  File "/usr/lib/python2.7/dist-packages/dolfin/__init__.py", line 17, in <module>
    from . import cpp
  File "/usr/lib/python2.7/dist-packages/dolfin/cpp/__init__.py", line 43, in <module>
    exec("from . import %s" % module_name)
  File "<string>", line 1, in <module>
  File "/usr/lib/python2.7/dist-packages/dolfin/cpp/common.py", line 32, in <module>
    _common = swig_import_helper()
  File "/usr/lib/python2.7/dist-packages/dolfin/cpp/common.py", line 28, in swig_import_helper
    _mod = imp.load_module('_common', fp, pathname, description)
ImportError: /usr/lib/x86_64-linux-gnu/libhwloc.so.5: symbol migrate_pages, version libnuma_1.2 not defined in file libnuma.so.1 with link time reference
I am using a Linux Mint 18.2 Sonya, 64-bit laptop, kernel 4.13.5-041305-generic
Regards,
Guy
I tried to run the elasticity demo, but it reports that "nabla_grad" is not defined. I checked my FEniCS installation and it is fine. Does anybody know why?
Traceback (most recent call last):
  File "ft001.py", line 114, in <module>
    set_log_level(PROGRESS)
NameError: name 'PROGRESS' is not defined
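In DOLFIN 2018.1+ the bare `PROGRESS` constant was moved into the `LogLevel` enum, so the tutorial's `set_log_level(PROGRESS)` raises a `NameError`. A hedged sketch of the usual fix (the dolfin calls are commented out since they need a FEniCS install; `16` is the numeric value `LogLevel.PROGRESS` maps to in those releases, stated here as an assumption):

```python
# Replacement for set_log_level(PROGRESS) in DOLFIN 2018.1+:
# from dolfin import set_log_level, LogLevel
# set_log_level(LogLevel.PROGRESS)

# The enum's numeric value also works wherever an int is accepted
# (assumed value of LogLevel.PROGRESS in the 2018/2019 releases):
PROGRESS = 16
# set_log_level(PROGRESS)
```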
why?
After installing with conda install -c conda-forge fenics jupyterlab matplotlib, I am seeing the following:
$ python ft07_navier_stokes_channel.py
Calling FFC just-in-time (JIT) compiler, this may take some time.
[the line above is repeated 16 times]
Traceback (most recent call last):
  File "ft07_navier_stokes_channel.py", line 118, in <module>
    error = np.abs(u_e.vector().array() - u_.vector().array()).max()
AttributeError: 'dolfin.cpp.la.PETScVector' object has no attribute 'array'
I am running python 3.8 and I am having an issue with the line
error = np.abs(u_e.vector().array() - u_.vector().array()).max()
The runtime error is below.
(fenicsproject) MacBook-Pro-2017-i7 Python_Codes % python navier_stokes_channel.py
Calling FFC just-in-time (JIT) compiler, this may take some time.
Calling FFC just-in-time (JIT) compiler, this may take some time.
Calling FFC just-in-time (JIT) compiler, this may take some time.
Calling FFC just-in-time (JIT) compiler, this may take some time.
Traceback (most recent call last):
  File "navier_stokes_channel.py", line 117, in <module>
    error = np.abs(u_e.vector().array() - u_.vector().array()).max()
AttributeError: 'dolfin.cpp.la.PETScVector' object has no attribute 'array'
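In DOLFIN 2018.1 the `Vector.array()` accessor was removed; `get_local()` returns the same NumPy array of local dof values, and the updated tutorial listings above already use it. A minimal sketch of the rename with a stand-in vector class, since the fix itself needs no FEniCS install to illustrate (`MockVector` is hypothetical, not a dolfin type):

```python
import numpy as np

# Stand-in mimicking the 2018.1+ API: get_local() exists, array() does not.
class MockVector:
    def __init__(self, values):
        self._v = np.asarray(values, dtype=float)
    def get_local(self):
        return self._v.copy()

u_e_vec = MockVector([1.0, 2.0, 3.0])
u_vec = MockVector([1.0, 1.5, 3.0])

# The failing error line from the demo, with .array() -> .get_local():
error = np.abs(u_e_vec.get_local() - u_vec.get_local()).max()
print(error)  # -> 0.5
```

With a real dolfin Vector the change is the same one-liner: replace every `u.vector().array()` with `u.vector().get_local()`.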
I use FEniCS installed in WSL on Windows 10; the version is 2019.1.
I updated the demo files in the tutorial, deleting the plotting code because WSL cannot display a GUI.
"""
Class for a scalar (or vector) field over a BoxGrid or UniformBoxGrid grid.
"""
from numpy import zeros, array, transpose, ndarray, linspace, meshgrid
import dolfin
__all__ = ['BoxField', 'BoxGrid', 'UniformBoxGrid', 'X', 'Y', 'Z',
           'FEniCSBoxField', 'update_from_fenics_array']
# constants for indexing the space directions:
X = X1 = 0
Y = X2 = 1
Z = X3 = 2
class UniformBoxGrid(object):
    """
    Simple uniform grid on an interval, rectangle, box, or hypercube.

    =============      ====================================================
    Attributes         Description
    =============      ====================================================
    nsd                no of spatial dimensions in the grid
    min_coor           array of minimum coordinates
    max_coor           array of maximum coordinates
    division           array of cell divisions in the
    delta              array of grid spacings
    dirnames           names of the space directions ('x', 'y', etc.)
    shape              (nx+1, ny+1, ...); dimension of array over grid
    coor               list of coordinates; self.coor[Y][j] is the
                       j-th coordinate in direction Y (=1)
                       X, Y, Z are predefined constants 0, 1, 2
    coorv              expanded version of coor for vectorized expressions
                       (in 2D, self.coorv[0] = self.coor[0][:,newaxis])
    tolerance          small geometric tolerance based on grid coordinates
    npoints            total number of grid points
    =============      ====================================================
    """
    def __init__(self,
                 min=(0,0),                  # minimum coordinates
                 max=(1,1),                  # maximum coordinates
                 division=(4,4),             # cell divisions
                 dirnames=('x', 'y', 'z')):  # names of the directions
        """
        Initialize a BoxGrid by giving domain range (minimum and
        maximum coordinates: min and max tuples/lists/arrays)
        and number of cells in each space direction (division tuple/list/array).
        The dirnames tuple/list holds the names of the coordinates in
        the various spatial directions.

        >>> g = UniformBoxGrid(min=0, max=1, division=10)
        >>> g = UniformBoxGrid(min=(0,-1), max=(1,1), division=(10,4))
        >>> g = UniformBoxGrid(min=(0,0,-1), max=(2,1,1), division=(2,3,5))
        """
        # Allow int/float specifications in one-dimensional grids
        # (just turn to lists for later multi-dimensional processing)
        if isinstance(min, (int,float)):
            min = [min]
        if isinstance(max, (int,float)):
            max = [max]
        if isinstance(division, (int,float)):
            division = [division]
        if isinstance(dirnames, str):
            dirnames = [dirnames]

        self.nsd = len(min)
        # strip dirnames down to right space dim (in case the default
        # with three components were unchanged by the user):
        dirnames = dirnames[:self.nsd]

        # check consistent lengths:
        for a in max, division:
            if len(a) != self.nsd:
                raise ValueError(
                    'Incompatible lengths of arguments to constructor'\
                    ' (%d != %d)' % (len(a), self.nsd))

        self.min_coor = array(min, float)
        self.max_coor = array(max, float)
        self.dirnames = dirnames
        self.division = division
        self.coor = [None]*self.nsd
        self.shape = [0]*self.nsd
        self.delta = zeros(self.nsd)

        for i in range(self.nsd):
            self.delta[i] = \
                (self.max_coor[i] - self.min_coor[i])/float(self.division[i])
            self.shape[i] = self.division[i] + 1  # no of grid points
            self.coor[i] = \
                linspace(self.min_coor[i], self.max_coor[i], self.shape[i])
        self._more_init()
    def _more_init(self):
        self.shape = tuple(self.shape)
        self.coorv = meshgrid(*self.coor, indexing='ij')
        if not isinstance(self.coorv, (list,tuple)):
            # 1D grid, wrap self.coorv as list:
            self.coorv = [self.coorv]

        self.npoints = 1
        for i in range(len(self.shape)):
            self.npoints *= self.shape[i]

        self.tolerance = (max(self.max_coor) - min(self.min_coor))*1E-14

        # nicknames: xcoor, ycoor, xcoorv, ycoorv, etc
        for i in range(self.nsd):
            self.__dict__[self.dirnames[i]+'coor'] = self.coor[i]
            self.__dict__[self.dirnames[i]+'coorv'] = self.coorv[i]

        if self.nsd == 3:
            # make boundary coordinates for vectorization:
            xdummy, \
            self.ycoorv_xfixed_boundary, \
            self.zcoorv_xfixed_boundary = meshgrid(0, self.ycoor, self.zcoor,
                                                   indexing='ij')

            self.xcoorv_yfixed_boundary, \
            ydummy, \
            self.zcoorv_yfixed_boundary = meshgrid(self.xcoor, 0, self.zcoor,
                                                   indexing='ij')

            self.xcoorv_yfixed_boundary, \
            self.zcoorv_yfixed_boundary, \
            zdummy = meshgrid(self.xcoor, self.ycoor, 0, indexing='ij')

    # could have _ in all variable names and define read-only
    # access via properties
    def string2griddata(s):
        """
        Turn a text specification of a grid into a dictionary
        with the grid data.

        For example,

        >>> s = "domain=[0,10] indices=[0:11]"
        >>> data = BoxGrid.string2griddata(s)
        >>> data
        {'dirnames': ('x', 'y'), 'division': [10], 'max': [10], 'min': [0]}

        >>> s = "domain=[0.2,0.5]x[0,2E+00] indices=[0:20]x[0:100]"
        >>> data = BoxGrid.string2griddata(s)
        >>> data
        {'dirnames': ('x', 'y', 'z'),
         'division': [19, 99],
         'max': [0.5, 2],
         'min': [0.2, 0]}

        >>> s = "[0,1]x[0,2]x[-1,1.5] [0:25]x[0:10]x[0:16]"
        >>> data = BoxGrid.string2griddata(s)
        >>> data
        {'dirnames': ('x', 'y', 'z'),
         'division': [24, 9, 15],
         'max': [1.0, 2.0, 1.5],
         'min': [0.0, 0.0, -1.0]}

        The data dictionary can be used as keyword arguments to the
        class UniformBoxGrid constructor.
        """
        domain  = r'\[([^,]*),([^\]]*)\]'
        indices = r'\[([^:,]*):([^\]]*)\]'
        import re
        d = re.findall(domain, s)
        i = re.findall(indices, s)
        nsd = len(d)
        if nsd != len(i):
            raise ValueError('could not parse "%s"' % s)
        kwargs = {}
        dirnames = ('x', 'y', 'z')
        for dir in range(nsd):
            if not isinstance(d[dir], (list,tuple)) or len(d[dir]) != 2 or \
               not isinstance(i[dir], (list,tuple)) or len(i[dir]) != 2:
                raise ValueError('syntax error in "%s"' % s)

            # old syntax (nx, xmin, xmax, ny, ymin, etc.):
            #kwargs[dirnames[dir]] = (float(d[dir][0]), float(d[dir][1]))
            #kwargs['n'+dirnames[dir]] = int(i[dir][1]) - int(i[dir][0])  # no of cells!

        kwargs['min'] = [float(d[dir][0]) for dir in range(nsd)]
        kwargs['max'] = [float(d[dir][1]) for dir in range(nsd)]
        kwargs['division'] = [int(i[dir][1]) - int(i[dir][0]) \
                              for dir in range(nsd)]
        kwargs['dirnames'] = dirnames[:nsd]
        return kwargs
    string2griddata = staticmethod(string2griddata)
    def __getitem__(self, i):
        """
        Return access to coordinate array in direction no i, or direction
        name i, or return the coordinate of a point if i is an nsd-tuple.

        >>> g = UniformBoxGrid(x=(0,1), y=(-1,1), nx=2, ny=4)  # xy grid
        >>> g[0][0] == g.min[0]    # min coor in direction 0
        True
        >>> g['x'][0] == g.min[0]  # min coor in direction 'x'
        True
        >>> g[0,4]
        (0.0, 1.0)
        >>> g = UniformBoxGrid(min=(0,-1), max=(1,1), division=(2,4), dirnames=('y', 'z'))
        >>> g[1][0] == g.min[1]
        True
        >>> g['z'][0] == g.min[1]  # min coor in direction 'z'
        True
        """
        if isinstance(i, str):
            return self.coor[self.name2dirindex(i)]
        elif isinstance(i, int):
            if self.nsd > 1:
                return self.coor[i]     # coordinate array
            else:
                return self.coor[0][i]  # coordinate itself in 1D
        elif isinstance(i, (list,tuple)):
            return tuple([self.coor[k][i[k]] for k in range(len(i))])
        else:
            raise TypeError('i must be str, int, tuple')

    def __setitem__(self, i, value):
        raise AttributeError('subscript assignment is not valid for '\
                             '%s instances' % self.__class__.__name__)
    def ncells(self, i):
        """Return no of cells in direction i."""
        # i has the meaning as in __getitem__. May be removed if not much used
        return len(self.coor[i])-1

    def name2dirindex(self, name):
        """
        Return direction index corresponding to direction name.
        In an xyz-grid, 'x' is 0, 'y' is 1, and 'z' is 2.
        In an yz-grid, 'x' is not defined, 'y' is 0, and 'z' is 1.
        """
        try:
            return self.dirnames.index(name)
        except ValueError:
            print(name, 'is not defined')
            return None

    def dirindex2name(self, i):
        """Inverse of name2dirindex."""
        try:
            return self.dirnames[i]
        except IndexError:
            print(i, 'is not a valid index')
            return None

    def ok(self):
        return True  # constructor init only => always ok

    def __len__(self):
        """Total number of grid points."""
        n = 1
        for dir in self.coor:
            n *= len(dir)
        return n
    def __repr__(self):
        s = self.__class__.__name__ + \
            '(min=%s, max=%s, division=%s, dirnames=%s)' % \
            (self.min_coor.tolist(),
             self.max_coor.tolist(),
             self.division, self.dirnames)
        return s

    def __str__(self):
        """Pretty print, using the syntax of init_fromstring."""
        domain = 'x'.join(['[%g,%g]' % (min_, max_) \
                           for min_, max_ in
                           zip(self.min_coor, self.max_coor)])
        indices = 'x'.join(['[0:%d]' % div for div in self.division])
        return 'domain=%s indices=%s' % (domain, indices)

    def interpolator(self, point_values):
        """
        Given a self.nsd dimension array point_values with
        values at each grid point, this method returns a function
        for interpolating the scalar field defined by point_values
        at an arbitrary point.

        2D Example:
        given a filled array point_values[i,j], compute
        interpolator = grid.interpolator(point_values)
        v = interpolator(0.1243, 9.231)  # interpolate point_values

        >>> g=UniformBoxGrid(x=(0,2), nx=2, y=(-1,1), ny=2)
        >>> g
        UniformBoxGrid(x=(0,2), nx=2, y=(-1,1), ny=2)
        >>> def f(x,y): return 2+2*x-y

        >>> f=g.vectorized_eval(f)
        >>> f
        array([[ 3.,  2.,  1.],
               [ 5.,  4.,  3.],
               [ 7.,  6.,  5.]])
        >>> i=g.interpolator(f)
        >>> i(0.1,0.234)  # interpolate (not a grid point)
        1.9660000000000002
        >>> f(0.1,0.234)  # exact answer
        1.9660000000000002
        """
        args = self.coor
        args.append(point_values)
        # make use of wrap2callable, which applies ScientificPython
        return wrap2callable(args)
    def vectorized_eval(self, f):
        """
        Evaluate a function f (of the space directions) over a grid.
        f is supposed to be vectorized.

        >>> g = BoxGrid(x=(0,1), y=(0,1), nx=3, ny=3)
        >>> # f(x,y) = sin(x)*exp(x-y):
        >>> a = g.vectorized_eval(lambda x,y: sin(x)*exp(y-x))
        >>> print a
        [[ 0.          0.          0.          0.        ]
         [ 0.23444524  0.3271947   0.45663698  0.63728825]
         [ 0.31748164  0.44308133  0.6183698   0.86300458]
         [ 0.30955988  0.43202561  0.60294031  0.84147098]]

        >>> # f(x,y) = 2: (requires special consideration)
        >>> a = g.vectorized_eval(lambda x,y: zeros(g.shape)+2)
        >>> print a
        [[ 2.  2.  2.  2.]
         [ 2.  2.  2.  2.]
         [ 2.  2.  2.  2.]
         [ 2.  2.  2.  2.]]
        """
        a = f(*self.coorv)

        # check if f is really vectorized:
        try:
            msg = 'calling %s, which is supposed to be vectorized' % f.__name__
        except AttributeError:  # if __name__ is missing
            msg = 'calling a function, which is supposed to be vectorized'
        try:
            self.compatible(a)
        except (IndexError,TypeError) as e:
            print('e=', e, type(e), e.__class__.__name__)
            raise e.__class__('BoxGrid.vectorized_eval(f):\n%s, BUT:\n%s' % \
                              (msg, e))
        return a

    def init_fromstring(s):
        data = UniformBoxGrid.string2griddata(s)
        return UniformBoxGrid(**data)
    init_fromstring = staticmethod(init_fromstring)
    def compatible(self, data_array, name_of_data_array=''):
        """
        Check that data_array is a NumPy array with dimensions
        compatible with the grid.
        """
        if not isinstance(data_array, ndarray):
            raise TypeError('data %s is %s, not NumPy array' % \
                            (name_of_data_array, type(data_array)))
        else:
            if data_array.shape != self.shape:
                raise IndexError("data %s of shape %s is not "\
                                 "compatible with the grid's shape %s" % \
                                 (name_of_data_array, data_array.shape,
                                  self.shape))
        return True  # if we haven't raised any exceptions
    def iter(self, domain_part='all', vectorized_version=True):
        """
        Return iterator over grid points.
        domain_part = 'all':               all grid points
        domain_part = 'interior':          interior grid points
        domain_part = 'all_boundary':      all boundary points
        domain_part = 'interior_boundary': interior boundary points
        domain_part = 'corners':           all corner points
        domain_part = 'all_edges':         all points along edges in 3D grids
        domain_part = 'interior_edges':    interior points along edges

        vectorized_version is true if the iterator returns slice
        objects for the index slice in each direction.
        vectorized_version is false if the iterator visits each point
        at a time (scalar version).
        """
        self.iterator_domain = domain_part
        self.vectorized_iter = vectorized_version
        return self
def __iter__(self):
# Idea: set up slices for the various self.iterator_domain
# values. In scalar mode, make a loop over the slices and
# yield the scalar value. In vectorized mode, return the
# appropriate slices.
self._slices = [] # elements meant to be slice objects
if self.iterator_domain == 'all':
self._slices.append([])
for i in range(self.nsd):
self._slices[-1].append((i, slice(0, len(self.coor[i]), 1)))
elif self.iterator_domain == 'interior':
self._slices.append([])
for i in range(self.nsd):
self._slices[-1].append((i, slice(1, len(self.coor[i])-1, 1)))
elif self.iterator_domain == 'all_boundary':
for i in range(self.nsd):
self._slices.append([])
# boundary i fixed at 0:
for j in range(self.nsd):
if j != i:
self._slices[-1].\
append((j, slice(0, len(self.coor[j]), 1)))
else:
self._slices[-1].append((i, slice(0, 1, 1)))
# boundary i fixed at its max value:
for j in range(self.nsd):
if j != i:
self._slices[-1].\
append((j, slice(0, len(self.coor[j]), 1)))
else:
n = len(self.coor[i])
self._slices[-1].append((i, slice(n-1, n, 1)))
elif self.iterator_domain == 'interior_boundary':
for i in range(self.nsd):
self._slices.append([])
# boundary i fixed at 0:
for j in range(self.nsd):
if j != i:
self._slices[-1].\
append((j, slice(1, len(self.coor[j])-1, 1)))
else:
self._slices[-1].append((i, slice(0, 1, 1)))
# boundary i fixed at its max value:
for j in range(self.nsd):
if j != i:
self._slices[-1].\
append((j, slice(1, len(self.coor[j])-1, 1)))
else:
n = len(self.coor[i])
self._slices[-1].append((i, slice(n-1, n, 1)))
elif self.iterator_domain == 'corners':
if self.nsd == 1:
for i0 in (0, len(self.coor[0])-1):
self._slices.append([])
self._slices[-1].append((0, slice(i0, i0+1, 1)))
elif self.nsd == 2:
for i0 in (0, len(self.coor[0])-1):
for i1 in (0, len(self.coor[1])-1):
self._slices.append([])
self._slices[-1].append((0, slice(i0, i0+1, 1)))
self._slices[-1].append((1, slice(i1, i1+1, 1)))
elif self.nsd == 3:
for i0 in (0, len(self.coor[0])-1):
for i1 in (0, len(self.coor[1])-1):
for i2 in (0, len(self.coor[2])-1):
self._slices.append([])
self._slices[-1].append((0, slice(i0, i0+1, 1)))
self._slices[-1].append((1, slice(i1, i1+1, 1)))
self._slices[-1].append((2, slice(i2, i2+1, 1)))
elif self.iterator_domain == 'all_edges':
print('iterator over "all_edges" is not implemented')
elif self.iterator_domain == 'interior_edges':
print('iterator over "interior_edges" is not implemented')
else:
raise ValueError('iterator over "%s" is not impl.' % \
self.iterator_domain)
# "def __next__(self):"
"""
If vectorized mode:
Return list of slice instances, where the i-th element in the
list represents the slice for the index in the i-th space
direction (0,...,nsd-1).
If scalar mode:
Return list of indices (in multi-D) or the index (in 1D).
"""
if self.vectorized_iter:
for s in self._slices:
yield [slice_in_dir for dir, slice_in_dir in s]
else:
# scalar version
for s in self._slices:
slices = [slice_in_dir for dir, slice_in_dir in s]
if len(slices) == 1:
for i in range(slices[0].start, slices[0].stop):
yield i
elif len(slices) == 2:
for i in range(slices[0].start, slices[0].stop):
for j in range(slices[1].start, slices[1].stop):
yield [i, j]
elif len(slices) == 3:
for i in range(slices[0].start, slices[0].stop):
for j in range(slices[1].start, slices[1].stop):
for k in range(slices[2].start, slices[2].stop):
yield [i, j, k]
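The boundary-slice idea in `__iter__` can be illustrated without the class: fixing one index at 0 or its maximum while slicing the other direction visits the boundary of a 2D array, mirroring the `'all_boundary'` branch (a minimal sketch, assuming NumPy):

```python
import numpy as np

a = np.zeros((4, 5))
nsd = 2
boundaries = []
for i in range(nsd):
    for fixed in (0, a.shape[i] - 1):
        s = [slice(None)] * nsd   # take everything in the other directions
        s[i] = slice(fixed, fixed + 1)  # pin direction i at one end
        boundaries.append(tuple(s))
for s in boundaries:
    a[s] = 2
print(int(a.sum()))  # only boundary points are set: 2*(2*4 + 2*5 - 4) = 28
```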
def locate_cell(self, point):
"""
Given a point (x, (x,y), (x,y,z)), locate the cell in which
the point is located, and return
1) the (i,j,k) vertex index
of the "lower-left" grid point in this cell,
2) the distances (dx, (dx,dy), or (dx,dy,dz)) from this point to
the given point,
3) a boolean list if point matches the
coordinates of any grid lines. If a point matches
the last grid point in a direction, the cell index is
set to the max index such that the (i,j,k) index can be used
directly for look up in an array of values. The corresponding
element in the distance array is then set 0.
4) the indices of the nearest grid point.
The method only works for uniform grid spacing.
Used for interpolation.
>>> g1 = UniformBoxGrid(min=0, max=1, division=4)
>>> cell_index, distance, match, nearest = g1.locate_cell(0.7)
>>> print(cell_index)
[2]
>>> print(distance)
[ 0.2]
>>> print(match)
[False]
>>> print(nearest)
[3]
>>>
>>> g1.locate_cell(0.5)
([2], array([ 0.]), [True], [2])
>>>
>>> g2 = UniformBoxGrid.init_fromstring('[-1,1]x[-1,2] [0:3]x[0:4]')
>>> print(g2.coor)
[array([-1. , -0.33333333, 0.33333333, 1. ]), array([-1. , -0.25, 0.5 , 1.25, 2. ])]
>>> g2.locate_cell((0.2,0.2))
([1, 1], array([ 0.53333333, 0.45 ]), [False, False], [2, 2])
>>> g2.locate_cell((1,2))
([3, 4], array([ 0., 0.]), [True, True], [3, 4])
"""
if isinstance(point, (int,float)):
point = [point]
nsd = len(point)
if nsd != self.nsd:
raise ValueError('point=%s has wrong dimension (this is a %dD grid!)' % \
(point, self.nsd))
#index = zeros(nsd, int)
index = [0]*nsd
distance = zeros(nsd)
grid_point = [False]*nsd
nearest_point = [0]*nsd
for i, coor in enumerate(point):
# is point inside the domain?
if coor < self.min_coor[i] or coor > self.max_coor[i]:
raise ValueError(
'locate_cell: point=%s is outside the domain [%s,%s]' % \
(point, self.min_coor[i], self.max_coor[i]))
index[i] = int((coor - self.min_coor[i])//self.delta[i]) # (need integer division)
distance[i] = coor - (self.min_coor[i] + index[i]*self.delta[i])
if distance[i] > self.delta[i]/2:
nearest_point[i] = index[i] + 1
else:
nearest_point[i] = index[i]
if abs(distance[i]) < self.tolerance:
grid_point[i] = True
nearest_point[i] = index[i]
if (abs(distance[i] - self.delta[i])) < self.tolerance:
# last cell, update index such that it coincides with the point
grid_point[i] = True
index[i] += 1
nearest_point[i] = index[i]
distance[i] = 0.0
return index, distance, grid_point, nearest_point
@staticmethod
def interpolate(v0, v1, x0, x1, x):
"""Linear interpolation between (x0, v0) and (x1, v1), evaluated at x."""
return v0 + (v1-v0)/float(x1-x0)*(x-x0)
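For illustration, here is how `locate_cell`-style indexing and `interpolate` combine for piecewise-linear interpolation on a uniform 1D grid (a standalone sketch with a hypothetical helper `linear_interp`, not part of the module):

```python
def linear_interp(xmin, dx, values, x):
    # Locate the cell containing x on a uniform grid (cf. locate_cell),
    # then blend the two surrounding nodal values (cf. interpolate).
    i = int((x - xmin) // dx)
    i = min(max(i, 0), len(values) - 2)  # clamp the last point into the last cell
    x0 = xmin + i * dx
    v0, v1 = values[i], values[i + 1]
    return v0 + (v1 - v0) / dx * (x - x0)

# nodal values of u(x) = x**2 on [0, 1] with 4 cells
vals = [(0.25 * j) ** 2 for j in range(5)]
print(linear_interp(0.0, 0.25, vals, 0.3))  # ~0.1, linear approx of 0.3**2 = 0.09
```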
def gridline_slice(self, start_coor, direction=0, end_coor=None):
"""
Compute start and end indices of a line through the grid,
and return a tuple that can be used as slice for the
grid points along the line.
The line must be in x, y or z direction (direction=0,1 or 2).
If end_coor=None, the line ends where the grid ends.
start_coor holds the coordinates of the start of the line.
If start_coor does not coincide with one of the grid points,
the line is snapped onto the grid (i.e., the line coincides with
a grid line).
Return: tuple with indices and slice describing the grid point
indices that make up the line, plus a boolean "snapped" which is
True if the original line did not coincide with any grid line,
meaning that the returned line was snapped onto to the grid.
>>> g2 = UniformBoxGrid.init_fromstring('[-1,1]x[-1,2] [0:3]x[0:4]')
>>> print(g2.coor)
[array([-1. , -0.33333333, 0.33333333, 1. ]),
array([-1. , -0.25, 0.5 , 1.25, 2. ])]
>>> g2.gridline_slice((-1, 0.5), 0)
((slice(0, 4, 1), 2), False)
>>> g2.gridline_slice((-0.9, 0.4), 0)
((slice(0, 4, 1), 2), True)
>>> g2.gridline_slice((-0.2, -1), 1)
((1, slice(0, 5, 1)), True)
"""
start_cell, start_distance, start_match, start_nearest = \
self.locate_cell(start_coor)
# If snapping the line onto to the grid is not desired, the
# start_cell and start_match lists must be used for interpolation
# (i.e., interpolation is needed in the directions i where
# start_match[i] is False).
start_snapped = start_nearest[:]
if end_coor is None:
end_snapped = start_snapped[:]
end_snapped[direction] = self.division[direction] # highest legal index
else:
end_cell, end_distance, end_match, end_nearest = \
self.locate_cell(end_coor)
end_snapped = end_nearest[:]
# recall that upper index limit must be +1 in a slice:
line_slice = start_snapped[:]
line_slice[direction] = \
slice(start_snapped[direction], end_snapped[direction]+1, 1)
# note that if all start_match are true, then the plane
# was not snapped
return tuple(line_slice), not array(start_match).all()
def gridplane_slice(self, value, constant_coor=0):
"""
Compute a slice for a plane through the grid,
defined by coor[constant_coor]=value.
Return a tuple that can be used as slice, plus a boolean
parameter "snapped" reflecting if the plane was snapped
onto a grid plane (i.e., value did not correspond to
an existing grid plane).
"""
start_coor = self.min_coor.copy()
start_coor[constant_coor] = value
start_cell, start_distance, start_match, start_nearest = \
self.locate_cell(start_coor)
start_snapped = [0]*self.nsd
start_snapped[constant_coor] = start_nearest[constant_coor]
# recall that upper index limit must be +1 in a slice:
end_snapped = [self.division[i] for i in range(self.nsd)]
end_snapped[constant_coor] = start_snapped[constant_coor]
plane_slice = [slice(start_snapped[i], end_snapped[i]+1, 1) \
for i in range(self.nsd)]
plane_slice[constant_coor] = start_nearest[constant_coor]
return tuple(plane_slice), not start_match[constant_coor]
class BoxGrid(UniformBoxGrid):
"""
Extension of class UniformBoxGrid to non-uniform box grids.
The coordinate vectors (in each space direction) can have
arbitrarily spaced coordinate values.
The coor argument must be a list of nsd (number of
space dimension) components, each component contains the
grid coordinates in that space direction (stored as an array).
"""
def __init__(self, coor, dirnames=('x', 'y', 'z')):
UniformBoxGrid.__init__(self,
min=[a[0] for a in coor],
max=[a[-1] for a in coor],
division=[len(a)-1 for a in coor],
dirnames=dirnames)
# override:
self.coor = coor
def __repr__(self):
s = self.__class__.__name__ + '(coor=%s)' % self.coor
return s
def locate_cell(self, point):
raise NotImplementedError('Cannot locate point in cells in non-uniform grids')
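`locate_cell` relies on uniform spacing (integer division by `delta`), which is why it is disabled here. A possible extension to non-uniform grids uses a binary search over the coordinate arrays instead; a sketch (the helper name and signature are my own, not part of the original module):

```python
import bisect

def locate_cell_nonuniform(coor, point, tolerance=1e-10):
    """Return the 'lower-left' cell index of point in a non-uniform grid.

    coor is a list of sorted 1D coordinate sequences, one per direction."""
    index = []
    for c, x in zip(coor, point):
        if x < c[0] - tolerance or x > c[-1] + tolerance:
            raise ValueError('point %s outside [%s,%s]' % (x, c[0], c[-1]))
        i = bisect.bisect_right(c, x) - 1   # rightmost node <= x
        index.append(min(max(i, 0), len(c) - 2))  # clamp last point into last cell
    return index

print(locate_cell_nonuniform([[0, 0.1, 0.5, 1.0]], (0.3,)))  # [1]
```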
def _test(g, points=None):
result = 'g=%s' % str(g)
def fv(*args):
# vectorized evaluation function
return zeros(g.shape)+2
def fs(*args):
# scalar version
return 2
fv_arr = g.vectorized_eval(fv)
fs_arr = zeros(g.shape)
coor = [0.0]*g.nsd
itparts = ['all', 'interior', 'all_boundary', 'interior_boundary',
'corners']
if g.nsd == 3:
itparts += ['all_edges', 'interior_edges']
for domain_part in itparts:
result += '\niterator over "%s"\n' % domain_part
for i in g.iter(domain_part, vectorized_version=False):
if isinstance(i, int): i = [i] # wrap index as list (if 1D)
for k in range(g.nsd):
coor[k] = g.coor[k][i[k]]
result += '%s %s\n' % (i, coor)
if domain_part == 'all': # fs_arr shape corresponds to all points
fs_arr[i] = fs(*coor)
result += 'vectorized iterator over "%s":\n' % domain_part
for slices in g.iter(domain_part, vectorized_version=True):
if domain_part == 'all':
fs_arr[slices] = fv(*g.coor)
# else: more complicated
for slice_ in slices:
result += 'slice: %s values: %s\n' % (slice_, fs_arr[slice_])
# boundary slices...
return result
class Field(object):
"""
General base class for all grids. Holds a connection to a
grid, a name of the field, and optionally a list of the
independent variables and a string with a description of the
field.
"""
def __init__(self, grid, name,
independent_variables=None,
description=None,
**kwargs):
self.grid = grid
self.name = name
self.independent_variables = independent_variables
if self.independent_variables is None:
# copy grid.dirnames as independent variables:
self.independent_variables = self.grid.dirnames
# metainformation:
self.meta = {'description': description,}
self.meta.update(kwargs) # user can add more meta information
class BoxField(Field):
"""
Field over a BoxGrid or UniformBoxGrid grid.
============= =============================================
Attributes Description
============= =============================================
grid reference to the underlying grid instance
values array holding field values at the grid points
============= =============================================
"""
def __init__(self, grid, name, vector=0, **kwargs):
"""
Initialize scalar or vector field over a BoxGrid/UniformBoxGrid.
============= ===============================================
Arguments Description
============= ===============================================
*grid* grid instance
*name* name of the field
*vector* scalar field if 0, otherwise the no of vector
components (spatial dimensions of vector field)
*values* (*kwargs*) optional array with field values
============= ===============================================
Here is an example::
>>> g = UniformBoxGrid(min=[0,0], max=[1.,1.], division=[3, 4])
>>> print(g)
domain=[0,1]x[0,1] indices=[0:3]x[0:4]
>>> u = BoxField(g, 'u')
>>> u.values = u.grid.vectorized_eval(lambda x,y: x + y)
>>> i = 1; j = 2
>>> print('u(%g, %g)=%g' % (g.coor[X][i], g.coor[Y][j], u.values[i,j]))
u(0.333333, 0.5)=0.833333
>>> # visualize:
>>> from scitools.std import surf
>>> surf(u.grid.coorv[X], u.grid.coorv[Y], u.values)
``u.grid.coorv`` is a list of coordinate arrays that are
suitable for Matlab-style visualization of 2D scalar fields.
Also note how one can access the coordinates and u value at
a point (i,j) in the grid.
"""
Field.__init__(self, grid, name, **kwargs)
if vector > 0:
# for a vector field we add a "dimension" in values for
# the various vector components (first index):
self.required_shape = [vector]
self.required_shape += list(self.grid.shape)
else:
self.required_shape = self.grid.shape
if 'values' in kwargs:
values = kwargs['values']
self.set_values(values)
else:
# create array of scalar field grid point values:
self.values = zeros(self.required_shape)
# doesn't work: self.__getitem__ = self.values.__getitem__
#self.__setitem__ = self.values.__setitem__
def copy_values(self, values):
"""Take a copy of the values array and reshape it if necessary."""
self.set_values(values.copy())
def set_values(self, values):
"""Attach the values array to this BoxField object."""
if values.shape == self.required_shape:
self.values = values # field data are provided
else:
try:
values.shape = self.required_shape
self.values = values
except ValueError:
raise ValueError(
'values array is incompatible with grid size; '\
'shape is %s while required shape is %s' % \
(values.shape, self.required_shape))
def update(self):
"""Update the self.values array (if grid has been changed)."""
if self.grid.shape != self.values.shape:
self.values = zeros(self.grid.shape)
# these are slower than u_ = u.values; u_[i] since an additional
# function call is required compared to NumPy array indexing:
def __getitem__(self, i): return self.values[i]
def __setitem__(self, i, v): self.values[i] = v
def __str__(self):
if len(self.values.shape) > self.grid.nsd:
s = 'Vector field with %d components' % self.values.shape[-1]
else:
s = 'Scalar field'
s += ', over ' + str(self.grid)
return s
def gridline(self, start_coor, direction=0, end_coor=None,
snap=True):
"""
Return a coordinate array and corresponding field values
along a line starting with start_coor, in the given
direction, and ending in end_coor (default: grid boundary).
Two more boolean values are also returned: fixed_coor
(the value of the fixed coordinates, which may be different
from those in start_coor if snap=True) and snapped (True if
the line really had to be snapped onto the grid, i.e.,
fixed_coor differs from coordinates in start_coor.
If snap is True, the line is snapped onto the grid, otherwise
values along the line must be interpolated (not yet implemented).
>>> g2 = UniformBoxGrid.init_fromstring('[-1,1]x[-1,2] [0:3]x[0:4]')
>>> print(g2)
UniformBoxGrid(min=[-1. -1.], max=[ 1. 2.],
division=[3, 4], dirnames=('x', 'y'))
>>> print(g2.coor)
[array([-1. , -0.33333333, 0.33333333, 1. ]),
array([-1. , -0.25, 0.5 , 1.25, 2. ])]
>>> u = BoxField(g2, 'u')
>>> u.values = u.grid.vectorized_eval(lambda x,y: x + y)
>>> xc, uc, fixed_coor, snapped = u.gridline((-1,0.5), 0)
>>> print(xc)
[-1. -0.33333333 0.33333333 1. ]
>>> print(uc)
[-0.5 0.16666667 0.83333333 1.5 ]
>>> print(fixed_coor, snapped)
[0.5] False
>>> #plot(xc, uc, title='u(x, y=%g)' % fixed_coor)
"""
if not snap:
raise NotImplementedError('Use snap=True, no interpolation impl.')
slice_index, snapped = \
self.grid.gridline_slice(start_coor, direction, end_coor)
fixed_coor = [self.grid[s][i] for s,i in enumerate(slice_index) \
if not isinstance(i, slice)]
if len(fixed_coor) == 1:
fixed_coor = fixed_coor[0] # avoid returning list of length 1
return self.grid.coor[direction][slice_index[direction].start:\
slice_index[direction].stop], \
self.values[slice_index], fixed_coor, snapped
def gridplane(self, value, constant_coor=0, snap=True):
"""
Return two one-dimensional coordinate arrays and
corresponding field values over a plane where one coordinate,
constant_coor, is fixed at a value.
If snap is True, the plane is snapped onto a grid plane such
that the points in the plane coincide with the grid points.
Otherwise, the returned values must be interpolated (not yet impl.).
"""
if not snap:
raise NotImplementedError('Use snap=True, no interpolation impl.')
slice_index, snapped = self.grid.gridplane_slice(value, constant_coor)
if constant_coor == 0:
x = self.grid.coor[1]
y = self.grid.coor[2]
elif constant_coor == 1:
x = self.grid.coor[0]
y = self.grid.coor[2]
elif constant_coor == 2:
x = self.grid.coor[0]
y = self.grid.coor[1]
fixed_coor = self.grid.coor[constant_coor][slice_index[constant_coor]]
return x, y, self.values[slice_index], fixed_coor, snapped
def _rank12rankd_mesh(a, shape):
"""
Given rank 1 array a with values in a mesh with the no of points
described by shape, transform the array to the right "mesh array"
with the same shape.
"""
shape = list(shape)
shape.reverse()
if len(a.shape) == 1:
return a.reshape(shape).transpose()
else:
raise ValueError('array a cannot be multi-dimensional (not %s), ' \
'break it up into one-dimensional components' \
% a.shape)
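The reshape-and-transpose trick in `_rank12rankd_mesh` can be checked on a small example: DOLFIN numbers vertices with the x index varying fastest, so a flat array must be reshaped with the reversed shape and then transposed (a standalone check, assuming NumPy):

```python
import numpy as np

def rank1_to_mesh(a, shape):
    # same idea as _rank12rankd_mesh: reversed shape, reshape, transpose
    return a.reshape(tuple(reversed(shape))).transpose()

# 3 points in x, 2 in y, numbered x-fastest: value 10*i + j at grid point (i, j)
flat = np.array([10 * i + j for j in range(2) for i in range(3)])
mesh = rank1_to_mesh(flat, (3, 2))
print(mesh[2, 1])  # value at x-index 2, y-index 1 -> 21
```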
def fenics_mesh2UniformBoxGrid(fenics_mesh, division=None):
"""
Turn a regular, structured DOLFIN finite element mesh into
a UniformBoxGrid object. (Application: plotting with scitools.)
Standard DOLFIN numbering numbers the nodes along the x[0] axis,
then x[1] axis, and so on.
"""
if hasattr(fenics_mesh, 'structured_data'):
coor = fenics_mesh.structured_data
min_coor = [c[0] for c in coor]
max_coor = [c[-1] for c in coor]
division = [len(c)-1 for c in coor]
else:
if division is None:
raise ValueError('division must be given when fenics_mesh does not have a structured_data attribute')
else:
coor = fenics_mesh.coordinates() # numpy array
min_coor = coor[0]
max_coor = coor[-1]
return UniformBoxGrid(min=min_coor, max=max_coor,
division=division)
def fenics_mesh2BoxGrid(fenics_mesh, division=None):
"""
Turn a structured DOLFIN finite element mesh into
a BoxGrid object.
Standard DOLFIN numbering numbers the nodes along the x[0] axis,
then x[1] axis, and so on.
"""
if hasattr(fenics_mesh, 'structured_data'):
coor = fenics_mesh.structured_data
return BoxGrid(coor)
else:
if division is None:
raise ValueError('division must be given when fenics_mesh does not have a structured_data attribute')
else:
c = fenics_mesh.coordinates() # numpy array
shape = [n+1 for n in division] # shape for points in each dir.
c2 = [c[:,i] for i in range(c.shape[1])] # split x,y,z components
for i in range(c.shape[1]):
c2[i] = _rank12rankd_mesh(c2[i], shape)
# extract coordinates in the different directions
coor = []
if len(c2) == 1:
coor = [c2[0][:]]
elif len(c2) == 2:
coor = [c2[0][:,0], c2[1][0,:]]
elif len(c2) == 3:
coor = [c2[0][:,0,0], c2[1][0,:,0], c2[2][0,0,:]]
return BoxGrid(coor)
def FEniCSBoxField(fenics_function, division=None, uniform_mesh=True):
"""
Turn a DOLFIN P1 finite element field over a structured mesh into
a BoxField object.
Standard DOLFIN numbering numbers the nodes along the x[0] axis,
then x[1] axis, and so on.
If the DOLFIN function employs elements of degree > 1, the function
is automatically interpolated to elements of degree 1.
"""
# Get mesh
fenics_mesh = fenics_function.function_space().mesh()
# Interpolate if necessary
element = fenics_function.ufl_element()
if element.degree() != 1:
if element.value_size() == 1:
fenics_function \
= interpolate(fenics_function, FunctionSpace(fenics_mesh, 'P', 1))
else:
fenics_function \
= interpolate(fenics_function, VectorFunctionSpace(fenics_mesh, 'P', 1,
element.value_size()))
import dolfin
if dolfin.__version__[:3] == "1.0":
nodal_values = fenics_function.vector().get_local().copy()
else:
d2v = dolfin.dof_to_vertex_map(fenics_function.function_space())
nodal_values = fenics_function.vector().get_local().copy()
nodal_values[d2v] = fenics_function.vector().get_local().copy()
if uniform_mesh:
grid = fenics_mesh2UniformBoxGrid(fenics_mesh, division)
else:
grid = fenics_mesh2BoxGrid(fenics_mesh, division)
if nodal_values.size > grid.npoints:
# vector field, treat each component separately
ncomponents = int(nodal_values.size/grid.npoints)
try:
nodal_values.shape = (ncomponents, grid.npoints)
except ValueError as e:
raise ValueError('Vector field (nodal_values) has length %d, there are %d grid points, and this does not match with %d components' % (nodal_values.size, grid.npoints, ncomponents))
vector_field = [_rank12rankd_mesh(nodal_values[i,:].copy(),
grid.shape) \
for i in range(ncomponents)]
nodal_values = array(vector_field)
bf = BoxField(grid, name=fenics_function.name(),
vector=ncomponents, values=nodal_values)
else:
try:
nodal_values = _rank12rankd_mesh(nodal_values, grid.shape)
except ValueError as e:
raise ValueError('DOLFIN function has vector of size %s while the provided mesh has %d points and shape %s' % (nodal_values.size, grid.npoints, grid.shape))
bf = BoxField(grid, name=fenics_function.name(),
vector=0, values=nodal_values)
return bf
def update_from_fenics_array(fenics_array, box_field):
"""
Update the values in a BoxField object box_field with a new
DOLFIN array (fenics_array). The array must be reshaped and
transposed in the right way
(therefore box_field.copy_values(fenics_array) will not work).
"""
nodal_values = fenics_array.copy()
if len(nodal_values.shape) > 1:
raise NotImplementedError # no support for vector valued functions yet
# the problem is in _rank12rankd_mesh
try:
nodal_values = _rank12rankd_mesh(nodal_values, box_field.grid.shape)
except ValueError as e:
raise ValueError(
'DOLFIN function has vector of size %s while '
'the provided mesh demands %s' %
(nodal_values.size, box_field.grid.shape))
box_field.set_values(nodal_values)
return box_field
def _str_equal(a, b):
"""Return '' if a==b, else string with indication of differences."""
if a == b:
return ''
else:
r = [] # resulting string with indication of differences
for i, chars in enumerate(zip(a, b)):
a_, b_ = chars
if a_ == b_:
r.append(a_)
else:
r.append('[')
r.append(a_)
r.append('|')
r.append(b_)
r.append(']')
return ''.join(r)
def test_2Dmesh():
g1 = UniformBoxGrid(min=0, max=1, division=4)
expected = """\
g=domain=[0,1] indices=[0:4]
iterator over "all"
[0] [0.0]
[1] [0.25]
[2] [0.5]
[3] [0.75]
[4] [1.0]
vectorized iterator over "all":
slice: slice(0, 5, 1) values: [ 2. 2. 2. 2. 2.]
iterator over "interior"
[1] [0.25]
[2] [0.5]
[3] [0.75]
vectorized iterator over "interior":
slice: slice(1, 4, 1) values: [ 2. 2. 2.]
iterator over "all_boundary"
[0, 4] [0.0]
vectorized iterator over "all_boundary":
slice: slice(0, 1, 1) values: [ 2.]
slice: slice(4, 5, 1) values: [ 2.]
iterator over "interior_boundary"
[0, 4] [0.0]
vectorized iterator over "interior_boundary":
slice: slice(0, 1, 1) values: [ 2.]
slice: slice(4, 5, 1) values: [ 2.]
iterator over "corners"
[0] [0.0]
[4] [1.0]
vectorized iterator over "corners":
slice: slice(0, 1, 1) values: [ 2.]
slice: slice(4, 5, 1) values: [ 2.]
iterator over "all_edges" is not implemented
iterator over "all_edges" is not implemented
iterator over "interior_edges" is not implemented
iterator over "interior_edges" is not implemented
"""
computed = _test(g1, [0.7, 0.5])
msg = _str_equal(expected, computed)
print('msg: [%s]' % msg)
success = not msg
assert success, msg
# Demonstrate functionality with structured mesh class
spec = '[0,1]x[-1,2] with indices [0:3]x[0:2]'
g2 = UniformBoxGrid.init_fromstring(spec)
_test(g2, [(0.2,0.2), (1,2)])
g3 = UniformBoxGrid(min=(0,0,-1), max=(1,1,1), division=(4,1,2))
_test(g3)
print('g3=\n%s' % str(g3))
print('g3[Z]=', g3[Z])
print('g3[Z][1] =', g3[Z][1])
print('dx, dy, dz spacings:', g3.delta)
g4 = UniformBoxGrid(min=(0,-1), max=(1,1),
division=(4,2), dirnames=('y','z'))
_test(g4)
# print 'g4["y"][-1]:', g4["y"][-1]
print('g4["y"][-1]:', g4["y"][-1])
def _test3():
from numpy import sin, zeros, exp
# check vectorization evaluation:
g = UniformBoxGrid(min=(0,0), max=(1,1), division=(3,3))
try:
g.vectorized_eval(lambda x,y: 2)
except TypeError as msg:
# fine, expect to arrive here
print('*** Expected error in this test:', msg)
try:
g.vectorized_eval(lambda x,y: zeros((2,2))+2)
except IndexError as msg:
# fine, expect to arrive here
print('*** Expected error in this test:', msg)
a = g.vectorized_eval(lambda x,y: sin(x)*exp(y-x))
print(a)
a = g.vectorized_eval(lambda x,y: zeros(g.shape)+2)
print(a)
def _test_field(g):
print('grid: %s' % g)
# function: 1 + x + y + z
def f(*args):
sum = 1.0
for x in args:
sum = sum + x
return sum
u = BoxField(g, 'u')
v = BoxField(g, 'v', vector=g.nsd)
u.values = u.grid.vectorized_eval(f) # fill field values
if g.nsd == 1:
v[:,X] = u.values + 1 # 1st component
elif g.nsd == 2:
v[:,:,X] = u.values + 1 # 1st component
v[:,:,Y] = u.values + 2 # 2nd component
elif g.nsd == 3:
v[:,:,:,X] = u.values + 1 # 1st component
v[:,:,:,Y] = u.values + 2 # 2nd component
v[:,:,:,Z] = u.values + 3 # 3rd component
# write out field values at the mid point of the grid
# (convert g.shape to NumPy array and divide by 2 to find
# approximately the mid point)
midptindex = tuple(array(g.shape, int)//2)  # integer division: indices must be ints
ptcoor = g[midptindex]
# tuples with just one item does not work as indices
print('mid point %s has indices %s' % (ptcoor, midptindex))
print('f%s=%g' % (ptcoor, f(*ptcoor)))
print('u at %s: %g' % (midptindex, u[midptindex]))
v_index = list(midptindex); v_index.append(slice(g.nsd))
print('v at %s: %s' % (midptindex, v[v_index]))
# test extraction of lines:
if u.grid.nsd >= 2:
direction = u.grid.nsd-1
coor, u_coor, fixed_coor, snapped = \
u.gridline(u.grid.min_coor, direction)
if snapped: print('Error: snapped line')
print('line in x[%d]-direction, starting at %s' % \
(direction, u.grid.min_coor))
print(coor)
print(u_coor)
direction = 0
point = u.grid.min_coor.copy()
point[direction+1] = u.grid.max_coor[direction+1]
coor, u_coor, fixed_coor, snapped = \
u.gridline(u.grid.min_coor, direction)
if snapped: print('Error: snapped line')
print('line in x[%d]-direction, starting at %s' % \
(direction, point))
print(coor)
print(u_coor)
if u.grid.nsd == 3:
y_center = (u.grid.max_coor[1] + u.grid.min_coor[1])/2.0
xc, yc, uc, fixed_coor, snapped = \
u.gridplane(value=y_center, constant_coor=1)
print('Plane y=%g:' % fixed_coor,)
if snapped: print(' (snapped from y=%g)' % y_center)
else: print()
print(xc)
print(yc)
print(uc)
def _test4():
g1 = UniformBoxGrid(min=0, max=1, division=4)
_test_field(g1)
spec = '[0,1]x[-1,2] with indices [0:4]x[0:3]'
g2 = UniformBoxGrid.init_fromstring(spec)
_test_field(g2)
g3 = UniformBoxGrid(min=(0,0,-1), max=(1,1,1), division=(4,3,2))
_test_field(g3)
if __name__ == '__main__':
test_2Dmesh()
Thank you very much for your work.
In the magnetostatics section I did not really understand how you get from the curl-curl equation to the grad-div-type one (before restricting to two dimensions, eq. (4.20)). For a homogeneous medium with mu = const this step is trivial using the Coulomb gauge, but I think that argument is no longer valid here. Could you please explain this step further and add more details?
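For readers puzzling over the same step: in two dimensions no gauge is needed, because with A = A_z(x, y) ẑ the curl-curl form reduces directly to a scalar divergence form, even for variable mu. A sketch of the standard argument (my notation, not a quote from the book):

```latex
% 3D vector identity (this route needs \mu = const and the Coulomb gauge
% \nabla\cdot A = 0):
\nabla\times(\nabla\times A) = \nabla(\nabla\cdot A) - \nabla^2 A
% In 2D, with A = A_z(x, y)\,\hat{z}:
\nabla\times(A_z\hat{z}) = \nabla A_z \times \hat{z},
\qquad
\hat{z}\cdot\left[\nabla\times\Big(\tfrac{1}{\mu}\,\nabla A_z\times\hat{z}\Big)\right]
  = -\,\nabla\cdot\Big(\tfrac{1}{\mu}\,\nabla A_z\Big),
% so the scalar equation for A_z holds for variable \mu without any gauge choice.
```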
The file ft08_navier_stokes_cylinder.py uses two preconditioners, 'hypre_amg' and 'sor'.
However, DOLFIN has now removed these two.
The current solver and preconditioner lists are as follows:
cholesky | Simplicial LDLT
cholmod | 'CHOLMOD' sparse Cholesky factorisation
default | default LU solver
sparselu | Supernodal LU factorization for general matrices
umfpack | UMFPACK (Unsymmetric MultiFrontal sparse LU factorization)
bicgstab | Biconjugate gradient stabilized method
cg | Conjugate gradient method
default | default Eigen Krylov method
gmres | Generalised minimal residual (GMRES)
minres | Minimal residual
default | default
ilu | Incomplete LU
jacobi | Jacobi
none | None
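In DOLFIN the tables above can be queried programmatically with `list_linear_solver_methods()` and `list_krylov_solver_preconditioners()`. A portable workaround is to fall back to an available preconditioner when the preferred one is missing; a minimal sketch (`pick_preconditioner` is my own helper, and the name set below is taken from this issue's Eigen backend, not guaranteed for every build):

```python
def pick_preconditioner(preferred, available):
    """Return the first preferred preconditioner this build supports."""
    for name in preferred:
        if name in available:
            return name
    return 'default'  # every DOLFIN build provides a default

# preconditioners reported above for this 2019.1.0 Eigen backend
available = {'default', 'ilu', 'jacobi', 'none'}
print(pick_preconditioner(['hypre_amg', 'ilu'], available))  # -> ilu
print(pick_preconditioner(['sor', 'jacobi'], available))     # -> jacobi
```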
I use FEniCS installed in WSL on Windows 10.
The version is 2019.01.
I then updated the demo files in the tutorial.
I changed the plot code because WSL cannot use a GUI.
"""
FEniCS tutorial demo program: Nonlinear Poisson equation.
-div(q(u)*grad(u)) = f in the unit square.
u = u_D on the boundary.
"""
from __future__ import print_function
# Warning: from fenics import * will import both `sym` and
# `q` from FEniCS. We therefore import FEniCS first and then
# overwrite these objects.
from fenics import *
def q(u):
"Return nonlinear coefficient"
return 1 + u**2
# Use SymPy to compute f from the manufactured solution u
import sympy as sym
x, y = sym.symbols('x[0], x[1]')
u = 1 + x + 2*y
f = - sym.diff(q(u)*sym.diff(u, x), x) - sym.diff(q(u)*sym.diff(u, y), y)
f = sym.simplify(f)
u_code = sym.printing.ccode(u)
f_code = sym.printing.ccode(f)
print('u =', u_code)
print('f =', f_code)
# Create mesh and define function space
mesh = UnitSquareMesh(8, 8)
V = FunctionSpace(mesh, 'P', 1)
# Define boundary condition
u_D = Expression(u_code, degree=2)
def boundary(x, on_boundary):
return on_boundary
bc = DirichletBC(V, u_D, boundary)
# Define variational problem
u = Function(V) # Note: not TrialFunction!
v = TestFunction(V)
f = Expression(f_code, degree=2)
F = q(u)*dot(grad(u), grad(v))*dx - f*v*dx
# Compute solution
solve(F == 0, u, bc)
# Plot solution
import matplotlib.pyplot as plt
plot(u)
plt.savefig('u.png')
plt.cla()  # note the parentheses: plt.cla alone is a no-op
# Compute maximum error at vertices. This computation illustrates
# an alternative to using compute_vertex_values as in poisson.py.
u_e = interpolate(u_D, V)
import numpy as np
error_max = np.abs(u_e.vector().get_local() - u.vector().get_local()).max()
print('error_max = ', error_max)
# Hold plot
# interactive()
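The manufactured-solution step above can be verified independently: for u = 1 + x + 2y and q(u) = 1 + u^2, a direct calculation gives f = -div(q(u)*grad(u)) = -10*(1 + x + 2y). A quick finite-difference check of that claim (standalone, no FEniCS needed):

```python
def u(x, y):
    return 1 + x + 2 * y

def q(v):
    return 1 + v ** 2

def f_numeric(x, y, h=1e-4):
    # f = -div(q(u) grad u), approximated with nested central differences
    def flux_x(x, y):
        return q(u(x, y)) * (u(x + h, y) - u(x - h, y)) / (2 * h)
    def flux_y(x, y):
        return q(u(x, y)) * (u(x, y + h) - u(x, y - h)) / (2 * h)
    return -((flux_x(x + h, y) - flux_x(x - h, y)) / (2 * h)
             + (flux_y(x, y + h) - flux_y(x, y - h)) / (2 * h))

x0, y0 = 0.3, 0.4
print(abs(f_numeric(x0, y0) - (-10 * u(x0, y0))) < 1e-6)  # True
```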
Hi,
I have searched a lot for a discontinuous Galerkin Navier-Stokes FEniCS code, but did not find one. If someone has such a code, could you please share it?
Thank you,
Debendra
The section "Taking advantage of structured mesh data" says:
In the directory src/modules, associated with this book, we have included the Python module BoxField that provides utilities for working with structured mesh data in FEniCS.
However, there is no directory modules in src.
Is there any tutorial that shows a solution for a simple 2D cylinder shape? Thanks.
$ python3 ft01_poisson.py
Calling FFC just-in-time (JIT) compiler, this may take some time.
Calling FFC just-in-time (JIT) compiler, this may take some time.
Calling FFC just-in-time (JIT) compiler, this may take some time.
Calling FFC just-in-time (JIT) compiler, this may take some time.
Calling FFC just-in-time (JIT) compiler, this may take some time.
Calling FFC just-in-time (JIT) compiler, this may take some time.
Calling FFC just-in-time (JIT) compiler, this may take some time.
Solving linear variational problem.
*** Warning: Degree of exact solution may be inadequate for accurate result in errornorm.
Calling FFC just-in-time (JIT) compiler, this may take some time.
Calling FFC just-in-time (JIT) compiler, this may take some time.
Calling FFC just-in-time (JIT) compiler, this may take some time.
error_L2 = 0.008235098073354827
error_max = 1.33226762955e-15
Traceback (most recent call last):
File "ft01_poisson.py", line 60, in <module>
interactive()
NameError: name 'interactive' is not defined
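The interactive() call was removed from newer FEniCS releases, which is what triggers the NameError above. A minimal headless workaround, assuming matplotlib is installed, is to select a non-interactive backend and save the figure instead:

```python
# Headless replacement for the removed interactive() call: use the
# Agg backend (works under WSL/SSH without a display) and save to file.
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt

plt.plot([0, 1], [0, 1])   # stand-in for FEniCS' plot(u)
plt.savefig('u.png')       # instead of interactive()
```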
I use FEniCS installed in WSL of Windows 10.
The version is 2019.01.
Then I update demo files in the tutorial.
I delete the plot code because WSL cannot use it.
"""
FEniCS tutorial demo program: Convection-diffusion-reaction for a system
describing the concentration of three species A, B, C undergoing a simple
first-order reaction A + B --> C with first-order decay of C. The velocity
is given by the flow field w from the demo navier_stokes_cylinder.py.
u_1' + w . nabla(u_1) - div(eps*grad(u_1)) = f_1 - K*u_1*u_2
u_2' + w . nabla(u_2) - div(eps*grad(u_2)) = f_2 - K*u_1*u_2
u_3' + w . nabla(u_3) - div(eps*grad(u_3)) = f_3 + K*u_1*u_2 - K*u_3
"""
from __future__ import print_function
from dolfin import *
T = 5.0 # final time
num_steps = 500 # number of time steps
dt = T / num_steps # time step size
eps = 0.01 # diffusion coefficient
K = 10.0 # reaction rate
# Read mesh from file
mesh = Mesh('navier_stokes_cylinder/cylinder.xml.gz')
# Define function space for velocity
W = VectorFunctionSpace(mesh, 'P', 2)
# Define function space for system of concentrations
P1 = FiniteElement('P', triangle, 1)
element = MixedElement([P1, P1, P1])
V = FunctionSpace(mesh, element)
# Define test functions
v_1, v_2, v_3 = TestFunctions(V)
# Define functions for velocity and concentrations
w = Function(W)
u = Function(V)
u_n = Function(V)
# Split system functions to access components
u_1, u_2, u_3 = split(u)
u_n1, u_n2, u_n3 = split(u_n)
# Define source terms
f_1 = Expression('pow(x[0]-0.1,2)+pow(x[1]-0.1,2)<0.05*0.05 ? 0.1 : 0',
degree=1)
f_2 = Expression('pow(x[0]-0.1,2)+pow(x[1]-0.3,2)<0.05*0.05 ? 0.1 : 0',
degree=1)
f_3 = Constant(0)
# Define expressions used in variational forms
k = Constant(dt)
K = Constant(K)
eps = Constant(eps)
# Define variational problem
F = ((u_1 - u_n1) / k)*v_1*dx + dot(w, grad(u_1))*v_1*dx \
+ eps*dot(grad(u_1), grad(v_1))*dx + K*u_1*u_2*v_1*dx \
+ ((u_2 - u_n2) / k)*v_2*dx + dot(w, grad(u_2))*v_2*dx \
+ eps*dot(grad(u_2), grad(v_2))*dx + K*u_1*u_2*v_2*dx \
+ ((u_3 - u_n3) / k)*v_3*dx + dot(w, grad(u_3))*v_3*dx \
+ eps*dot(grad(u_3), grad(v_3))*dx - K*u_1*u_2*v_3*dx + K*u_3*v_3*dx \
- f_1*v_1*dx - f_2*v_2*dx - f_3*v_3*dx
# Create time series for reading velocity data
timeseries_w = TimeSeries('navier_stokes_cylinder/velocity_series')
# Create VTK files for visualization output
vtkfile_u_1 = File('reaction_system/u_1.pvd')
vtkfile_u_2 = File('reaction_system/u_2.pvd')
vtkfile_u_3 = File('reaction_system/u_3.pvd')
# Create progress bar
# progress = Progress('Time-stepping')
progress = Progress('Time-stepping', num_steps)
# set_log_level(PROGRESS)
# Time-stepping
t = 0
for n in range(num_steps):
    # Update current time
    t += dt
    # Read velocity from file
    timeseries_w.retrieve(w.vector(), t)
    # Solve variational problem for time step
    solve(F == 0, u)
    # Save solution to file (VTK)
    _u_1, _u_2, _u_3 = u.split()
    vtkfile_u_1 << (_u_1, t)
    vtkfile_u_2 << (_u_2, t)
    vtkfile_u_3 << (_u_3, t)
    # Update previous solution
    u_n.assign(u)
    # Update progress bar
    # progress.update(t / T)
    set_log_level(LogLevel.PROGRESS)
    progress += 1
    set_log_level(LogLevel.ERROR)
# Hold plot
# interactive()
Using FEniCS with the Spyder IDE
I used the PPA FEniCS (Ubuntu 19.04) and followed the instructions. Downloading never produced any error message. I did all of this (the installation) inside my home directory, where Anaconda3 (Spyder) is installed. The command
$ apt-cache showpkg fenics
shows the fenics dependencies, and they seem OK, for example: (2 2019.1) python3-ufl, python3-ffc, and more.
How can I link Spyder in my home directory to all those packages and libraries resulting from the PPA installation? Do I need to add something to my PATH? Do I need to move some packages to my home directory?
I cannot figure out how to proceed from here. I believe there is a simple way to solve this problem, but I have run out of ideas.
Before the PPA installation I feared this would happen somehow. At the moment I do not know where FEniCS is, i.e. /var/lib/apt/lists/?
Hello,
Today after updating in my system (Ubuntu), the fenics tells me that 'interactive' is not defined.
Is there something wrong with my updating?
What should I do to handle it?
Thank you,
Osbert
Hello.
I installed FEniCS on Ubuntu 18.04 following the download webpage.
However, when I finished and ran
python -c 'import fenics'
the system told me there is no module named fenics.
Is there something wrong?
Besides the list of references at the beginning, could you please give some references for further reading in the appropriate subsections? For a beginner in FEM methods and in the special treatment of physical theories in FEM, a few introductory references for e.g. the non-standard examples (elasticity, magnetostatics, Navier-Stokes) would be very helpful.
On page 93, equation (4.8) should be
- div(kappa*grad(u)) = f.
The book reads -f on the right-hand side.
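In LaTeX form, the corrected equation (4.8) reads:

```latex
-\nabla \cdot \left( \kappa \nabla u \right) = f
```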
I use FEniCS installed in WSL of Windows 10.
The version is 2019.01.
Then I update demo files in the tutorial.
I change the plot code because WSL cannot use GUI but it can save figure.
"""
FEniCS tutorial demo program: Poisson equation with Dirichlet conditions.
Test problem is chosen to give an exact solution at all nodes of the mesh.
-Laplace(u) = f in the unit square
u = u_D on the boundary
u_D = 1 + x^2 + 2y^2
f = -6
"""
# from __future__ import print_function
from dolfin import *
# Create mesh and define function space
mesh = UnitSquareMesh(8, 8)
V = FunctionSpace(mesh, 'P', 1)
# Define boundary condition
u_D = Expression('1 + x[0]*x[0] + 2*x[1]*x[1]', degree=2)
def boundary(x, on_boundary):
    return on_boundary
bc = DirichletBC(V, u_D, boundary)
# Define variational problem
u = TrialFunction(V)
v = TestFunction(V)
f = Constant(-6.0)
a = dot(grad(u), grad(v))*dx
L = f*v*dx
# Compute solution
u = Function(V)
solve(a == L, u, bc)
# Plot solution and mesh
# plot(u)
# plot(mesh)
import matplotlib.pyplot as plt
plot(mesh)
plt.savefig('mesh.pdf')
plt.cla()
plot(u)
plt.savefig('u.pdf')
plt.cla()
# Save solution to file in VTK format
vtkfile = File('poisson/solution.pvd')
vtkfile << u
# Compute error in L2 norm
error_L2 = errornorm(u_D, u, 'L2')
# Compute maximum error at vertices
vertex_values_u_D = u_D.compute_vertex_values(mesh)
vertex_values_u = u.compute_vertex_values(mesh)
import numpy as np
error_max = np.max(np.abs(vertex_values_u_D - vertex_values_u))
# Print errors
print('error_L2 =', error_L2)
print('error_max =', error_max)
# Hold plot
# interactive()
I use FEniCS installed in WSL of Windows 10.
The version is 2019.01.
Then I update demo files in the tutorial.
I delete the plot code because WSL cannot use it.
"""
FEniCS tutorial demo program: Linear elastic problem.
-div(sigma(u)) = f
The model is used to simulate an elastic beam clamped at
its left end and deformed under its own weight.
"""
from __future__ import print_function
from dolfin import *
from ufl import nabla_div
import numpy as np
# Scaled variables
L = 1; W = 0.2
mu = 1
rho = 1
delta = W/L
gamma = 0.4*delta**2
beta = 1.25
lambda_ = beta
g = gamma
# Create mesh and define function space
mesh = BoxMesh(Point(0, 0, 0), Point(L, W, W), 10, 3, 3)
V = VectorFunctionSpace(mesh, 'P', 1)
# Define boundary condition
tol = 1E-14
def clamped_boundary(x, on_boundary):
    return on_boundary and x[0] < tol
bc = DirichletBC(V, Constant((0, 0, 0)), clamped_boundary)
# Define strain and stress
def epsilon(u):
    return 0.5*(nabla_grad(u) + nabla_grad(u).T)
    #return sym(nabla_grad(u))
def sigma(u):
    return lambda_*nabla_div(u)*Identity(d) + 2*mu*epsilon(u)
# Define variational problem
u = TrialFunction(V)
d = u.geometric_dimension() # space dimension
v = TestFunction(V)
f = Constant((0, 0, -rho*g))
T = Constant((0, 0, 0))
a = inner(sigma(u), epsilon(v))*dx
L = dot(f, v)*dx + dot(T, v)*ds
# Compute solution
u = Function(V)
solve(a == L, u, bc)
# Plot solution
# plot(u, title='Displacement', mode='displacement')
# Plot stress
s = sigma(u) - (1./3)*tr(sigma(u))*Identity(d) # deviatoric stress
von_Mises = sqrt(3./2*inner(s, s))
V = FunctionSpace(mesh, 'P', 1)
von_Mises = project(von_Mises, V)
# plot(von_Mises, title='Stress intensity')
# Compute magnitude of displacement
u_magnitude = sqrt(dot(u, u))
u_magnitude = project(u_magnitude, V)
# plot(u_magnitude, 'Displacement magnitude')
print('min/max u:',
u_magnitude.vector().get_local().min(),
u_magnitude.vector().get_local().max())
# Save solution to file in VTK format
File('elasticity/displacement.pvd') << u
File('elasticity/von_mises.pvd') << von_Mises
File('elasticity/magnitude.pvd') << u_magnitude
# Hold plot
# interactive()
Mesh data files for the tutorial examples are not provided, for instance navier_stokes_cylinder/cylinder.xml.gz, needed by the Reactive Flow example, ft09_reaction_system.py
I use FEniCS installed in WSL of Windows 10.
The version is 2019.01.
Then I update demo files in the tutorial.
I delete the plot code because WSL cannot use it.
"""
FEniCS tutorial demo program: Incompressible Navier-Stokes equations
for flow around a cylinder using the Incremental Pressure Correction
Scheme (IPCS).
u' + u . nabla(u) - div(sigma(u, p)) = f
div(u) = 0
"""
from __future__ import print_function
from dolfin import *
from mshr import *
import numpy as np
T = 5.0 # final time
num_steps = 5000 # number of time steps
dt = T / num_steps # time step size
mu = 0.001 # dynamic viscosity
rho = 1 # density
# Create mesh
channel = Rectangle(Point(0, 0), Point(2.2, 0.41))
cylinder = Circle(Point(0.2, 0.2), 0.05)
domain = channel - cylinder
mesh = generate_mesh(domain, 64)
# Define function spaces
V = VectorFunctionSpace(mesh, 'P', 2)
Q = FunctionSpace(mesh, 'P', 1)
# Define boundaries
inflow = 'near(x[0], 0)'
outflow = 'near(x[0], 2.2)'
walls = 'near(x[1], 0) || near(x[1], 0.41)'
cylinder = 'on_boundary && x[0]>0.1 && x[0]<0.3 && x[1]>0.1 && x[1]<0.3'
# Define inflow profile
inflow_profile = ('4.0*1.5*x[1]*(0.41 - x[1]) / pow(0.41, 2)', '0')
# Define boundary conditions
bcu_inflow = DirichletBC(V, Expression(inflow_profile, degree=2), inflow)
bcu_walls = DirichletBC(V, Constant((0, 0)), walls)
bcu_cylinder = DirichletBC(V, Constant((0, 0)), cylinder)
bcp_outflow = DirichletBC(Q, Constant(0), outflow)
bcu = [bcu_inflow, bcu_walls, bcu_cylinder]
bcp = [bcp_outflow]
# Define trial and test functions
u = TrialFunction(V)
v = TestFunction(V)
p = TrialFunction(Q)
q = TestFunction(Q)
# Define functions for solutions at previous and current time steps
u_n = Function(V)
u_ = Function(V)
p_n = Function(Q)
p_ = Function(Q)
# Define expressions used in variational forms
U = 0.5*(u_n + u)
n = FacetNormal(mesh)
f = Constant((0, 0))
k = Constant(dt)
mu = Constant(mu)
rho = Constant(rho)
# Define symmetric gradient
def epsilon(u):
    return sym(nabla_grad(u))
# Define stress tensor
def sigma(u, p):
    return 2*mu*epsilon(u) - p*Identity(len(u))
# Define variational problem for step 1
F1 = rho*dot((u - u_n) / k, v)*dx \
+ rho*dot(dot(u_n, nabla_grad(u_n)), v)*dx \
+ inner(sigma(U, p_n), epsilon(v))*dx \
+ dot(p_n*n, v)*ds - dot(mu*nabla_grad(U)*n, v)*ds \
- dot(f, v)*dx
a1 = lhs(F1)
L1 = rhs(F1)
# Define variational problem for step 2
a2 = dot(nabla_grad(p), nabla_grad(q))*dx
L2 = dot(nabla_grad(p_n), nabla_grad(q))*dx - (1/k)*div(u_)*q*dx
# Define variational problem for step 3
a3 = dot(u, v)*dx
L3 = dot(u_, v)*dx - k*dot(nabla_grad(p_ - p_n), v)*dx
# Assemble matrices
A1 = assemble(a1)
A2 = assemble(a2)
A3 = assemble(a3)
# Apply boundary conditions to matrices
[bc.apply(A1) for bc in bcu]
[bc.apply(A2) for bc in bcp]
# Create XDMF files for visualization output
xdmffile_u = XDMFFile('navier_stokes_cylinder/velocity.xdmf')
xdmffile_p = XDMFFile('navier_stokes_cylinder/pressure.xdmf')
# Create time series (for use in reaction_system.py)
timeseries_u = TimeSeries('navier_stokes_cylinder/velocity_series')
timeseries_p = TimeSeries('navier_stokes_cylinder/pressure_series')
# Save mesh to file (for use in reaction_system.py)
File('navier_stokes_cylinder/cylinder.xml.gz') << mesh
# Create progress bar
# progress = Progress('Time-stepping')
progress = Progress('Time-stepping', num_steps)
# set_log_level(PROGRESS)
# Time-stepping
t = 0
for n in range(num_steps):
    # Update current time
    t += dt
    # Step 1: Tentative velocity step
    b1 = assemble(L1)
    [bc.apply(b1) for bc in bcu]
    solve(A1, u_.vector(), b1, 'bicgstab', 'hypre_amg')
    # Step 2: Pressure correction step
    b2 = assemble(L2)
    [bc.apply(b2) for bc in bcp]
    solve(A2, p_.vector(), b2, 'bicgstab', 'hypre_amg')
    # Step 3: Velocity correction step
    b3 = assemble(L3)
    solve(A3, u_.vector(), b3, 'cg', 'sor')
    # Plot solution
    # plot(u_, title='Velocity')
    # plot(p_, title='Pressure')
    # Save solution to file (XDMF/HDF5)
    xdmffile_u.write(u_, t)
    xdmffile_p.write(p_, t)
    # Save nodal values to file
    timeseries_u.store(u_.vector(), t)
    timeseries_p.store(p_.vector(), t)
    # Update previous solution
    u_n.assign(u_)
    p_n.assign(p_)
    # Update progress bar
    # progress.update(t / T)
    set_log_level(LogLevel.PROGRESS)
    progress += 1
    set_log_level(LogLevel.ERROR)
print('u max:', u_.vector().get_local().max())
# Hold plot
# interactive()
I'm currently working with your FEniCS tutorial. There
one can choose between PDF and HTML, and there is also a link to the tutorial examples. The link to the tutorial examples seems to be broken.
I use FEniCS installed in WSL of Windows 10.
The version is 2019.01.
Then I update demo files in the tutorial.
I delete the plot code because WSL cannot use it.
"""
FEniCS tutorial demo program: Poisson equation with a combination of
Dirichlet, Neumann, and Robin boundary conditions.
-div(kappa*grad(u)) = f
This program illustrates a number of different topics:
- How to solve a problem using three different approaches of varying
complexity: solve / LinearVariationalSolver / assemble + solve
- How to compute fluxes
- How to set combinations of boundary conditions
- How to set parameters for linear solvers
- How to create manufactured solutions with SymPy
- How to create unit tests
- How to represent solutions as structured fields
"""
from __future__ import print_function
from dolfin import *
from boxfield import *
import numpy as np
#---------------------------------------------------------------------
# Solvers
#---------------------------------------------------------------------
def solver(kappa, f, u_D, Nx, Ny,
           degree=1,
           linear_solver='Krylov',
           abs_tol=1E-5,
           rel_tol=1E-3,
           max_iter=1000):
    """
    Solve -div(kappa*grad(u)) = f on (0, 1) x (0, 1) with 2*Nx*Ny Lagrange
    elements of specified degree and u = u_D on the boundary.
    """
    # Create mesh and define function space
    mesh = UnitSquareMesh(Nx, Ny)
    V = FunctionSpace(mesh, 'P', degree)
    # Define boundary condition
    def boundary(x, on_boundary):
        return on_boundary
    bc = DirichletBC(V, u_D, boundary)
    # Define variational problem
    u = TrialFunction(V)
    v = TestFunction(V)
    a = kappa*dot(grad(u), grad(v))*dx
    L = f*v*dx
    # Set linear solver parameters (dict-style access for FEniCS 2019.1)
    prm = LinearVariationalSolver.default_parameters()
    if linear_solver == 'Krylov':
        prm["linear_solver"] = 'gmres'
        prm["preconditioner"] = 'ilu'
        prm["krylov_solver"]["absolute_tolerance"] = abs_tol
        prm["krylov_solver"]["relative_tolerance"] = rel_tol
        prm["krylov_solver"]["maximum_iterations"] = max_iter
    else:
        prm["linear_solver"] = 'lu'
    # Compute solution
    u = Function(V)
    solve(a == L, u, bc, solver_parameters=prm)
    return u
def solver_objects(kappa, f, u_D, Nx, Ny,
                   degree=1,
                   linear_solver='Krylov',
                   abs_tol=1E-5,
                   rel_tol=1E-3,
                   max_iter=1000):
    "Same as the solver() function but using LinearVariationalSolver"
    # Create mesh and define function space
    mesh = UnitSquareMesh(Nx, Ny)
    V = FunctionSpace(mesh, 'P', degree)
    # Define boundary condition
    def boundary(x, on_boundary):
        return on_boundary
    bc = DirichletBC(V, u_D, boundary)
    # Define variational problem
    u = TrialFunction(V)
    v = TestFunction(V)
    a = kappa*dot(grad(u), grad(v))*dx
    L = f*v*dx
    # Set up solver object
    u = Function(V)
    problem = LinearVariationalProblem(a, L, u, bc)
    solver = LinearVariationalSolver(problem)
    # Set linear solver parameters (dict-style access for FEniCS 2019.1)
    prm = solver.parameters
    if linear_solver == 'Krylov':
        prm["linear_solver"] = 'gmres'
        prm["preconditioner"] = 'ilu'
        prm["krylov_solver"]["absolute_tolerance"] = abs_tol
        prm["krylov_solver"]["relative_tolerance"] = rel_tol
        prm["krylov_solver"]["maximum_iterations"] = max_iter
    else:
        prm["linear_solver"] = 'lu'
    # Compute solution
    solver.solve()
    return u
def solver_linalg(kappa, f, u_D, Nx, Ny,
                  degree=1,
                  linear_solver='Krylov',
                  abs_tol=1E-5,
                  rel_tol=1E-3,
                  max_iter=1000):
    "Same as the solver() function but assembling and solving Ax = b"
    # Create mesh and define function space
    mesh = UnitSquareMesh(Nx, Ny)
    V = FunctionSpace(mesh, 'P', degree)
    # Define boundary condition
    def boundary(x, on_boundary):
        return on_boundary
    bc = DirichletBC(V, u_D, boundary)
    # Define variational problem
    u = TrialFunction(V)
    v = TestFunction(V)
    a = kappa*dot(grad(u), grad(v))*dx
    L = f*v*dx
    # Assemble linear system
    A = assemble(a)
    b = assemble(L)
    # Apply boundary conditions
    bc.apply(A, b)
    # Create linear solver and set parameters (dict-style access)
    if linear_solver == 'Krylov':
        solver = KrylovSolver('gmres', 'ilu')
        solver.parameters["absolute_tolerance"] = abs_tol
        solver.parameters["relative_tolerance"] = rel_tol
        solver.parameters["maximum_iterations"] = max_iter
    else:
        solver = LUSolver()
    # Compute solution
    u = Function(V)
    solver.solve(A, u.vector(), b)
    return u
def solver_bcs(kappa, f, boundary_conditions, Nx, Ny,
degree=1,
subdomains=[],
linear_solver='Krylov',
abs_tol=1E-5,
rel_tol=1E-3,
max_iter=1000):
"""
Solve -div(kappa*grad(u)) = f on (0, 1) x (0, 1) with 2*Nx*Ny Lagrange
elements of specified degree and u = u_D on the boundary. This version
of the solver uses a specified combination of Dirichlet, Neumann, and
Robin boundary conditions.
"""
# Create mesh and define function space
mesh = UnitSquareMesh(Nx, Ny)
V = FunctionSpace(mesh, 'P', degree)
# Check if we have subdomains
if subdomains:
if not isinstance(kappa, (list, tuple, np.ndarray)):
raise TypeError(
'kappa must be an array if we have subdomains, not %s'
% type(kappa))
materials = MeshFunction('size_t', mesh, mesh.topology().dim())  # CellFunction was removed in 2019.1
materials.set_all(0)
for m, subdomain in enumerate(subdomains[1:], 1):
subdomain.mark(materials, m)
kappa_values = kappa
V0 = FunctionSpace(mesh, 'DG', 0)
kappa = Function(V0)
help = np.asarray(materials.array(), dtype=np.int32)
kappa.vector()[:] = np.choose(help, kappa_values)
else:
if not isinstance(kappa, (Expression, Constant)):
raise TypeError(
'kappa is type %s, must be Expression or Constant'
% type(kappa))
# Define boundary subdomains
tol = 1e-14
class BoundaryX0(SubDomain):
    def inside(self, x, on_boundary):
        return on_boundary and near(x[0], 0, tol)
class BoundaryX1(SubDomain):
    def inside(self, x, on_boundary):
        return on_boundary and near(x[0], 1, tol)
class BoundaryY0(SubDomain):
    def inside(self, x, on_boundary):
        return on_boundary and near(x[1], 0, tol)
class BoundaryY1(SubDomain):
    def inside(self, x, on_boundary):
        return on_boundary and near(x[1], 1, tol)
# Mark boundaries
boundary_markers = MeshFunction('size_t', mesh, mesh.topology().dim()-1, 0)
boundary_markers.set_all(9999)
bx0 = BoundaryX0()
bx1 = BoundaryX1()
by0 = BoundaryY0()
by1 = BoundaryY1()
bx0.mark(boundary_markers, 0)
bx1.mark(boundary_markers, 1)
by0.mark(boundary_markers, 2)
by1.mark(boundary_markers, 3)
# Redefine boundary integration measure
ds = Measure('ds', domain=mesh, subdomain_data=boundary_markers)
# Collect Dirichlet conditions
bcs = []
for i in boundary_conditions:
if 'Dirichlet' in boundary_conditions[i]:
bc = DirichletBC(V, boundary_conditions[i]['Dirichlet'],
boundary_markers, i)
bcs.append(bc)
debug1 = False
if debug1:
# Print all vertices that belong to the boundary parts
for x in mesh.coordinates():
if bx0.inside(x, True): print('%s is on x = 0' % x)
if bx1.inside(x, True): print('%s is on x = 1' % x)
if by0.inside(x, True): print('%s is on y = 0' % x)
if by1.inside(x, True): print('%s is on y = 1' % x)
# Print the Dirichlet conditions
print('Number of Dirichlet conditions:', len(bcs))
if V.ufl_element().degree() == 1: # P1 elements
d2v = dof_to_vertex_map(V)
coor = mesh.coordinates()
for i, bc in enumerate(bcs):
print('Dirichlet condition %d' % i)
boundary_values = bc.get_boundary_values()
for dof in boundary_values:
print(' dof %2d: u = %g' % (dof, boundary_values[dof]))
if V.ufl_element().degree() == 1:
print(' at point %s' %
(str(tuple(coor[d2v[dof]].tolist()))))
# Define trial and test functions
u = TrialFunction(V)
v = TestFunction(V)
# Collect Neumann integrals
integrals_N = []
for i in boundary_conditions:
if 'Neumann' in boundary_conditions[i]:
if boundary_conditions[i]['Neumann'] != 0:
g = boundary_conditions[i]['Neumann']
integrals_N.append(g*v*ds(i))
# Collect Robin integrals
integrals_R_a = []
integrals_R_L = []
for i in boundary_conditions:
if 'Robin' in boundary_conditions[i]:
r, s = boundary_conditions[i]['Robin']
integrals_R_a.append(r*u*v*ds(i))
integrals_R_L.append(r*s*v*ds(i))
# Simpler Robin integrals
integrals_R = []
for i in boundary_conditions:
if 'Robin' in boundary_conditions[i]:
r, s = boundary_conditions[i]['Robin']
integrals_R.append(r*(u - s)*v*ds(i))
# Sum integrals to define variational problem
a = kappa*dot(grad(u), grad(v))*dx + sum(integrals_R_a)
L = f*v*dx - sum(integrals_N) + sum(integrals_R_L)
# Simpler variational problem
F = kappa*dot(grad(u), grad(v))*dx + \
sum(integrals_R) - f*v*dx + sum(integrals_N)
a, L = lhs(F), rhs(F)
# Set linear solver parameters
prm = LinearVariationalSolver.default_parameters()
if linear_solver == 'Krylov':
prm["linear_solver"] = 'gmres'
prm["preconditioner"] = 'ilu'
prm["krylov_solver"]["absolute_tolerance"] = abs_tol
prm["krylov_solver"]["relative_tolerance"] = rel_tol
prm["krylov_solver"]["maximum_iterations"] = max_iter
else:
prm["linear_solver"] = 'lu'
# Compute solution
u = Function(V)
solve(a == L, u, bcs, solver_parameters=prm)
return u
#---------------------------------------------------------------------
# Utility functions
#---------------------------------------------------------------------
def compute_errors(u_e, u):
"""Compute various measures of the error u - u_e, where
u is a finite element Function and u_e is an Expression."""
# Get function space
V = u.function_space()
# Explicit computation of L2 norm
error = (u - u_e)**2*dx
E1 = sqrt(abs(assemble(error)))
# Explicit interpolation of u_e onto the same space as u
u_e_ = interpolate(u_e, V)
error = (u - u_e_)**2*dx
E2 = sqrt(abs(assemble(error)))
# Explicit interpolation of u_e to higher-order elements.
# u will also be interpolated to the space Ve before integration
Ve = FunctionSpace(V.mesh(), 'P', 5)
u_e_ = interpolate(u_e, Ve)
error = (u - u_e_)**2*dx
E3 = sqrt(abs(assemble(error)))
# Infinity norm based on nodal values
u_e_ = interpolate(u_e, V)
E4 = abs(u_e_.vector().get_local() - u.vector().get_local()).max()
# L2 norm
E5 = errornorm(u_e, u, norm_type='L2', degree_rise=3)
# H1 seminorm
E6 = errornorm(u_e, u, norm_type='H10', degree_rise=3)
# Collect error measures in a dictionary with self-explanatory keys
errors = {'u - u_e': E1,
'u - interpolate(u_e, V)': E2,
'interpolate(u, Ve) - interpolate(u_e, Ve)': E3,
'infinity norm (of dofs)': E4,
'L2 norm': E5,
'H10 seminorm': E6}
return errors
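As a library-free illustration of how the nodal max norm (E4 above) relates to a discrete L2-type measure, here is a small sketch assuming only NumPy; the nodal values are made-up sample data:

```python
import numpy as np

# Hypothetical nodal values of a computed and an exact solution
u = np.array([0.0, 0.48, 1.02, 1.5])
u_e = np.array([0.0, 0.5, 1.0, 1.5])

err = u - u_e
E_inf = np.abs(err).max()         # like E4: max nodal error
E_rms = np.sqrt(np.mean(err**2))  # crude discrete L2-type measure
print(E_inf, E_rms)               # the max norm always dominates the RMS
```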
def compute_convergence_rates(u_e, f, u_D, kappa,
max_degree=3, num_levels=5):
"Compute convergences rates for various error norms"
h = {} # discretization parameter: h[degree][level]
E = {} # error measure(s): E[degree][level][error_type]
# Iterate over degrees and mesh refinement levels
degrees = range(1, max_degree + 1)
for degree in degrees:
n = 8 # coarsest mesh division
h[degree] = []
E[degree] = []
for i in range(num_levels):
h[degree].append(1.0 / n)
u = solver(kappa, f, u_D, n, n, degree, linear_solver='direct')
errors = compute_errors(u_e, u)
E[degree].append(errors)
print('2 x (%d x %d) P%d mesh, %d unknowns, E1 = %g' %
(n, n, degree, u.function_space().dim(), errors['u - u_e']))
n *= 2
# Compute convergence rates
from math import log as ln # log is a fenics name too
etypes = list(E[1][0].keys())
rates = {}
for degree in degrees:
rates[degree] = {}
for error_type in sorted(etypes):
rates[degree][error_type] = []
for i in range(1, num_levels):
Ei = E[degree][i][error_type]
Eim1 = E[degree][i - 1][error_type]
r = ln(Ei / Eim1) / ln(h[degree][i] / h[degree][i - 1])
rates[degree][error_type].append(round(r, 2))
return etypes, degrees, rates
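The rate formula r = ln(E_i/E_{i-1}) / ln(h_i/h_{i-1}) used above can be sanity-checked without FEniCS: for a manufactured error E = C*h^p, the computed rates should recover p exactly. A minimal sketch with synthetic data:

```python
from math import log as ln

# Synthetic errors following E = C*h^p with p = 2 (e.g. P1 elements in L2)
p = 2
h = [1/8, 1/16, 1/32, 1/64]
E = [0.7*hi**p for hi in h]

rates = [ln(E[i]/E[i-1]) / ln(h[i]/h[i-1]) for i in range(1, len(h))]
print([round(r, 2) for r in rates])  # each rate recovers p = 2
```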
def flux(u, kappa):
"Return -kappa*grad(u) projected into same space as u"
V = u.function_space()
mesh = V.mesh()
degree = V.ufl_element().degree()
W = VectorFunctionSpace(mesh, 'P', degree)
flux_u = project(-kappa*grad(u), W)
return flux_u
def normalize_solution(u):
"Normalize u: return u divided by max(u)"
u_array = u.vector().get_local()
u_max = np.max(np.abs(u_array))
u_array /= u_max
u.vector()[:] = u_array
#u.vector().set_local(u_array) # alternative
return u
#---------------------------------------------------------------------
# Unit tests (functions beginning with test_)
# These unit tests can be run by calling `py.test poisson_extended.py`
#---------------------------------------------------------------------
def test_solvers():
"Reproduce exact solution to machine precision with different solvers"
solver_functions = (solver, solver_objects, solver_linalg)
tol = {'direct': {1: 1E-11, 2: 1E-11, 3: 1E-11},
'Krylov': {1: 1E-14, 2: 1E-05, 3: 1E-03}}
u_D = Expression('1 + x[0]*x[0] + 2*x[1]*x[1]', degree=2)
kappa = Expression('x[0] + x[1]', degree=1)
f = Expression('-8*x[0] - 10*x[1]', degree=1)
for Nx, Ny in [(3, 3), (3, 5), (5, 3)]:
for degree in 1, 2, 3:
for linear_solver in 'direct', 'Krylov':
for solver_function in solver_functions:
print('solving on 2 x (%d x %d) mesh with P%d elements'
      % (Nx, Ny, degree))
print(' %s solver, %s function' %
      (linear_solver, solver_function.__name__))
u = solver_function(kappa, f, u_D, Nx, Ny, degree,
linear_solver=linear_solver,
abs_tol=0.1*tol[linear_solver][degree],
rel_tol=0.1*tol[linear_solver][degree])
V = u.function_space()
u_D_Function = interpolate(u_D, V)
u_D_array = u_D_Function.vector().get_local()
error_max = (u_D_array - u.vector().get_local()).max()
msg = 'max error: %g for 2 x (%d x %d) mesh, ' \
'degree = %d, %s solver, %s' % \
(error_max, Nx, Ny, degree, linear_solver,
solver_function.__name__)
print(msg)
assert error_max < tol[linear_solver][degree], msg
def test_normalize_solution():
u_D = Expression('1 + x[0]*x[0] + 2*x[1]*x[1]', degree=2)
kappa = Constant(1)
f = Constant(-6.0)
u = solver(kappa, f, u_D, 4, 2, 1, linear_solver='direct')
u = normalize_solution(u)
computed = u.vector().get_local().max()
expected = 1.0
assert abs(expected - computed) < 1E-15
#---------------------------------------------------------------------
# Demo programs
#---------------------------------------------------------------------
def demo_test():
"Solve test problem and plot solution"
u_D = Expression('1 + x[0]*x[0] + 2*x[1]*x[1]', degree=2)
kappa = Expression('x[0] + x[1]', degree=1)
f = Expression('-8*x[0] - 10*x[1]', degree=1)
u = solver(kappa, f, u_D, 8, 8, 1)
vtkfile = File('poisson_extended/solution_test.pvd')
vtkfile << u
# plot(u)
def demo_flux(Nx=8, Ny=8):
"Solve test problem and compute flux"
# Compute solution
u_D = Expression('1 + x[0]*x[0] + 2*x[1]*x[1]', degree=2)
kappa = Expression('x[0] + x[1]', degree=1)
f = Expression('-8*x[0] - 10*x[1]', degree=1)
u = solver(kappa, f, u_D, Nx, Ny, 1, linear_solver='direct')
# Compute and plot flux
flux_u = flux(u, kappa)
flux_u_x, flux_u_y = flux_u.split(deepcopy=True)
# plot(u, title=u.label())
# plot(flux_u, title=flux_u.label())
# plot(flux_u_x, title=flux_u_x.label())
# plot(flux_u_y, title=flux_u_y.label())
# Exact flux expressions
u_e = lambda x, y: 1 + x**2 + 2*y**2
flux_x_exact = lambda x, y: -(x+y)*2*x
flux_y_exact = lambda x, y: -(x+y)*4*y
# Compute error in flux
coor = u.function_space().mesh().coordinates()
for i, value in enumerate(flux_u_x.compute_vertex_values()):
print('vertex %d, x = %s, -p*u_x = %g, error = %g' %
(i, tuple(coor[i]), value, flux_x_exact(*coor[i]) - value))
for i, value in enumerate(flux_u_y.compute_vertex_values()):
print('vertex %d, x = %s, -p*u_y = %g, error = %g' %
(i, tuple(coor[i]), value, flux_y_exact(*coor[i]) - value))
def demo_convergence_rates():
"Compute convergence rates in various norms for P1, P2, P3"
# Define exact solution and coefficients
omega = 1.0
u_e = Expression('sin(omega*pi*x[0])*sin(omega*pi*x[1])',
degree=6, omega=omega)
f = 2*omega**2*pi**2*u_e
u_D = Constant(0)
kappa = Constant(1)
# Compute and print convergence rates
etypes, degrees, rates = compute_convergence_rates(u_e, f, u_D, kappa)
for error_type in etypes:
print('\n' + error_type)
for degree in degrees:
print('P%d: %s' % (degree, str(rates[degree][error_type])[1:-1]))
def demo_structured_mesh():
"Use structured mesh data to create plots with Matplotlib"
# Define exact solution (Mexican hat) and coefficients
from sympy import exp, sin, pi
import sympy as sym
H = lambda x: exp(-16*(x-0.5)**2)*sin(3*pi*x)
x, y = sym.symbols('x[0], x[1]')
u = H(x)*H(y)
u_code = sym.printing.ccode(u)
u_code = u_code.replace('M_PI', 'pi')
print('C code for u:', u_code)
u_D = Expression(u_code, degree=1)
kappa = 1 # Note: Can't use Constant(1) here because of sym.diff (!)
f = sym.diff(-kappa*sym.diff(u, x), x) + \
sym.diff(-kappa*sym.diff(u, y), y)
f = sym.simplify(f)
f_code = sym.printing.ccode(f)
f_code = f_code.replace('M_PI', 'pi')
f = Expression(f_code, degree=1)
flux_u_x_exact = sym.lambdify([x, y], -kappa*sym.diff(u, x),
modules='numpy')
print('C code for f:', f_code)
kappa = Constant(1)
nx = 22; ny = 22
# Compute solution and represent as a structured field
u = solver(kappa, f, u_D, nx, ny, 1, linear_solver='direct')
u_box = FEniCSBoxField(u, (nx, ny))
# Set coordinates and extract values
X = 0; Y = 1
u_ = u_box.values
# Iterate over 2D mesh points (i, j)
debug2 = False
if debug2:
for j in range(u_.shape[1]):
for i in range(u_.shape[0]):
print('u[%d, %d] = u(%g, %g) = %g' %
(i, j,
u_box.grid.coor[X][i], u_box.grid.coor[Y][j],
u_[i, j]))
# Make surface plot
# import matplotlib.pyplot as plt
# from mpl_toolkits.mplot3d import Axes3D
# from matplotlib import cm
# fig = plt.figure()
# ax = fig.gca(projection='3d')
cv = u_box.grid.coorv
# ax.plot_surface(cv[X], cv[Y], u_, cmap=cm.coolwarm,
# rstride=1, cstride=1)
# plt.title('Surface plot of solution')
# plt.savefig('poisson_extended/surface_plot.png')
# plt.savefig('poisson_extended/surface_plot.pdf')
# Make contour plot
# fig = plt.figure()
# ax = fig.gca()
# cs = ax.contour(cv[X], cv[Y], u_, 7)
# plt.clabel(cs)
# plt.axis('equal')
# plt.title('Contour plot of solution')
# plt.savefig('poisson_extended/contour_plot.png')
# plt.savefig('poisson_extended/contour_plot.pdf')
# Plot u along a line y = const and compare with exact solution
start = (0, 0.4)
x, u_val, y_fixed, snapped = u_box.gridline(start, direction=X)
u_e_val = [u_D((x_, y_fixed)) for x_ in x]
# plt.figure()
# plt.plot(x, u_val, 'r-')
# plt.plot(x, u_e_val, 'bo')
# plt.legend(['P1 elements', 'exact'], loc='best')
# plt.title('Solution along line y=%g' % y_fixed)
# plt.xlabel('x'); plt.ylabel('u')
# plt.savefig('poisson_extended/line_plot.png')
# plt.savefig('poisson_extended/line_plot.pdf')
# Plot the numerical and exact flux along the same line
flux_u = flux(u, kappa)
flux_u_x, flux_u_y = flux_u.split(deepcopy=True)
flux2_x = flux_u_x if flux_u_x.ufl_element().degree() == 1 \
else interpolate(flux_u_x,
FunctionSpace(u.function_space().mesh(), 'P', 1))
flux_u_x_box = FEniCSBoxField(flux_u_x, (nx,ny))
x, flux_u_val, y_fixed, snapped = \
flux_u_x_box.gridline(start, direction=X)
y = y_fixed
# plt.figure()
# plt.plot(x, flux_u_val, 'r-')
# plt.plot(x, flux_u_x_exact(x, y_fixed), 'bo')
# plt.legend(['P1 elements', 'exact'], loc='best')
# plt.title('Flux along line y=%g' % y_fixed)
# plt.xlabel('x'); plt.ylabel('u')
# plt.savefig('poisson_extended/line_flux.png')
# plt.savefig('poisson_extended/line_flux.pdf')
# plt.show()
def demo_bcs():
    "Compute and plot solution using a combination of boundary conditions"

    # Define manufactured solution in sympy and derive f, g, etc.
    import sympy as sym
    x, y = sym.symbols('x[0], x[1]')  # needed by UFL
    u = 1 + x**2 + 2*y**2             # exact solution
    u_e = u                           # exact solution
    u_00 = u.subs(x, 0)               # restrict to x = 0
    u_01 = u.subs(x, 1)               # restrict to x = 1
    f = -sym.diff(u, x, 2) - sym.diff(u, y, 2)  # -Laplace(u)
    f = sym.simplify(f)                         # simplify f
    g = -sym.diff(u, y).subs(y, 1)    # compute g = -du/dn
    r = 1000                          # Robin data, arbitrary
    s = u                             # Robin data, u = s

    # Collect variables
    variables = [u_e, u_00, u_01, f, g, r, s]

    # Turn into C/C++ code strings
    variables = [sym.printing.ccode(var) for var in variables]

    # Turn into FEniCS Expressions
    variables = [Expression(var, degree=2) for var in variables]

    # Extract variables
    u_e, u_00, u_01, f, g, r, s = variables

    # Define boundary conditions
    boundary_conditions = {0: {'Dirichlet': u_00},   # x = 0
                           1: {'Dirichlet': u_01},   # x = 1
                           2: {'Robin': (r, s)},     # y = 0
                           3: {'Neumann': g}}        # y = 1

    # Compute solution
    kappa = Constant(1)
    Nx = Ny = 8
    u = solver_bcs(kappa, f, boundary_conditions, Nx, Ny,
                   degree=1, linear_solver='direct')

    # Compute maximum error at vertices
    mesh = u.function_space().mesh()
    vertex_values_u_e = u_e.compute_vertex_values(mesh)
    vertex_values_u = u.compute_vertex_values(mesh)
    error_max = np.max(np.abs(vertex_values_u_e -
                              vertex_values_u))
    print('error_max =', error_max)

    # Save and plot solution
    vtkfile = File('poisson_extended/solution_bcs.pvd')
    vtkfile << u
    # plot(u)
def demo_solvers():
    "Reproduce exact solution to machine precision with different linear solvers"

    # Tolerance for tests
    tol = 1E-10

    # Define exact solution and coefficients
    import sympy as sym
    x, y = sym.symbols('x[0], x[1]')
    u = 1 + x**2 + 2*y**2
    f = -sym.diff(u, x, 2) - sym.diff(u, y, 2)
    f = sym.simplify(f)

    # Generate C/C++ code for UFL expressions
    u_code = sym.printing.ccode(u)
    f_code = sym.printing.ccode(f)

    # Define FEniCS Expressions
    u_e = Expression(u_code, degree=2)
    f = Expression(f_code, degree=2)
    kappa = Constant(1)

    # Define boundary conditions
    boundary_conditions = {0: {'Dirichlet': u_e},
                           1: {'Dirichlet': u_e},
                           2: {'Dirichlet': u_e},
                           3: {'Dirichlet': u_e}}

    # Iterate over meshes, degrees, and solvers
    for Nx, Ny in [(3, 3), (3, 5), (5, 3), (20, 20)]:
        for degree in 1, 2, 3:
            for linear_solver in 'direct', 'Krylov':
                print('\nSolving on 2 x (%d x %d) mesh with P%d elements '
                      'using solver "%s".'
                      % (Nx, Ny, degree, linear_solver))

                # Compute solution
                u = solver_bcs(kappa, f, boundary_conditions, Nx, Ny,
                               degree=degree,
                               linear_solver=linear_solver,
                               abs_tol=0.001*tol,
                               rel_tol=0.001*tol)

                # Compute maximum error at vertices
                mesh = u.function_space().mesh()
                vertex_values_u_e = u_e.compute_vertex_values(mesh)
                vertex_values_u = u.compute_vertex_values(mesh)
                error_max = np.max(np.abs(vertex_values_u_e -
                                          vertex_values_u))
                print('error_max =', error_max)
                assert error_max < tol
if __name__ == '__main__':
    # List of demos
    demos = (demo_test,
             demo_flux,
             demo_convergence_rates,
             demo_structured_mesh,
             demo_bcs,
             demo_solvers)

    # Pick a demo
    for nr in range(len(demos)):
        print('%d: %s (%s)' % (nr, demos[nr].__doc__, demos[nr].__name__))
    print('')
    nr = input('Pick a demo: ')

    # Run demo
    demos[int(nr)]()

    # Hold plot
    # interactive()
The visualization of the solution of the heat equation in ParaView 4.0.1 only works as described if one loads the vtu file instead of the pvd one. Maybe this could be hinted at somewhere. Best wishes, Johannes
Hello,
It seems that there's a mistake at line 72 of https://github.com/hplgit/fenics-tutorial/blob/master/pub/python/vol1/ft11_magnetostatics.py
It also seems this mistake comes from misreading a Wikipedia table (https://en.wikipedia.org/wiki/Permeability_(electromagnetism)#Values_for_some_common_materials), where one sees that -6.4e-6 is the magnetic susceptibility of copper, not its permeability. The permeability should be about 1.256629e-6, which is positive, not negative as claimed.
Also, at line 70 of the same file, one reads "values[0] = 1e-5 # iron (should really be 2.5e-1)". But according to the Wikipedia table, only 99.95% pure iron has the very high permeability of 2.5e-1; the same table gives 6.3e-3 for iron that is 99.8% pure.
So it is really not clear to me why "should really be 2.5e-1" is written as a comment in the code.
On page 84, it reads
g(x, y) = 4, y = 1.
However, by the definition of g, it should equal -4 on y = 1.
So the extension is
g(x, y) = -4y.
On page 85, we need to include the Neumann condition as
g = Expression('-4*x[1]', degree = 1)
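A quick check of the sign, using the tutorial's manufactured solution $u = 1 + x^2 + 2y^2$ and the outward unit normal $n$ (my own verification, a standard normal-derivative computation):

```latex
g = -\left.\frac{\partial u}{\partial n}\right|_{y=1}
  = -\left.\frac{\partial u}{\partial y}\right|_{y=1}
  = -\left. 4y \right|_{y=1} = -4 ,
```

which is consistent with the proposed extension $g(x, y) = -4y$.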
By the way, the Dirichlet boundary condition can also be written as
boundary = '(near(x[0], 0, 1e-14) || near(x[0], 1, 1e-14)) && on_boundary'.
I think it is another compact implementation.
The list in section 2.3.10 "Plotting the solution using ParaView" on page 27 seems to be broken:
item 7 is interrupted and the numbering afterwards restarts from 1.
As far as I can tell the problem is in /src/vol1/fundamentals_poisson.do.txt in line 1035.
https://github.com/hplgit/fenics-tutorial/blob/master/src/vol1/fundamentals_poisson.do.txt#L1035
There are newlines in the source code which cause the trouble
In section 4.3.1 "Using expressions to define subdomains" on page 88, the code reads
kappa = K()
where K is an Expression subclass.
I think the code should be written as
kappa = K(degree=0)
since an Expression must be given a degree.
Hello:
I want to know how to define a Neumann boundary condition in FEniCS, such as du/dy = 0. Are there any examples to refer to, ideally a simple one? (I did not understand the example in the 2011 FEniCS manual.)
Thank you for your help!
Best wishes!
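For what it is worth (a standard remark, not specific to this thread): a homogeneous Neumann condition such as du/dn = 0 requires no code at all in FEniCS, because it is the natural "do-nothing" condition of the variational formulation. Integrating the Poisson equation by parts gives

```latex
-\int_\Omega (\nabla^2 u)\, v \,\mathrm{d}x
  = \int_\Omega \nabla u \cdot \nabla v \,\mathrm{d}x
    - \int_{\partial\Omega} \frac{\partial u}{\partial n}\, v \,\mathrm{d}s ,
```

and the boundary integral vanishes wherever du/dn = 0, so one simply omits the ds term on that part of the boundary and applies Dirichlet conditions only on the rest.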
I use FEniCS installed in WSL of Windows 10.
The version is 2019.01.
Then I updated the demo files in the tutorial.
I deleted the plot code because WSL cannot display GUI plots.
"""
FEniCS tutorial demo program: Poisson equation with Dirichlet conditions.
Test problem is chosen to give an exact solution at all nodes of the mesh.
-Laplace(u) = f in the unit square
u = u_D on the boundary
u = 1 + x^2 + 2y^2 = u_D
f = -6
This is an extended version of the demo program poisson.py which
encapsulates the solver as a Python function.
"""
from __future__ import print_function
from dolfin import *
import numpy as np
def solver(f, u_D, Nx, Ny, degree=1):
    """
    Solve -Laplace(u) = f on [0,1] x [0,1] with 2*Nx*Ny Lagrange
    elements of specified degree and u = u_D (Expression) on
    the boundary.
    """
    # Create mesh and define function space
    mesh = UnitSquareMesh(Nx, Ny)
    V = FunctionSpace(mesh, 'P', degree)

    # Define boundary condition
    def boundary(x, on_boundary):
        return on_boundary

    bc = DirichletBC(V, u_D, boundary)

    # Define variational problem
    u = TrialFunction(V)
    v = TestFunction(V)
    a = dot(grad(u), grad(v))*dx
    L = f*v*dx

    # Compute solution
    u = Function(V)
    solve(a == L, u, bc)
    return u

def run_solver():
    "Run solver to compute and post-process solution"
    # Set up problem parameters and call solver
    u_D = Expression('1 + x[0]*x[0] + 2*x[1]*x[1]', degree=2)
    f = Constant(-6.0)
    u = solver(f, u_D, 8, 8, 1)

    # Plot solution and mesh
    # plot(u)
    # plot(u.function_space().mesh())

    # Save solution to file in VTK format
    vtkfile = File('poisson_solver/solution.pvd')
    vtkfile << u

def test_solver():
    "Test solver by reproducing u = 1 + x^2 + 2y^2"
    # Set up parameters for testing
    tol = 1E-10
    u_D = Expression('1 + x[0]*x[0] + 2*x[1]*x[1]', degree=2)
    f = Constant(-6.0)

    # Iterate over mesh sizes and degrees
    for Nx, Ny in [(3, 3), (3, 5), (5, 3), (20, 20)]:
        for degree in 1, 2, 3:
            print('Solving on a 2 x (%d x %d) mesh with P%d elements.'
                  % (Nx, Ny, degree))

            # Compute solution
            u = solver(f, u_D, Nx, Ny, degree)

            # Extract the mesh
            mesh = u.function_space().mesh()

            # Compute maximum error at vertices
            vertex_values_u_D = u_D.compute_vertex_values(mesh)
            vertex_values_u = u.compute_vertex_values(mesh)
            error_max = np.max(np.abs(vertex_values_u_D -
                                      vertex_values_u))

            # Check maximum error
            msg = 'error_max = %g' % error_max
            assert error_max < tol, msg

if __name__ == '__main__':
    run_solver()
    # interactive()
In my tutorial (version Nov 9, 2016) I identified the following formulas:
mostly these contain tensor structure or (from my point of view) a more or less misleading
contraction of more than one nabla.
Some introductory formulas from the linear elasticity problem:
The Navier-Stokes equations:
Some of the formulas in the discussion about the outflow boundary conditions for Navier-Stokes.
Derived from the Maxwell equations:
In this context the discussion of grad(u) vs. nabla_grad(u) could be moved somewhere near the beginning, because it is very enlightening for understanding the more complicated formulas of continuum mechanics and covers the index structure in more detail.
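For later readers: in UFL the two operators are transposes of one another; in index notation (this is the documented UFL convention),

```latex
(\operatorname{grad} u)_{ij} = \frac{\partial u_i}{\partial x_j},
\qquad
(\texttt{nabla\_grad}\,u)_{ij} = \frac{\partial u_j}{\partial x_i}
= (\operatorname{grad} u)_{ji},
```

so symmetrized expressions such as sym(grad(u)) and sym(nabla_grad(u)) coincide, while dot(u, nabla_grad(u)) and dot(grad(u), u) denote the same convective term.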
Greetings, it's my first time on GitHub. I am told that I could get some help with projects where I can't figure out my mistakes. A project has been assigned to me: writing code that solves a flow-around-a-cylinder problem. In this case the cylinder is shaped like a bullet, a half circle plus a half rectangle combined. I have copied the code from fenicsproject.org and modified it for my use, as shared below.
"""
FEniCS tutorial demo program: Incompressible Navier-Stokes equations
for flow around a cylinder using the Incremental Pressure Correction
Scheme (IPCS).
u' + u . nabla(u) - div(sigma(u, p)) = f
div(u) = 0
"""
from __future__ import print_function
from fenics import *
from mshr import *
import numpy as np
T = 5.0 # final time
num_steps = 5000 # number of time steps
dt = T / num_steps # time step size
mu = 0.001 # dynamic viscosity
rho = 1 # density
# Create mesh
channel = Rectangle(Point(0, 0), Point(2.2, 0.41))
cylinder = Circle(Point(0.2, 0.2), 0.05) + Rectangle(Point(0.2,0.15), Point(0.3,0.25))
domain = channel - cylinder
mesh = generate_mesh(domain, 64)
# Define function spaces
V = VectorFunctionSpace(mesh, 'P', 2)
Q = FunctionSpace(mesh, 'P', 1)
# Define boundaries
inflow = 'near(x[0], 0)'
outflow = 'near(x[0], 2.2)'
walls = 'near(x[1], 0) || near(x[1], 0.41)'
cylinder = 'on_boundary && x[0]>0.1 && x[0]<0.3 && x[1]>0.1 && x[1]<0.3'
# Define inflow profile
inflow_profile = ('4.0*1.5*x[1]*(0.41 - x[1]) / pow(0.41, 2)', '0')
# Define boundary conditions
bcu_inflow = DirichletBC(V, Expression(inflow_profile, degree=2), inflow)
bcu_walls = DirichletBC(V, Constant((0, 0)), walls)
bcu_cylinder = DirichletBC(V, Constant((0, 0)), cylinder)
bcp_outflow = DirichletBC(Q, Constant(0), outflow)
bcu = [bcu_inflow, bcu_walls, bcu_cylinder]
bcp = [bcp_outflow]
# Define trial and test functions
u = TrialFunction(V)
v = TestFunction(V)
p = TrialFunction(Q)
q = TestFunction(Q)
# Define functions for solutions at previous and current time steps
u_n = Function(V)
u_ = Function(V)
p_n = Function(Q)
p_ = Function(Q)
# Define expressions used in variational forms
U = 0.5*(u_n + u)
n = FacetNormal(mesh)
f = Constant((0, 0))
k = Constant(dt)
mu = Constant(mu)
rho = Constant(rho)
# Define symmetric gradient
def epsilon(u):
return sym(nabla_grad(u))
# Define stress tensor
def sigma(u, p):
return 2*mu*epsilon(u) - p*Identity(len(u))
# Define variational problem for step 1
F1 = rho*dot((u - u_n) / k, v)*dx \
   + rho*dot(dot(u_n, nabla_grad(u_n)), v)*dx \
   + inner(sigma(U, p_n), epsilon(v))*dx \
   + dot(p_n*n, v)*ds - dot(mu*nabla_grad(U)*n, v)*ds \
   - dot(f, v)*dx
a1 = lhs(F1)
L1 = rhs(F1)
# Define variational problem for step 2
a2 = dot(nabla_grad(p), nabla_grad(q))*dx
L2 = dot(nabla_grad(p_n), nabla_grad(q))*dx - (1/k)*div(u_)*q*dx
# Define variational problem for step 3
a3 = dot(u, v)*dx
L3 = dot(u_, v)*dx - k*dot(nabla_grad(p_ - p_n), v)*dx
# Assemble matrices
A1 = assemble(a1)
A2 = assemble(a2)
A3 = assemble(a3)
# Apply boundary conditions to matrices
[bc.apply(A1) for bc in bcu]
[bc.apply(A2) for bc in bcp]
# Create XDMF files for visualization output
xdmffile_u = XDMFFile('navier_stokes_cylinder/velocity.xdmf')
xdmffile_p = XDMFFile('navier_stokes_cylinder/pressure.xdmf')
# Create time series (for use in reaction_system.py)
timeseries_u = TimeSeries('navier_stokes_cylinder/velocity_series')
timeseries_p = TimeSeries('navier_stokes_cylinder/pressure_series')
# Save mesh to file (for use in reaction_system.py)
File('navier_stokes_cylinder/cylinder.xml.gz') << mesh
# Create progress bar
# progress = Progress('Time-stepping')
progress = Progress('Time-stepping', num_steps)
# set_log_level(PROGRESS)
# Time-stepping
t = 0
for n in range(num_steps):

    # Update current time
    t += dt

    # Step 1: Tentative velocity step
    b1 = assemble(L1)
    [bc.apply(b1) for bc in bcu]
    solve(A1, u_.vector(), b1, 'bicgstab', 'hypre_amg')

    # Step 2: Pressure correction step
    b2 = assemble(L2)
    [bc.apply(b2) for bc in bcp]
    solve(A2, p_.vector(), b2, 'bicgstab', 'hypre_amg')

    # Step 3: Velocity correction step
    b3 = assemble(L3)
    solve(A3, u_.vector(), b3, 'cg', 'sor')

    # Plot solution
    # plot(u_, title='Velocity')
    # plot(p_, title='Pressure')

    # Save solution to file (XDMF/HDF5)
    xdmffile_u.write(u_, t)
    xdmffile_p.write(p_, t)

    # Save nodal values to file
    timeseries_u.store(u_.vector(), t)
    timeseries_p.store(p_.vector(), t)

    # Update previous solution
    u_n.assign(u_)
    p_n.assign(p_)

    # Update progress bar
    # progress.update(t / T)
    set_log_level(LogLevel.PROGRESS)
    progress += 1
    set_log_level(LogLevel.ERROR)
    print('u max:', u_.vector().get_local().max())
# Hold plot
# interactive()
When I run this code, the solution blows up:
u max: 6.59379743087e+19
u max: 3.72600712496e+38
u max: 1.20155908645e+76
and then this error pops up:
RuntimeError Traceback (most recent call last)
in <module>
131 b2 = assemble(L2)
132 [bc.apply(b2) for bc in bcp]
--> 133 solve(A2, p_.vector(), b2, 'bicgstab', 'hypre_amg')
134
135 # Step 3: Velocity correction step
/usr/local/lib/python3.6/dist-packages/dolfin/fem/solving.py in solve(*args, **kwargs)
225 raise RuntimeError("Not expecting keyword arguments when solving linear algebra problem.")
226
--> 227 return dolfin.la.solver.solve(*args)
228
229
/usr/local/lib/python3.6/dist-packages/dolfin/la/solver.py in solve(A, x, b, method, preconditioner)
70 """
71
---> 72 return cpp.la.solve(A, x, b, method, preconditioner)
RuntimeError:
*** -------------------------------------------------------------------------
*** DOLFIN encountered an error. If you are not able to resolve this issue
*** using the information listed below, you can ask for help at
*** Remember to include the error message listed below and, if possible,
*** include a minimal running example to reproduce the error.
*** -------------------------------------------------------------------------
*** Error: Unable to solve linear system using PETSc Krylov solver.
*** Reason: Solution failed to converge in 0 iterations (PETSc reason DIVERGED_NANORINF, residual norm ||r|| = 0.000000e+00).
*** Where: This error was encountered inside PETScKrylovSolver.cpp.
*** Process: 0
*** DOLFIN version: 2019.1.0
*** Git changeset: 74d7efe1e84d65e9433fd96c50f1d278fa3e3f3f
*** -------------------------------------------------------------------------
I do not know what this error means, I do not know how to solve this equation, and I do not know what I have done wrong. Side note: I am just a beginner in this coding environment, so any positive/negative feedback is welcome.
If anyone understands the problem, please do not hesitate to help.
Sorry if I couldn't use the "insert code" formatting correctly.
OS: Ubuntu18.04
python:3.6.8
dolfin version:2019.1.0
Hello, when I ran the example fenics-tutorial/pub/python/vol1/ft03_heat.py,
I got these errors:
Solving linear variational problem.
Traceback (most recent call last):
  File "/home/gaoya/Files/python/fenics-tutorial/pub/python/vol1/ft03_heat.py", line 66, in <module>
    error = np.abs(u_e.vector().array() - u.vector().array()).max()
AttributeError: 'dolfin.cpp.la.PETScVector' object has no attribute 'array'
How can I solve this?
By the way, I found that many of these examples are out of date (e.g. needing plt.show()
instead of interactive())
or otherwise broken. I tried all 12 example files, with the results shown below:
ft01_poisson.py: pass
ft02_poisson_membrane.py: pass
ft03_heat.py: error: AttributeError: 'dolfin.cpp.la.PETScVector' object has no attribute 'array'
ft04_heat_gaussian.py: pass
ft05_poisson_nonlinear.py: line 58, in <module> error_max = np.abs(u_e.vector().array() - u.vector().array()).max() AttributeError: 'dolfin.cpp.la.PETScVector' object has no attribute 'array'
ft06_elasticity.py: Traceback (most recent call last): File "/home/gaoya/Files/python/fenics-tutorial/pub/python/vol1/ft06_elasticity.py", line 50, in <module> a = inner(sigma(u), epsilon(v))*dx File "/home/gaoya/Files/python/fenics-tutorial/pub/python/vol1/ft06_elasticity.py", line 42, in sigma return lambda_*nabla_div(u)*Identity(d) + 2*mu*epsilon(u) NameError: name 'nabla_div' is not defined
ft07_navier_stokes_channel.py: Traceback (most recent call last): File "/home/gaoya/Files/python/fenics-tutorial/pub/python/vol1/ft07_navier_stokes_channel.py", line 118, in <module> error = np.abs(u_e.vector().array() - u_.vector().array()).max() AttributeError: 'dolfin.cpp.la.PETScVector' object has no attribute 'array'
ft08_navier_stokes_cylinder.py: Traceback (most recent call last): File "/home/gaoya/Files/python/fenics-tutorial/pub/python/vol1/ft08_navier_stokes_cylinder.py", line 115, in <module> set_log_level(PROGRESS) NameError: name 'PROGRESS' is not defined
ft09_reaction_system.py: Traceback (most recent call last): File "/home/gaoya/Files/python/fenics-tutorial/pub/python/vol1/ft09_reaction_system.py", line 76, in <module> set_log_level(PROGRESS) NameError: name 'PROGRESS' is not defined
ft10_poisson_extended.py: Traceback (most recent call last): File "/home/gaoya/Files/python/fenics-tutorial/pub/python/vol1/ft10_poisson_extended.py", line 22, in <module> from boxfield import * ModuleNotFoundError: No module named 'boxfield'
ft11_magnetostatics.py: Traceback (most recent call last): File "/home/gaoya/Files/python/fenics-tutorial/pub/python/vol1/ft11_magnetostatics.py", line 74, in <module> mu = Permeability(markers, degree=1) File "/home/gaoya/Files/python/fenics-tutorial/pub/python/vol1/ft11_magnetostatics.py", line 65, in __init__ self.markers = markers File "/usr/lib/python3/dist-packages/dolfin/function/expression.py", line 438, in __setattr__ elif name in self._parameters: File "/usr/lib/python3/dist-packages/dolfin/function/expression.py", line 432, in __getattr__ return self._parameters[name] File "/usr/lib/python3/dist-packages/dolfin/function/expression.py", line 432, in __getattr__ return self._parameters[name] File "/usr/lib/python3/dist-packages/dolfin/function/expression.py", line 432, in __getattr__ return self._parameters[name] [Previous line repeated 991 more times] File "/usr/lib/python3/dist-packages/dolfin/function/expression.py", line 429, in __getattr__ if name == 'user_parameters': RecursionError: maximum recursion depth exceeded in comparison
ft12_poisson_solver.py: pass
Only 4 examples passed!
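Most of these failures come from API changes in newer DOLFIN releases: `Vector.array()` was removed in favour of `Vector.get_local()`, `nabla_div` must now be imported from `ufl`, and the `PROGRESS` log level became `LogLevel.PROGRESS`. A small compatibility helper along these lines (my own sketch, not part of the tutorial; the function name `vector_values` is mine) papers over the `array()` change:

```python
def vector_values(vec):
    """Return the local values of a DOLFIN vector.

    DOLFIN 2018.1 and later removed Vector.array() in favour of
    Vector.get_local(); this helper works with either API.
    """
    if hasattr(vec, 'get_local'):   # modern DOLFIN (>= 2018.1)
        return vec.get_local()
    return vec.array()              # legacy DOLFIN


# The remaining fixes are one-liners in each affected script, e.g.:
#   from ufl import nabla_div            # ft06_elasticity.py
#   set_log_level(LogLevel.PROGRESS)     # ft08/ft09, instead of bare PROGRESS
```

With this helper, `np.abs(vector_values(u_e.vector()) - vector_values(u.vector())).max()` runs on both old and new DOLFIN versions.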
Hello,
I tried lots of ways to install the FEniCS library on Linux (Ubuntu), but each time I got the following error:
Collecting package metadata (current_repodata.json): failed
UnavailableInvalidChannel: The channel is not accessible or is invalid.
channel name: pkgs/main
channel url: https://repo.anaconda.com/pkgs/main
error code: 403
You will need to adjust your conda configuration to proceed.
Use conda config --show channels
to view your configuration's current state,
and use conda config --show-sources
to view config file locations.
Would anybody let me know how to solve this problem?
In file ft07_navier_stokes_channel.py, where the expressions used in the variational forms are defined, rho is wrapped as rho = Constant(rho).
But in the file ft08_navier_stokes_cylinder.py, that line is missing.
Should I add the line to ft08?