
FEMBase.jl's Introduction

JuliaFEM.jl - an open source solver for both industrial and academic use

Everything is outdated. See other FEM options here: https://github.com/JuliaPDE/SurveyofPDEPackages?tab=readme-ov-file#fem


JuliaFEM organization web-page: http://www.juliafem.org

The JuliaFEM project develops open-source software for reliable, scalable, distributed Finite Element Method analysis.

The JuliaFEM software library is a framework that allows for the distributed processing of large Finite Element Models across clusters of computers using simple programming models. It is designed to scale up from single servers to thousands of machines, each offering local computation and storage. The basic design principle is: everything is nonlinear. All physics models are nonlinear, and linearizations are made from them as special cases.

At the moment, users can perform the following analyses with JuliaFEM: elasticity, thermal, eigenvalue, contact mechanics, and quasi-static solutions. Typical examples in industrial applications include non-linear solid mechanics, contact mechanics, finite strains, and fluid-structure interaction problems. For visualization, JuliaFEM uses ParaView, which prefers the XDMF file format: XML stores the light data and HDF stores the large data sets, which is more or less the open-source standard.

Vision

On one hand, the vision of JuliaFEM includes massive parallelization using multiple computers with MPI and threading, as well as cloud computing resources in Amazon, Azure and Google Cloud services together with company-internal servers. On the other hand, it covers real application complexity, including both simulation model complexity and geometric complexity. Reuse of existing material models, as well as of whole simulation models, is also considered a crucial feature of the JuliaFEM package.

Reinventing the wheel is definitely not anybody's goal, and thus we try to use and embrace good practices and formats as much as possible. We have implemented Abaqus / CalculiX input-file format support and may in the future extend to other FEM solver formats. Using modern development environments encourages fast development and high productivity. For developing and presenting new ideas and tutorials, we use Jupyter notebooks to make easy-to-use handouts.

The user interface for JuliaFEM is the Jupyter Notebook, and the Julia language itself is a real programming language. This makes it possible to use JuliaFEM as part of a bigger solution cycle, including, for example, data mining, automatic geometry modifications, mesh generation, solution, and post-processing, enabling efficient optimization loops.

Installing JuliaFEM

Inside the Julia REPL, type:

using Pkg
Pkg.add("JuliaFEM")
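
Alternatively, use the package manager mode of the REPL (press ] at the julia> prompt):

pkg> add JuliaFEM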

Initial road map

JuliaFEM current status: project planning

Version | Number of degrees of freedom | Number of cores
0.1.0   | 1 000 000                    | 10
0.2.0   | 10 000 000                   | 100
1.0.0   | 100 000 000                  | 1 000
2.0.0   | 1 000 000 000                | 10 000
3.0.0   | 10 000 000 000               | 100 000

We strongly believe in test-driven development as well as building on top of previous work. Thus all new code in this project should be 100% tested. Other people have wisdom on style as well:

The Zen of Python:

Beautiful is better than ugly.
Explicit is better than implicit.
Simple is better than complex.
Complex is better than complicated.
Flat is better than nested.
Sparse is better than dense.
Readability counts.
Errors should never pass silently.

Citing

If you like using our package, please consider citing our article

@article{frondelius2017juliafem,
  title={Julia{FEM} - open source solver for both industrial and academia usage},
  volume={50}, 
  url={https://rakenteidenmekaniikka.journal.fi/article/view/64224},
  DOI={10.23998/rm.64224},
  number={3},
  journal={Rakenteiden Mekaniikka},
  author={Frondelius, Tero and Aho, Jukka},
  year={2017},
  pages={229-233}
}

Contributing

Developing JuliaFEM encourages good practices, starting from unit testing of both smaller and larger functions and continuing to full integration testing across different platforms.

Interested in participating? Please start by reading contributing.

FEMBase.jl's People

Contributors

ahojukka5, femtocleaner[bot], kristofferc, rezarastak, sebastianpech

FEMBase.jl's Issues

README does not attract new users

Explain the main goals of the package. Why is this kind of package needed? README.md should be written so that the reader gets interested in the package. Maybe advertise how easily some basic tasks can be performed?

get_integration_points for Seg2 not working anymore

julia> get_integration_points(Element(Seg2, (1, 2)))
ERROR: MethodError: no method matching FEMBase.Point{IntegrationPoint}(::Int64, ::Float64, ::Float64)
Closest candidates are:
  FEMBase.Point{IntegrationPoint}(::Any, ::Any, ::Any, ::Any, ::Any) where P<:FEMBase.AbstractPoint at C:\Users\jahx06\.julia\dev\FEMBase\src\types.jl:9
  FEMBase.Point{IntegrationPoint}(::Any, ::Any, ::Tuple) at C:\Users\jahx06\.julia\dev\FEMBase\src\types.jl:66
  FEMBase.Point{IntegrationPoint}(::Any, ::Any, ::Array{T,1} where T) at C:\Users\jahx06\.julia\dev\FEMBase\src\types.jl:70
Stacktrace:
 [1] (::getfield(FEMBase, Symbol("##31#32")))(::Tuple{Int64,Tuple{Float64,Float64}}) at .\none:0
 [2] iterate at .\generator.jl:47 [inlined]
 [3] collect(::Base.Generator{Base.Iterators.Enumerate{Base.Iterators.Zip{Tuple{Tuple{Float64,Float64},Tuple{Float64,Float64}}}},getfield(FEMBase, Symbol("##31#32"))}) at .\array.jl:606
 [4] get_integration_points(::Element{Seg2}) at C:\Users\jahx06\.julia\dev\FEMBase\src\elements.jl:377
 [5] top-level scope at none:0

Perhaps tweak the way loads are defined for specific dimensions

Right now, a load is applied to a set of dimensions by giving a string with the dimension after the name of the load, for example

update!(support_elements, "displacement load 1", 0.0)
update!(support_elements, "displacement load 2", 0.0)

means we apply a load in dimensions 1 and 2.

In the problem this is then checked using something like

            if haskey(element, "displacement load")
                T = element("displacement load", ip, time)
                f_ext += w*vec(T*N)
            end

            for i=1:dim
                if haskey(element, "displacement load $i")
                    b = element("displacement load $i", ip, time)
                    f_ext[i:dim:end] += w*vec(b*N)
                end
            end

I would argue that it would be better to specify the dimensions in some type of vector instead of in a string, for example:

update!(support_elements, "displacement load", 1, 1.0) # dim 1
update!(support_elements, "displacement load", 2:3, 1.0) # dim 2, 3
update!(support_elements, "displacement load", [1, 3], 1.0) # dim 1, 3
update!(support_elements, "displacement load", 1.0) # dim 1, 2 ,(3) all dimension by default

and the code using the load would get the dimensions it is applied to and the value as

b, dim = element("displacement load", ip, time)

This would require fewer dictionary lookups and be a bit more "Julian".
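
As a toy, self-contained illustration of the proposed convention (plain Julia, not the FEMBase API), the value could be stored together with the dimensions it acts on and applied with a single lookup:

loads = Dict("displacement load" => (1.0, [1, 3]))   # value and the dimensions it acts on
dim = 3
f_ext = zeros(12)    # 4 nodes x 3 dofs
w = 0.25             # integration weight
N = ones(1, 4)       # shape function values at the integration point

if haskey(loads, "displacement load")
    b, dims = loads["displacement load"]
    for i in dims
        f_ext[i:dim:end] += w * vec(b * N)
    end
end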

Simulation time increases in long simulations

In my test setup, the tool was not moving at all, so I just ran a similar loop about 500 times. The first run took about 0.5 seconds while the last one took 2 seconds. The only thing that changes is that new values are updated to fields at the end of each increment. My guess is that querying data from fields gets slower when the field contains more time increments. At first, the field is interpolated in the time direction here, and I don't immediately see any reason why fetching the newest value from the field should slow down as the size of the data vector grows.
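
One possible explanation, sketched in plain Julia (a toy, not the FEMBase field internals): if fetching the newest value ends up scanning all stored time => value pairs, the cost grows linearly with the history length.

using BenchmarkTools

short_history = [Float64(t) => rand(3) for t in 1:10]
long_history  = [Float64(t) => rand(3) for t in 1:1000]

# naive "latest value at time t": scan every stored pair
latest(data, t) = last(filter(p -> first(p) <= t, data)).second

@btime latest($short_history, 10.0)
@btime latest($long_history, 1000.0)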

Single element assemble + override default assemble operation

So, if we introduce a "local buffer" containing workspace/memory for assembling one element without any additional memory allocations, we could go back to assembling one element at a time.

It would be cool if we could make dispatching work so that we can override or extend the assembly of a single element when we have some special case to take care of. Would it work if we had something like:

function assemble_stiffness_matrix!(problem, assembly, other_stuff, ::Val{:E1})
    # my special assembly procedure
end

where :E1 is the name of the element.

We should also construct everything such that we can assemble the mass matrix, stiffness matrix, force vector and so on separately or all together.
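
A minimal, self-contained sketch of how Val-based dispatch could provide such per-element overrides (the function name and arguments are illustrative, not the FEMBase API):

# generic fallback used for every element name
assemble_stiffness!(Ke, element, ::Val) = fill!(Ke, 0.0)

# special assembly procedure for the element named :E1
assemble_stiffness!(Ke, element, ::Val{:E1}) = (Ke .= 1.0; Ke)

Ke = zeros(2, 2)
assemble_stiffness!(Ke, nothing, Val(:E1))   # dispatches to the specialized method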

Element function accepts incompatible input arguments

The Element function allows creation of a Quad8-type element with only four nodes:

julia> element2 = Element(Quad8, [1,2,3,4])
FEMBase.Element{FEMBasis.Quad8}(-1, [1, 2, 3, 4], FEMBase.Point{FEMBase.IntegrationPoint}[], Dict{String,FEMBase.AbstractField}(), FEMBasis.Quad8())

It might be good to check the compatibility of the input arguments here.
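
A sketch of what such a check could look like (the helper expected_nnodes is hypothetical, mapping a basis type to its node count):

using FEMBase

expected_nnodes(::Type{Quad8}) = 8   # hypothetical helper

function checked_element(::Type{B}, connectivity) where {B}
    n = expected_nnodes(B)
    length(connectivity) == n ||
        throw(ArgumentError("$B expects $n nodes, got $(length(connectivity))"))
    return Element(B, connectivity)
end

checked_element(Quad8, [1, 2, 3, 4])   # throws ArgumentError instead of constructing silently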

Performance problem with interpolate

The current interpolate has a bit too much overhead to be acceptable.

For example

X = Dict(
    1 => [0.0, 0.0, 0.0],
    2 => [1.0, 0.0, 0.0],
    3 => [1.0, 1.0, 0.0],
    4 => [0.0, 1.0, 0.0],
    5 => [0.0, 0.0, 1.0],
    6 => [1.0, 0.0, 1.0],
    7 => [1.0, 1.0, 1.0],
    8 => [0.0, 1.0, 1.0])

u = Dict(
    1 => [0.0, 0.0, 0.0],
    2 => [-1/3, 0.0, 0.0],
    3 => [-1/3, -1/3, 0.0],
    4 => [0.0, -1/3, 0.0],
    5 => [0.0, 0.0, 1.0],
    6 => [-1/3, 0.0, 1.0],
    7 => [-1/3, -1/3, 1.0],
    8 => [0.0, -1/3, 1.0])

element = Element(Hex8, (1, 2, 3, 4, 5, 6, 7, 8))
update!(element, "geometry", 0.0 => X)
update!(element, "displacement", 0.0 => u)
update!(element, "youngs modulus", 288.0)
update!(element, "poissons ratio", 1/3)

ip = get_integration_points(element)

using BenchmarkTools

Benchmarking this gives

julia> @btime element("youngs modulus", $ip, 1.0)
  87.709 ns (2 allocations: 32 bytes)

This is a bit long and there are two allocations.

I tried storing the field in two vectors instead of a dictionary, on the assumption that the number of keys is small, with moderate success. Using d54d5d3 I got

julia> @btime element("youngs modulus", $ip, 1.0)
  65.823 ns (2 allocations: 32 bytes)
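
For reference, here is a minimal standalone comparison of the two lookup patterns being discussed, a Dict versus two parallel vectors scanned linearly (only an illustration, not the FEMBase field implementation):

using BenchmarkTools

field_keys = ["geometry", "displacement", "youngs modulus", "poissons ratio"]
field_vals = [0.0, 0.0, 288.0, 1/3]
field_dict = Dict(zip(field_keys, field_vals))

# linear scan over a small key vector
lookup(ks, vs, k) = vs[findfirst(==(k), ks)]

@btime $field_dict["youngs modulus"]
@btime lookup($field_keys, $field_vals, "youngs modulus")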

Package broken

(@v1.5) pkg> add FEMBase
   Updating registry at `~/.julia/registries/General`
   Updating git-repo `https://github.com/JuliaRegistries/General.git`
  Resolving package versions...
Updating `~/.julia/environments/v1.5/Project.toml`
  [fbcbbc08] + FEMBase v0.3.1
Updating `~/.julia/environments/v1.5/Manifest.toml`
  [49dc2e85] + Calculus v0.5.1
  [fbcbbc08] + FEMBase v0.3.1
  [353fb843] + FEMBasis v0.2.0
  [be8e8821] + FEMQuad v0.3.2

(@v1.5) pkg> test FEMBase
    Testing FEMBase
Status `/tmp/jl_yvKAwY/Project.toml`
  [fbcbbc08] FEMBase v0.3.1
  [353fb843] FEMBasis v0.2.0
  [be8e8821] FEMQuad v0.3.2
  [a759f4b9] TimerOutputs v0.5.6
  [37e2e46d] LinearAlgebra
  [2f01184e] SparseArrays
  [10745b16] Statistics
  [8dfed614] Test
Status `/tmp/jl_yvKAwY/Manifest.toml`
  [49dc2e85] Calculus v0.5.1
  [fbcbbc08] FEMBase v0.3.1
  [353fb843] FEMBasis v0.2.0
  [be8e8821] FEMQuad v0.3.2
  [a759f4b9] TimerOutputs v0.5.6
  [2a0f44e3] Base64
  [8ba89e20] Distributed
  [b77e0a4c] InteractiveUtils
  [8f399da3] Libdl
  [37e2e46d] LinearAlgebra
  [56ddb016] Logging
  [d6f4376e] Markdown
  [de0858da] Printf
  [9a3f8284] Random
  [9e88b42a] Serialization
  [6462fe0b] Sockets
  [2f01184e] SparseArrays
  [10745b16] Statistics
  [8dfed614] Test
  [4ec0a83e] Unicode
ERROR: LoadError: LoadError: Evaluation into the closed module `Calculus` breaks incremental compilation because the side effects will not be permanent. This is likely due to some other module mutating `Calculus` with `eval` during precompilation - don't do this.
Stacktrace:
 [1] eval at ./boot.jl:331 [inlined]
 [2] simplify(::Expr) at /home/jukka/.julia/packages/Calculus/mbqhh/src/symbolic.jl:96
 [3] calculate_interpolation_polynomials(::Expr, ::Array{Float64,2}) at /home/jukka/.julia/packages/FEMBasis/hrK6J/src/create_basis.jl:25
 [4] create_basis(::Symbol, ::String, ::Tuple{Tuple{Float64},Tuple{Float64}}, ::Expr) at /home/jukka/.julia/packages/FEMBasis/hrK6J/src/create_basis.jl:48
 [5] top-level scope at /home/jukka/.julia/packages/FEMBasis/hrK6J/src/lagrange_segments.jl:4
 [6] include(::Function, ::Module, ::String) at ./Base.jl:380
 [7] include at ./Base.jl:368 [inlined]
 [8] include(::String) at /home/jukka/.julia/packages/FEMBasis/hrK6J/src/FEMBasis.jl:4
 [9] top-level scope at /home/jukka/.julia/packages/FEMBasis/hrK6J/src/FEMBasis.jl:16
 [10] include(::Function, ::Module, ::String) at ./Base.jl:380
 [11] include(::Module, ::String) at ./Base.jl:368
 [12] top-level scope at none:2
 [13] eval at ./boot.jl:331 [inlined]
 [14] eval(::Expr) at ./client.jl:467
 [15] top-level scope at ./none:3
in expression starting at /home/jukka/.julia/packages/FEMBasis/hrK6J/src/lagrange_segments.jl:4
in expression starting at /home/jukka/.julia/packages/FEMBasis/hrK6J/src/FEMBasis.jl:16
ERROR: LoadError: Failed to precompile FEMBasis [353fb843-c566-51e6-ba49-78b3e3d5ebb5] to /home/jukka/.julia/compiled/v1.5/FEMBasis/x3cNT_EipM7.ji.
Stacktrace:
 [1] error(::String) at ./error.jl:33
 [2] compilecache(::Base.PkgId, ::String) at ./loading.jl:1305
 [3] _require(::Base.PkgId) at ./loading.jl:1030
 [4] require(::Base.PkgId) at ./loading.jl:928
 [5] require(::Module, ::Symbol) at ./loading.jl:923
 [6] include(::Function, ::Module, ::String) at ./Base.jl:380
 [7] include(::Module, ::String) at ./Base.jl:368
 [8] top-level scope at none:2
 [9] eval at ./boot.jl:331 [inlined]
 [10] eval(::Expr) at ./client.jl:467
 [11] top-level scope at ./none:3
in expression starting at /home/jukka/.julia/packages/FEMBase/qGDOl/src/FEMBase.jl:19
ERROR: LoadError: Failed to precompile FEMBase [fbcbbc08-f1bf-5204-9233-b69f5d396135] to /home/jukka/.julia/compiled/v1.5/FEMBase/6HUoq_EipM7.ji.
Stacktrace:
 [1] error(::String) at ./error.jl:33
 [2] compilecache(::Base.PkgId, ::String) at ./loading.jl:1305
 [3] _require(::Base.PkgId) at ./loading.jl:1030
 [4] require(::Base.PkgId) at ./loading.jl:928
 [5] require(::Module, ::Symbol) at ./loading.jl:923
 [6] include(::String) at ./client.jl:457
 [7] top-level scope at none:6
in expression starting at /home/jukka/.julia/packages/FEMBase/qGDOl/test/runtests.jl:4
ERROR: Package FEMBase errored during testing

Consider using symbols instead of strings for fields

Currently, fields are indexed using Strings (haskey(element, "flux")). For cases like this, using Symbols is usually better since Symbols are interned and therefore represented by their pointer. This means comparing symbols is just a pointer comparison and hashing them is faster than for strings. For example, the columns in a DataFrame are indexed using Symbols.
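
A small standalone comparison of the two key types (not tied to FEMBase):

using BenchmarkTools

d_string = Dict("flux" => 1.0)
d_symbol = Dict(:flux => 1.0)

@btime haskey($d_string, "flux")   # hashing walks the string's bytes
@btime haskey($d_symbol, :flux)    # symbols are interned; comparison is cheap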

Updating fields with dictionaries is not consistent

I just realized that we get two kinds of results when updating fields with dictionaries. The first one: when we already have a DVTV field and then update it with a dictionary:

using FEMBase
T = Dict(1 => 1.0, 2 => 2.0)
element = Element(Seg2, (1, 2))
update!(element, "temperature", 0.0 => (0.0, 0.0))
update!(element, "temperature", 1.0 => T)
element.fields["temperature"]

# output

DVTV{2,Float64}(Pair{Float64,Tuple{Float64,Float64}}[0.0=>(0.0, 0.0), 1.0=>(1.0, 2.0)])

However, if we create a new field:

update!(element, "temperature2", 0.0 => T)
element.fields["temperature2"]

# output

DVTVd{Float64}(Pair{Float64,Dict{Int64,Float64}}[0.0=>Dict(2=>2.0,1=>1.0)])

It cannot stay like this; it is certainly causing confusion. Because it is very unlikely that the user actually wants to create a DVTVd, we should always create a DVTV, and creating a DVTVd should require some special attention, maybe

temp_field = field(0.0 => T)
update!(element, "temperature", temp_field)

Changes to 0.3.0

Some of the changes will probably break backward compatibility. New ideas welcome.

  • Give connectivity as a tuple + other things that do not need calculations.

  • Element id required; change problem.elements from an array to a dict? In output writing we need some order anyway, and id-numbering elements is quite standard in other FEM codes.

  • Naming: element.properties -> element.basis.

  • A more intelligent way to calculate global dofs. Give the unknown field name of the problem as a tuple of symbols: (:u1, :u2, :u3) for a vector field, :T for a temperature field, and so on. Or ("Displacement 1", "Displacement 2"). For hybrid problems, e.g. displacement + pressure, have ((:u1, :u2, :u3), :p). The open questions are how to pass this information to BoundaryProblems and how to update elements after the global solution (see the sketch after this list).

  • A separate algorithm to calculate global dofs for each element before assembly.

  • Just like element.fields, solver and problem should have an equivalent way to create and update fields.
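
A small sketch of the dof bookkeeping idea from the list above (names and layout are illustrative only):

field_names = (:u1, :u2, :u3)   # unknown field of the problem as a tuple of symbols
dim = length(field_names)

# global dofs of a node for a field of this dimension
gdofs(node_id) = [dim * (node_id - 1) + i for i in 1:dim]

gdofs(5)   # -> [13, 14, 15]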

Tri3 -> Topology + Basis

Currently, we define elements using the style

element = Element(Tri3, [1, 2, 3])

and it is somewhat implicitly assumed that the basis for the element is a nodal basis, e.g. N(xi, eta) = (xi, eta, 1-xi-eta) in this particular case. We should extend this definition by explicitly giving a basis, which defaults to the nodal basis. Then we can have Tri3{CG}, Tri3{Morley}, Tri3{DKT} and so on. We could implement this in several ways; here are a couple of options:

abstract type AbstractBasis end
struct CG <: AbstractBasis end
struct Morley <: AbstractBasis end
abstract type AbstractTopology end
struct Tet10{T<:AbstractBasis} <: AbstractTopology end
abstract type AbstractElement end

struct Element{T<:AbstractTopology} <: AbstractElement
    # rest of stuff ...
    topology :: T
end

Element(Tet10{CG}())

# output

Element{Tet10{CG}}(Tet10{CG}())

Another way is:

struct Tet4 <: AbstractTopology end

struct Element2{T<:AbstractTopology, B<:AbstractBasis} <: AbstractElement
    # rest of stuff ...
    topology :: T
    basis :: B
end

Element2(Tet4(), CG())

# output

Element2{Tet4,CG}(Tet4(), CG())

I don't know which one is better, or whether there is a third and better way to think about this. Anyway, we can make this change so that element = Element(Tri3, [1, 2, 3]) still works, with the CG basis as the default:

function Element2(::Type{T}, connectivity) where {T <: AbstractTopology}
    return Element2(T(), CG())
end

Element2(Tet4, [1, 2, 3, 4])

# output

Element2{Tet4,CG}(Tet4(), CG())

Give nodes, elements and integration points a name instead of a number?

Most FEM packages "name" nodes and elements with some running integer j. However, in e.g. Code Aster, each node or element has a name. By default, nodes are prefixed with N and a running number, and elements with M and a running number.

Advantages

I see a couple of benefits of this kind of naming approach:

  • Explicit is better than implicit. Currently, we find the dof number based on the node id number, which is a somewhat inflexible approach. By having a name, e.g. :coupling_node, we explicitly say "do not use the id number to calculate anything", and an arbitrary dof mapping/handler must be used to find the dofs corresponding to that node, dofs[:coupling_node_1].
  • Names can carry additional information about nodes or elements. For example, we could have :coupling_node_1, :coupling_node_2, and so on. When importing several meshes, if node ids are integers, collisions are likely because the same node ids are used in different bodies, and renumbering will be needed anyway. We could prefix nodes and elements in that case, e.g. :body1_N1, :body1_N2, :body2_N1, and so on.
  • A name also explicitly describes the type. Now 5 can mean node 5, element 5 or integration point 5. If we have :N5, :E5 and :IP5, it is immediately clear which one is a node, which an element and which an integration point. One more advantage concerns ABAQUS surfaces, which ABAQUS defines using an element id + S1 ... S4; in the current JuliaFEM implementation we would have to invent some new explicit id number for them. With names, we could have :E1S1, :E1S2, :E1S3, :E1S4.
  • ABAQUS uses negative node ids internally, which hints that they too have found the usual positive id range insufficient for all needs. For example, if we need to automatically generate internal/virtual nodes or elements during the simulation, we need a private namespace to avoid name collisions. This could easily be done by prefixing the automatically generated entities.

Disadvantages

I cannot think of any, but there probably are some. At least we can no longer calculate anything based on the node id number, no dim*(j-1)+i, where j is the node number and i is the local dof number, but I think this is actually something we (should) want.
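
To make the trade-off concrete, here is a tiny sketch of an explicit name-based dof handler (purely illustrative):

# explicit mapping from a node name to its global dofs; no id arithmetic involved
node_names = (:body1_N1, :body1_N2, :coupling_node_1)
dofs = Dict(name => (3i - 2):(3i) for (i, name) in enumerate(node_names))   # 3 dofs per node

dofs[:coupling_node_1]   # -> 7:9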

Changes to parsers

In AsterReader, we can directly assign names to the mesh. In Code Aster meshes, nodes are usually prefixed with N and elements with M, as described above. In AbaqusReader, we just have to give some prefix to nodes and elements.

What do you think?

Abstract stress calculation from integration point

At the integration point, we need to have a place for a material model, and we need to define a way to ask for a stress state and a Jacobian from it, given some strain or displacement. The abstraction should be general enough that we can have very different approaches to material modeling, e.g. microscale models attached to an integration point. In the same spirit as in #43, we could then have the possibility to override or extend the stress calculation for some particular integration points.
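
One possible shape for such an abstraction, as a hedged sketch (type and function names are illustrative, not an existing JuliaFEM interface):

using LinearAlgebra

abstract type AbstractMaterial end

struct LinearElastic <: AbstractMaterial
    E::Float64    # Young's modulus
    nu::Float64   # Poisson's ratio
end

# Every material answers the same question: given a strain, return the stress
# (a fuller version would also return the material Jacobian for the tangent stiffness).
function calculate_stress(mat::LinearElastic, strain::AbstractMatrix)
    lambda = mat.E * mat.nu / ((1 + mat.nu) * (1 - 2 * mat.nu))
    mu = mat.E / (2 * (1 + mat.nu))
    return lambda * tr(strain) * I + 2 * mu * strain
end

steel = LinearElastic(210.0e3, 0.3)
calculate_stress(steel, [1.0e-3 0 0; 0 0 0; 0 0 0])

An integration point could then simply hold a mat::AbstractMaterial field, and dispatch on its concrete type would select the stress routine.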

Time-dependent element connectivity?

Would it make sense to make connectivity time-dependent? If we define it using a field, i.e.

update!(element, :connectivity, 0.0 => (1, 2, 3, 4))
update!(element, :connectivity, 1.0 => (5, 6, 7, 8))

we could easily implement different mesh refinement strategies.

Fields converting Tensor variables to Arrays

How to reproduce:

  1. Update a value at an integration point:
    update!(ip, variable, time => value),
    where value <: AbstractTensor.
    typeof(value)
    [ Info: Tensors.SymmetricTensor{2,3,Float64,6}

  2. Check the realized type
    typeof(ip(variable, time))
    [ Info: Array{Float64,2}

Enhancements for fields

It would make sense to add some extra information to fields. I have some ideas:

  • to indicate whether all components of a vector or tensor field should be active, we need some flag (see issue #47)
  • some fields may depend on time but still might not be interpolable, at least with linear interpolation; we should have options to define how to interpolate in the time direction (nearest neighbor, linear, quadratic) and also how to extrapolate when outside the time span
  • possibility to store "dual" variables needed to perform AD
  • possibility to store "increment" variables, i.e. if we have `displacement` but actually solve the increment \delta u, which is then added to the displacement, u_t = u_{t-1} + \delta u, once converged

TimerOutputs

Error: TimerOutputs in its dependencies

ERROR: LoadError: ArgumentError: Package FEMBase does not have TimerOutputs in its dependencies:
- If you have FEMBase checked out for development and have
  added TimerOutputs as a dependency but haven't updated your primary
  environment's manifest file, try `Pkg.resolve()`.
- Otherwise you may need to report an issue with FEMBase
