In this blog post, a guideline to discretization is set up in order to deepen your knowledge about meshing techniques. There is a short section on each of the important topics, and connections to real-life practical use are made.

Content:

Discretization

Element types and degree of freedom

Mesh density

Element order

Integration methods

Influence of 3D geometry

Meshing errors

**Discretization**

The software splits the component into small elements, so-called finite elements; this process is called discretization. FEM meshing involves creating a suitable mesh that represents the geometry of the problem domain and discretizes it into finite elements. These elements are interconnected at specific points called nodes to form a mesh. This meshing process is crucial because the accuracy and efficiency of the simulation depend on the quality of the mesh. Mesh density, accuracy, time, computational power, and element quality are all connected, and each one affects the others. In the following, most of the meshing aspects and "must-knows" are described and the connections between the affecting factors are established.

**Element types and degrees of freedom**

Elements can be 1-, 2-, or 3-dimensional depending on the simulated problem. One thing is common: nodes have degrees of freedom, and the dimension, order, DOFs, and type of the mesh determine in how many directions and rotations the element is capable of moving. Degrees of freedom represent the number of ways a structure can move or deform. The degrees of freedom are associated with the nodes or points within each element. Typically, a 3D solid element is a tetrahedron or hexahedron in shape with either flat or curved surfaces. Each node of the element has three translational degrees of freedom, so the element can deform in all three directions in space. These 3D elements can be simplified to 2D or even 1D, and it is advised to do so when the problem allows for it: fewer elements will be used, so fewer degrees of freedom have to be calculated and the problem will converge faster.
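A rough back-of-the-envelope sketch shows why idealizing a solid as a shell or beam pays off; the node counts and DOF-per-node values below are illustrative assumptions, not taken from any specific model:

```python
# illustration: the number of unknowns the solver must handle grows with
# (number of nodes) x (degrees of freedom per node)
def total_dofs(n_nodes, dof_per_node):
    return n_nodes * dof_per_node

# hypothetical solid (3D) mesh: 3 translational DOFs per node
solid = total_dofs(100_000, 3)   # 300,000 unknowns
# the same part idealized as a shell: far fewer nodes are needed,
# even though each node carries 6 DOFs (3 translations + 3 rotations)
shell = total_dofs(10_000, 6)    # 60,000 unknowns
print(solid, shell)
```

Even with twice as many DOFs per node, the shell idealization ends up with a fraction of the unknowns, which is why it converges faster.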

Engineers must use elements that are suitable for the given simulation type in order to predict behavior correctly. Plain 2D elements cannot be used when bending is present; shells must be selected instead. Neither can bars be used when transverse loading is applied, as they cannot take bending into consideration; beams have to be used to achieve correct results (see picture below).

**Mesh density**

Generally speaking, the more elements there are in a body (the more portions the body is split into), the more precise the result becomes and the more time it takes to calculate the final results. It is therefore a trade-off between precision and time: a body can be meshed with a few hundred elements and run in 10 minutes, but it can also be discretized into 1 million small portions, and the runtime would increase to a day.

The body on the left side contains 3,214 elements; meanwhile, the right side shows 164,960. As you can imagine, the runtime would be significantly longer in the case of the second option, but how do you decide on the number of elements that would not overwhelm the computer but would still provide the required accuracy?

One commonly employed method is a mesh convergence study to determine the optimal mesh density. This technique entails progressively refining the mesh by reducing the element size while plotting the stress results against the number of elements used. Initially, as the mesh density increases, the stress values rise substantially. However, the difference in stress between iterations diminishes once a certain mesh density is reached. Is it worth going from 500 to 1,000 elements to account for a 2.5% stress increase from 78 to 80 MPa, while the computational cost consequently doubles?

Subsequently, it becomes the engineer's responsibility to ascertain whether the added precision justifies the doubling of computational resources and time, or if a reasonably accurate outcome at a reduced cost would suffice.
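The convergence-study logic can be sketched in a few lines. The `results` list below is hand-made mock data standing in for real solver output, with element counts and stresses echoing the 500-to-1,000-element example above:

```python
def pick_converged_mesh(stress_results, tol=0.03):
    # walk the (n_elements, stress) pairs from coarse to fine and stop
    # once the relative stress change between refinements drops below tol
    for (n0, s0), (n1, s1) in zip(stress_results, stress_results[1:]):
        if abs(s1 - s0) / abs(s1) < tol:
            return n1, s1
    return stress_results[-1]  # not converged within the meshes tried

# mock solver output, coarse to fine; in practice each pair would come
# from an actual FEM run at that mesh density
results = [(125, 65.0), (250, 74.0), (500, 78.0), (1000, 80.0)]
print(pick_converged_mesh(results))  # (1000, 80.0): only ~2.5% change vs 500 elements
```

The tolerance `tol` encodes exactly the engineering judgment discussed above: how small a stress change still justifies another refinement.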

**Element order**

Selecting a tetra or hexa element is only the first step; we can also distinguish between first-order (linear) and second-order (quadratic) elements.

A linear element, also known as a lower-order element, exhibits a linear shape function, meaning that the displacements within the mesh region connecting the nodes change linearly with the distance between the nodes. Linear elements converge faster as there are fewer nodes in each element, though they are less precise and can produce false values during large deformation, as their sides are not able to bend but only tilt (distort).

A quadratic element, or a higher-order element, employs a non-linear shape function to describe its behavior. The displacements between the nodes are interpolated using a higher-order polynomial, allowing for a more accurate representation of the actual displacements.

Quadratic elements are the most precise and can be used in most circumstances, though they require the most computational power and are hence the slowest to calculate.

1, The picture on the left represents a 1D bar element with a displacement curve between the 2 nodes.

2, It is clear that with a single linear element, the result will be a linearly increasing/decreasing straight line which deviates significantly from the actual result curve. Meanwhile, using a single quadratic element, the result can be precisely predicted.

3, There is, though, a workaround: splitting up the length and using several linear elements to fit smaller straight segments onto the quadratic curve, as the picture on the right represents.
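The 1D bar example above can be reproduced numerically. Below is a sketch assuming the "actual" displacement field is the quadratic u(x) = x² over [0, 1]: a three-node quadratic element reproduces it exactly, while a single two-node linear element only matches at the end nodes.

```python
def linear_element(x, x0, x1, u0, u1):
    # two-node linear element: displacement varies linearly between the nodes
    t = (x - x0) / (x1 - x0)
    return (1 - t) * u0 + t * u1

def quadratic_element(x, x0, x2, u0, u1, u2):
    # three-node quadratic element (extra mid-side node), Lagrange shape functions
    xi = 2 * (x - x0) / (x2 - x0) - 1       # map x onto the reference interval [-1, 1]
    N0 = xi * (xi - 1) / 2
    N1 = 1 - xi ** 2
    N2 = xi * (xi + 1) / 2
    return N0 * u0 + N1 * u1 + N2 * u2

u = lambda x: x ** 2                         # "actual" displacement field

print(linear_element(0.5, 0, 1, u(0), u(1)))             # 0.5  (true value: 0.25)
print(quadratic_element(0.5, 0, 1, u(0), u(0.5), u(1)))  # 0.25 (exact)
```

Splitting [0, 1] into two linear elements (nodes at 0, 0.5, 1) shrinks the error at intermediate points, which is exactly the workaround described in point 3.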

The difference between linear and quadratic formulation is even more pronounced when talking about shear-locking and hour-glassing effects. These phenomena can cause false shear results when linear elements are used in bending.

__Hour-glassing__

Hour-glassing can occur in first-order (linear), reduced-integration elements (C3D8R). In the given schematic, both visualization lines remain unchanged in length due to the single integration point. Consequently, the angle (90°) between the visualization lines also remains unaffected by the applied deformation. This situation can give rise to zero-strain deformation modes, which lead to the generation of inaccurate and misleading results.

__Shear-locking__

Shear locking can occur in first-order (linear), fully-integrated elements (C3D8). This phenomenon arises when artificial shear strain emerges due to the inability of the element edges to undergo bending. Consequently, these elements exhibit excessive stiffness in problems dominated by bending, which is not representative of the physical behavior.

Generally, linear elements fit the purpose as long as displacements, materials, and boundary conditions are linear, or for thermal simulations. Once large deformation and bending take place, it is advised to choose quadratic elements, as they will realistically represent the natural behavior, be it with reduced or full integration.

**Integration methods**

As mentioned, both reduced and full integration methods can be used, but what are these?

The integration points are locations across the area or volume of a 2D or 3D element where the software calculates stresses. The integration method can be full or reduced depending on the required accuracy. As the picture below shows, reduced integration means that fewer integration points exist in the element; with full integration, values are calculated at more locations. From here, it's obvious that a reduced-integration method will converge faster but will lose accuracy compared to the full method. Conversely, full integration is more precise but converges more slowly and requires more power.

What are these used for? Displacements are calculated at the nodes, but stresses are not. Instead, displacements are interpolated to the interior of the element with the help of shape functions, where strains and stresses can be calculated with higher accuracy. Afterwards, those stresses are extrapolated back to the nodes across the area of the element. Each element that shares a node contributes to the stress value at that node. The more integration points exist inside the element, the more points contribute to a single node; therefore, more detail can be extracted and less error will be introduced.

We have already agreed that using quadratic elements is necessary in the case of bending. It is, though, not necessary to use full integration unless the result must be as precise as possible, as the gained accuracy will cost a lot of computational power. The rule of thumb is that quadratic elements with the reduced-integration method will work most of the time!
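The effect of dropping integration points can be shown with 1D Gauss quadrature on the reference interval [-1, 1] (a simplified sketch; real elements integrate over 2D/3D domains, but the principle is the same). A one-point ("reduced") rule integrates only linear functions exactly, while the two-point ("full") rule also captures quadratic and cubic content:

```python
import math

def gauss_integrate(f, n_points):
    # n-point Gauss-Legendre quadrature over [-1, 1], as (location, weight) pairs
    rules = {
        1: [(0.0, 2.0)],                                         # reduced
        2: [(-1 / math.sqrt(3), 1.0), (1 / math.sqrt(3), 1.0)],  # full
    }
    return sum(w * f(x) for x, w in rules[n_points])

f = lambda x: x ** 2          # exact integral over [-1, 1] is 2/3

print(gauss_integrate(f, 1))  # 0.0 -- the single point misses the quadratic content
print(gauss_integrate(f, 2))  # ~0.6667 (exact)
```

This blindness of the one-point rule to certain deformation patterns is exactly what allows the zero-strain hour-glass modes described earlier to go undetected in reduced-integration elements.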

**Influence of 3D geometry**

By now it's clear that the volumes and areas of bodies are split into small elements where the values are calculated. It is also evident that a significantly smaller area can only be meshed with a significantly smaller mesh size compared to the neighboring larger area in order to capture the effects accurately. Why is this a problem? In the picture below, two versions of the same model are presented together with their meshed counterparts. The first picture shows the fully detailed geometry including fillets, chamfers, and small edges; consequently, small surfaces! On the bottom, a simplified version of the same model is shown, where the small surfaces are merged, fillets are removed, and small edges that would split surfaces are deleted.

Looking at the meshed counterparts on the right side might help you understand the importance of added details when it comes to meshing. The simplified version includes 225,149 elements; meanwhile, the detailed version is meshed using 648,610 elements (roughly 3× the simplified one).

Detailed geometry will result in a more detailed mesh. Smaller surfaces are split into smaller portions, which subsequently increases the mesh density. Greater mesh density means more elements in the same body, which will drastically increase the calculation time and required power. Of course, it also increases the accuracy of the simulation. The question you have to ask yourself is: do you need that accuracy everywhere?

Simplified geometry will allow for larger elements, hence a less dense mesh and faster calculation without massive computing power requirements. It comes at a price, though, as the accuracy will decrease. Another important difference involves a phenomenon called stress concentration.

*Sharp edges are generally avoided in any design and fillets are added, because stresses will always be the highest at sharp edges, and that could lead to the failure of the component.* Now, deleting those exact edges for meshing purposes will help the software mesh the bodies but will provide false results at the sharp edges. The combination of the two can be the golden path, as mesh size can be defined locally and not just globally. What this means is that the global mesh size can be large because the simplified geometry allows for it; however, fillets can still be used (and consequently smaller elements) in areas where failure is expected, or in areas of interest where high accuracy is required.

This method is heavily used in the industry, though one must be careful during the evaluation of simplified geometries, as these can be misleading. The best approach is to get rid of unnecessary details where accuracy is of little significance and only add fillets and more elements to areas which are of interest!

**Meshing errors**

The preceding section discussed the significance of 3D geometry. However, it did not address the comparative impact of element quality on simulation errors when comparing simplified and detailed geometries. Simplified geometries are always easier to mesh, as basically the same or very similarly sized elements can be used on the entire body.

On the contrary, when a portion of the body is meshed with a significantly smaller element size compared to the neighboring area, a transition between the two will exist. This transition from small to large elements is made by the software automatically, but unfortunately it's not the most efficient process, and it results in elements which might show large aspect ratios (heavily distorted elements). Consequently, achieving convergence in the affected areas becomes highly improbable, necessitating engineers to manually and locally refine the problematic mesh. This might not seem like a great issue based on the example shown below, but when dealing with extremely complex geometries, the number of problematic areas is high, and the manual adjustment of the mesh will not only be time-consuming but also may not yield convergence at all.

The picture above shows what aspect ratio looks like in reality and what the possible solutions are. The top picture shows the scenario where large elements are used to save on runtime but details (fillets) are also added. As a result, the area in between the fillets will be meshed with large, distorted elements.

One possible solution can be the use of small elements everywhere in the model, which would solve the aspect ratio issue but would increase the runtime significantly.

Another solution is the simplification of the geometry, which then allows the use of larger elements (faster runtime), while the element shapes also look good. The drawback of this solution is that the simplification might fall in a sensitive area.

*What is aspect ratio and what are its consequences?*

The quality of the mesh is normally evaluated in terms of aspect ratio, which describes how well the elements fit the volume of the body by comparing the edge lengths of a single element. An element with a high aspect ratio is one where one edge is much longer than the rest, resulting in an elongated element shape rather than a square one. It is important to note that high-aspect-ratio elements can lead to false results or non-converging simulations. As a result of high aspect ratios, the simulation might terminate with errors such as *"largely distorted elements are present"* or *"Negative Jacobian values are calculated"*.
In the following, the consequences of large aspect ratios are discussed:
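A minimal aspect-ratio check can be written using the common edge-length-ratio definition (solvers also use angle- or volume-based quality metrics; this sketch only compares edge lengths):

```python
import math

def aspect_ratio(nodes):
    # longest edge / shortest edge of a polygonal element given as (x, y) corners
    n = len(nodes)
    edges = [math.dist(nodes[i], nodes[(i + 1) % n]) for i in range(n)]
    return max(edges) / min(edges)

print(aspect_ratio([(0, 0), (1, 0), (1, 1), (0, 1)]))    # 1.0  -- ideal square
print(aspect_ratio([(0, 0), (10, 0), (10, 1), (0, 1)]))  # 10.0 -- elongated quad
```

An element with a ratio near 1 is well-shaped; the elongated quad above is the kind of element that typically appears in automatic size transitions.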

*Mesh distortion*

When loads and boundary conditions are applied, the body will deform, and displacements of the nodes and elements will take place. As a result of the displacement, the nodes and elements are pushed and stretched within the body, and elements change shape during deformation. Altering the shape of a healthy element will not cause problems. However, attempting to modify the shape of an already distorted element with a high aspect ratio is perilous and will not lead to convergence.

*Numerical stability*

The computation of results occurs at multiple points within each element, and these values are subsequently averaged. However, when dealing with an element that has a large aspect ratio, the integrated values within the same element may differ significantly and values might be excessively large or small. This discrepancy in values makes it challenging to obtain an accurate average, and it can potentially lead to erroneous outcomes.

*Convergence issues*

FEM simulations typically use iterative solvers to obtain a solution. The presence of large aspect ratio elements can significantly slow down the convergence rate of the iterative solvers or even prevent convergence altogether. This can happen because the large aspect ratio elements can introduce poorly conditioned equations or create difficulties in propagating the solution between elements.

*Ill-conditioned matrices*

FEM simulations involve solving a system of linear equations that arise from the discretization process. Large aspect ratio elements can result in ill-conditioned stiffness matrices, where __the matrix becomes very sensitive to small changes__ in the input data. Ill-conditioning can make the system difficult to solve accurately and efficiently, leading to premature termination.

*Negative Jacobian*
The Jacobian matrix is used to transform the derivatives of the shape functions from the physical domain to the reference domain, or vice versa. The Jacobian matrix plays a crucial role in mapping elements between different coordinate systems. This is a common procedure for both linear and quadratic elements, where the sides of the elements are tilted or curved, but most importantly differ from the perfect unit element. Tilted and curved edges help to accommodate the elements to complex geometries, but they make the integration process difficult, and therefore the solution cannot be obtained in the actual Cartesian coordinate system.

Instead, the complex shapes are transformed into a universal ideal shape. The four-sided element is transformed into a square with a side length of 2, with coordinates varying from -1 to 1. This element formulation is called the natural coordinate system, and integrating the function becomes a straightforward task using Gauss quadrature. In short: elements are mapped from the Cartesian to the natural coordinate system where integration is possible, after which the results must be converted back to the original Cartesian system. This transformation is facilitated by the Jacobian, which serves as a mapping method for transitioning between the real and natural coordinate systems.

In FEM, the quality check that we call the Jacobian is actually the determinant of the transformation matrix. In order to transfer elements back to the Cartesian from the natural coordinate system, the Jacobian (transformation matrix) must be inverted, meaning that the matrix must be invertible. This is the case when the element is originally a perfect square with unit side lengths; the Jacobian (determinant) then becomes 1. However, the Jacobian determinant of a highly distorted, high-aspect-ratio element will be a small number or even a negative one. That is when the simulation terminates, as a negative determinant indicates that the mapping is invalid and a solution can't be found. The element on the left is permissible, but the element on the right is not, and mapping it would result in a negative Jacobian.

Practically speaking, the Jacobian is a matrix which measures how much distortion the element undergoes while mapping from Cartesian to natural coordinates. Therefore, an element with a small aspect ratio will require less effort to convert to the perfect shape than an element which is heavily distorted in the first place.
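The determinant check can be demonstrated for a four-node quad using its bilinear shape functions; a sketch, with corner coordinates made up for illustration:

```python
def jacobian_det(coords, xi, eta):
    # coords: 4 (x, y) corners, counter-clockwise, matching reference corners
    # (-1,-1), (1,-1), (1,1), (-1,1); returns det(J) at the point (xi, eta)
    dN_dxi  = [-(1 - eta) / 4, (1 - eta) / 4, (1 + eta) / 4, -(1 + eta) / 4]
    dN_deta = [-(1 - xi) / 4, -(1 + xi) / 4, (1 + xi) / 4, (1 - xi) / 4]
    dx_dxi  = sum(d * x for d, (x, y) in zip(dN_dxi, coords))
    dy_dxi  = sum(d * y for d, (x, y) in zip(dN_dxi, coords))
    dx_deta = sum(d * x for d, (x, y) in zip(dN_deta, coords))
    dy_deta = sum(d * y for d, (x, y) in zip(dN_deta, coords))
    return dx_dxi * dy_deta - dy_dxi * dx_deta

healthy = [(0, 0), (1, 0), (1, 1), (0, 1)]       # unit square
concave = [(0, 0), (1, 0), (1, 1), (0.6, 0.2)]   # 4th corner pulled inside

print(jacobian_det(healthy, 0, 0))   # 0.25: positive, the mapping is valid
print(jacobian_det(concave, -1, 1))  # about -0.1: negative near the bad corner
```

The healthy element has a positive determinant everywhere, while the concave one "folds over" near its pulled-in corner, producing exactly the negative Jacobian error described above.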

The above-described aspect ratio and its consequences hopefully provide you with insight into the complexity of meshing and why it is a good idea to get rid of such elements in your model.

**Final thoughts**

In short, hexa elements converge faster and more reliably. Some elements are capable of distributing strain/stress linearly throughout the element; others can only show one constant value. Some behave fine in bending; others show values with significant errors due to their nature. Geometries are most of the time not ideal, and simplifying edges and fillets might be necessary prior to running simulations in order to decrease runtime and computational power.

As a rule of thumb: the use of quadratic, higher-order elements with reduced integration will suit most tasks. Quadratic elements can only be applied to shapes that are suitable for them, so simplifying geometries is advantageous, but only in areas which are out of interest.

Meshing technique is a research topic in itself, and what's presented here serves only as an introduction. Finally, it's the simulation engineer's responsibility to choose the correct mesh type, density, order, and integration method in order to obtain reliable numbers.
