Author: Ivan Grega. This repository is distributed under the CC BY license.
The repository contains data that can be used readily. The data is split into 5 files.
Download each file and unzip it. This yields the following files, each approximately 470 MB in size:
These catalogues can be readily used with the lattices package (see https://github.com/igrega348/lattices.git).
For instance, we can load the catalogue using
from lattices import Catalogue
cat = Catalogue.from_file('cat_00.lat', indexing=0)
and then load a lattice from the catalogue using
from lattices import Lattice
lat = Lattice(**cat[0])
lat = Lattice(**cat['trig_Z03.7_E6154_p_0.01_-6248866208548460135'])
Note that regular expressions can be used when loading the catalogue. For instance, to select unit cells with zero imperfections, one can do
cat = Catalogue.from_file('cat_00.lat', 0, regex='.*_p_0.0_.*')
Each file cat_XX.lat
is a text file containing unit cell data.
Lines
----- lattice_transition -----
are delimiters between different unit cells.
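As a minimal sketch (not using the lattices package), the raw text can be split into per-unit-cell blocks on the delimiter line. The helper name here is illustrative:

```python
# Split a .lat catalogue into per-unit-cell text blocks.
# The delimiter line is assumed to appear exactly as shown above.
DELIMITER = "----- lattice_transition -----"

def split_catalogue(text):
    """Return a list of non-empty unit-cell blocks."""
    blocks = [b.strip() for b in text.split(DELIMITER)]
    return [b for b in blocks if b]

sample = "cell A data\n" + DELIMITER + "\ncell B data\n"
print(split_catalogue(sample))  # ['cell A data', 'cell B data']
```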
The following fields are present for each unit cell:
Name, Base name, Imperfection level, Nodal hash:
We use the following naming convention:
[base_name]_p_[imperfection_level]_[nodal_hash]
where base_name
is of the form:
[crystal_symmetry]_[connectivity]_[code]
For example:
trig_Z03.7_E6154_p_0.01_-6248866208548460135
is a unit cell originating from trigonal symmetry, with average nodal connectivity of $Z=3.7$, unique code E6154, imperfection level $p=0.01$, and a unique hash of nodal positions of $-6248866208548460135$.
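A name of this form can be taken apart with a regular expression. This is a hypothetical helper, not part of the lattices package, and the pattern assumes connectivity is always prefixed by `Z` and the code is alphanumeric:

```python
import re

# Parse a unit-cell name of the form
# [crystal_symmetry]_Z[connectivity]_[code]_p_[imperfection_level]_[nodal_hash]
NAME_RE = re.compile(
    r"(?P<symmetry>[a-z]+)_Z(?P<Z>[\d.]+)_(?P<code>[A-Za-z0-9]+)"
    r"_p_(?P<p>[\d.]+)_(?P<hash>-?\d+)"
)

def parse_name(name):
    """Return the name components as a dict, or None if the name doesn't match."""
    m = NAME_RE.fullmatch(name)
    return m.groupdict() if m else None

info = parse_name("trig_Z03.7_E6154_p_0.01_-6248866208548460135")
print(info["symmetry"], float(info["Z"]), info["code"], float(info["p"]))
# trig 3.7 E6154 0.01
```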
Imperfection kind:
How imperfections were applied. All data here uses sphere_surf
(alias for sphere surface).
This means, for instance, that for $p=0.01$,
the nodal positions were displaced from their original positions
by a fixed value of $0.01$ in a random direction.
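The displacement can be sketched as follows. This is an illustration of the idea only (the actual implementation lives in the lattices package); a uniformly random direction is obtained by normalizing a Gaussian sample:

```python
import math
import random

def perturb_sphere_surf(pos, p, rng=None):
    """Displace a 3D point by a fixed distance p in a uniformly random
    direction -- a sketch of the 'sphere_surf' imperfection kind."""
    rng = rng or random.Random()
    # A normalized 3D Gaussian sample gives a uniform direction on the sphere.
    d = [rng.gauss(0.0, 1.0) for _ in range(3)]
    norm = math.sqrt(sum(c * c for c in d))
    return [x + p * c / norm for x, c in zip(pos, d)]

new = perturb_sphere_surf([0.5, 0.5, 0.5], 0.01, rng=random.Random(42))
# The displacement magnitude equals p regardless of the sampled direction.
print(round(math.dist(new, [0.5, 0.5, 0.5]), 6))  # 0.01
```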
Normalized unit cell parameters (a,b,c,alpha,beta,gamma):
The geometrical parameters of the unit cell. These are used to transform nodal positions from reduced coordinates lying in a unit cube to the transformed coordinates depending on the crystal symmetry class.
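One standard way to build this transformation is the crystallographic fractional-to-Cartesian matrix below. This is a sketch under the conventional choice of axes; the lattices package may use a different but equivalent convention:

```python
import math

def cell_matrix(a, b, c, alpha, beta, gamma):
    """Rows are the lattice vectors for cell parameters (angles in degrees).
    Multiplying a reduced (fractional) coordinate row vector by this matrix
    gives Cartesian coordinates -- standard crystallographic convention."""
    ca, cb, cg = (math.cos(math.radians(x)) for x in (alpha, beta, gamma))
    sg = math.sin(math.radians(gamma))
    # Cell-volume factor; real for physically valid cell angles.
    v = math.sqrt(1 - ca*ca - cb*cb - cg*cg + 2*ca*cb*cg)
    return [
        [a, 0.0, 0.0],
        [b * cg, b * sg, 0.0],
        [c * cb, c * (ca - cb * cg) / sg, c * v / sg],
    ]

M = cell_matrix(1, 1, 1, 90, 90, 90)  # cubic cell -> (numerically) the identity
```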
Compliance tensors (Mandel):
Compliance tensors at various relative densities (here $\bar{\rho}\in \{0.001,0.003,0.01\}$). The data is a list of 21 numbers in the format of flattened upper triangular part of the $6\times6$ matrix in Mandel notation.
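The 21 numbers can be rebuilt into the full symmetric $6\times6$ matrix. Row-major ordering of the upper triangle is an assumption in this sketch and should be verified against the lattices package:

```python
def unflatten_mandel(values):
    """Rebuild a symmetric 6x6 matrix from 21 upper-triangular entries.
    Row-major ordering (S11, S12, ..., S16, S22, ...) is assumed here."""
    assert len(values) == 21
    S = [[0.0] * 6 for _ in range(6)]
    it = iter(values)
    for i in range(6):
        for j in range(i, 6):
            # Fill both halves to enforce symmetry.
            S[i][j] = S[j][i] = next(it)
    return S

S = unflatten_mandel(list(range(21)))
print(S[0])     # [0, 1, 2, 3, 4, 5]
print(S[1][0])  # 1 (symmetric to S[0][1])
```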
Nodal positions:
3 columns of data for $x,y,z$ positions of each node. Whitespace separated.
Bar connectivities:
2 columns of data specifying edge adjacency.
Fundamental edge adjacency:
2 columns of data specifying the fundamental edge adjacency. See https://github.com/igrega348/lattices.git for more info.
Fundamental tessellation vectors: 3 columns of data specifying the tessellation vector (in reduced coordinates) needed to obtain the fundamental edge vector. See https://github.com/igrega348/lattices.git for more info.
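A sketch of how such a tessellation vector is used. The names below are illustrative (not the lattices API), and the assumed semantics are that the vector shifts the second node of an edge so that an edge crossing the periodic boundary is represented within one unit cell:

```python
def fundamental_edge_vector(positions, edge, tess_vec):
    """Edge vector from node i to node j in reduced coordinates, with
    node j shifted by the tessellation vector (assumed semantics)."""
    i, j = edge
    return [positions[j][k] + tess_vec[k] - positions[i][k] for k in range(3)]

pos = {0: [0.9, 0.5, 0.5], 1: [0.1, 0.5, 0.5]}
# This edge wraps across the periodic boundary: shift node 1 by +1 in x.
v = fundamental_edge_vector(pos, (0, 1), [1.0, 0.0, 0.0])
print(v)  # approximately [0.2, 0.0, 0.0]
```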
For reproducibility, we include scripts that can be used to generate the data. Follow these instructions to reproduce the data:
Unzip scripts.zip to obtain the folder scripts.
Install the lattices package from https://github.com/igrega348/lattices.git. The file lattices/requirements.txt can be used to set up the environment. Activate the environment.
Use prepare_data.py to generate Abaqus input scripts.
On Windows using PowerShell:
for ($i=0; $i -lt 50; $i++) {
"Running slice $i"
python prepare_data.py $i
}
On Linux using bash
for i in {0..49}; do
  echo "Running slice $i"
  python prepare_data.py $i
done
This will use file catalogue.lat
to generate input scripts.
By default, we generate input scripts at imperfection levels
[0.0, 0.01, 0.02, 0.03, 0.04, 0.05, 0.07, 0.10] with 10 realizations per level.
This generates a lot of data, which is why we split the dataset into 50 slices
and generate 50 separate files.
If you'd like to try out the script without generating so much data, you
can set GENERATE='BASE'
in the main()
call.
This will only generate imperfection level 0.0 (pristine geometry) for
all lattices. You can also reduce the number of chunks from 50 in the main()
call.
The outputs of this call are
input_files_00.tar.gz
input_files_01.tar.gz
...
m_cat_00.lat
m_cat_01.lat
...
Use (or inspect and modify) the script abq_submit.sh.
The script is set up to run on a cluster with the PBS job scheduling system.
You'll also need file abq_analyse_parallel.py
which will be
run using Abaqus CAE.
The script will produce files
output_00.tar.gz
output_01.tar.gz
...
You don't need to unpack the archives, but note that each of the gzipped files has the following structure:
./outputs/
./outputs/000000_00.json
...
Use post_process_FE.py
to post-process the archives.
On Windows using PowerShell:
for ($i=0; $i -lt 50; $i++) {
python post_process_FE.py $i
}
On Linux using bash
for i in {0..49}; do
  python post_process_FE.py $i
done
The script post_process_FE.py
combines the catalogues with the FE data according to the following scheme:
m_cat_00.lat + output_00.tar.gz --> cat_00.lat
m_cat_01.lat + output_01.tar.gz --> cat_01.lat
...
You can combine the resulting catalogue files -- we combined sets of 10 files
to go from 50 files to 5 files. The resulting set of files cat_XX.lat
is the same as the catalogue files provided.
Congratulations! If everything went well, you have generated the dataset.
In a number of places in our workflow, we use random number generators. For instance, the nodal imperfections applied to the lattices are random in nature. We do not provide a seed that would let you reproduce the numbers exactly. However, if you use the scripts to generate data, you should find that for lattices without imperfections (..._p_0.0_...), the compliance tensors at matching relative densities will match.