In this guide, we will walk you through setting up and running DualSPHysics, a Smoothed-Particle Hydrodynamics (SPH) simulator, available as one of the built-in tools via the Inductiva API.
We will cover:

- Setting up DualSPHysics for use with our API.
- Example code to help you get started with simulations.
- Using commands like gencase and dualsphysics to run your simulations.
- An advanced Turbine example showing how to execute commands through the Inductiva API.
DualSPHysics
DualSPHysics is a Smoothed-Particle Hydrodynamics (SPH) simulator. The simulator is usually configured by a single file with the extension .xml. This file contains all the information about the simulation, including the geometry, the physical properties of the fluids, the boundary conditions, the numerical parameters, and the output files. Sometimes the configuration can also use extra geometry files.
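To make this concrete, here is a minimal sketch of what such a configuration might look like and how you could inspect it programmatically. The element names below are illustrative only; refer to the DualSPHysics documentation for the actual schema.

```python
# Sketch: peeking at the top-level sections of a DualSPHysics-style
# XML configuration. NOTE: the element names here are illustrative,
# not the real DualSPHysics schema.
import xml.etree.ElementTree as ET

config = """\
<case>
    <casedef>
        <geometry/>
        <constantsdef/>
    </casedef>
    <execution>
        <parameters/>
    </execution>
</case>
"""

root = ET.fromstring(config)
sections = [child.tag for child in root]
print(sections)  # ['casedef', 'execution']
```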
To run a DualSPHysics simulation you will need to set the commands. In general, two main commands are required to run a simulation: gencase and dualsphysics. The gencase command is used to generate the case files that will be used by the dualsphysics command to run the simulation.
There are other commands that allow you to post-process the results. Examples of these are:

- partvtk: generates VTK files with the particle trajectories;
- isosurface: generates VTK files with the isosurfaces of the fluid;
- measuretool: generates CSV files with measurements of the fluid properties.
For an extensive list of commands, please refer to the DualSPHysics documentation. You can pass commands to the API in lowercase, and we will handle the rest for you!
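For illustration, here is how a few of the stand-alone binary names (which appear in the shell script later in this guide) correspond to the lowercase command names used with the API. This mapping is just a sketch covering the tools used in this guide, not an exhaustive list.

```python
# Illustrative mapping (not exhaustive): stand-alone DualSPHysics binary
# names -> lowercase command names accepted via the Inductiva API.
binary_to_command = {
    "GenCase_linux64": "gencase",
    "DualSPHysics5.2CPU_linux64": "dualsphysics",
    "PartVTK_linux64": "partvtk",
    "IsoSurface_linux64": "isosurface",
    "MeasureTool_linux64": "measuretool",
}

# Every command name passed to the API is plain lowercase.
assert all(cmd == cmd.lower() for cmd in binary_to_command.values())
```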
Example code
In this example, we run a classical CFD case of a flow over a cylinder.
"""DualSPHysics example."""
import inductiva
# Instantiate machine group
machine_group = inductiva.resources.MachineGroup("c2-standard-4")
machine_group.start()
# Download the configuration files into a folder
input_dir = inductiva.utils.download_from_url(
"https://storage.googleapis.com/inductiva-api-demo-files/"
"dualsphysics-input-example.zip",
unzip=True)
commands = [
"gencase config flow_cylinder -save:all",
"dualsphysics flow_cylinder flow_cylinder -dirdataout data -svres",
("partvtk -dirin flow_cylinder/data -savevtk flow_cylinder/PartFluid "
"-onlytype:-all,+fluid")
]
# Initialize the Simulator
dualsphysics = inductiva.simulators.DualSPHysics()
# Run simulation with config files in the input directory
task = dualsphysics.run(input_dir=input_dir,
commands=commands,
on=machine_group)
task.wait()
task.download_outputs()
machine_group.terminate()
Running examples/chrono/09_Turbine
We will now demonstrate how to run a slightly more complex example included in the DualSPHysics distribution, located in the examples folder. In general, the procedure involves transforming the shell script that controls the several steps of the simulation into a list of commands, which is then passed via the run() method as shown above.
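As a toy illustration of that transformation, the snippet below takes a fragment of the original driver script and keeps only the lines that invoke a tool (marked by the ${...} variable prefix used in these scripts), dropping the error-handling boilerplate. It is a simplified sketch, not a general shell parser; the variable references still have to be translated into Python f-strings by hand.

```python
# Simplified sketch: extract the tool invocations from a fragment of the
# original shell script, skipping the "if [ $? -ne 0 ]..." boilerplate.
script_fragment = """\
${gencase} ${name}_Def ${dirout}/${name} -save:all
if [ $? -ne 0 ] ; then fail; fi
${dualsphysicscpu} ${dirout}/${name} ${dirout} -dirdataout data -svres
if [ $? -ne 0 ] ; then fail; fi
"""

commands = [
    line for line in script_fragment.splitlines()
    if line.startswith("${")
]
print(commands)
# ['${gencase} ${name}_Def ${dirout}/${name} -save:all',
#  '${dualsphysicscpu} ${dirout}/${name} ${dirout} -dirdataout data -svres']
```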
We are going to run the simulation stored in examples/chrono/09_Turbine.
Originally, this simulation is orchestrated by a shell script that receives parameters from the user. The script takes those parameters, performs some initial configuration, and then runs a series of DualSPHysics commands, including pre-processing, simulation, and a number of post-processing steps.
We are going to start from the original file examples/chrono/09_Turbine/xCaseTurbine_linux64_CPU.sh and extract the relevant command lines that will be executed via the Inductiva API. The original shell script looks like this (it’s quite long!):
#!/bin/bash

fail () {
    echo Execution aborted.
    read -n1 -r -p "Press any key to continue..." key
    exit 1
}

# "name" and "dirout" are named according to the testcase
export name=CaseTurbine
export dirout=${name}_out
export diroutdata=${dirout}/data

# "executables" are renamed and called from their directory
export dirbin=../../../bin/linux
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:${dirbin}
export gencase="${dirbin}/GenCase_linux64"
export dualsphysicscpu="${dirbin}/DualSPHysics5.2CPU_linux64"
export dualsphysicsgpu="${dirbin}/DualSPHysics5.2_linux64"
export boundaryvtk="${dirbin}/BoundaryVTK_linux64"
export partvtk="${dirbin}/PartVTK_linux64"
export partvtkout="${dirbin}/PartVTKOut_linux64"
export measuretool="${dirbin}/MeasureTool_linux64"
export computeforces="${dirbin}/ComputeForces_linux64"
export isosurface="${dirbin}/IsoSurface_linux64"
export flowtool="${dirbin}/FlowTool_linux64"
export floatinginfo="${dirbin}/FloatingInfo_linux64"
export tracerparts="${dirbin}/TracerParts_linux64"

option=-1
if [ -e $dirout ]; then
    while [ "$option" != 1 -a "$option" != 2 -a "$option" != 3 ]
    do
        echo -e "The folder "${dirout}" already exists. Choose an option.
  [1]- Delete it and continue.
  [2]- Execute post-processing.
  [3]- Abort and exit.
"
        read -n 1 option
    done
else
    option=1
fi

if [ $option -eq 1 ]; then
    # "dirout" to store results is removed if it already exists
    if [ -e ${dirout} ]; then rm -r ${dirout}; fi
    # CODES are executed according the selected parameters of execution in this testcase
    ${gencase} ${name}_Def ${dirout}/${name} -save:all
    if [ $? -ne 0 ] ; then fail; fi
    ${dualsphysicscpu} ${dirout}/${name} ${dirout} -dirdataout data -svres
    if [ $? -ne 0 ] ; then fail; fi
fi

if [ $option -eq 2 -o $option -eq 1 ]; then
    export dirout2=${dirout}/particles
    ${partvtk} -dirin ${diroutdata} -savevtk ${dirout2}/PartFloating -onlytype:-all,+floating
    if [ $? -ne 0 ] ; then fail; fi
    ${partvtk} -dirin ${diroutdata} -savevtk ${dirout2}/PartFluid -onlytype:-all,+fluid
    if [ $? -ne 0 ] ; then fail; fi
    ${partvtkout} -dirin ${diroutdata} -savevtk ${dirout2}/PartFluidOut -SaveResume ${dirout2}/_ResumeFluidOut
    if [ $? -ne 0 ] ; then fail; fi
    export dirout2=${dirout}/boundary
    ${boundaryvtk} -loadvtk AutoActual -motiondata ${diroutdata} -savevtkdata ${dirout2}/Turbine -onlytype:floating -savevtkdata ${dirout2}/Bound.vtk -onlytype:fixed
    export dirout2=${dirout}/surface
    ${isosurface} -dirin ${diroutdata} -saveiso ${dirout2}/Surface
    if [ $? -ne 0 ] ; then fail; fi
fi

if [ $option != 3 ];then
    echo All done
else
    echo Execution aborted
fi

read -n1 -r -p "Press any key to continue..." key
As you can see above, the main simulation steps involve these two commands:
# CODES are executed according the selected parameters of execution in this testcase
${gencase} ${name}_Def ${dirout}/${name} -save:all
if [ $? -ne 0 ] ; then fail; fi
${dualsphysicscpu} ${dirout}/${name} ${dirout} -dirdataout data -svres
if [ $? -ne 0 ] ; then fail; fi
The post-processing commands include calls to partvtk, partvtkout, boundaryvtk, and isosurface. These commands generate the data required to produce the visualization displayed above using ParaView.
The script also sets several variables at the beginning, some of which relate to the paths for DualSPHysics commands, assuming a local installation. In our Inductiva Python script, we won’t need these, as the API automatically handles all command paths on the remote machine. There are also a number of variables related to filenames and input/output directories:
export name=CaseTurbine
export dirout=${name}_out
export diroutdata=${dirout}/data
For convenience, we want to keep control over the input/output directories, so we will convert these variables and keep them in our Python script. Additionally, we’ll take this opportunity to explicitly add a few more output directories and make the style a bit more Pythonic. So, our Python script should look something like this:
# Let's keep the original variables for the directory names, but we will
# make them more Pythonic, and we will add a few others for readability
name = "CaseTurbine"
dirout = f"{name}_out"
data_dirout = f"{dirout}/data"
particles_dirout = f"{dirout}/particles"
boundary_dirout = f"{dirout}/boundary"
surface_dirout = f"{dirout}/surface"
Next, we need to convert the seven command lines into strings, making use of the variables we just defined. We’ll also be able to call the DualSPHysics commands (such as gencase, dualsphysics, partvtk, etc.) directly, as they are all pre-installed on the machine we will spin up later.
commands = [
    f"gencase {name}_Def {dirout}/{name} -save:all",
    f"dualsphysics {dirout}/{name} {dirout} -dirdataout data -svres",
    f"partvtk -dirin {data_dirout} -savevtk {particles_dirout}/PartFloating -onlytype:-all,+floating",
    f"partvtk -dirin {data_dirout} -savevtk {particles_dirout}/PartFluid -onlytype:-all,+fluid",
    f"partvtkout -dirin {data_dirout} -savevtk {particles_dirout}/PartFluidOut -SaveResume {particles_dirout}/_ResumeFluidOut",
    f"boundaryvtk -loadvtk AutoActual -motiondata {data_dirout} -savevtkdata {boundary_dirout}/Turbine -onlytype:floating -savevtkdata {boundary_dirout}/Bound.vtk -onlytype:fixed",
    f"isosurface -dirin {data_dirout} -saveiso {surface_dirout}/Surface"
]
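Before submitting, you may want to run a quick local sanity check on the command list, for instance verifying that every tool that reads the simulation output points at the same data directory. This is just a convenience sketch, not part of the Inductiva API (the directory variables and a subset of the commands are repeated here so the snippet is self-contained):

```python
# Optional local sanity check (not part of the API): every command with
# a -dirin flag should read from the directory where dualsphysics wrote
# its binary output.
name = "CaseTurbine"
dirout = f"{name}_out"
data_dirout = f"{dirout}/data"

commands = [
    f"gencase {name}_Def {dirout}/{name} -save:all",
    f"dualsphysics {dirout}/{name} {dirout} -dirdataout data -svres",
    f"partvtk -dirin {data_dirout} -savevtk {dirout}/particles/PartFluid -onlytype:-all,+fluid",
    f"isosurface -dirin {data_dirout} -saveiso {dirout}/surface/Surface",
]

readers = [cmd for cmd in commands if "-dirin" in cmd]
assert all(f"-dirin {data_dirout}" in cmd for cmd in readers)
print(f"{len(readers)} post-processing command(s) read from {data_dirout}")
```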
The rest of the Python script follows the usual pattern. Here is the final resulting script, ready to be executed:
import inductiva

# Let's keep the original variables for the directory names, but we will
# make them more Pythonic, and we will add a few others for readability
name = "CaseTurbine"
dirout = f"{name}_out"
data_dirout = f"{dirout}/data"
particles_dirout = f"{dirout}/particles"
boundary_dirout = f"{dirout}/boundary"
surface_dirout = f"{dirout}/surface"

# Now we build the list of commands that we will pass to run().
# We basically map all the commands in the original shell script 1 to 1.
# The main changes are related to the new variables we defined above.
# Also, we call the commands (gencase, dualsphysics, partvtk, etc.) directly.
commands = [
    f"gencase {name}_Def {dirout}/{name} -save:all",
    f"dualsphysics {dirout}/{name} {dirout} -dirdataout data -svres",
    f"partvtk -dirin {data_dirout} -savevtk {particles_dirout}/PartFloating -onlytype:-all,+floating",
    f"partvtk -dirin {data_dirout} -savevtk {particles_dirout}/PartFluid -onlytype:-all,+fluid",
    f"partvtkout -dirin {data_dirout} -savevtk {particles_dirout}/PartFluidOut -SaveResume {particles_dirout}/_ResumeFluidOut",
    f"boundaryvtk -loadvtk AutoActual -motiondata {data_dirout} -savevtkdata {boundary_dirout}/Turbine -onlytype:floating -savevtkdata {boundary_dirout}/Bound.vtk -onlytype:fixed",
    f"isosurface -dirin {data_dirout} -saveiso {surface_dirout}/Surface"
]

# From this point on, we follow the typical structure of an Inductiva API script.
# We start a machine group. We will use 64 vCPUs.
machine_group = inductiva.resources.MachineGroup(
    machine_type="n2d-highcpu-64",
    spot=True,
    data_disk_gb=20)
machine_group.start()

# Initialize the Simulator
dualsphysics = inductiva.simulators.DualSPHysics()

# Run the simulation with the config files in the input directory.
# We still point to the original directory, where all the
# configuration files and required 3D models are.
task = dualsphysics.run(
    input_dir="examples/chrono/09_Turbine",
    commands=commands,
    on=machine_group)

# Let's wait for the task to conclude.
task.wait()

# Shut down the machine.
machine_group.terminate()

# Just see some stats for now. We will download the results later.
task.print_summary()
Now, we can run this script, and the output should look something like this:
■ Tier: Power-User
■ Credits: 1000.00 US$
■ Global User quotas
CURRENT USAGE MAX ALLOWED
Maximum simultaneous instances 0 instance 100 instance
Maximum price per hour across all instances 0 USD 270 USD
Maximum tasks per week 3 task N/A
Maximum number of VCPUs 0 vcpu 1000 vcpu
Maximum time a machine group can stay idle before termination N/A 120 minute
■ Instance User quotas
MAX ALLOWED
Maximum disk size 2000 GB
Maximum time a machine group can stay up before automatic termination 48 hour
Maximum amount of RAM per VCPU 6 GB
■ Registering MachineGroup configurations:
· Name: api-57yp64vdyz50a8fx0ttuz2n90
· Machine Type: n2d-highcpu-64
· Data disk size: 20 GB
· Maximum idle time: 30 minutes
· Auto terminate timestamp: 2024/08/20 20:01:22
· Number of machines: 1
· Spot: True
· Estimated cloud cost of machine group: 0.670 $/h
· You are spending 3.3x less by using spot machines.
Starting MachineGroup(name="api-57yp64vdyz50a8fx0ttuz2n90"). This may take a few minutes.
Note that stopping this local process will not interrupt the creation of the machine group. Please wait...
Machine Group api-57yp64vdyz50a8fx0ttuz2n90 with n2d-highcpu-64 machines successfully started in 0:00:29.
The machine group is using the following quotas:
USED BY RESOURCE CURRENT USAGE MAX ALLOWED
Maximum number of VCPUs 64 64 1000
Maximum simultaneous instances 1 1 100
Maximum price per hour across all instances 0.6727 0.6727 270
■ Using production image of DualSPHysics version 5.2.1
■ Task Information:
· ID: u8v7p1v7wfyvvkyc0iq0s632k
· Simulator: DualSPHysics
· Version: 5.2.1
· Image: docker://inductiva/kutu:dualsphysics_v5.2.1
· Local input directory: examples/chrono/09_Turbine
· Submitting to the following computational resources:
· Machine Group api-57yp64vdyz50a8fx0ttuz2n90 with n2d-highcpu-64 machines
Preparing upload of the local input directory examples/chrono/09_Turbine (2.25 MB).
Input archive size: 1.09 MB
Uploading input archive...
100%|██████████████████████████████████████████████████████████████████████████████| 1.09M/1.09M [00:01<00:00, 715kB/s]
Local input directory successfully uploaded.
■ Task u8v7p1v7wfyvvkyc0iq0s632k submitted to the queue of the Machine Group api-57yp64vdyz50a8fx0ttuz2n90 with n2d-highcpu-64 machines.
Number of tasks ahead in the queue: 0
· Consider tracking the status of the task via CLI:
inductiva tasks list --id u8v7p1v7wfyvvkyc0iq0s632k
· Or, tracking the logs of the task via CLI:
inductiva logs u8v7p1v7wfyvvkyc0iq0s632k
· You can also get more information about the task via the CLI command:
inductiva tasks info u8v7p1v7wfyvvkyc0iq0s632k
Waiting for task u8v7p1v7wfyvvkyc0iq0s632k to complete...
Go to https://console.inductiva.ai/tasks/u8v7p1v7wfyvvkyc0iq0s632k for more details.
■ Task u8v7p1v7wfyvvkyc0iq0s632k successfully queued and waiting to be picked-up for execution...
The task u8v7p1v7wfyvvkyc0iq0s632k is about to start.
■ Task u8v7p1v7wfyvvkyc0iq0s632k has started and is now running remotely.
■ Task u8v7p1v7wfyvvkyc0iq0s632k completed successfully.
Downloading stdout and stderr files to u8v7p1v7wfyvvkyc0iq0s632k...
Partial download completed to u8v7p1v7wfyvvkyc0iq0s632k.
Successfully requested termination of MachineGroup(name="api-57yp64vdyz50a8fx0ttuz2n90").
Termination of the machine group freed the following quotas:
FREED BY RESOURCE CURRENT USAGE MAX ALLOWED
Maximum number of VCPUs 64 0 1000
Maximum simultaneous instances 1 0 100
Maximum price per hour across all instances 0.6727 0 270
Task status: success
Wall clock time: 0:09:35
Time breakdown:
Input upload: 1.81 s
Time in queue: 10.71 s
Container image download: 1.23 s
Input download: 0.09 s
Input decompression: 0.01 s
Computation: 0:06:20
Output upload: 0:03:01
Data:
Size of zipped output: 3.52 GB
Size of unzipped output: 5.35 GB
Number of output files: 2541
That’s it!
We can now download the results to our local machine using Inductiva’s CLI:
inductiva tasks download u8v7p1v7wfyvvkyc0iq0s632k
Downloading and decompressing data will take a few minutes (depending on your network connection):
Downloading simulation outputs to inductiva_output/u8v7p1v7wfyvvkyc0iq0s632k/output.zip...
100%|█████████████████████████████████████████████████████████████████████████████| 3.52G/3.52G [04:43<00:00, 12.4MB/s]
Uncompressing the outputs to u8v7p1v7wfyvvkyc0iq0s632k...
As usual, the results are placed in the inductiva_output folder, within a subfolder named after the task. Earlier, we set a variable for the internal directory where all outputs would be placed (dirout), which was instantiated as CaseTurbine_out. Let’s check its contents:
ls -las inductiva_output/u8v7p1v7wfyvvkyc0iq0s632k/CaseTurbine_out
total 36080
0 drwxr-xr-x 22 lsarmento staff 704 19 Aug 12:09 .
0 drwxr-xr-x 14 lsarmento staff 448 19 Aug 12:09 ..
9888 -rw-r--r-- 1 lsarmento staff 5058947 19 Aug 12:07 CaseTurbine.bi4
16 -rw-r--r-- 1 lsarmento staff 4523 19 Aug 12:07 CaseTurbine.out
24 -rw-r--r-- 1 lsarmento staff 10830 19 Aug 12:07 CaseTurbine.xml
9432 -rw-r--r-- 1 lsarmento staff 4827935 19 Aug 12:07 CaseTurbine_All.vtk
1280 -rw-r--r-- 1 lsarmento staff 653967 19 Aug 12:07 CaseTurbine_Bound.vtk
8160 -rw-r--r-- 1 lsarmento staff 4174240 19 Aug 12:07 CaseTurbine_Fluid.vtk
752 -rw-r--r-- 1 lsarmento staff 382901 19 Aug 12:07 CaseTurbine_MkCells.vtk
2488 -rw-r--r-- 1 lsarmento staff 1272363 19 Aug 12:07 CaseTurbine__Actual.vtk
8 -rw-r--r-- 1 lsarmento staff 583 19 Aug 12:07 CaseTurbine_dbg-fillbox.vtk
8 -rw-r--r-- 1 lsarmento staff 1947 19 Aug 12:07 CfgChrono_Scheme.vtk
8 -rw-r--r-- 1 lsarmento staff 854 19 Aug 12:07 CfgInit_Domain.vtk
16 -rw-r--r-- 1 lsarmento staff 4155 19 Aug 12:07 CfgInit_MapCells.vtk
8 -rw-r--r-- 1 lsarmento staff 2415 19 Aug 12:07 Floating_Materials.xml
3880 -rw-r--r-- 1 lsarmento staff 1985884 19 Aug 12:07 Rotor.stl
8 -rw-r--r-- 1 lsarmento staff 909 19 Aug 12:07 Run.csv
104 -rw-r--r-- 1 lsarmento staff 49764 19 Aug 12:07 Run.out
0 drwxr-xr-x 504 lsarmento staff 16128 19 Aug 12:09 boundary
0 drwxr-xr-x 508 lsarmento staff 16256 19 Aug 12:08 data
0 drwxr-xr-x 1006 lsarmento staff 32192 19 Aug 12:08 particles
0 drwxr-xr-x 503 lsarmento staff 16096 19 Aug 12:09 surface
Data for visualization is placed inside the directories boundary, particles, and surface. This data can be loaded in ParaView and rendered into a movie like the one seen above.