1. Running Without a Graphical User Interface

It is possible to run the program without opening the graphical user interface in order to perform certain tasks. Most notably, this is useful for running analyses on workstations or High Performance Computing (HPC) clusters, which might not have any graphical user interface. It can also be used to run part of an automated workflow administered by scripts or other automation software.

1.1. SIMA Runtime Engine

SIMA Runtime Engine (SRE) is a separate executable located in the installation folder. SRE is capable of running certain parts of the program without loading the graphical user interface. Instead, the user gives SRE command line arguments that specify what should be done.

Note: In the following, the SIMA Runtime Engine executable is referred to as sre.exe, which is the name of the executable in the Windows versions of SIMA. If you are using SIMA on Linux, the name of the executable is sre. Apart from that, the commands shown here are the same on Windows and Linux.

Generally, the format of an SRE command is as follows:

sre.exe -data [workspace folder] --[command] [command arguments]

where:

[workspace folder] is the location of the workspace folder to run the command in. The folder will be used for files created while executing the command. If the folder already contains a SIMA workspace, any tasks present will be loaded before executing the command.

[command] is the command to be executed.

[command arguments] are arguments specific to the chosen command.

To get information about the available commands, open a terminal and execute the command:

sre.exe --help all

1.2. Command Creation/Export Wizard

You can create commands in the SIMA UI using the main menu Tools → Advanced → Run/Export batch command. Right-click the navigation viewer to create new commands. Run a single command, or all commands, to check that they work before exporting them to a command file.
The command file can then be run using:

sre.exe -data [workspace folder] --commands file=[path to file]

1.3. How to Run Simulations Using SRE

This section shows how to set up and run a workflow with a single simulation and some post-processing using SRE. This setup can be useful for distributing a large number of simulations on a workstation or HPC cluster.

To load the example, go to Help → Examples → Workflow → Run Simulation from the Command Line. The example workflow uses the SIMO semi-submersible example model, runs simulations for a specified sea state, wave seed and wind velocity, and does some simple post-processing of the results. The final results are stored in an HDF5 file in the workspace directory, which can be read back into SIMA or another application for further result processing.

Note: The simulation length, the number of variables and the amount of post-processing performed are all limited in order to focus on running SRE from the command line. Real analyses will typically run longer simulations, vary more variables and do more result processing.

The workflow single_simulation in the workflow task WorkflowTask performs the following steps:

1. Takes inputs for Hs, Tp, windSpeed and seed
2. Runs a simulation with the given variable values
3. Runs the process_results workflow in order to compute the maximum floater offset
4. Stores the result in an HDF5 file for further processing of results from multiple simulations

To run a simulation from the command line:

1. Export the workflow and SIMO model to a stask file (right-click the workflow task and choose Export)
2. Open a terminal in the folder where you want to run the simulation
3. Execute the following command (note the quotation marks around the input values):

[path to SIMA installation]/sre.exe -data . --run file=[path to stask] task=WorkflowTask workflow=single_simulation input="Hs=5.0;Tp=12.0;windSpeed=7.0;seed=1"

After the command has completed, there should be a file results.h5 in the current working directory.

This can be expanded upon to do a seed variation for a given sea state and wind velocity:

sre.exe -data seed_1 --run file=model.stask task=WorkflowTask workflow=single_simulation input="Hs=5.0;Tp=12.0;windSpeed=7.0;seed=1"
sre.exe -data seed_2 --run file=model.stask task=WorkflowTask workflow=single_simulation input="Hs=5.0;Tp=12.0;windSpeed=7.0;seed=2"
sre.exe -data seed_3 --run file=model.stask task=WorkflowTask workflow=single_simulation input="Hs=5.0;Tp=12.0;windSpeed=7.0;seed=3"
sre.exe -data seed_4 --run file=model.stask task=WorkflowTask workflow=single_simulation input="Hs=5.0;Tp=12.0;windSpeed=7.0;seed=4"
sre.exe -data seed_5 --run file=model.stask task=WorkflowTask workflow=single_simulation input="Hs=5.0;Tp=12.0;windSpeed=7.0;seed=5"
sre.exe -data seed_6 --run file=model.stask task=WorkflowTask workflow=single_simulation input="Hs=5.0;Tp=12.0;windSpeed=7.0;seed=6"
sre.exe -data seed_7 --run file=model.stask task=WorkflowTask workflow=single_simulation input="Hs=5.0;Tp=12.0;windSpeed=7.0;seed=7"
sre.exe -data seed_8 --run file=model.stask task=WorkflowTask workflow=single_simulation input="Hs=5.0;Tp=12.0;windSpeed=7.0;seed=8"
sre.exe -data seed_9 --run file=model.stask task=WorkflowTask workflow=single_simulation input="Hs=5.0;Tp=12.0;windSpeed=7.0;seed=9"
sre.exe -data seed_10 --run file=model.stask task=WorkflowTask workflow=single_simulation input="Hs=5.0;Tp=12.0;windSpeed=7.0;seed=10"

The result files in each workspace directory seed_[value] can then be loaded into SIMA to do Gumbel fitting or any other result aggregation for the given environmental condition. The process_seed_variation workflow in the example shows how this could be done.
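Rather than typing the ten commands by hand, a script can generate and run them. The following is a minimal Python sketch of this idea; it assumes that sre.exe is on the PATH and that model.stask is in the current working directory (adjust both to your setup):

```python
import subprocess

def seed_command(seed, hs=5.0, tp=12.0, wind_speed=7.0):
    """Build the SRE command line for one seed of the variation above."""
    inputs = f"Hs={hs};Tp={tp};windSpeed={wind_speed};seed={seed}"
    return [
        "sre.exe", "-data", f"seed_{seed}",
        "--run", "file=model.stask", "task=WorkflowTask",
        "workflow=single_simulation", f"input={inputs}",
    ]

if __name__ == "__main__":
    # Run the ten seed variations one after another.
    for seed in range(1, 11):
        subprocess.run(seed_command(seed), check=True)
```

Note that the quotation marks around the input values are not needed here: they are shell quoting, and subprocess passes each list element to SRE as a single argument.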
Executing the commands directly as shown above runs the simulations in sequence, which is rarely ideal since most computers have multiple CPU cores available. When running simulations in a SIMA workflow set or condition set, the distribution of simulations is handled by SIMA. In this example we have chosen to ask SIMA to run a single simulation, so the scheduling of multiple simulations must be handled by the user. This can for example be done using a Python script. On an HPC cluster this is typically handled by separate job scheduling software; multiple options exist, and the particular tool used varies from cluster to cluster.

1.3.1. Considerations when Running on an HPC Cluster

HPC clusters typically use a job scheduler to run jobs on multiple compute nodes. When running on such systems, some special considerations need to be taken into account:

Most job schedulers rely on the user to specify a job script that executes the commands required to perform the job. This script should execute SRE with the appropriate command line arguments.

The stask file with the model and workflow needs to be located somewhere accessible from the nodes running the simulations. There are typically shared network drives available for this purpose.

The result file must be stored somewhere it can be retrieved after the job has completed. One option is to copy result files to the network drive as part of the job. In the example setup, the input variable output_dir can be used to provide a path to the location where SIMA should store the file.

Running a single SRE command as outlined above in an HPC job executes a single simulation, which utilizes approximately one CPU core. If the job scheduling system allows multiple jobs to run on the same node, this should be enabled. If not, job scripts should be set up to run multiple simulations in parallel in order to utilize the resources available on the node.