{"id":5265,"date":"2021-03-16T12:07:48","date_gmt":"2021-03-16T12:07:48","guid":{"rendered":"http:\/\/ri.itservices.manchester.ac.uk\/csf3\/?page_id=5265"},"modified":"2021-03-16T15:38:30","modified_gmt":"2021-03-16T15:38:30","slug":"flame","status":"publish","type":"page","link":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/software\/applications\/flame\/","title":{"rendered":"Flame"},"content":{"rendered":"<h2>Overview<\/h2>\n<p><a href=\"http:\/\/flame.ac.uk\">FLAME<\/a> is a flexible and generic agent-based modelling platform that can be used to develop models and simulations for complex-system applications in areas such as economics, biology and the social sciences. It generates a complete agent-based application that can be compiled and deployed on many computing systems, ranging from laptops to distributed high-performance supercomputing environments.<\/p>\n<p>Flame consists of an executable app named xparser and a library named libmboard. The xparser app is used to generate C source code from your model description. Your code is then compiled and linked against the libmboard library.<\/p>\n<p>Version 0.17.1 of xparser (a component of flame) is installed on the CSF; this is the number we use as the flame version.<br \/>\nVersion 0.3.1 of libmboard (the other component of flame) is installed. Serial and parallel (MPI) versions of libmboard are available.<\/p>\n<h2>Restrictions on use<\/h2>\n<p>There are no restrictions on accessing flame on the CSF.<\/p>\n<h2>Set up procedure<\/h2>\n<p>We now recommend loading modulefiles within your jobscript so that you have a full record of how the job was run. See the example jobscript below for how to do this. 
Alternatively, you may load modulefiles on the login node and let the job <abbr title=\"add '#$ -V' to your jobscript\">inherit these settings<\/abbr>.<\/p>\n<p>Load <em>one<\/em> of the following modulefiles:<\/p>\n<pre>\r\n# The flame modulefile will set up xparser and libmboard\r\n\r\nmodule load apps\/gcc\/flame\/0.17.1           # Serial version of libmboard\r\nmodule load apps\/gcc\/flame\/0.17.1-mpi       # Serial and parallel versions of libmboard\r\n\r\n<\/pre>\n<h2>Running the application<\/h2>\n<p>Please do not run flame simulations on the login node. You <em>may<\/em> run the xparser command and gcc compiler to generate your simulation executable. Jobs should be submitted to the compute nodes via batch.<\/p>\n<h3>Compiling your model<\/h3>\n<p>The following commands show how to compile a model before running it as a batch job (see below). We use one of the FLAME tutorial models that have already been downloaded. You can run the following commands on the login node:<\/p>\n<pre>\r\n# Work in scratch\r\nmkdir ~\/scratch\/flame\r\ncd ~\/scratch\/flame\r\n\r\n# Load the MPI modulefile that supports serial and parallel usage\r\nmodule load apps\/gcc\/flame\/0.17.1-mpi\r\n\r\n# Copy the already-downloaded tutorials to your current directory\r\ncp -r $FLAME_TUTORIALS .\r\ncd tutorial_models\/model_01\/\r\n\r\n# Generate the serial (1-core) final production (non-debug) source code, then compile (make)\r\nxparser -s -f model_01.xml\r\nmake\r\n\r\n# To compile the parallel version you would use\r\nxparser -p -f model_01.xml\r\nmake\r\n\r\n# You will now have an executable application named 'main'\r\nls -l main\r\n<\/pre>\n<p>You can now run the <code>main<\/code> application in a CSF job &#8211; please see below.<\/p>\n<h3>Serial batch job submission<\/h3>\n<p>The following examples assume you have compiled your FLAME model and therefore have an executable named <code>main<\/code> in the current directory.<\/p>\n<p>Create a batch submission script (which will load 
the modulefile in the jobscript), for example:<\/p>\n<pre>\r\n#!\/bin\/bash --login\r\n#$ -cwd             # Job will run from the current directory\r\n                    # NO -V line - we load modulefiles in the jobscript\r\n\r\n# This is the serial-only version of flame\r\nmodule load apps\/gcc\/flame\/0.17.1\r\n\r\n# Here we run 100 iterations with an initial state described in 0.xml\r\n# using geometric (-g) rather than round-robin (-r) partitioning and we\r\n# output every 10 iterations\r\n.\/main 100 0.xml -g -f 10\r\n<\/pre>\n<p>Submit the jobscript using: <\/p>\n<pre>qsub <em>scriptname<\/em><\/pre>\n<p>where <em>scriptname<\/em> is the name of your jobscript.<\/p>\n<h3>Single-node parallel batch job submission<\/h3>\n<p>You must have generated the parallel version of the model using <code>xparser -p<\/code> (see earlier).<\/p>\n<p>Create a batch submission script (which will load the modulefile in the jobscript), for example:<\/p>\n<pre>\r\n#!\/bin\/bash --login\r\n#$ -pe smp.pe 8     # 8 cores (can be 2--32) is also the number of FLAME partitions\r\n#$ -cwd             # Job will run from the current directory\r\n                    # NO -V line - we load modulefiles in the jobscript\r\n\r\n# This is the serial and parallel version of flame\r\nmodule load apps\/gcc\/flame\/0.17.1-mpi\r\n\r\n# Here we run 100 iterations with an initial state described in 0.xml\r\n# using geometric (-g) rather than round-robin (-r) partitioning and we\r\n# output every 10 iterations.\r\n# We will use $NSLOTS partitions, where $NSLOTS is automatically set to the number of\r\n# cores requested above. 
You must use $NSLOTS cores, no more!!\r\nmpirun -np $NSLOTS .\/main 100 0.xml -g -f 10\r\n<\/pre>\n<p>Submit the jobscript using: <\/p>\n<pre>qsub <em>scriptname<\/em><\/pre>\n<p>where <em>scriptname<\/em> is the name of your jobscript.<\/p>\n<h3>Multi-node parallel batch job submission<\/h3>\n<p>You must have generated the parallel version of the model using <code>xparser -p<\/code> (see earlier).<\/p>\n<p>Create a batch submission script (which will load the modulefile in the jobscript), for example:<\/p>\n<pre>\r\n#!\/bin\/bash --login\r\n#$ -pe mpi-24-ib.pe 48     # 48 cores (48 or more in multiples of 24) - the number of FLAME partitions\r\n#$ -cwd                    # Job will run from the current directory\r\n                           # NO -V line - we load modulefiles in the jobscript\r\n\r\n# This is the serial and parallel version of flame\r\nmodule load apps\/gcc\/flame\/0.17.1-mpi\r\n\r\n# Here we run 100 iterations with an initial state described in 0.xml\r\n# using geometric (-g) rather than round-robin (-r) partitioning and we\r\n# output every 10 iterations.\r\n# We will use $NSLOTS partitions, where $NSLOTS is automatically set to the number of\r\n# cores requested above. You must use $NSLOTS cores, no more!!\r\nmpirun -np $NSLOTS .\/main 100 0.xml -g -f 10\r\n<\/pre>\n<p>Submit the jobscript using: <\/p>\n<pre>qsub <em>scriptname<\/em><\/pre>\n<p>where <em>scriptname<\/em> is the name of your jobscript.<\/p>\n<h2>Further info<\/h2>\n<ul>\n<li><a href=\"http:\/\/flame.ac.uk\">FLAME website<\/a><\/li>\n<\/ul>\n<h2>Updates<\/h2>\n<p>None.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Overview FLAME is a flexible and generic agent-based modelling platform that can be used to develop models and simulations for complex system applications in many areas such as economics, biology and social sciences, to name a few. 
It generates a complete agent-based application that can be compiled and deployed on many computing systems ranging from laptops to distributed high-performance supercomputing environments. Flame consists of an executable app named xparser and a library named.. <a href=\"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/software\/applications\/flame\/\">Read more &raquo;<\/a><\/p>\n","protected":false},"author":2,"featured_media":0,"parent":86,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"footnotes":""},"class_list":["post-5265","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/pages\/5265","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/comments?post=5265"}],"version-history":[{"count":7,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/pages\/5265\/revisions"}],"predecessor-version":[{"id":5275,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/pages\/5265\/revisions\/5275"}],"up":[{"embeddable":true,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/pages\/86"}],"wp:attachment":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/media?parent=5265"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}