{"id":10457,"date":"2025-06-25T17:04:53","date_gmt":"2025-06-25T16:04:53","guid":{"rendered":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/?page_id=10457"},"modified":"2026-03-06T16:37:13","modified_gmt":"2026-03-06T16:37:13","slug":"carla","status":"publish","type":"page","link":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/software\/applications\/carla\/","title":{"rendered":"CARLA"},"content":{"rendered":"<h2>Overview<\/h2>\n<p><strong>CARLA<\/strong> has been developed from the ground up to support development, training, and validation of autonomous driving systems. In addition to open-source code and protocols, CARLA provides open digital assets (urban layouts, buildings, vehicles) that were created for this purpose and can be used freely. The simulation platform supports flexible specification of sensor suites, environmental conditions, full control of all static and dynamic actors, maps generation and much more.<\/p>\n<h2>Restrictions on use<\/h2>\n<p>Though CARLA is open source and freely distributed under MIT License, the license of <a href=\"https:\/\/unrealengine.com\/\" target=\"_blank\" rel=\"noopener\">Unreal Engine<\/a> that CARLA uses is <em>Source-available commercial software with royalty model for commercial use.<\/em><\/p>\n<p><strong>CARLA and Unreal Engine in CSF3 is to be used for academic and research purposes only. 
They must not be used for commercial purposes.<\/strong><\/p>\n<p>Further license information is available at the following links:<\/p>\n<ul>\n<li><a href=\"https:\/\/raw.githubusercontent.com\/carla-simulator\/carla\/refs\/heads\/ue5-dev\/LICENSE\" target=\"_blank\" rel=\"noopener\">CARLA License<\/a><\/li>\n<li><a href=\"https:\/\/www.unrealengine.com\/en-US\/license\" target=\"_blank\" rel=\"noopener\">Unreal Engine license terms<\/a><\/li>\n<li><a href=\"https:\/\/www.unrealengine.com\/en-US\/eula\/unreal\" target=\"_blank\" rel=\"noopener\">Unreal Engine EULA<\/a><\/li>\n<\/ul>\n<p>To get access to CARLA on the CSF3, you must request it via our <a href=\"https:\/\/ri.itservices.manchester.ac.uk\/hpc-help\">help form<\/a> and confirm that your usage will comply with the terms and licenses above. Please note that we may ask you for further information to support your request.<\/p>\n<h2>Set up procedure<\/h2>\n<div class=\"note\">Once you have been informed that your access to CARLA has been granted, you will have to log in to CSF3 again &#8211; any existing logins will not see the <code>carla<\/code> group membership.<\/div>\n<p>To access the software you must load ONE of the modulefiles:<\/p>\n<pre>\r\n<strong>module load apps\/python\/carla\/0.10.0<\/strong>       #CARLA v0.10.0 with Unreal Engine 5\r\n<strong>module load apps\/python\/carla\/2022<\/strong>         #CARLA Transfuser 2022-tree (v0.9.10.1) with Unreal Engine 4\r\n<\/pre>\n<h2>Running the application &#8211; CARLA v0.10.0 with Unreal Engine 5<\/h2>\n<p>Please do NOT run CARLA on the login nodes. CARLA requires a lot of resources and can overload the login nodes.<br \/>\nJobs should be submitted to the compute nodes via batch (<a href=\"\/csf3\/software\/applications\/carla\/#Batch_job_submission\">example below<\/a>).<\/p>\n<p>Normally, CARLA should be run via batch. 
However, you may occasionally need to visualise the scenes you are working on; for this, please use an <a href=\"\/csf3\/batch-slurm\/gpu-jobs-slurm\/#Interactive_Jobs\" target=\"_blank\" rel=\"noopener\">interactive job<\/a> (<a href=\"\/csf3\/software\/applications\/carla\/#Interactive_job_example\">details below<\/a>).<\/p>\n<h3>Interactive job example<\/h3>\n<p>The steps for running CARLA in graphical mode in an interactive session are:<\/p>\n<ol>\n<li>Start an interactive session.<\/li>\n<li>Load the CARLA module.<\/li>\n<li>Launch CARLA.<\/li>\n<\/ol>\n<p>The commands needed to accomplish this are:<\/p>\n<pre class=\"slurm\"># Start an interactive session with 1 V100 GPU, 1 CPU core for 1 hour\r\n# This should land you in an interactive node if resources are available.\r\n# If you do not get an interactive session, try again later\r\n<strong>srun-x11 -p gpuV -G 1 -n 1 -t 0-1 --pty bash<\/strong>\r\n\r\n# Once you have landed in an interactive node load the CARLA module\r\n<strong>module purge\r\nmodule load apps\/python\/carla\/0.10.0<\/strong>\r\n\r\n# Launch CARLA interactive GUI\r\n<strong>CarlaUnreal.sh -nosound<\/strong>      #or just: <strong>carla -nosound<\/strong>\r\n<\/pre>\n<div class=\"note\">If CARLA crashes with the error message:<br \/>\n<span style=\"color: #cc0000;\">libc++abi: terminating due to uncaught exception of type std::__1::system_error: bind: <strong>Address already in use<\/strong><\/span><br \/>\nsee the tip on <a href=\"\/csf3\/software\/applications\/carla\/#Ports_used_by_CARLA_server_process\">how to change the port number<\/a> below to overcome this error.<\/div>\n<p>This should open a new window with the default scene. Please note that it will take some time to load for the first time. Initially the window will be all dark and it will seem as if nothing is happening. Just give it some time. 
After a few seconds the default scene should come up in the window.<\/p>\n<p>Please note that the frame rate will be low since the rendering is happening remotely over the network, rather than on a monitor directly connected to the hardware where CARLA is running.<br \/>\n\ud83d\udcdd See the tip on <a href=\"\/csf3\/software\/applications\/carla\/#Controlling_the_resolution\">controlling the resolution<\/a> below for a slightly better frame rate.<\/p>\n<p>At this stage you will be able to navigate the scene with the keys <strong>Q\/E\/W\/S\/A\/D<\/strong>.<\/p>\n<ul>\n<li>Q &#8211; move upwards (towards the top edge of the window)<\/li>\n<li>E &#8211; move downwards (towards the lower edge of the window)<\/li>\n<li>W &#8211; move forwards<\/li>\n<li>S &#8211; move backwards<\/li>\n<li>A &#8211; move left<\/li>\n<li>D &#8211; move right<\/li>\n<\/ul>\n<p>\ud83d\udcdd See the <a href=\"#api-carla1.10.0\">Manipulation using API<\/a> section below for interacting with this CARLA server instance.<\/p>\n<h3>Batch job submission<\/h3>\n<p>Create a batch submission script like the following and submit a self-contained job to the batch system.<\/p>\n<pre class=\"slurm\">#!\/bin\/bash --login\r\n#SBATCH -p gpuV               # V100 GPU. Other available types: gpuA (A100), gpuL (L40)\r\n#SBATCH -G 1                  # 1 GPU\r\n#SBATCH -t 1-0                # Wallclock limit (1-0 is 1-day &amp; 0-hour, 4-0 is max permitted)\r\n#SBATCH -n 8                  # Select the no. 
of CPU cores\r\n                              # Can use up to  8 CPUs with a V100 GPU.\r\n                              # Can use up to 12 CPUs with an A100 GPU.\r\n                              # Can use up to 12 CPUs with an L40s GPU.\r\n#SBATCH -J carla\t      # Jobname\r\n#SBATCH -o %x.o%j\t      # %x = SLURM_JOB_NAME\r\n#SBATCH -e %x.e%j\t      # %j = SLURM_JOB_ID\r\n\r\n# Load the module\r\nmodule purge\r\nmodule load apps\/python\/carla\/0.10.0\r\n\r\n# Find free port to run CARLA server process\r\nexport FREEPORT=$(find-freeport)\r\n\r\n# Optional: Save the port number to a text file, for use by your Python code later.\r\necho $FREEPORT &gt; FREEPORT.txt\r\n\r\n# Launch CARLA server (notice the &amp; at the end of this line)\r\nCarlaUnreal.sh -carla-port=$FREEPORT -RenderOffScreen -nosound &amp;\r\n\r\n# Run your Python script\r\npython myscript.py\r\n         #\r\n         # If you need to know the port number in your Python code, you can\r\n         # read the FREEPORT.txt file (see above), or read the environment variable:\r\n         #   import os;\r\n         #   portnum=os.getenv(\"FREEPORT\")\r\n         # Use 127.0.0.1 for the hostname.\r\n<\/pre>\n<p>Submit the jobscript using:<\/p>\n<pre>sbatch <em>scriptname<\/em><\/pre>\n<p>where <em>scriptname<\/em> is the name of your jobscript.<\/p>\n<h3>Manipulating the CARLA server process remotely from the login node through its Python API<\/h3>\n<p>You can manipulate the CARLA server process running on a compute node through its Python API from the login node itself.<br \/>\nFor this, first create a batch submission script like the following and submit it to the batch system.<\/p>\n<pre class=\"slurm\">#!\/bin\/bash --login\r\n#SBATCH -p gpuV               # V100 GPU. Other available types: gpuA (A100), gpuL (L40)\r\n#SBATCH -G 1                  # 1 GPU\r\n#SBATCH -t 1-0                # Wallclock limit (1-0 is 1-day &amp; 0-hour, 4-0 is max permitted)\r\n#SBATCH -n 8                  # Select the no. 
of CPU cores\r\n                              # Can use up to  8 CPUs with a V100 GPU.\r\n                              # Can use up to 12 CPUs with an A100 GPU.\r\n                              # Can use up to 12 CPUs with an L40s GPU.\r\n#SBATCH -J carla\t      # Jobname\r\n#SBATCH -o %x.o%j\t      # %x = SLURM_JOB_NAME\r\n#SBATCH -e %x.e%j\t      # %j = SLURM_JOB_ID\r\n\r\n# Load the module\r\nmodule purge\r\nmodule load apps\/python\/carla\/0.10.0\r\n\r\n# Find free port to run CARLA server process\r\nexport FREEPORT=$(find-freeport)\r\n\r\necho \"Your CARLA job $SLURM_JOB_ID is running in host: $HOSTNAME\"\r\necho \"CARLA is available in port number:         $FREEPORT\"\r\necho \"Job is using $SLURM_GPUS GPU(s) with ID(s) $CUDA_VISIBLE_DEVICES and $SLURM_NTASKS CPU core(s)\"\r\n\r\nCarlaUnreal.sh -carla-port=$FREEPORT -RenderOffScreen -nosound\r\n\r\n# Comment out the line above and uncomment the line below if you also want to watch the scene in real time\r\n#CarlaUnreal.sh -carla-port=$FREEPORT -nosound\r\n<\/pre>\n<p>Submit the jobscript using:<\/p>\n<pre>sbatch <em>scriptname<\/em><\/pre>\n<p>where <em>scriptname<\/em> is the name of your jobscript.<\/p>\n<p><a id=\"api-carla1.10.0\"><\/a><br \/>\nWait for the job to start. 
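As a convenience, the hostname and port echoed by the jobscript above can be picked out of the job output file programmatically rather than copied by eye. The following is an illustrative sketch only: the function name and the sample text are hypothetical, but the line formats match the echo commands in the jobscript.

```python
import re

def parse_carla_job_output(text):
    """Extract (hostname, port) from the first lines of the job output.

    Assumes the two echo lines produced by the jobscript above, e.g.
      Your CARLA job 12345 is running in host: node804.csf3.man.alces.network
      CARLA is available in port number:         15000
    """
    host = re.search(r"running in host:\s*(\S+)", text).group(1)
    port = int(re.search(r"port number:\s*(\d+)", text).group(1))
    return host, port

# Hypothetical sample of the first two output lines of a job:
sample = (
    "Your CARLA job 12345 is running in host: node804.csf3.man.alces.network\n"
    "CARLA is available in port number:         15000\n"
)
host, port = parse_carla_job_output(sample)
print(host, port)  # node804.csf3.man.alces.network 15000
```

The extracted values can then be passed straight to `carla.Client(host, port)` when connecting from the login node.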
Monitor your queue using the command: <a href=\"\/csf3\/batch-slurm\/s-commands\/#Job_Status\" target=\"_blank\" rel=\"noopener\"><strong>squeue<\/strong><\/a>.<\/p>\n<p>Once the job has started, you can manipulate the CARLA server process running on the compute node using its Python API from the login node itself.<br \/>\nFor this you will need the <strong><code>hostname<\/code><\/strong> where the job is running and the <strong><code>port number<\/code><\/strong>.<br \/>\nRun the following command to view the hostname and the port number after the job has started running:<\/p>\n<pre><strong>head -n2 &lt;jobname&gt;.o&lt;jobid&gt; <\/strong>    # Replace the <strong>&lt;jobname&gt;<\/strong> and <strong>&lt;jobid&gt;<\/strong> with the real ones.\r\n                                # You need to set the #SBATCH -J, -o and -e options \r\n                                # in your jobscript as shown above.\r\n<\/pre>\n<p>Note down the <code>hostname<\/code> where the job is running and the <code>port number<\/code>.<\/p>\n<p>Next run the following commands on the login node to run Python and manipulate the CARLA server running on a compute node via its API:<\/p>\n<pre><strong>module purge\r\nmodule load apps\/python\/carla\/0.10.0\r\n\r\npython<\/strong>\r\n&gt;&gt;&gt;<strong>import carla<\/strong>\r\n&gt;&gt;&gt;<strong>client = carla.Client('&lt;hostname&gt;', &lt;port_number&gt;)<\/strong> \r\n# Replace &lt;hostname&gt; and &lt;port_number&gt; with the real hostname and port number obtained in the previous step\r\n# E.g.: <span style=\"color: #660066;\">client = carla.Client('node804.csf3.man.alces.network', 2000)<\/span>\r\n\r\n# You can then do things like:\r\n&gt;&gt;&gt;<strong>world = client.get_world()<\/strong>\r\n&gt;&gt;&gt;<strong>print(world)<\/strong>\r\n<span style=\"color: #660066;\"><strong>World(id=11172095543979033550)<\/strong><\/span>\r\n\r\n&gt;&gt;&gt;<strong>level = 
world.get_map()<\/strong>\r\n&gt;&gt;&gt;<strong>print(level)<\/strong>\r\n<span style=\"color: #660066;\"><strong>Map(name=Carla\/Maps\/Town10HD_Opt)<\/strong><\/span>\r\n\r\n&gt;&gt;&gt;<strong>print(client.get_available_maps())<\/strong>\r\n<span style=\"color: #660066;\"><strong>['\/Game\/Carla\/Maps\/Mine_01', '\/Game\/Carla\/Maps\/Town10HD_Opt']<\/strong><\/span>\r\n\r\n&gt;&gt;&gt;<strong>world = client.load_world('\/Game\/Carla\/Maps\/Mine_01')<\/strong>\r\n&gt;&gt;&gt;<strong>level = world.get_map()<\/strong>\r\n&gt;&gt;&gt;<strong>print(level)<\/strong>\r\n<span style=\"color: #660066;\"><strong>Map(name=Carla\/Maps\/Mine_01)<\/strong><\/span>\r\n<\/pre>\n<p><!-- ------CARLA-2022--Starts------------------------------------------------------------------------------------------------------------------------------ --><\/p>\n<h2>Running the application &#8211; CARLA Transfuser 2022-tree (v0.9.10.1) with Unreal Engine 4<\/h2>\n<p>Please do NOT run CARLA on the login nodes. CARLA requires a lot of resources and can overload the login nodes.<br \/>\nJobs should be submitted to the compute nodes via batch (<a href=\"\/csf3\/software\/applications\/carla\/#Batch_job_submission\">example below<\/a>).<\/p>\n<p>Normally, CARLA should be run via batch. 
However, you may occasionally need to visualise the scenes you are working on; for this, please use an <a href=\"\/csf3\/batch-slurm\/gpu-jobs-slurm\/#Interactive_Jobs\" target=\"_blank\" rel=\"noopener\">interactive job<\/a> (<a href=\"\/csf3\/software\/applications\/carla\/#Interactive_job_example\">details below<\/a>).<\/p>\n<h3>Interactive job example<\/h3>\n<p>The steps for running CARLA in graphical mode in an interactive session are:<\/p>\n<ol>\n<li>Start an interactive session.<\/li>\n<li>Load the CARLA module.<\/li>\n<li>Launch CARLA.<\/li>\n<\/ol>\n<p>The commands needed to accomplish this are:<\/p>\n<pre class=\"slurm\"># Start an interactive session with 1 V100 GPU, 1 CPU core for 1 hour\r\n# This should land you in an interactive node if resources are available.\r\n# If you do not get an interactive session, try again later\r\n# Rendering of the scene is better with a V100. Scenes rendered with an A100 have some visual artefacts.\r\n<strong>srun-x11 -p gpuV -G 1 -n 1 -t 0-1 --pty bash<\/strong>\r\n\r\n# Once you have landed in an interactive node load the CARLA module\r\n<strong>module purge\r\nmodule load apps\/python\/carla\/2022<\/strong>\r\n\r\n# Launch CARLA interactive GUI\r\n<strong>CarlaUE4.sh <\/strong>      #or just: <strong>carla <\/strong>\r\n<\/pre>\n<div class=\"note\">If CARLA crashes with the error message:<br \/>\n<span style=\"color: #cc0000;\">Exception thrown: bind: <strong>Address already in use<\/strong><\/span><br \/>\nsee the tip on <a href=\"\/csf3\/software\/applications\/carla\/#Ports_used_by_CARLA_server_process\">how to change the port number<\/a> below to overcome this error.<\/div>\n<p>This should open a new window with the default scene. Please note that it will take some time to load for the first time. Initially the window will be all dark and it will seem as if nothing is happening. Just give it some time. After a few seconds the default scene should come up in the window. 
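Regarding the port clash error mentioned in the note above: any unused port will do, and the cluster provides the find-freeport utility to pick one. Purely as an illustration of the idea (this is a sketch, not the actual implementation of find-freeport), a free port can be obtained in Python by binding to port 0:

```python
import socket

def find_free_port() -> int:
    """Ask the kernel for an unused TCP port by binding to port 0."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(("127.0.0.1", 0))   # port 0 = let the kernel choose a free port
        return s.getsockname()[1]  # note: released on close, so a small race
                                   # window exists before CARLA binds it

print(find_free_port())
```

The printed number can then be passed when launching the server, e.g. as -carla-port=N.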
<\/p>\n<p>Please note that the frame rate will be low since the rendering is happening remotely over the network, rather than on a monitor directly connected to the hardware where CARLA is running.<br \/>\n\ud83d\udcdd See the tip on <a href=\"\/csf3\/software\/applications\/carla\/#Controlling_the_resolution\">controlling the resolution<\/a> below for a slightly better frame rate.<\/p>\n<p>At this stage you will be able to navigate the scene with the keys <strong>Q\/E\/W\/S\/A\/D<\/strong>.<\/p>\n<ul>\n<li>Q &#8211; move upwards (towards the top edge of the window)<\/li>\n<li>E &#8211; move downwards (towards the lower edge of the window)<\/li>\n<li>W &#8211; move forwards<\/li>\n<li>S &#8211; move backwards<\/li>\n<li>A &#8211; move left<\/li>\n<li>D &#8211; move right<\/li>\n<\/ul>\n<p>\ud83d\udcdd See the <a href=\"#api-carla0.9.10.1\">Manipulation using API<\/a> section below for interacting with this CARLA server instance.<\/p>\n<h3>Batch job submission<\/h3>\n<p>Create a batch submission script like the following and submit a self-contained job to the batch system.<\/p>\n<pre class=\"slurm\">#!\/bin\/bash --login\r\n#SBATCH -p gpuV               # V100 GPU. Other available types: gpuA (A100), gpuL (L40)\r\n#SBATCH -G 1                  # 1 GPU\r\n#SBATCH -t 1-0                # Wallclock limit (1-0 is 1-day &amp; 0-hour, 4-0 is max permitted)\r\n#SBATCH -n 8                  # Select the no. 
of CPU cores\r\n                              # Can use up to  8 CPUs with a V100 GPU.\r\n                              # Can use up to 12 CPUs with an A100 GPU.\r\n                              # Can use up to 12 CPUs with an L40s GPU.\r\n#SBATCH -J carla\t      # Jobname\r\n#SBATCH -o %x.o%j\t      # %x = SLURM_JOB_NAME\r\n#SBATCH -e %x.e%j\t      # %j = SLURM_JOB_ID\r\n\r\n# Load the module\r\nmodule purge\r\nmodule load apps\/python\/carla\/2022\r\n\r\n# Find free port to run CARLA server process\r\nexport FREEPORT=$(find-freeport)\r\n\r\n# Optional: Save the port number to a text file, for use by your Python code later.\r\necho $FREEPORT &gt; FREEPORT.txt\r\n\r\n# Launch CARLA server (notice the &amp; at the end of this line)\r\n# This is the method to make CARLA v0.9.10 run in off-screen mode\r\nDISPLAY= CarlaUE4.sh -carla-rpc-port=$FREEPORT -opengl &amp;\r\n\r\n# Run your Python script\r\npython myscript.py\r\n         #\r\n         # If you need to know the port number in your Python code, you can\r\n         # read the FREEPORT.txt file (see above), or read the environment variable:\r\n         #   import os;\r\n         #   portnum=os.getenv(\"FREEPORT\")\r\n         # Use 127.0.0.1 for the hostname.\r\n<\/pre>\n<p>Submit the jobscript using:<\/p>\n<pre>sbatch <em>scriptname<\/em><\/pre>\n<p>where <em>scriptname<\/em> is the name of your jobscript.<\/p>\n<h3>Manipulating the CARLA server process remotely from the login node through its Python API<\/h3>\n<p>You can manipulate the CARLA server process running on a compute node through its Python API from the login node itself.<br \/>\nFor this, first create a batch submission script like the following and submit it to the batch system.<\/p>\n<pre class=\"slurm\">#!\/bin\/bash --login\r\n#SBATCH -p gpuV               # V100 GPU. 
Other available types: gpuA (A100), gpuL (L40)\r\n#SBATCH -G 1                  # 1 GPU\r\n#SBATCH -t 1-0                # Wallclock limit (1-0 is 1-day &amp; 0-hour, 4-0 is max permitted)\r\n#SBATCH -n 8                  # Select the no. of CPU cores\r\n                              # Can use up to  8 CPUs with a V100 GPU.\r\n                              # Can use up to 12 CPUs with an A100 GPU.\r\n                              # Can use up to 12 CPUs with an L40s GPU.\r\n#SBATCH -J carla\t      # Jobname\r\n#SBATCH -o %x.o%j\t      # %x = SLURM_JOB_NAME\r\n#SBATCH -e %x.e%j\t      # %j = SLURM_JOB_ID\r\n\r\n# Load the module\r\nmodule purge\r\nmodule load apps\/python\/carla\/2022\r\n\r\n# Find free port to run CARLA server process\r\nexport FREEPORT=$(find-freeport)\r\n\r\necho \"Your CARLA job $SLURM_JOB_ID is running in host: $HOSTNAME\"\r\necho \"CARLA is available in port number:         $FREEPORT\"\r\necho \"Job is using $SLURM_GPUS GPU(s) with ID(s) $CUDA_VISIBLE_DEVICES and $SLURM_NTASKS CPU core(s)\"\r\n\r\n# This is the method to make CARLA v0.9.10 run in off-screen mode\r\nDISPLAY= CarlaUE4.sh -carla-rpc-port=$FREEPORT -opengl\r\n\r\n# Comment out the line above and uncomment the line below if you also want to watch the scene in real time\r\n#CarlaUE4.sh -carla-rpc-port=$FREEPORT\r\n<\/pre>\n<p>Submit the jobscript using:<\/p>\n<pre>sbatch <em>scriptname<\/em><\/pre>\n<p>where <em>scriptname<\/em> is the name of your jobscript.<\/p>\n<p><a id=\"api-carla0.9.10.1\"><\/a><br \/>\nWait for the job to start. 
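For client scripts that run inside the same batch job (as in the batch example earlier, which exports FREEPORT), the port can be recovered from the environment instead of being hard-coded. A minimal sketch, where carla_port is a hypothetical helper name and the fallback of 2000 is CARLA's default port:

```python
import os

def carla_port(default: int = 2000) -> int:
    """Return the CARLA server port from the FREEPORT environment
    variable (exported by the jobscript), or the default if unset."""
    value = os.getenv("FREEPORT")
    return int(value) if value else default

os.environ["FREEPORT"] = "15000"  # simulate what the jobscript exports
print(carla_port())               # 15000
```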
Monitor your queue using the command: <a href=\"\/csf3\/batch-slurm\/s-commands\/#Job_Status\" target=\"_blank\" rel=\"noopener\"><strong>squeue<\/strong><\/a>.<\/p>\n<p>Once the job has started, you can manipulate the CARLA server process running on the compute node using its Python API from the login node itself.<br \/>\nFor this you will need the <strong><code>hostname<\/code><\/strong> where the job is running and the <strong><code>port number<\/code><\/strong>.<br \/>\nRun the following command to view the hostname and the port number after the job has started running:<\/p>\n<pre><strong>head -n2 &lt;jobname&gt;.o&lt;jobid&gt; <\/strong>    # Replace the <strong>&lt;jobname&gt;<\/strong> and <strong>&lt;jobid&gt;<\/strong> with the real ones.\r\n                                # You need to set the #SBATCH -J, -o and -e options \r\n                                # in your jobscript as shown above.\r\n<\/pre>\n<p>Note down the <code>hostname<\/code> where the job is running and the <code>port number<\/code>.<\/p>\n<p>Next run the following commands on the login node to run Python and manipulate the CARLA server running on a compute node via its API:<\/p>\n<pre><strong>module purge\r\nmodule load apps\/python\/carla\/2022\r\n\r\npython<\/strong>\r\n&gt;&gt;&gt;<strong>import carla<\/strong>\r\n&gt;&gt;&gt;<strong>client = carla.Client('&lt;hostname&gt;', &lt;port_number&gt;)<\/strong> \r\n# Replace &lt;hostname&gt; and &lt;port_number&gt; with the real hostname and port number obtained in the previous step\r\n# E.g.: <span style=\"color: #660066;\">client = carla.Client('node805.csf3.man.alces.network', 2000)<\/span>\r\n\r\n# You can then do things like:\r\n&gt;&gt;&gt;<strong>world = client.get_world()<\/strong>\r\n&gt;&gt;&gt;<strong>print(world)<\/strong>\r\n<span style=\"color: #660066;\"><strong>World(id=16098500494432822930)<\/strong><\/span>\r\n\r\n&gt;&gt;&gt;<strong>level = 
world.get_map()<\/strong>\r\n&gt;&gt;&gt;<strong>print(level)<\/strong>\r\n<span style=\"color: #660066;\"><strong>Map(name=Town03)<\/strong><\/span>\r\n\r\n&gt;&gt;&gt;<strong>print(client.get_available_maps())<\/strong>\r\n<span style=\"color: #660066;\"><strong>\r\n['\/Game\/Carla\/Maps\/Town01', '\/Game\/Carla\/Maps\/Town06', '\/Game\/Carla\/Maps\/Town02', \r\n'\/Game\/Carla\/Maps\/Town07', '\/Game\/Carla\/Maps\/Town03', '\/Game\/Carla\/Maps\/Town10HD', \r\n'\/Game\/Carla\/Maps\/Town04', '\/Game\/Carla\/Maps\/Town05']\r\n<\/strong><\/span>\r\n\r\n&gt;&gt;&gt;<strong>world = client.load_world('\/Game\/Carla\/Maps\/Town01')<\/strong>\r\n&gt;&gt;&gt;<strong>level = world.get_map()<\/strong>\r\n&gt;&gt;&gt;<strong>print(level)<\/strong>\r\n<span style=\"color: #660066;\"><strong>Map(name=Town01)<\/strong><\/span>\r\n<\/pre>\n<p><!-- -------CARLA-2022--Ends----------------------------------------------------------------------------------------------------------------------------- --><\/p>\n<h2>Additional useful information and tips<\/h2>\n<p>If you have a handy tip you would like to share, please <a href=\"\/csf3\/hpc-help\">contact us<\/a>.<\/p>\n<h3>Ports used by CARLA server process<\/h3>\n<p>By default, the CARLA server process running on the compute node listens on ports:<\/p>\n<p><strong>2000<\/strong>, <strong>2001<\/strong> and <strong>2002<\/strong> in version 0.10.0<br \/>\nand<br \/>\n<strong>2000<\/strong> and <strong>2001<\/strong> in older versions (0.9.10.1)<\/p>\n<p>You will normally not need to change the port numbers. However, since up to 4 GPU jobs can run on a single GPU node, it is possible that your interactive session request lands you on a compute node where another CARLA job is already running and using that port. 
This can prevent your CARLA process from running when you run <strong>CarlaUnreal.sh \/ CarlaUE4.sh<\/strong> (or carla) and throw the following error message for CARLA 0.10.0:<\/p>\n<p><span style=\"color: #cc0000;\">libc++abi: terminating due to uncaught exception of type std::__1::system_error: bind: <strong>Address already in use<\/strong><\/span><\/p>\n<p>and the following error message for CARLA 0.9.10.1:<\/p>\n<p><span style=\"color: #cc0000;\">Exception thrown: bind: <strong>Address already in use<\/strong><\/span><\/p>\n<p>In such circumstances you can manually change the port CARLA will use by adding the command-line argument<br \/>\n<strong><code>-carla-port=<span style=\"color: #660066;\">N<\/span><\/code><\/strong> to the <strong>CarlaUnreal.sh \/ CarlaUE4.sh<\/strong> (or carla) command.<\/p>\n<p>Try port numbers (<strong><span style=\"color: #660066;\">N<\/span><\/strong>) of <strong>15000<\/strong> and above, as these are generally not used by other processes.<\/p>\n<pre>E.g.:<strong>\r\nCarlaUnreal.sh -nosound -carla-port=15000\r\nor\r\nCarlaUE4.sh -carla-port=15000\r\n<\/strong><\/pre>\n<p>The second and third ports will automatically be set to <strong>N+1<\/strong> and <strong>N+2<\/strong> for CARLA version 0.10.0.<br \/>\nThe second port will automatically be set to <strong>N+1<\/strong> for CARLA version 0.9.10.1.<\/p>\n<p>Don&#8217;t forget to use that same port number in the API command:<br \/>\n<code><strong>client = carla.Client('&lt;hostname&gt;', &lt;port_number&gt;)<\/strong><\/code><\/p>\n<h3>Controlling the resolution<\/h3>\n<p>While launching the CARLA simulator in interactive graphical mode, you can control the window size with the <strong><code>-ResX=N<\/code><\/strong> and <strong><code>-ResY=M<\/code><\/strong> arguments to the <strong>CarlaUnreal.sh \/ CarlaUE4.sh<\/strong> (or carla) command.<\/p>\n<pre>CarlaUnreal.sh -ResX=<em>N<\/em> -ResY=<em>M<\/em> <em>other 
flags...<\/em>\r\n<\/pre>\n<h3>Dataset<\/h3>\n<p>The downloaded dataset generated via the privileged agent &#8211; autopilot (\/team_code_autopilot\/autopilot.py) &#8211; is available on CSF3 and can be accessed after loading the <strong><code>apps\/python\/carla\/2022<\/code><\/strong> module:<\/p>\n<pre>\r\nmodule purge\r\nmodule load apps\/python\/carla\/2022\r\nls -l $CARLAROOT\/data\/\r\n<\/pre>\n<h2>Further info<\/h2>\n<p><a href=\"https:\/\/carla-ue5.readthedocs.io\/\" target=\"_blank\" rel=\"noopener\">CARLA 0.10.0 with Unreal Engine 5 Documentation<\/a><br \/>\n<a href=\"https:\/\/carla.readthedocs.io\/en\/0.9.10\/\" target=\"_blank\" rel=\"noopener\">CARLA 0.9.10 with Unreal Engine 4 Documentation<\/a><\/p>\n<h2>Updates<\/h2>\n<p>None.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Overview CARLA has been developed from the ground up to support development, training, and validation of autonomous driving systems. In addition to open-source code and protocols, CARLA provides open digital assets (urban layouts, buildings, vehicles) that were created for this purpose and can be used freely. The simulation platform supports flexible specification of sensor suites, environmental conditions, full control of all static and dynamic actors, maps generation and much more. Restrictions on use Though CARLA.. 
<a href=\"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/software\/applications\/carla\/\">Read more &raquo;<\/a><\/p>\n","protected":false},"author":21,"featured_media":0,"parent":86,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"footnotes":""},"class_list":["post-10457","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/pages\/10457","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/users\/21"}],"replies":[{"embeddable":true,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/comments?post=10457"}],"version-history":[{"count":22,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/pages\/10457\/revisions"}],"predecessor-version":[{"id":10680,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/pages\/10457\/revisions\/10680"}],"up":[{"embeddable":true,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/pages\/86"}],"wp:attachment":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/media?parent=10457"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}