CARLA
Overview
CARLA has been developed from the ground up to support development, training, and validation of autonomous driving systems. In addition to open-source code and protocols, CARLA provides open digital assets (urban layouts, buildings, vehicles) that were created for this purpose and can be used freely. The simulation platform supports flexible specification of sensor suites, environmental conditions, full control of all static and dynamic actors, map generation, and much more.
Restrictions on use
Though CARLA is open source and freely distributed under the MIT License, the Unreal Engine that CARLA uses is source-available commercial software with a royalty model for commercial use.
Further license information can be seen at the following links:
To get access to CARLA on the CSF3, you must request it via our help form and confirm that your usage will comply with the licenses above. Please note that we may ask you for some further information to support your request.
Set up procedure
Use of this software requires carla group membership. To access the software you must load ONE of the modulefiles:
module load apps/python/carla/0.10.0   # CARLA v0.10.0 with Unreal Engine 5
module load apps/python/carla/2022     # CARLA Transfuser 2022-tree (v0.9.10.1) with Unreal Engine 4
Running the application – CARLA v0.10.0 with Unreal Engine 5
Please do NOT run CARLA on the login nodes. CARLA requires a lot of resources and can overload the login nodes.
Jobs should be submitted to the compute nodes via batch (example below).
Normally, CARLA should be run via batch. However, once in a while you may need to visualise the scenes you are working on; for this purpose please use an interactive job (details below).
Interactive job example
The steps for running CARLA in Graphical mode in an interactive session are:
- Start an interactive session.
- Load CARLA module
- Launch CARLA
The set of commands needed to accomplish this are:
# Start an interactive session with 1 V100 GPU, 1 CPU core, for 1 hour.
# This should land you on an interactive node if resources are available.
# If you do not get an interactive session, try again later.
srun-x11 -p gpuV -G 1 -n 1 -t 0-1 --pty bash

# Once you have landed on an interactive node, load the CARLA module
module purge
module load apps/python/carla/0.10.0

# Launch the CARLA interactive GUI
CarlaUnreal.sh -nosound
#or just: carla -nosound
If CARLA fails to start with the error:

libc++abi: terminating due to uncaught exception of type std::__1::system_error: bind: Address already in use

see the tip on how to change the port number below to learn how to overcome this error.
This should open a new window with the default scene. Please note that it will take some time to load the first time: initially the window will be dark and it will seem as if nothing is happening. Give it some time; after a few seconds the default scene should come up in the window.
Please note that the frame rate will be low since the rendering is happening over network remotely, not to a monitor directly connected to the hardware where it is running.
📝 See tip on controlling the resolution below for slightly better frame rate.
At this stage you will be able to navigate the scene with the keys Q/E/W/S/A/D.
- Q – move upwards (towards the top edge of the window)
- E – move downwards (towards the lower edge of the window)
- W – move forwards
- S – move backwards
- A – move left
- D – move right
📝 See Manipulation using API section below for interacting with this CARLA server instance.
Batch job submission
Create a batch submission script like the following and submit it as a self-contained job to the batch system.
#!/bin/bash --login
#SBATCH -p gpuV        # V100 GPU. Other available types: gpuA (A100), gpuL (L40)
#SBATCH -G 1           # 1 GPU
#SBATCH -t 1-0         # Wallclock limit (1-0 is 1 day 0 hours; 4-0 is the max permitted)
#SBATCH -n 8           # Select the no. of CPU cores
                       # Can use up to 8 CPUs with a V100 GPU.
                       # Can use up to 12 CPUs with an A100 GPU.
                       # Can use up to 12 CPUs with an L40s GPU.
#SBATCH -J carla       # Jobname
#SBATCH -o %x.o%j      # %x = SLURM_JOB_NAME
#SBATCH -e %x.e%j      # %j = SLURM_JOB_ID

# Load the module
module purge
module load apps/python/carla/0.10.0

# Find a free port to run the CARLA server process
export FREEPORT=$(find-freeport)

# Optional: save the port number to a text file, for use by your Python code later.
echo $FREEPORT > FREEPORT.txt

# Launch the CARLA server (notice the & at the end of this line)
CarlaUnreal.sh -carla-port=$FREEPORT -RenderOffScreen -nosound &

# Run your Python script
python myscript.py

# If you need to know the port number in your Python code, you can
# read the FREEPORT.txt file (see above), or read the environment variable:
#   import os
#   portnum = os.getenv("FREEPORT")
# Use 127.0.0.1 for the hostname.
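The jobscript above runs python myscript.py against the server it launched on 127.0.0.1. A minimal sketch of what the start of such a script might look like (the script name and the resolve_carla_port helper are illustrative, not part of CARLA):

```python
import os

def resolve_carla_port(default=2000):
    """Return the CARLA RPC port: prefer the FREEPORT environment
    variable exported by the jobscript, fall back to the FREEPORT.txt
    file it wrote, then to CARLA's default port 2000."""
    port = os.getenv("FREEPORT")
    if port is None and os.path.exists("FREEPORT.txt"):
        with open("FREEPORT.txt") as f:
            port = f.read().strip()
    return int(port) if port else default

# In the real myscript.py you would then connect to the server, e.g.:
#   import carla
#   client = carla.Client("127.0.0.1", resolve_carla_port())
#   client.set_timeout(30.0)  # the server can take a while to start
#   world = client.get_world()
```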
Submit the jobscript using:
sbatch scriptname
where scriptname is the name of your jobscript.
Manipulating the CARLA server process remotely from the login node through its Python API
You can manipulate a CARLA server process running on a compute node through its Python API from the login node itself.
For this, first create a batch submission script like the following and submit it to the batch.
#!/bin/bash --login
#SBATCH -p gpuV        # V100 GPU. Other available types: gpuA (A100), gpuL (L40)
#SBATCH -G 1           # 1 GPU
#SBATCH -t 1-0         # Wallclock limit (1-0 is 1 day 0 hours; 4-0 is the max permitted)
#SBATCH -n 8           # Select the no. of CPU cores
                       # Can use up to 8 CPUs with a V100 GPU.
                       # Can use up to 12 CPUs with an A100 GPU.
                       # Can use up to 12 CPUs with an L40s GPU.
#SBATCH -J carla       # Jobname
#SBATCH -o %x.o%j      # %x = SLURM_JOB_NAME
#SBATCH -e %x.e%j      # %j = SLURM_JOB_ID

# Load the module
module purge
module load apps/python/carla/0.10.0

# Find a free port to run the CARLA server process
export FREEPORT=$(find-freeport)

echo "Your CARLA job $SLURM_JOB_ID is running in host: $HOSTNAME"
echo "CARLA is available in port number: $FREEPORT"
echo "Job is using $SLURM_GPUS GPU(s) with ID(s) $CUDA_VISIBLE_DEVICES and $SLURM_NTASKS CPU core(s)"

CarlaUnreal.sh -carla-port=$FREEPORT -RenderOffScreen -nosound
# Comment the above line and uncomment the below line if you also want to watch the scene in real time
#CarlaUnreal.sh -carla-port=$FREEPORT -nosound
Submit the jobscript using:
sbatch scriptname
where scriptname is the name of your jobscript.
Wait for the job to start. Monitor your queue using the command: squeue.
Once the job has started, you can manipulate the CARLA server process running on the compute node using its Python API from the login node itself.
For this you will need the hostname where the job is running and the port number; both are printed once the batch job has started.
Run the following command to view the hostname and the port number after the job has started running:
head -n2 <jobname>.o<jobid>
# Replace <jobname> and <jobid> with the real ones.
# You need to set the #SBATCH -J, -o and -e options
# in your jobscript as shown above.
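If you prefer to pick these values up programmatically, the two echo lines written by the jobscript above can be parsed; a small sketch (the parse_carla_job_output helper is purely illustrative):

```python
import re

def parse_carla_job_output(lines):
    """Extract (hostname, port) from job output lines of the form
    'Your CARLA job <id> is running in host: <hostname>' and
    'CARLA is available in port number: <port>', as echoed by the
    jobscript above."""
    host, port = None, None
    for line in lines:
        m = re.search(r"running in host:\s*(\S+)", line)
        if m:
            host = m.group(1)
        m = re.search(r"port number:\s*(\d+)", line)
        if m:
            port = int(m.group(1))
    return host, port
```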
Note down the hostname where the job is running and the port number.
Next, run the following commands on the login node to run Python and manipulate the CARLA server running on a compute node via its API:
module purge
module load apps/python/carla/0.10.0
python
>>> import carla
>>> client = carla.Client('<hostname>', <port_number>)
# Replace <hostname> and <port_number> with the real hostname and port number obtained in the previous step
# E.g.: client = carla.Client('node804.csf3.man.alces.network', 2000)
# You can then do things like:
>>> world = client.get_world()
>>> print(world)
World(id=11172095543979033550)
>>> level = world.get_map()
>>> print(level)
Map(name=Carla/Maps/Town10HD_Opt)
>>> print(client.get_available_maps())
['/Game/Carla/Maps/Mine_01', '/Game/Carla/Maps/Town10HD_Opt']
>>> world = client.load_world('/Game/Carla/Maps/Mine_01')
>>> level = world.get_map()
>>> print(level)
Map(name=Carla/Maps/Mine_01)
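If you switch maps often, matching a short map name against the list returned by client.get_available_maps() saves typing full paths. A small helper along these lines (find_map_path is our own illustration, not part of the CARLA API):

```python
def find_map_path(available_maps, short_name):
    """Return the full map path from client.get_available_maps() whose
    last path component equals short_name, e.g. 'Mine_01' ->
    '/Game/Carla/Maps/Mine_01'. Raises ValueError if there is no
    unique match."""
    matches = [m for m in available_maps if m.rsplit("/", 1)[-1] == short_name]
    if len(matches) != 1:
        raise ValueError(f"expected one match for {short_name!r}, got {matches}")
    return matches[0]

# Against a live client you could then do:
#   world = client.load_world(find_map_path(client.get_available_maps(), 'Mine_01'))
```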
Running the application – CARLA Transfuser 2022-tree (v0.9.10.1) with Unreal Engine 4
Please do NOT run CARLA on the login nodes. CARLA requires a lot of resources and can overload the login nodes.
Jobs should be submitted to the compute nodes via batch (example below).
Normally, CARLA should be run via batch. However, once in a while you may need to visualise the scenes you are working on; for this purpose please use an interactive job (details below).
Interactive job example
The steps for running CARLA in Graphical mode in an interactive session are:
- Start an interactive session.
- Load CARLA module
- Launch CARLA
The set of commands needed to accomplish this are:
# Start an interactive session with 1 V100 GPU, 1 CPU core, for 1 hour.
# This should land you on an interactive node if resources are available.
# If you do not get an interactive session, try again later.
# Rendering of the scene is better with a V100. Scenes rendered with an A100 have some visual artefacts.
srun-x11 -p gpuV -G 1 -n 1 -t 0-1 --pty bash

# Once you have landed on an interactive node, load the CARLA module
module purge
module load apps/python/carla/2022

# Launch the CARLA interactive GUI
CarlaUE4.sh
#or just: carla
If CARLA fails to start with the error:

Exception thrown: bind: Address already in use

see the tip on how to change the port number below to learn how to overcome this error.
This should open a new window with the default scene. Please note that it will take some time to load the first time: initially the window will be dark and it will seem as if nothing is happening. Give it some time; after a few seconds the default scene should come up in the window.
Please note that the frame rate will be low since the rendering is happening over network remotely, not to a monitor directly connected to the hardware where it is running.
📝 See tip on controlling the resolution below for slightly better frame rate.
At this stage you will be able to navigate the scene with the keys Q/E/W/S/A/D.
- Q – move upwards (towards the top edge of the window)
- E – move downwards (towards the lower edge of the window)
- W – move forwards
- S – move backwards
- A – move left
- D – move right
📝 See Manipulation using API section below for interacting with this CARLA server instance.
Batch job submission
Create a batch submission script like the following and submit it as a self-contained job to the batch system.
#!/bin/bash --login
#SBATCH -p gpuV        # V100 GPU. Other available types: gpuA (A100), gpuL (L40)
#SBATCH -G 1           # 1 GPU
#SBATCH -t 1-0         # Wallclock limit (1-0 is 1 day 0 hours; 4-0 is the max permitted)
#SBATCH -n 8           # Select the no. of CPU cores
                       # Can use up to 8 CPUs with a V100 GPU.
                       # Can use up to 12 CPUs with an A100 GPU.
                       # Can use up to 12 CPUs with an L40s GPU.
#SBATCH -J carla       # Jobname
#SBATCH -o %x.o%j      # %x = SLURM_JOB_NAME
#SBATCH -e %x.e%j      # %j = SLURM_JOB_ID

# Load the module
module purge
module load apps/python/carla/2022

# Find a free port to run the CARLA server process
export FREEPORT=$(find-freeport)

# Optional: save the port number to a text file, for use by your Python code later.
echo $FREEPORT > FREEPORT.txt

# Launch the CARLA server (notice the & at the end of this line)
# This is the method to make CARLA v0.9.10 run in off-screen mode
DISPLAY= CarlaUE4.sh -carla-rpc-port=$FREEPORT -opengl &

# Run your Python script
python myscript.py

# If you need to know the port number in your Python code, you can
# read the FREEPORT.txt file (see above), or read the environment variable:
#   import os
#   portnum = os.getenv("FREEPORT")
# Use 127.0.0.1 for the hostname.
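In the jobscript above the server is launched in the background (note the &) and the Python script starts immediately, but CarlaUE4.sh can take a while before it accepts connections. Your myscript.py may therefore want to retry the initial connection; a generic sketch (wait_for_server is our own helper, and the exact exception the CARLA client raises on timeout may differ, so we catch broadly here):

```python
import time

def wait_for_server(connect, attempts=10, delay=3.0):
    """Call `connect` (e.g. lambda: carla.Client('127.0.0.1', port).get_world())
    until it succeeds or `attempts` runs out, sleeping `delay` seconds
    between tries. Returns whatever `connect` returns, or re-raises the
    last error."""
    last_error = None
    for _ in range(attempts):
        try:
            return connect()
        except Exception as err:  # hedge: client timeout exception type is version-dependent
            last_error = err
            time.sleep(delay)
    raise last_error
```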
Submit the jobscript using:
sbatch scriptname
where scriptname is the name of your jobscript.
Manipulating the CARLA server process remotely from the login node through its Python API
You can manipulate a CARLA server process running on a compute node through its Python API from the login node itself.
For this, first create a batch submission script like the following and submit it to the batch.
#!/bin/bash --login
#SBATCH -p gpuV        # V100 GPU. Other available types: gpuA (A100), gpuL (L40)
#SBATCH -G 1           # 1 GPU
#SBATCH -t 1-0         # Wallclock limit (1-0 is 1 day 0 hours; 4-0 is the max permitted)
#SBATCH -n 8           # Select the no. of CPU cores
                       # Can use up to 8 CPUs with a V100 GPU.
                       # Can use up to 12 CPUs with an A100 GPU.
                       # Can use up to 12 CPUs with an L40s GPU.
#SBATCH -J carla       # Jobname
#SBATCH -o %x.o%j      # %x = SLURM_JOB_NAME
#SBATCH -e %x.e%j      # %j = SLURM_JOB_ID

# Load the module
module purge
module load apps/python/carla/2022

# Find a free port to run the CARLA server process
export FREEPORT=$(find-freeport)

echo "Your CARLA job $SLURM_JOB_ID is running in host: $HOSTNAME"
echo "CARLA is available in port number: $FREEPORT"
echo "Job is using $SLURM_GPUS GPU(s) with ID(s) $CUDA_VISIBLE_DEVICES and $SLURM_NTASKS CPU core(s)"

# This is the method to make CARLA v0.9.10 run in off-screen mode
DISPLAY= CarlaUE4.sh -carla-rpc-port=$FREEPORT -opengl
# Comment the above line and uncomment the below line if you also want to watch the scene in real time
#CarlaUE4.sh -carla-rpc-port=$FREEPORT
Submit the jobscript using:
sbatch scriptname
where scriptname is the name of your jobscript.
Wait for the job to start. Monitor your queue using the command: squeue.
Once the job has started, you can manipulate the CARLA server process running on the compute node using its Python API from the login node itself.
For this you will need the hostname where the job is running and the port number; both are printed once the batch job has started.
Run the following command to view the hostname and the port number after the job has started running:
head -n2 <jobname>.o<jobid>
# Replace <jobname> and <jobid> with the real ones.
# You need to set the #SBATCH -J, -o and -e options
# in your jobscript as shown above.
Note down the hostname where the job is running and the port number.
Next, run the following commands on the login node to run Python and manipulate the CARLA server running on a compute node via its API:
module purge
module load apps/python/carla/2022
python
>>> import carla
>>> client = carla.Client('<hostname>', <port_number>)
# Replace <hostname> and <port_number> with the real hostname and port number obtained in the previous step
# E.g.: client = carla.Client('node805.csf3.man.alces.network', 2000)
# You can then do things like:
>>> world = client.get_world()
>>> print(world)
World(id=16098500494432822930)
>>> level = world.get_map()
>>> print(level)
Map(name=Town03)
>>> print(client.get_available_maps())
['/Game/Carla/Maps/Town01', '/Game/Carla/Maps/Town06', '/Game/Carla/Maps/Town02', '/Game/Carla/Maps/Town07', '/Game/Carla/Maps/Town03', '/Game/Carla/Maps/Town10HD', '/Game/Carla/Maps/Town04', '/Game/Carla/Maps/Town05']
>>> world = client.load_world('/Game/Carla/Maps/Town01')
>>> level = world.get_map()
>>> print(level)
Map(name=Town01)
Additional useful information and Tips
If you have a handy tip you would like to share, please contact us.
Ports used by CARLA server process
By default, the CARLA server process running on the compute node runs/listens on ports:
- 2000, 2001 and 2002 in version 0.10.0
- 2000 and 2001 in older versions (0.9.10.1)
You will normally not need to change the port numbers. However, with an increasing number of CARLA users it is possible that your interactive session request lands you on a compute node where a CARLA job is already running and using that port, because up to 4 GPU jobs can run on a single GPU node. This can prevent your CARLA process from starting when you run CarlaUnreal.sh / CarlaUE4.sh (or carla), throwing the following error message for CARLA 0.10.0:
libc++abi: terminating due to uncaught exception of type std::__1::system_error: bind: Address already in use
and the following error message for CARLA 0.9.10.1:
Exception thrown: bind: Address already in use
In such circumstances you can manually change the port CARLA will use by adding the command-line argument:
-carla-port=N
to the CarlaUnreal.sh / CarlaUE4.sh (or carla) command.
Try using a port number (N) of 15000 or above; these are safe port numbers to use since they are generally not used by other processes.
E.g.: CarlaUnreal.sh -nosound -carla-port=15000 or CarlaUE4.sh -carla-port=15000
The second and third ports will automatically be set to N+1 and N+2 for CARLA version 0.10.0; the second port will automatically be set to N+1 for CARLA version 0.9.10.1.
Don’t forget to use that same port number in the API command:
client = carla.Client('<hostname>', <port_number>)
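The port layout described above can be summarised in a couple of lines, which may be handy if your own scripts need to avoid those ports (carla_ports is purely illustrative, not part of the CARLA API):

```python
def carla_ports(base, version="0.10.0"):
    """Ports a CARLA server started with -carla-port=N will bind:
    N, N+1 and N+2 for version 0.10.0; N and N+1 for 0.9.10.1."""
    count = 3 if version == "0.10.0" else 2
    return [base + i for i in range(count)]
```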
Controlling the resolution
While launching the CARLA simulator in interactive graphical mode, you can control the window size with the -ResX=N and -ResY=M arguments to the CarlaUnreal.sh / CarlaUE4.sh (or carla) command.
CarlaUnreal.sh -ResX=N -ResY=M other flags...
Dataset
A downloaded dataset, generated via the privileged agent – autopilot (/team_code_autopilot/autopilot.py) – is available on CSF3 and can be accessed after loading the apps/python/carla/2022 module:
module purge
module load apps/python/carla/2022
ls -l $CARLAROOT/data/
Further info
CARLA 0.10.0 with Unreal Engine 5 Documentation
CARLA 0.9.10 with Unreal Engine 4 Documentation
Updates
None.