CryoSPARC Installation Prerequisites
Before installing CryoSPARC, ensure these six requirements are met.
The CryoSPARC worker node requires the CUDA toolkit to be installed alongside an NVIDIA GPU. CUDA is a parallel computing platform and programming model developed by NVIDIA for general computing on its own GPUs.
As of version 4.2.1, CryoSPARC supports CUDA toolkit version 11. Your GPU device model(s) may further restrict your choice of compatible toolkit versions. We recommend toolkit version 11.8. You can download the CUDA toolkit from NVIDIA's developer website.
Please ensure you're running the latest NVIDIA driver compatible with your GPU and CUDA toolkit version. Please follow recommendations specific to your Linux distribution for the installation of the NVIDIA driver. Visit Troubleshooting for common GPU errors.
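As a quick sanity check, you can confirm the installed driver and toolkit versions with the standard NVIDIA tools (the nvcc invocation below assumes the toolkit is on your PATH or installed under a default location such as /usr/local/cuda; adjust as needed):

# Report the installed NVIDIA driver version and the highest CUDA version it supports
$ nvidia-smi

# Report the installed CUDA toolkit (nvcc) version
$ nvcc --version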
The CryoSPARC master and worker nodes must have the same Unix user available on each node.
Execute all command line instance management tasks, such as updates or startup, under the Unix account that runs the CryoSPARC instance. Failure to do so may render the CryoSPARC instance inoperative.
Do not use the root account to install, update or manage a CryoSPARC instance.
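One quick check is to confirm the account exists on the master node and on every worker node (matching numeric UIDs and GIDs across nodes also helps avoid permission surprises on shared storage). The username below is the example account used throughout this guide:

# Run on the master node and on each worker node; the account should exist everywhere,
# ideally with the same UID and GID
$ id cryosparcuser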
You don't need to have a dedicated Unix user (e.g., cryosparcuser) to run and install CryoSPARC -- you can use your own user account, but not the root account. Using your own user account makes sense when you are installing CryoSPARC for yourself and don't plan on having any other users share the same instance.

Create a dedicated user account (e.g., cryosparcuser) that will run the CryoSPARC application on the master node and on each worker node that will be used for computation (a minimal example of creating such an account is sketched below).

In a master-worker setup, the CryoSPARC master node will use SSH to access the worker node and execute a bash script that runs the job a user has queued to that machine. Some lightweight job types queue directly to the master node, in which case the CryoSPARC master process will execute the job using a Python subprocess. If a user queues a job to a cluster, the CryoSPARC master process will submit a cluster job via the cluster workload scheduler's job submission system (for example via the sbatch command on a SLURM cluster).

For the purposes of this documentation, we will use the username cryosparcuser to represent the user that owns the CryoSPARC processes.
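For example, on a typical Linux distribution you might create the account like this (a sketch only; the exact command and options depend on your distribution and site policies):

# Create the cryosparcuser account with a home directory and bash shell
# (run on the master node and on each worker node)
$ sudo useradd -m -s /bin/bash cryosparcuser
$ sudo passwd cryosparcuser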
Set up SSH access between the master node and each standalone worker node. The cryosparcuser account should be able to SSH without a password (using an SSH key-pair) into all non-cluster worker nodes.

Set up SSH keys for password-less access (only if you currently need to enter your password each time you SSH into the compute node).
If you do not already have SSH keys generated on your local machine, use ssh-keygen to do so:

ssh-keygen -t rsa -N "" -f $HOME/.ssh/id_rsa

This will create an RSA key-pair with no passphrase in the default location.
Then copy the public key to the compute node with ssh-copy-id:

ssh-copy-id remote_username@remote_hostname

remote_username and remote_hostname are the username and the hostname that you use to SSH into your compute node. This step will ask for your password.
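Afterwards, you can verify that password-less access works; the hostname below is a placeholder for one of your worker nodes:

# Should print the worker's hostname without prompting for a password
$ ssh cryosparcuser@worker_hostname hostname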
Ensure ports 39000-39009 are accessible between every workstation node and the master node. The port range is configurable at install time. The following table details the purpose of each port.

| Port  | Usage                                |
| ----- | ------------------------------------ |
| 39000 | CryoSPARC web application            |
| 39001 | MongoDB database                     |
| 39002 | Command Core (Master) server         |
| 39003 | Command Visualization (Vis) server   |
| 39004 | Command Proxy server (Not Used)      |
| 39005 | CryoSPARC Live Command RTP server    |
| 39006 | CryoSPARC web application API server |
| 39007 | CryoSPARC legacy web application     |
| 39008 | Reserved (Not Used)                  |
| 39009 | Reserved (Not Used)                  |
To see what ports are being used on your master node, run the command netstat -tuplen. You can pipe the output to grep to search for specific ports. For example:

$ netstat -tuplen | grep :3900
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
Proto Recv-Q Send-Q Local Address Foreign Address State User Inode PID/Program name
tcp 0 0 0.0.0.0:39000 0.0.0.0:* LISTEN 3002 2088014397 -
tcp 0 0 0.0.0.0:39001 0.0.0.0:* LISTEN 3002 2088031342 -
tcp 0 0 0.0.0.0:39002 0.0.0.0:* LISTEN 3002 2088039523 -
tcp 0 0 0.0.0.0:39003 0.0.0.0:* LISTEN 3002 2088044546 -
tcp 0 0 0.0.0.0:39004 0.0.0.0:* LISTEN 3002 2088021927 -
tcp 0 0 0.0.0.0:39005 0.0.0.0:* LISTEN 3002 2088020880 -
tcp 0 0 0.0.0.0:39006 0.0.0.0:* LISTEN 3002 2088022457 -
To test whether a TCP port is open (for example, to check whether a firewall is blocking it), run a telnet command from another computer inside the network. If you see any response other than the one below (e.g., a timeout or a connection refusal), the port may not be listening or may be blocked:

$ telnet cryosparc.server 39000
Trying 192.168.64.49...
Connected to cryoem5.slush.sandbox.
Escape character is '^]'.
$ ^C
Connection closed by foreign host.
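If telnet is not available, netcat offers a similar check (assuming the nc utility is installed on the client machine):

# Report whether the CryoSPARC web application port is reachable, without opening a session
$ nc -zv cryosparc.server 39000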
The major requirement for installation is that all nodes (including the master) be able to access the same shared file system(s) at the same absolute path. These file systems (typically cluster file systems or NFS mounts) will be used for loading input raw data into jobs running on various nodes, as well as saving output data from jobs into projects.

Example of a Master-Worker setup where all nodes have access to the same shared filesystem
Each project created by a user is associated with a single project directory that all CryoSPARC nodes must be able to read from and write to.
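A simple way to confirm this is to write a test file from the master node and read it back from each worker node. The directory below is a hypothetical shared project path; substitute your own mount point:

# On the master node (path is an example; use your shared project directory)
$ touch /data/cryosparc_projects/shared_fs_test

# On each worker node -- the file must appear at the same absolute path
$ ls -l /data/cryosparc_projects/shared_fs_test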
CryoSPARC requires internet access from the main process to verify your license and perform updates. At minimum, CryoSPARC should have access to our license server at https://get.cryosparc.com/. See here for more details.
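To confirm connectivity from the master node, a quick HTTPS request against the license server is usually enough; what matters is that the connection succeeds rather than timing out (the exact response content is not important):

# Attempt an HTTPS connection to the license server from the master node and print the status code
$ curl -sS -o /dev/null -w "%{http_code}\n" https://get.cryosparc.com/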