CryoSPARC Installation Prerequisites
Before installing CryoSPARC, ensure these six requirements are met.
1. Nvidia Driver
CryoSPARC worker installations on workstations, dedicated GPU nodes or clusters require a recent version of the Nvidia driver and an Nvidia GPU. The list below specifies the required Nvidia Driver version for a range of CryoSPARC versions.
CryoSPARC v5.0+
Requires Nvidia Driver version 570.26 or newer. Note that NVIDIA Blackwell devices are only compatible with the open driver.
A system CUDA installation is not needed. CryoSPARC includes CUDA 12.8, which drops support for NVIDIA GPUs with compute capability 3.5 (Kepler). Only GPUs with compute capability 5.0 (Maxwell) to 12.0 (Blackwell) are supported.
CryoSPARC v4.4 to CryoSPARC v4.7
Requires Nvidia Driver version 520.61.05 or newer.
A system CUDA installation is not needed. CryoSPARC includes CUDA 11.8.0.
CryoSPARC <v4.4
Requires a system CUDA installation. CryoSPARC runs with CUDA version 11, and we recommend toolkit version 11.8 and the corresponding Nvidia Driver version.
Please follow instructions specific to the CryoSPARC worker node's Linux distribution to install the Nvidia driver. Visit Troubleshooting for common GPU errors.
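To confirm that a worker node's driver and GPUs meet these requirements, you can query them with nvidia-smi. A minimal check (the compute_cap query field is only available in reasonably recent driver releases):
nvidia-smi --query-gpu=name,driver_version,compute_cap --format=csv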
2. Common Unix User Account
The same CryoSPARC-associated, non-privileged Linux account must be available on the CryoSPARC master and all worker nodes.
The CryoSPARC-associated Linux account must be associated with the same numeric UID on all nodes.
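For example, you can confirm that the numeric UID matches by running the following on the master and on each worker node (cryosparcuser stands in for your actual account name):
id -u cryosparcuser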
Execute all command line instance management tasks, such as updates or startup, under the Unix account that runs the CryoSPARC instance. Failure to do so may render the CryoSPARC instance inoperative.
Do not use the root account to install, update or manage a CryoSPARC instance.
You don't need a dedicated Unix user (e.g., cryosparcuser) to install and run CryoSPARC -- you can use your own Linux account, but do not use the root account. Using your own Linux account makes sense when you are installing CryoSPARC for yourself and you don't plan on having any other users share the same instance.
In a master-worker setup, the CryoSPARC master node will use SSH to access the worker node and execute a bash script that will run the job a user has queued to that machine. Some lightweight job types queue directly to the master node, in which case the CryoSPARC master process will execute the job using a Python subprocess. If a user queues a job to a cluster, the CryoSPARC master process will submit a cluster job via the cluster workload scheduler's job submission system (for example via the sbatch command on a SLURM cluster).
For the purposes of this documentation, cryosparcuser represents the Linux account that owns the CryoSPARC processes.
3. Password-less SSH Access
Set up SSH access between the master node and each standalone worker node. The cryosparcuser account should be able to SSH without a password (using an SSH key-pair) into all non-cluster worker nodes.
Setting up password-less SSH access to a remote workstation
Set up SSH keys for password-less access (only if you currently need to enter your password each time you ssh into the compute node).
If you do not already have SSH keys generated on your local machine, use ssh-keygen to do so.
Open a terminal prompt on your local machine, and enter:
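A typical invocation (press Enter at each prompt to accept the default file location and an empty passphrase; exact options may vary with your site's policies):
ssh-keygen -t rsa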
This will create an RSA key-pair with no passphrase in the default location.
Copy the RSA public key to the remote compute node for password-less login:
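One way to do this is with ssh-copy-id, assuming it is available on your local machine:
ssh-copy-id remote_username@remote_hostname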
remote_username and remote_hostname are your username and the hostname that you use to SSH into your compute node. This step will ask for your password.
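You can then confirm that key-based login works; the following should connect without prompting for a password:
ssh remote_username@remote_hostname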
4. Open TCP Ports
The port range is configurable at install time. Select a suitable range of ten consecutive network ports on the CryoSPARC master computer that:
does not coincide with ports used by non-CryoSPARC services running on this computer.
does not overlap with the port range of another CryoSPARC instance that may be running on this computer.
does not overlap with the computer's "ephemeral" port range. A Linux computer's ephemeral port range can be displayed with the command
cat /proc/sys/net/ipv4/ip_local_port_range
The following table details the purpose of each port, assuming a --port 61000 installation parameter.
Port    Usage
61000   CryoSPARC web application
61001   MongoDB database. Must be accessible from CryoSPARC workers.
61002   Command Core (Master) server. Must be accessible from CryoSPARC workers.
61003   Command Visualization (Vis) server
61004   Reserved (Not Used)
61005   CryoSPARC Live Command RTP server. Must be accessible from CryoSPARC workers.
61006   CryoSPARC web application API server
61007   Reserved (Not Used)
61008   Reserved (Not Used)
61009   Reserved (Not Used)
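The first port of the range is set with the --port parameter when installing the CryoSPARC master. A sketch only, with the other required installation options omitted:
./install.sh --port 61000 [other master installation options]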
Checking ports in use
To see whether certain ports are being used on your master node, run a command like
sudo ss -anp | grep ":6100[0-9][^0-9]" | sed 's/\s\+/ /g'
where ss output is filtered with grep for a port number pattern.
Testing open ports
To test if a TCP port is open (for example, to check whether a firewall is blocking it), run a telnet command from another computer inside the network, as shown below. If you see any response other than a successful connection (e.g., a timeout or a connection refusal), the port may not be listening or may be blocked.
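For example, where master_hostname and 61000 are placeholders for your master's hostname and the first port of your instance's range, a successful connection looks similar to:
telnet master_hostname 61000
Trying 10.0.0.1...
Connected to master_hostname.
Escape character is '^]'.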
5. Shared File System
The major requirement for installation is that all nodes (including the master) be able to access the same shared file system(s) at the same absolute path. These file systems (typically cluster file systems or NFS mounts) will be used for loading input raw data into jobs running on various nodes, as well as saving output data from jobs into projects.

Each project created by a user is associated with a single project directory that all CryoSPARC nodes must be able to read from and write to. All users should create project directories in locations where both the master and worker nodes have access.
CryoSPARC project directories need to be stored on a filesystem that supports symbolic links.
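A quick check that a node sees the storage at the expected absolute path and that the filesystem supports symbolic links (run as cryosparcuser on the master and on every worker; /path/to/projects is a placeholder for your shared project storage):
ls -ld /path/to/projects
ln -s /path/to/projects /path/to/projects/symlink_test && rm /path/to/projects/symlink_test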
6. Outbound HTTPS Internet Access
CryoSPARC requires internet access from the main process to verify your license and perform updates. At minimum, CryoSPARC should have access to our license server at https://get.cryosparc.com/. See here for more details.
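To verify that the master node can reach the license server over HTTPS, a simple connectivity check is (any HTTP response indicates the host is reachable, while a timeout suggests outbound HTTPS is blocked):
curl -sI https://get.cryosparc.com/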