cryosparcw reference (≤v4.7)

How to use the cryosparcw utility for managing CryoSPARC workers

Worker Management with cryosparcw

Run all commands in this section while logged into the workstation or worker nodes where the cryosparc_worker package is installed.

Verify that cryosparcw is in the worker's PATH with this command:

which cryosparcw

and ensure the output is not empty.
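
A non-empty result points at the worker installation, for example (path illustrative):

/path/to/cryosparc_worker/bin/cryosparcw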

Alternatively, navigate to the worker installation directory and run bin/cryosparcw directly. Example:

cd /path/to/cryosparc_worker
bin/cryosparcw gpulist

cryosparcw env

Equivalent to cryosparcm env, but for worker nodes.
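
For example, mirroring the documented cryosparcm env pattern, you can load the worker environment variables into your current shell (a sketch; verify the output format on your installation):

eval $(cryosparcw env)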

cryosparcw call <command>

Execute a shell command in a transient CryoSPARC worker shell environment. For example,

cryosparcw call which python

prints the path of the CryoSPARC worker environment’s Python executable.
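
Any utility available on the worker node can be invoked the same way. For example, to check GPU visibility from inside the worker environment (assuming the NVIDIA driver utilities are installed):

cryosparcw call nvidia-smi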

cryosparcw connect <options>

Run this command on the worker node that you wish to register with the master, or whose existing registration you wish to update.

Enter cryosparcw connect --help to see full usage details.

Example command to connect a worker on a new resource lane, shown below (the hostname, cache path, and lane name are illustrative):
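
cryosparcw connect \
  --worker $(hostname -f) \
  --master master.example.com \
  --ssdpath /scratch/cryosparc_cache \
  --lane fast-gpu \
  --newlane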

Overview of the available options:

  • --worker <WORKER>: (Required) Hostname of the worker that the master can use to access the worker. If the master can resolve the worker's hostname, one may specify --worker $(hostname -f)

  • --master <MASTER>: (Required) Hostname or local IP address of the CryoSPARC master computer

  • --port <PORT>: Port on which the master node is running (default 39000)

  • --update: Update an existing worker configuration instead of registering a new one

  • --sshstr <SSHSTR>: SSH-login string for the master to use to send commands to the workers (default $(whoami)@<WORKER>)

  • --nogpu: Registers a worker without any GPUs installed or with GPU access disabled

  • --gpus <GPUS>: Comma-separated list of GPU slot indexes that this worker has access to, starting with 0. e.g., to only enable the last two GPUs on a 4-GPU machine, enter --gpus 2,3

  • --nossd: If specified, this worker does not have access to a Solid-State Drive (SSD) to use for caching particle data

  • --ssdpath <SSDPATH>: Path to the location of the mounted SSD drive on the file system to use for caching. e.g., --ssdpath /scratch/cryosparc_cache

  • --ssdquota <SSDQUOTA>: The maximum amount of space on the SSD that CryoSPARC uses on this worker, in megabytes (MB). Workers automatically remove older cache files to remain below this quota

  • --ssdreserve <SSDRESERVE>: The amount of space to initially reserve on the SSD for this worker

  • --lane <LANE>: Which of CryoSPARC's scheduler lanes to add this worker to. Use with --newlane to create a new lane with the unique name <LANE>

  • --newlane: Create a new scheduler lane to use for this worker instead of using an existing one

  • --rams <NUM_RAMS>: An integer representing the number of 8GB system RAM slots to make available for allocation to CryoSPARC jobs. Allocation is based on job type-specific estimates and does not enforce a limit on the amount of RAM that a running job will eventually use

  • --cpus <NUM_CPUS>: An integer representing the number of CPU cores to make available for allocation to CryoSPARC jobs

  • --monitor_port <MONITOR_PORT>: Not used
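
To change an existing worker's registration, re-run connect with --update. For example, to point the worker at a new cache location (a sketch; the hostname and path are illustrative):

cryosparcw connect \
  --worker $(hostname -f) \
  --master master.example.com \
  --update \
  --ssdpath /scratch/new_cryosparc_cache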

cryosparcw gpulist

Lists which GPUs the CryoSPARC worker processes have access to.

Use this to verify that the worker is installed correctly.
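
Output resembles the following (illustrative; the device count, IDs, and names depend on the machine):

Detected 2 CUDA devices.

   id           pci-bus  name
   ---------------------------------------------------------------
       0      0000:01:00.0  NVIDIA GeForce RTX 3090
       1      0000:02:00.0  NVIDIA GeForce RTX 3090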

cryosparcw ipython

Starts an IPython shell in CryoSPARC's worker environment.
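
For example, the shell can be used to inspect packages available in the worker's Python environment (the package checked here is illustrative):

cryosparcw ipython
In [1]: import numpy; numpy.__version__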

cryosparcw patch

Installs a patch that was previously downloaded on the master node with cryosparcm patch --download and then copied to the worker installation directory.

See cryosparcm patch documentation.
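
A minimal sketch of the full workflow, assuming the patch file is transferred to the worker manually (paths illustrative):

# On the master node, download the patch:
cryosparcm patch --download
# Copy the downloaded patch file into the worker installation
# directory, then on the worker:
cd /path/to/cryosparc_worker
bin/cryosparcw patch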

cryosparcw update

Used for manual worker updates. See Software Updates for details.
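
A minimal invocation, run from the worker installation directory after the updated cryosparc_worker package has been copied there (a sketch; see Software Updates for the full procedure):

cd /path/to/cryosparc_worker
bin/cryosparcw update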

cryosparcw newcuda <path> (CryoSPARC v4.3 and older)

CryoSPARC versions v4.4.0 and newer include a bundled CUDA Toolkit and no longer have this command.

Specifies a new path to the CUDA Toolkit installation to use. Example usage (the CUDA path is illustrative):
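
cryosparcw newcuda /usr/local/cuda-11.2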
