cryosparcw reference

How to use the cryosparcw utility for managing cryoSPARC workers

Worker Management with cryosparcw

Run all commands in this section while logged into the workstation or worker nodes where the cryosparc_worker package is installed.

Verify that cryosparcw is in the worker's PATH with this command:

which cryosparcw

and ensure the output is not empty.
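
If cryosparcw is on the PATH, the output is the absolute path of the script (the location below is illustrative; it depends on where the worker package is installed):

$ which cryosparcw
/path/to/cryosparc_worker/bin/cryosparcw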

Alternatively, navigate to the worker installation directory and run the script at bin/cryosparcw. Example:

cd /path/to/cryosparc_worker
bin/cryosparcw gpulist

cryosparcw env

Equivalent to cryosparcm env, but for worker nodes.
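
For example, to load the worker environment variables into the current shell (this assumes cryosparcw env prints export statements, as cryosparcm env does):

eval $(cryosparcw env)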

cryosparcw call <command>

Execute a shell command in a transient CryoSPARC worker shell environment. For example,

cryosparcw call which python

prints the path of the CryoSPARC worker environment’s Python executable.
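
Any shell command can be run this way. For example, assuming the NVIDIA driver utility nvidia-smi is installed on the worker, the following checks GPU visibility from inside the worker environment:

cryosparcw call nvidia-smi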

cryosparcw connect <options>

Run this command from a worker node or workstation to register a worker with the master, or to edit the configuration on an existing worker.

Enter cryosparcw connect --help to see full usage details.

$ cryosparcw connect --help
usage: connect.py [-h] [--worker WORKER] [--master MASTER] [--port PORT]
                  [--sshstr SSHSTR] [--update] [--nogpu] [--gpus GPUS]
                  [--nossd] [--ssdpath SSDPATH] [--ssdquota SSDQUOTA]
                  [--ssdreserve SSDRESERVE] [--lane LANE] [--newlane]
                  [--rams RAMS] [--cpus CPUS] [--monitor_port MONITOR_PORT]

Connect to cryoSPARC master

optional arguments:
  -h, --help            show this help message and exit
  --worker WORKER
  --master MASTER
  --port PORT
  --sshstr SSHSTR
  --update
  --nogpu
  --gpus GPUS
  --nossd
  --ssdpath SSDPATH
  --ssdquota SSDQUOTA
  --ssdreserve SSDRESERVE
  --lane LANE
  --newlane
  --rams RAMS
  --cpus CPUS
  --monitor_port MONITOR_PORT

Example command to connect a worker in a single-machine workstation setup, on a new resource lane:

cryosparcw connect \
    --worker localhost \
    --master localhost \
    --port 39000 \
    --ssdpath /scratch/cryosparc_cache \
    --lane default \
    --newlane

Overview of the available options:

  • --worker <WORKER>: (Required) Hostname that the master can use to access the worker. If the master and worker are running on the same machine (e.g., a workstation), specify --worker localhost. If the master and worker are on the same local network and the master has SSH access to the worker, enter --worker $(hostname)

  • --master <MASTER>: (Required) Hostname of the machine on the local network where the cryoSPARC master is running

  • --port <PORT>: Port on which the master node is running (default 39000)

  • --update: Update an existing worker configuration instead of registering a new one (see the example after this list)

  • --sshstr <SSHSTR>: SSH login string for the master to use when sending commands to the worker (default: $(whoami)@<WORKER>)

  • --nogpu: Register a worker without any GPUs installed, or with GPU access disabled

  • --gpus <GPUS>: Comma-separated list of GPU slot indexes that this worker has access to, starting from 0. e.g., to enable only the last two GPUs on a 4-GPU machine, enter --gpus 2,3

  • --nossd: If specified, this worker does not have access to a Solid-State Drive (SSD) to use for caching particle data

  • --ssdpath <SSDPATH>: Path to the location of the mounted SSD drive on the file system to use for caching. e.g., --ssdpath /scratch/cryosparc_cache

  • --ssdquota <SSDQUOTA>: The maximum amount of space on the SSD that cryoSPARC uses on this worker, in megabytes (MB). Workers automatically remove older cache files to remain below this quota

  • --ssdreserve <SSDRESERVE>: The amount of space to initially reserve on the SSD for this worker, in megabytes (MB)

  • --lane <LANE>: Which of cryoSPARC's scheduler lanes to add this worker to. Use with --newlane to automatically create a new lane

  • --newlane: Create a new scheduler lane to use for this worker instead of using an existing one

  • --rams <RAMS>: Comma-separated list of 8 GB Random Access Memory (RAM) slot indexes to use for cryoSPARC jobs

  • --cpus <CPUS>: Comma-separated list of CPU core indexes to use for cryoSPARC jobs

  • --monitor_port <MONITOR_PORT>: Not used
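
Example command to modify an existing worker registration with --update, here raising the SSD cache quota (the quota value is illustrative):

cryosparcw connect \
    --worker localhost \
    --master localhost \
    --port 39000 \
    --update \
    --ssdquota 500000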

cryosparcw gpulist

Lists which GPUs the CryoSPARC worker processes have access to.

$ cryosparcw gpulist
  Detected 4 CUDA devices.

   id           pci-bus  name
   ---------------------------------------------------------------
       0      0000:02:00.0  Quadro GP100
       1      0000:84:00.0  Quadro GP100
       2      0000:83:00.0  GeForce GTX 1080 Ti
       3      0000:03:00.0  GeForce GTX TITAN X
   ---------------------------------------------------------------

Use this command to verify that the worker is installed correctly.
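
The id column gives the slot indexes that --gpus expects. For example, to restrict the worker above to the two Quadro GP100 devices, re-run the registration with --update (a sketch; hostnames as registered):

cryosparcw connect \
    --worker localhost \
    --master localhost \
    --update \
    --gpus 0,1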

cryosparcw ipython

Starts an IPython shell in CryoSPARC's worker environment.

cryosparcw patch

Install a patch that was previously downloaded on the master node with cryosparcm patch --download and copied to the worker installation directory.

See cryosparcm patch documentation.
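
A sketch of the full patch workflow, assuming the worker patch archive is named cryosparc_worker_patch.tar.gz and the worker host is worker1 (both illustrative; see the cryosparcm patch documentation for the actual file names):

# On the master node: download the patch files
cryosparcm patch --download

# Copy the worker patch archive to the worker installation directory
scp cryosparc_worker_patch.tar.gz worker1:/path/to/cryosparc_worker/

# On the worker: apply the patch
cd /path/to/cryosparc_worker
bin/cryosparcw patch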

cryosparcw update

Used for manual worker updates. See Software Updates for details.
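
A minimal sketch of a manual update, assuming the worker package archive cryosparc_worker.tar.gz has already been copied from the master into the worker installation directory (see Software Updates for the authoritative steps):

cd /path/to/cryosparc_worker
bin/cryosparcw update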

cryosparcw newcuda <path> (CryoSPARC v4.3 and older)

CryoSPARC versions v4.4.0 and newer include a bundled CUDA Toolkit and no longer have this command.

Specifies a new path to the CUDA installation to use. Example usage:

cryosparcw newcuda /usr/local/cuda-11.0
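
After switching toolkits, running cryosparcw gpulist is a quick way to confirm that the worker still detects the expected CUDA devices.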
