cryosparcw reference
How to use the cryosparcw utility for managing CryoSPARC workers
Worker Management with cryosparcw
cryosparcw
Run all commands in this section while logged into the workstation or worker nodes where the cryosparc_worker
package is installed.
Verify that cryosparcw is in the worker's PATH with this command:
which cryosparcw
and ensure the output is not empty.
Alternatively, navigate to the worker installation directory and run bin/cryosparcw directly. Example:
cd /path/to/cryosparc_worker
bin/cryosparcw gpulist
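If cryosparcw may or may not be on the PATH, a short shell check such as the following can fall back to the installation directory. This is only a sketch; /path/to/cryosparc_worker stands in for your actual worker installation path.
# Use cryosparcw from PATH if available, otherwise fall back to the install directory
if command -v cryosparcw > /dev/null 2>&1; then
  cryosparcw gpulist
else
  cd /path/to/cryosparc_worker && bin/cryosparcw gpulist
fi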
cryosparcw env
Equivalent to cryosparcm env, but for worker nodes.
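Since it behaves like cryosparcm env, the output can be loaded into the current shell for inspection. A minimal sketch, assuming the command prints shell export statements:
# Load the worker environment into the current shell, then check which Python is active
eval "$(cryosparcw env)"
which python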
cryosparcw call <command>
Execute a shell command in a transient CryoSPARC worker shell environment. For example, cryosparcw call which python prints the path of the CryoSPARC worker environment's Python executable.
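Any shell command available on the worker can be run the same way. The following sketches assume nvidia-smi is installed on the worker node:
# Check the GPUs and driver visible from the worker environment (requires nvidia-smi)
cryosparcw call nvidia-smi
# Print the version of the worker environment's Python interpreter
cryosparcw call python --version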
cryosparcw connect <options>
Run this command on the worker node that you wish to register with the master, or whose existing registration you wish to update.
Enter cryosparcw connect --help to see full usage details.
$ cryosparcw connect --help
usage: connect.py [-h] [--worker WORKER] [--master MASTER] [--port PORT]
                  [--sshstr SSHSTR] [--update] [--nogpu] [--gpus GPUS]
                  [--nossd] [--ssdpath SSDPATH] [--ssdquota SSDQUOTA]
                  [--ssdreserve SSDRESERVE] [--lane LANE] [--newlane]
                  [--rams RAMS] [--cpus CPUS] [--monitor_port MONITOR_PORT]
Connect to cryoSPARC master
optional arguments:
-h, --help show this help message and exit
--worker WORKER
--master MASTER
--port PORT
--sshstr SSHSTR
--update
--nogpu
--gpus GPUS
--nossd
--ssdpath SSDPATH
--ssdquota SSDQUOTA
--ssdreserve SSDRESERVE
--lane LANE
--newlane
--rams RAMS
--cpus CPUS
--monitor_port MONITOR_PORT
Example command to connect a worker on a new resource lane:
cryosparcw connect \
--worker $(hostname -f) \
--master csmaster.local \
--port 61000 \
--ssdpath /scratch/cryosparc_cache \
--lane $(hostname -s) \
--newlane
Overview of available options:
--worker <WORKER>: (Required) Hostname of the worker that the master can use to access the worker. If the master can resolve the worker's hostname, one may specify --worker $(hostname -f)
--master <MASTER>: (Required) Hostname or local IP address of the CryoSPARC master computer
--port <PORT>: Port on which the master node is running (default 39000)
--update: Update an existing worker configuration instead of registering a new one (see the example after this list)
--sshstr <SSHSTR>: SSH login string for the master to use to send commands to the worker (default $(whoami)@<WORKER>)
--nogpu: Register a worker without any GPUs installed or with GPU access disabled
--gpus <GPUS>: Comma-separated list of GPU slot indexes that this worker has access to, starting with 0. For example, to only enable the last two GPUs on a 4-GPU machine, enter --gpus 2,3
--nossd: If specified, this worker does not have access to a Solid-State Drive (SSD) to use for caching particle data
--ssdpath <SSDPATH>: Path to the location of the mounted SSD drive on the file system to use for caching, e.g., --ssdpath /scratch/cryosparc_cache
--ssdquota <SSDQUOTA>: The maximum amount of space on the SSD that CryoSPARC uses on this worker, in megabytes (MB). Workers automatically remove older cache files to remain below this quota
--ssdreserve <SSDRESERVE>: The amount of space to initially reserve on the SSD for this worker
--lane <LANE>: Which of CryoSPARC's scheduler lanes to add this worker to. Use with --newlane to create a new lane with the unique name <LANE>
--newlane: Create a new scheduler lane to use for this worker instead of using an existing one
--rams <NUM_RAMS>: An integer representing the number of 8GB system RAM slots to make available for allocation to CryoSPARC jobs. Allocation is based on job type-specific estimates and does not enforce a limit on the amount of RAM that a running job will eventually use
--cpus <NUM_CPUS>: An integer representing the number of CPU cores to make available for allocation to CryoSPARC jobs
--monitor_port <MONITOR_PORT>: Not used
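For example, to change the SSD cache quota of a worker that is already registered, re-run cryosparcw connect with --update. The following is a sketch; the master hostname and port are reused from the example above, and the quota value (in MB) is a placeholder:
cryosparcw connect \
--worker $(hostname -f) \
--master csmaster.local \
--port 61000 \
--update \
--ssdquota 500000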
cryosparcw gpulist
Lists which GPUs the CryoSPARC worker processes have access to.
$ cryosparcw gpulist
Detected 4 CUDA devices.
id pci-bus name
---------------------------------------------------------------
0 0000:02:00.0 Quadro GP100
1 0000:84:00.0 Quadro GP100
2 0000:83:00.0 GeForce GTX 1080 Ti
3 0000:03:00.0 GeForce GTX TITAN X
---------------------------------------------------------------
Use this to verify that the worker is installed correctly.
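To cross-check this listing against what the NVIDIA driver itself reports (assuming nvidia-smi is available on the worker), compare it with the output below; note that device ordering may differ between the two tools.
# List device index, PCI bus ID and name as seen by the driver
nvidia-smi --query-gpu=index,pci.bus_id,name --format=csv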
cryosparcw ipython
Starts an IPython shell in CryoSPARC's worker environment.
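For example, a trivial check using only the Python standard library:
$ cryosparcw ipython
In [1]: import sys; print(sys.executable)
The printed path should point at the Python interpreter bundled inside the cryosparc_worker installation.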
cryosparcw patch
Installs a patch that was previously downloaded on the master node with cryosparcm patch --download and then copied to the worker installation directory. See the cryosparcm patch documentation.
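A typical sequence might look like the sketch below. The archive name cryosparc_worker_patch.tar.gz and the hostname worker1 are assumptions for illustration; use the file name actually reported by cryosparcm patch --download and your own hosts and paths.
# On the master node: download the worker patch package
cryosparcm patch --download
# Copy the downloaded archive to the worker installation directory (file name is an assumption)
scp cryosparc_worker_patch.tar.gz worker1:/path/to/cryosparc_worker/
# On the worker node: apply the patch
cd /path/to/cryosparc_worker
bin/cryosparcw patch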
cryosparcw update
Used for manual worker updates. See Software Updates and Patches for details.
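At a high level, a manual update involves making the updated worker package available in the worker installation directory and running cryosparcw update there. The sketch below is an assumption-laden outline rather than the authoritative procedure (the path is a placeholder); follow the Software Updates and Patches page for the exact steps.
# On the worker node, after the updated worker package has been copied over
cd /path/to/cryosparc_worker
bin/cryosparcw update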
cryosparcw newcuda <path> (CryoSPARC v4.3 and older)
Specifies a new path to the CUDA installation to use. Example usage:
cryosparcw newcuda /usr/local/cuda-11.0