cryosparcw reference
How to use the cryosparcw utility for managing cryoSPARC workers
Worker Management with cryosparcw
Run all commands in this section while logged into the workstation or worker nodes where the cryosparc_worker package is installed.
Verify that cryosparcw is in the worker's PATH and that looking it up produces non-empty output. Alternatively, navigate to the worker installation directory and run bin/cryosparcw directly.
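A minimal check using the standard `which` lookup (the installation path shown is a placeholder):

```shell
# Non-empty output means cryosparcw is on the PATH:
which cryosparcw
# Alternatively, invoke it directly from the installation directory
# (replace the path with your actual cryosparc_worker location):
/path/to/cryosparc_worker/bin/cryosparcw
```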
cryosparcw env
Equivalent to cryosparcm env, but for worker nodes.
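For example, to inspect the worker's shell environment (a sketch; the eval pattern assumes the output is a list of export statements, as with cryosparcm env):

```shell
# Print the worker's environment variables:
cryosparcw env
# Load them into the current shell session (sketch):
eval "$(cryosparcw env)"
```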
cryosparcw call <command>
Execute a shell command in a transient CryoSPARC worker shell environment. For example, cryosparcw call which python prints the path of the CryoSPARC worker environment's Python executable.
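Other commands can be run the same way, for instance (a sketch):

```shell
# Print the worker environment's Python path:
cryosparcw call which python
# Check the Python version inside the worker environment:
cryosparcw call python --version
```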
cryosparcw connect <options>
Run this command from a worker node or workstation to register a worker with the master, or to edit the configuration of an existing worker.
Enter cryosparcw connect --help to see full usage details.
Example command to connect a worker on a single-machine workstation using a new resource lane.
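A sketch of such a command, built from the options described below (hostnames, cache path, and lane name are placeholders):

```shell
cryosparcw connect \
  --worker localhost \
  --master localhost \
  --port 39000 \
  --ssdpath /scratch/cryosparc_cache \
  --lane default \
  --newlane
```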
Overview of available options:
--worker <WORKER>: (Required) Hostname that the master can use to access the worker. If the master and worker are running on the same machine (e.g., a workstation), specify --worker localhost. If the master and worker are on the same local network and the master has SSH access to the worker, enter --worker $(hostname)
--master <MASTER>: (Required) Hostname of the master machine on the local network
--port <PORT>: Port on which the master node is running (default 39000)
--update: Update an existing worker configuration instead of registering a new one
--sshstr <SSHSTR>: SSH login string for the master to use to send commands to the worker (default $(whoami)@<WORKER>)
--nogpu: Register a worker without any GPUs installed or with GPU access disabled
--gpus <GPUS>: Comma-separated list of GPU slot indexes that this worker has access to, starting with 0. For example, to enable only the last two GPUs on a 4-GPU machine, enter --gpus 2,3
--nossd: If specified, this worker does not have access to a Solid-State Drive (SSD) to use for caching particle data
--ssdpath <SSDPATH>: Path to the location of the mounted SSD on the file system to use for caching, e.g., --ssdpath /scratch/cryosparc_cache
--ssdquota <SSDQUOTA>: The maximum amount of space on the SSD that cryoSPARC uses on this worker, in megabytes (MB). Workers automatically remove older cache files to remain below this quota
--ssdreserve <SSDRESERVE>: The amount of space to initially reserve on the SSD for this worker
--lane <LANE>: Which of cryoSPARC's scheduler lanes to add this worker to. Use with --newlane to automatically create a new lane
--newlane: Create a new scheduler lane to use for this worker instead of using an existing one
--rams <RAMS>: Comma-separated list of 8GB Random Access Memory (RAM) slot indexes to use for cryoSPARC jobs
--cpus <CPUS>: Comma-separated list of CPU core indexes to use for cryoSPARC jobs
--monitor_port <MONITOR_PORT>: Not used
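As a further illustration, an existing worker's cache quota could be changed with --update (hostnames and quota value are placeholders):

```shell
cryosparcw connect \
  --worker worker1.example.com \
  --master master.example.com \
  --update \
  --ssdquota 500000
```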
cryosparcw gpulist
Lists which GPUs the CryoSPARC worker processes have access to. Use this to verify that the worker is installed correctly.
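A typical verification flow might look like this (a sketch; hostnames and GPU indexes are placeholders):

```shell
# Confirm the expected GPUs are detected:
cryosparcw gpulist
# If only a subset should be used, update the registration accordingly:
cryosparcw connect --worker localhost --master localhost --update --gpus 0,1
```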
cryosparcw ipython
Starts an IPython shell in CryoSPARC's worker environment.
cryosparcw patch
Install a patch previously downloaded on the master node with cryosparcm patch --download and copied to the worker installation directory. See the cryosparcm patch documentation.
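The overall flow might look like this (the file name and paths are placeholders; consult the cryosparcm patch documentation for the actual artifact names):

```shell
# On the master node: download the patch files
cryosparcm patch --download
# Copy the worker patch into the worker installation directory (placeholder paths):
scp cryosparc_worker_patch.tar.gz worker1:/path/to/cryosparc_worker/
# On the worker node: apply the patch
cryosparcw patch
```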
cryosparcw update
Used for manual worker updates. See Software Updates for details.
cryosparcw newcuda <path> (CryoSPARC v4.3 and older)
CryoSPARC versions v4.4.0 and newer include a bundled CUDA Toolkit and no longer have this command. Specifies a new path to the CUDA installation to use.
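For example (applies only to v4.3 and older; the CUDA path is a placeholder for an actual local installation):

```shell
# Point the worker at a different CUDA Toolkit installation:
cryosparcw newcuda /usr/local/cuda-11.2
```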