Run all commands in this section while logged into the workstation or worker nodes where the
cryosparc_worker package is installed.
Check that cryosparcw is in the worker's PATH with this command:
$ which cryosparcw
and ensure the output is not empty.
Alternatively, navigate to the worker installation directory and run commands from there:
cd /path/to/cryosparc_worker
bin/cryosparcw gpulist
Same as cryosparcm env, but for worker nodes.
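If cryosparcw env prints shell export statements the way cryosparcm env does (an assumption based on the equivalence noted above), the variables can be loaded into the current shell with:
$ eval $(cryosparcw env)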
Run this command from a worker node or workstation to register a worker with the master, or to edit the configuration of an existing worker.
Run cryosparcw connect --help to see full usage details.
$ cryosparcw connect --help
usage: connect.py [-h] [--worker WORKER] [--master MASTER] [--port PORT]
                  [--sshstr SSHSTR] [--update] [--nogpu] [--gpus GPUS]
                  [--nossd] [--ssdpath SSDPATH] [--ssdquota SSDQUOTA]
                  [--ssdreserve SSDRESERVE] [--lane LANE] [--newlane]
                  [--rams RAMS] [--cpus CPUS] [--monitor_port MONITOR_PORT]

Connect to cryoSPARC master

optional arguments:
  -h, --help            show this help message and exit
  --worker WORKER
  --master MASTER
  --port PORT
  --sshstr SSHSTR
  --update
  --nogpu
  --gpus GPUS
  --nossd
  --ssdpath SSDPATH
  --ssdquota SSDQUOTA
  --ssdreserve SSDRESERVE
  --lane LANE
  --newlane
  --rams RAMS
  --cpus CPUS
  --monitor_port MONITOR_PORT
Example command to connect the worker on a single-machine workstation, placing it on a new resource lane:
cryosparcw connect \
    --worker localhost \
    --master localhost \
    --port 39000 \
    --ssdpath /scratch/cryosparc_cache \
    --lane default \
    --newlane
Overview of available options.
--worker <WORKER>: (Required) Hostname that the master can use to access the worker. If the master and worker are running on the same machine (e.g., a workstation), specify --worker localhost. If the master and worker are on the same local network and the master has SSH access to the worker, enter the worker machine's hostname.
--master <MASTER>: (Required) Hostname of the machine on the local network where the cryoSPARC master is running
--port <PORT>: Port on which the master node is running (default: 39000)
--update: Update an existing worker configuration instead of registering a new one (see the example after this list)
--sshstr <SSHSTR>: SSH login string (e.g., user@hostname) that the master uses to send commands to the worker
--nogpu: Register a worker without any GPUs installed or with GPU access disabled
--gpus <GPUS>: Comma-separated list of GPU slot indexes that this worker has access to, starting at 0. For example, to enable only the last two GPUs on a 4-GPU machine, enter --gpus 2,3
--nossd: If specified, this worker does not have access to a Solid-State Drive (SSD) to use for caching particle data
--ssdpath <SSDPATH>: Path to the mounted SSD on the file system to use for caching, e.g., --ssdpath /scratch/cryosparc_cache
--ssdquota <SSDQUOTA>: The maximum amount of space on the SSD that cryoSPARC uses on this worker, in megabytes (MB). Workers automatically remove older cache files to remain below this quota
--ssdreserve <SSDRESERVE>: The amount of space to initially reserve on the SSD for this worker, in megabytes (MB)
--lane <LANE>: Which of cryoSPARC's scheduler lanes to add this worker to. Use with --newlane to automatically create a new lane
--newlane: Create a new scheduler lane to use for this worker instead of using an existing one
--rams <RAMS>: Comma-separated list of 8GB Random Access Memory (RAM) slot indexes to use for cryoSPARC jobs
--cpus <CPUS>: Comma-separated list of CPU core indexes to use for cryoSPARC jobs
--monitor_port <MONITOR_PORT>: Not used
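As an illustration of --update described above, here is a sketch of re-registering an existing worker to change its cache quota; the hostnames and quota value are placeholders:
cryosparcw connect \
    --worker worker1.example.com \
    --master master.example.com \
    --port 39000 \
    --update \
    --ssdquota 500000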
Lists which GPUs the cryoSPARC worker processes have access to.
$ cryosparcw gpulist
Detected 4 CUDA devices.

  id  pci-bus       name
  ---------------------------------------------------------------
   0  0000:02:00.0  Quadro GP100
   1  0000:84:00.0  Quadro GP100
   2  0000:83:00.0  GeForce GTX 1080 Ti
   3  0000:03:00.0  GeForce GTX TITAN X
  ---------------------------------------------------------------
Use this to verify that the worker is installed correctly.
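As an independent sanity check, the devices reported by gpulist should match what NVIDIA's standard monitoring tool shows on the same machine:
$ nvidia-smi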
Starts an ipython shell in cryoSPARC's worker environment.
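For example, assuming the subcommand is invoked as cryosparcw ipython (an assumption; the exact name is not shown in this section), starting the shell looks like:
$ cryosparcw ipython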
Specifies a new path to the CUDA installation to use. Example usage:
cryosparcw newcuda /usr/local/cuda-10.0
Install a patch that was previously downloaded on the master node with cryosparcm patch --download and then copied to the worker installation directory.
Used for manual worker updates. See Software Updates for details.
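A sketch of that workflow on the worker, assuming the subcommand is bin/cryosparcw patch; the paths and patch file name here are placeholders:
cp /path/to/downloaded_patch_file.tar.gz /path/to/cryosparc_worker/
cd /path/to/cryosparc_worker
bin/cryosparcw patch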