cryosparcm cli reference
How to use the cryosparcm utility for starting and stopping the CryoSPARC instance, checking status or logs, managing users and using CryoSPARC's command-line interface.
Call the methods described in this module directly as arguments to cryosparcm cli, or reference them from the cli object in cryosparcm icli.
- cli example:
cryosparcm cli "enqueue_job(project_uid='P3', job_uid='J42', lane='cryoem1')"
- icli example:
$ cryosparcm icli

connecting to cryoem5:61002 ...
cli, rtp, db, gfs and tools ready to use

In [1]: cli.enqueue_job(project_uid='P3', job_uid='J42', lane='cryoem1')

In [2]:
Allows the owner of a project (or an admin) to grant another user access to view and edit the project
- Parameters:
- project_uid (str) -- uid of the project to grant access to
- requester_user_id (str) -- _id of the user requesting a new user to be added to the project
- add_user_id (str) -- uid of the user to add to the project
- Raises: AssertionError
Adds a new lane to the master scheduler
- Parameters:
- name (str) -- name of the lane
- lanetype (str) -- type of lane ("cluster" or "node")
- title (str) -- optional title of the lane
- desc (str) -- optional description of the lane
add_scheduler_target_cluster(name, worker_bin_path, script_tpl, send_cmd_tpl='{{ command }}', qsub_cmd_tpl='qsub {{ script_path_abs }}', qstat_cmd_tpl='qstat -as {{ cluster_job_id }}', qstat_code_cmd_tpl=None, qdel_cmd_tpl='qdel {{ cluster_job_id }}', qinfo_cmd_tpl='qstat -q', transfer_cmd_tpl='cp {{ src_path }} {{ dest_path }}', cache_path=None, cache_quota_mb=None, cache_reserve_mb=10000, title=None, desc=None)
Add a cluster to the master scheduler
- Parameters:
- name (str) -- name of cluster
- worker_bin_path (str) -- absolute path to 'cryosparc_package/cryosparc_worker/bin' on the cluster
- script_tpl (str) -- script template string
- send_cmd_tpl (str) -- send command template string
- qsub_cmd_tpl (str) -- queue submit command template string
- qstat_cmd_tpl (str) -- queue stat command template string
- qdel_cmd_tpl (str) -- queue delete command template string
- qinfo_cmd_tpl (str) -- queue info command template string
- transfer_cmd_tpl (str) -- transfer command template string (currently unused)
- cache_path (str) -- path on SSD that can be used for the cryosparc cache
- cache_quota_mb (int) -- the max size (in MB) to use for the cache on the SSD
- cache_reserve_mb (int) -- size (in MB) to set aside for uses other than CryoSPARC cache
- title (str) -- an optional title to give to the cluster
- desc (str) -- an optional description of the cluster
- Returns: configuration parameters for new cluster
- Return type: dict
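- Example (a sketch only; the template values below assume a SLURM-style cluster and are illustrative, not a complete configuration -- clusters are normally registered with cryosparcm cluster connect):
add_scheduler_target_cluster(name='slurm', worker_bin_path='/opt/cryosparc/cryosparc_worker/bin', script_tpl=open('cluster_script.sh').read(), qsub_cmd_tpl='sbatch {{ script_path_abs }}', qstat_cmd_tpl='squeue -j {{ cluster_job_id }}', qdel_cmd_tpl='scancel {{ cluster_job_id }}', qinfo_cmd_tpl='sinfo', cache_path='/scratch/cryosparc_cache')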
Adds a worker node to the master scheduler
- Parameters:
- hostname (str) -- the hostname of the target worker node
- ssh_str (str) -- the ssh connection string of the worker node
- worker_bin_path (str) -- the absolute path to 'cryosparc_package/cryosparc_worker/bin' on the worker node
- num_cpus (int) -- total number of CPU threads available on the worker node
- cuda_devs (list) -- CUDA-capable devices (GPUs) on the worker node, given as a list of device indices (e.g., list(range(4)) for four GPUs)
- ram_mb (float) -- total available physical ram in MB
- has_ssd (bool) -- if an ssd is available or not
- cache_path (str) -- path on SSD that can be used for the cryosparc cache
- cache_quota (int) -- the max size (in MB) to use for the cache on the SSD
- cache_reserve (int) -- size (in MB) to initially reserve for the cache on the SSD
- gpu_info (list) -- compiled GPU information as computed by get_gpu_info
- lane (str) -- the scheduler lane to add the worker node to
- title (str) -- an optional title to give to the worker node
- desc (str) -- an optional description of the worker node
- Returns: configuration parameters for new worker node
- Return type: dict
- Raises: AssertionError
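- Example (a sketch; the method name add_scheduler_target_node is an assumption based on the description above, and the hardware values are illustrative -- worker nodes are normally registered with cryosparcw connect):
add_scheduler_target_node(hostname='worker1', ssh_str='cryosparcuser@worker1', worker_bin_path='/opt/cryosparc/cryosparc_worker/bin', num_cpus=32, cuda_devs=list(range(4)), ram_mb=262144.0, has_ssd=True, cache_path='/scratch/cryosparc_cache', lane='default')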
Tag the given job with the given tag UID
- Parameters:
- project_uid (str) -- target project UID, e.g., "P3"
- job_uid (str) -- target job UID, e.g., "J42"
- tag_uid (str) -- target tag UID, e.g., "T1"
- Returns: contains modified jobs count
- Return type: dict
Tag the given project with the given tag
- Parameters:
- project_uid (str) -- target project UID, e.g., "P3"
- tag_uid (str) -- target tag UID, e.g., "T1"
- Returns: contains modified documents count
- Return type: dict
Tag the given session with the given tag
- Parameters:
- project_uid (str) -- target project UID, e.g., "P3"
- session_uid (str) -- target session UID, e.g., "S1"
- tag_uid (str) -- target tag UID, e.g., "T1"
- Returns: contains modified sessions count
- Return type: dict
Tag the given workspace with the given tag
- Parameters:
- project_uid (str) -- target project UID, e.g., "P3"
- workspace_uid (str) -- target workspace UID, e.g. "W1"
- tag_uid (str) -- target tag UID, e.g., "T1"
- Returns: contains modified workspaces count
- Return type: dict
Returns True if there exists at least one user with admin privileges, False otherwise
- Returns: Whether an admin exists
- Return type: bool
Archive given project. This means that the project can no longer be modified and jobs cannot be created or run. Once archived, the project directory may be safely moved to long-term storage.
- Parameters: project_uid (str) -- target project UID, e.g., "P3"
Attach a project by importing it from the given path and writing a new lockfile. May only run this on previously-detached projects.
- Parameters:
- owner_user_id (str) -- user account ID performing this operation; this user takes ownership of the imported project
- abs_path_export_project_dir (str) -- absolute path to directory of CryoSPARC project to attach
- Returns: new project UID of attached project
- Return type: str
Find intermediate results, calculate their total size, and save it to the job document, all in a PostResponseThread
Check that the given project exists and has not been deleted
- Parameters: project_uid (str) -- unique ID of target project, e.g., "P3"
- Returns: True if the project exists and hasn't been deleted
- Return type: bool
Returns True if target workspace exists.
- Parameters:
- project_uid (str) -- target project UID, e.g., "P3"
- workspace_uid (str) -- target workspace UID, e.g., "W4"
- Returns: True if the workspace exists, False otherwise
- Return type: bool
Clean up project or workspace data, clearing or deleting jobs based on final result status, sections, types, or status
Cleanup jobs helper for cleanup_data
Asynchronously remove intermediate results from the given project or job
- Parameters:
- project_uid (str) -- target project UID, e.g., "P3"
- workspace_uid (str, optional) -- clear intermediate results for all jobs in a workspace
- job_uid (str, optional) -- target job UID, e.g., "J42", defaults to None
- always_keep_final (bool, optional) -- not used, defaults to True
Clear a job to get it back to building state (do not clear params or inputs)
- Parameters:
- project_uid (str) -- uid of the project that contains the job to clear
- job_uid (str) -- uid of the job to clear
- nofail (bool) -- If True, ignores errors that occur while job is clearing, defaults to False
- isdelete (bool) -- Set to True when the job is about to be deleted, defaults to False
- Raises: AssertionError
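- Example (a sketch; the method name clear_job is an assumption based on the description above, not confirmed in this reference):
clear_job(project_uid='P3', job_uid='J42')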
Clear all jobs from a project belonging to specified sections in the job register, optionally scoped to a workspace
Clear all jobs from a project belonging to specified types, optionally scoped to a workspace
Creates a new job as a clone of the provided job
- Parameters:
- project_uid (str) -- project UID
- workspace_uid (str | None) -- uid of the workspace, may be empty
- job_uid (str) -- uid of the job to copy
- created_by_user_id (str) -- the id of the user creating the clone
- created_by_job_uid (str, optional) -- uid of the job creating the clone, defaults to None
- Returns: the job uid of the newly created clone
- Return type: str
Clone jobs that directly descend from the given start job UID up to the given end job UID. Returns a dict with information about cloned jobs, or None if nothing was cloned
- Parameters:
- project_uid (str) -- project UID where jobs are located, e.g., "P3"
- start_job_uid (str) -- starting ancestor job UID
- end_job_uid (str) -- ending descendant job UID
- created_by_user_id (str) -- ID of user performing this operation
- workspace_uid (str, optional) -- uid of workspace to clone jobs into, defaults to None
- new_workspace_title (str, optional) -- Title of new workspace to create if a uid is not provided, defaults to None
- Returns: dictionary with information about created jobs and workspace or None
- Return type: dict | None
Clone the given list of jobs. If any jobs are related, it will try to re-create the input connections between the cloned jobs (but maintain the same connections to jobs that were not cloned)
- Parameters:
- project_uid (str) -- target project uid (e.g., "P3")
- job_uids (list) -- list of job UIDs to clone, e.g., ["J1", "J2", "J3"]
- created_by_user_id (str) -- ID of user performing this operation
- workspace_uid (str, optional) -- uid of workspace to clone jobs into, defaults to None
- new_workspace_title (str, optional) -- Title of new workspace to create if one is not provided, defaults to None
- Returns: dictionary with information about created jobs and workspace
- Return type: dict
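- Example (a sketch; the method name clone_jobs is an assumption based on the description above):
clone_jobs(project_uid='P3', job_uids=['J1', 'J2', 'J3'], created_by_user_id='62b64b77632103020e4e30a7', new_workspace_title='Cloned jobs')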
Compile extensive validation benchmark data for upload
Create a database backup in the given directory with the given file name
- Parameters:
- backup_dir (str) -- directory to create backup in
- backup_file (str) -- filename of backup file
Creates a new project and project directory and creates a new document in the project collection
project_container_dir is an absolute path that the user guarantees is available everywhere.
- Parameters:
- owner_user_id (str) -- the _id of the user that owns the new project (which is also the user that requests to create the project)
- project_container_dir (str) -- the path to the "root" directory in which to create the new project directory
- title (str, optional) -- the title of the new project to create, defaults to None
- desc (str, optional) -- the description of the new project to create, defaults to None
- export (bool, optional) -- if True, outputs project details to disk for exporting to another instance, defaults to True
- hidden (bool, optional) -- if True, marks the new project as hidden, defaults to False
- Returns: the new uid of the project that was created
- Return type: str
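- Example (a sketch; the method name create_empty_project is an assumption based on the description above, and the paths are illustrative):
create_empty_project(owner_user_id='62b64b77632103020e4e30a7', project_container_dir='/data/cryosparc_projects', title='New project')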
Add a new empty workspace to the given project.
- Parameters:
- project_uid (str) -- target project UID, e.g., "P3"
- created_by_user_id (str) -- User _id that creates this workspace
- created_by_job_uid (str, optional) -- Job UID that creates this workspace, defaults to None
- title (str, optional) -- Workspace title, defaults to None
- desc (str, optional) -- Workspace description, defaults to None
- export (bool, optional) -- If True, dumps workspace contents to disk for exporting to other CryoSPARC instances, defaults to True
- Returns: UID of new workspace, e.g., "W4"
- Return type: str
Create a new job
This request runs on its own thread but locks on the "jobs" lock, so only one create can happen at a time. This means that the builder functions below can take as much time as they need in their own thread (releasing the GIL during IO or numpy work, etc.) and command can still respond to other requests that don't require this lock.
- Parameters:
- job_type (str) -- the name of the job to create
- project_uid (str) -- uid of the project
- workspace_uid (str) -- uid of the workspace
- created_by_user_id (str) -- the id of the user creating the job
- created_by_job_uid (str) -- uid of the job that created this job
- title (str) -- optional title of the job
- desc (str) -- optional description of the job
- do_layout (bool) -- specifies if the job tree layout should be recalculated, defaults to True
- enable_bench (bool) -- should performance be measured for this job (benchmarking)
- Returns: uid of the newly created job
- Return type: str
- Raises: AssertionError
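- Example (a sketch; the method name create_new_job is an assumption based on the description above; "abinit" is the job type example used later in this reference):
create_new_job(job_type='abinit', project_uid='P3', workspace_uid='W1', created_by_user_id='62b64b77632103020e4e30a7', title='Ab-initio test')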
Create a new tag
- Parameters:
- title (str) -- tag name
- type (str) -- tag type such as "general", "project", "workspace", "session" or "job"
- created_by_user_id (str) -- user account ID performing this action
- colour (str, optional) -- tag colour, may be "black", "red", "green", etc., defaults to None
- description (str, optional) -- detailed tag description, defaults to None
- Returns: created tag document
- Return type: dict
Creates a new CryoSPARC user account
- Parameters:
- created_by_user_id (str) -- identity of user who is creating the new user
- email (str) -- email of user to create
- password (str) -- password of user to create
- admin (bool, optional) -- specifies if user should have an administrator role or not, defaults to False
- Returns: mongo _id of user that was created
- Return type: str
Deletes large database entries such as streamlogs and gridFS for a detached project and marks the project and its jobs as deleted
Deletes a job after killing it (if running), clearing it, setting the "deleted" property, and recalculating the tree layout
- Parameters:
- project_uid (str) -- uid of the project that contains the job to delete
- job_uid (str) -- uid of the job to delete
- relayout (bool) -- specifies if the tree layout should be recalculated or not
Delete the given project-job UID combinations, provided as [("PX", "JY"), ...] or [["PX", "JY"], ...], where PX is a project UID and JY is a job UID in that project
- Parameters:
- project_job_uids (list[tuple[str, str]]) -- project-job UIDs to delete
- relayout (bool, optional) -- whether to recompute the layout of the job tree, defaults to True
- nofail (bool, optional) -- if True, ignore errors when deleting, defaults to False
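- Example (a sketch; the method name delete_jobs is an assumption based on the description above, using the tuple format described there):
delete_jobs(project_job_uids=[('P3', 'J42'), ('P3', 'J43')], relayout=True)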
Iterate through each job and workspace associated with the project to be deleted and delete both, and then disable the project.
- Parameters:
- project_uid (str) -- uid of the project to be deleted
- request_user_id (str) -- _id of user requesting the project to be deleted
- jobs_to_delete (list) -- list of all jobs to be fully deleted
- workspaces_to_delete (list) -- list of all workspaces that will be deleted by association
- Returns: string confirmation that the project has been deleted
- Return type: str
- Raises: AssertionError -- if for some reason a workspace isn't deleted, this will be thrown
Removes a user's access from a project.
- Parameters:
- project_uid (str) -- uid of the project to revoke access to
- requester_user_id (str) -- _id of the user requesting a user to be removed from the project
- delete_user_id (str) -- uid of the user to remove from the project
- Raises: AssertionError
Remove a user from the CryoSPARC instance. Only administrators may do this
- Parameters:
- email (str) -- user's email
- requesting_user_email (str) -- your CryoSPARC login email
- requesting_user_password (str) -- your CryoSPARC password
- Returns: confirmation message
- Return type: str
Asynchronously iterate through each job associated with the workspace to be deleted and either delete or update the job, and then disable the workspace
- Parameters:
- project_uid (str) -- uid of the project containing the workspace to be deleted
- workspace_uid (str) -- uid of the workspace to be deleted
- request_user_id (str) -- uid of the user that is requesting the workspace to be deleted
- jobs_to_delete_inside_one_workspace (list) -- list of all jobs to be fully deleted
- jobs_to_update_inside_multiple_workspaces (list) -- list of all jobs to be updated so that the workspace uid is removed from its list of "workspace_uids"
- Returns: string confirmation that the workspace has been deleted
- Return type: str
Detach a project by exporting, removing lockfile, then setting its detached property. This hides the project from the interface and allows other instances to attach and run this project.
- Parameters: project_uid (str) -- target project UID to detach, e.g., "P3"
Create and run a job on the "default" node
- Parameters:
- job_type (str) -- type of job, e.g., "abinit"
- puid (str, optional) -- project UID to create job in, defaults to 'P1'
- wuid (str, optional) -- workspace UID to create job in, defaults to 'W1'
- uuid (str, optional) -- User ID performing this action, defaults to 'devuser'
- params (str, optional) -- parameter overrides for the job, defaults to {}
- input_group_connects (dict, optional) -- input group connections dictionary, where each key is the input group name and each value is the parent output identifier, e.g., {"particles": "J1.particles"}, defaults to {}
- Returns: created job UID
- Return type: str
Get a summary of license validation checks
- Returns: Text description of validation checks separated by newlines
- Return type: str
Add the job in the given project to the queue for the given worker lane (default lane if not specified)
- Parameters:
- project_uid (str) -- uid of the project containing the job to queue
- job_uid (str) -- job uid to queue
- lane (str, optional) -- name of the worker lane onto which to queue
- hostname (str, optional) -- the hostname of the target worker node
- gpus (list, optional) -- list of GPU indexes on which to queue the given job, defaults to False
- no_check_inputs_ready (bool, optional) -- if True, forgoes checking whether inputs are ready (not recommended), defaults to False
- Returns: job status (e.g., 'launched' if success)
- Return type: str
- Example:
enqueue_job('P3', 'J42', lane='cryoem1', user_id='62b64b77632103020e4e30a7')
Start export for the given job into the project's exports directory
- Parameters:
- project_uid (str) -- target project UID, e.g., "P3"
- job_uid (str) -- target job UID to export, e.g., "J42"
Start export of given output result group. Optionally specify which output slots to select for export
- Parameters:
- project_uid (str) -- target project UID, e.g., "P3"
- job_uid (str) -- target job UID, e.g., "J42"
- output_result_group_name (str) -- target output, e.g., "particles"
- result_names (list, optional) -- target result slots list, e.g., ["blob", "location"], exports all if unspecified, defaults to None
Ensure the given project is ready for transfer to another instance. Call detach_project once this is complete.
- Parameters:
- project_uid (str) -- target project UID, e.g., "P3"
- override (bool, optional) -- force run even if recently exported, defaults to False
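- Example (a sketch of the export-then-detach flow; export_project is an assumed method name for the call described above, while detach_project is the method referenced in the description):
export_project(project_uid='P3')
detach_project(project_uid='P3')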
Generates a new uid for the CryoSPARC instance.
- Parameters: force_takeover_projects (bool, optional) -- If True, overwrites existing lockfiles. If False, only creates lockfiles in projects that don't already have one. Defaults to False
- Returns: New instance UID
- Return type: str
Get a list of available tag documents
- Returns: list of tags
- Return type: list
Returns template arguments for cluster commands, including: project_uid, job_uid, job_creator, cryosparc_username, project_dir_abs, job_dir_abs, job_log_path_abs, job_type
Get cryosparc service logs, filterable by date, name, function, and level
- Parameters:
- service_name (str) -- target service name
- days (int, optional) -- number of days of logs to retrieve, defaults to 7
- date (str, optional) -- retrieve logs from a specific day formatted as "YYYY-MM-DD", defaults to None
- max_lines (int, optional) -- maximum number of lines to retrieve, defaults to None
- log_name (str, optional) -- name of the internal logger to filter by, defaults to ""
- func_name (str, optional) -- name of Python function producing logs, defaults to ""
- level (str, optional) -- log severity level such as "INFO", "WARNING" or "ERROR", defaults to ""
- Returns: Filtered log
- Return type: str
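- Example (a sketch; the method name get_service_logs is assumed purely for illustration and is not given in this reference; it shows filtering one day of "command_core" logs by level):
get_service_logs('command_core', date='2023-01-15', level='ERROR')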
Get the default job priority for jobs queued by the given user ID
- Parameters: created_by_user_id (str) -- target user account _id
- Returns: the integer priority, with 0 being the highest priority
- Return type: int
Asynchronously update the available GPU information on all connected nodes
Default priority for jobs queued by users without an explicit priority set in Admin > User Management
- Returns: Default priority config variable, 0 if never set
- Return type: int
Get a job object, optionally with only the fields specified in *args
- Parameters:
- project_uid (str) -- project UID
- job_uid (str) -- uid of job
- Returns: mongo result set
- Return type: dict
- Example:
get_job('P1', 'J1', 'status', 'job_type', 'project_uid')
Get a list of all jobs that are descendants of the start job and ancestors of the end job.
- Parameters:
- project_uid (str) -- project UID where jobs are located, e.g., "P3"
- start_job_uid (str) -- starting ancestor job UID
- end_job_uid (str) -- ending descendant job UID
- Returns: list of jobs in the job chain
- Return type: list
Returns job creator and username strings for cluster job submission
Get the path to the given job's directory
- Parameters:
- project_uid (str) -- target project UID, e.g., "P3"
- job_uid (str) -- target job UID, e.g., "J42"
- Returns: absolute path to the job directory
- Return type: str
Get the full contents of the given job's standard output log
- Parameters:
- project_uid (str) -- target project UID, e.g., "P3"
- job_uid (str) -- target job UID, e.g., "J42"
- Returns: job log contents
- Return type: str
Get the path to the given job's standard output log
- Parameters:
- project_uid (str) -- target project UID, e.g., "P3"
- job_uid (str) -- target job UID, e.g., "J42"
- Returns: absolute path to job log file
- Return type: str
Get the minimum expected fields description for the given output name.
Get message stating why a job is queued but not running
- Parameters:
- project_uid (str) -- target project UID, e.g., "P3"
- job_uid (str) -- job UID, e.g., "J42"
- Returns: queue message
- Return type: str
Get the output_results item that matches this source result, where the source result has the form:
source_result = JXX.output_group_name.result_name
- Parameters:
- project_uid (str) -- id of the project
- source_result (str) -- Source result
- Returns: the result details from the database
- Return type: dict
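- Example (a sketch; only the source_result format from the description above is illustrated, since the method name is not given in this reference; the group and slot names are the ones used elsewhere on this page):
source_result = 'J42.particles.blob'  # JXX.output_group_name.result_name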
Retrieve available jobs, organized by section
- Returns: job sections
- Return type: list[dict[str, Any]]
Get the status of the given job
- Parameters:
- project_uid (str) -- target project UID, e.g., "P3"
- job_uid (str) -- target job UID, e.g., "J42"
- Returns: job status such as "building", "queued" or "completed"
- Return type: str
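- Example (a sketch; the method name get_job_status is an assumption based on the description above):
get_job_status(project_uid='P3', job_uid='J42')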
Get a list of dictionaries representing the given job's event log
- Parameters:
- project_uid (str) -- target project UID, e.g., "P3"
- job_uid (str) -- target job UID, e.g., "J42"
- Returns: event log contents
- Return type: list
Get a list of symbolic links in the given job directory
- Parameters:
- project_uid (str) -- target project UID, e.g., "P3"
- job_uid (str) -- target job UID, e.g., "J42"
- Returns: list of symlink paths, either absolute or relative to the job directory
- Return type: list
Get all jobs in a project belonging to the specified sections in the job register, optionally scoped to a workspace
Get the list of jobs with the given status (or the count if count_only is True)
- Parameters:
- status (str) -- Possible job status such as "building", "queued" or "completed"
- count_only (bool, optional) -- If True, only return the integer count, defaults to False
- Returns: List of job documents or count
- Return type: list | int
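- Example (a sketch; the method name get_jobs_by_status is an assumption based on the description above):
get_jobs_by_status('queued', count_only=True)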
Get all jobs matching the given types, optionally scoped to a workspace