cryosparcm cli reference
How to use the cryosparcm utility for starting and stopping the CryoSPARC instance, checking status or logs, managing users and using CryoSPARC's command-line interface.
Call the methods described in this module directly as arguments to cryosparcm cli, or reference them from the cli object in cryosparcm icli.
cli example:
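A minimal sketch of calling a method from the shell; the project and job UIDs are placeholders, and the entire call is passed as a single quoted string:
cryosparcm cli "get_job_status('P3', 'J42')"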
icli example:
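The same call as a sketch from the interactive shell, where the cli object is already connected (UIDs are placeholders):
cryosparcm icli
# then, inside the interactive shell:
cli.get_job_status("P3", "J42")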
add_project_user_access(project_uid: str, requester_user_id: str, add_user_id: str)
Allows owner of a project (or admin) to grant access to another user to view and edit the project
Parameters:
project_uid (str) -- uid of the project to grant access to
requester_user_id (str) -- _id of the user requesting a new user to be added to the project
add_user_id (str) -- uid of the user to add to the project
Raises: AssertionError
add_scheduler_lane(name: str, lanetype: str, title: str | None = None, desc: str = '')
Adds a new lane to the master scheduler
Parameters:
name (str) -- name of the lane
lanetype (str) -- type of lane ("cluster" or "node")
title (str) -- optional title of the lane
desc (str) -- optional description of the lane
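For example, a node-type lane could be added as follows (the lane name and title are placeholders):
cryosparcm cli "add_scheduler_lane('gpu-workers', 'node', title='GPU Workers')"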
add_scheduler_target_cluster(name, worker_bin_path, script_tpl, send_cmd_tpl='{{ command }}', qsub_cmd_tpl='qsub {{ script_path_abs }}', qstat_cmd_tpl='qstat -as {{ cluster_job_id }}', qstat_code_cmd_tpl=None, qdel_cmd_tpl='qdel {{ cluster_job_id }}', qinfo_cmd_tpl='qstat -q', transfer_cmd_tpl='cp {{ src_path }} {{ dest_path }}', cache_path=None, cache_quota_mb=None, cache_reserve_mb=10000, title=None, desc=None)
Add a cluster to the master scheduler
Parameters:
name (str) -- name of cluster
worker_bin_path (str) -- absolute path to 'cryosparc_package/cryosparc_worker/bin' on the cluster
script_tpl (str) -- script template string
send_cmd_tpl (str) -- send command template string
qsub_cmd_tpl (str) -- queue submit command template string
qstat_cmd_tpl (str) -- queue stat command template string
qdel_cmd_tpl (str) -- queue delete command template string
qinfo_cmd_tpl (str) -- queue info command template string
transfer_cmd_tpl (str) -- transfer command template string (currently unused)
cache_path (str) -- path on SSD that can be used for the cryosparc cache
cache_quota_mb (int) -- the max size (in MB) to use for the cache on the SSD
cache_reserve_mb (int) -- size (in MB) to set aside for uses other than CryoSPARC cache
title (str) -- an optional title to give to the cluster
desc (str) -- an optional description of the cluster
Returns: configuration parameters for new cluster
Return type: dict
add_scheduler_target_node(hostname, ssh_str, worker_bin_path, num_cpus, cuda_devs, ram_mb, has_ssd, cache_path=None, cache_quota=None, cache_reserve=10000, monitor_port=None, gpu_info=None, lane='default', title=None, desc=None)
Adds a worker node to the master scheduler
Parameters:
hostname (str) -- the hostname of the target worker node
ssh_str (str) -- the ssh connection string of the worker node
worker_bin_path (str) -- the absolute path to 'cryosparc_package/cryosparc_worker/bin' on the worker node
num_cpus (int) -- total number of CPU threads available on the worker node
cuda_devs (list) -- list of CUDA device (GPU) indices available on the worker node, e.g., [0, 1, 2, 3]
ram_mb (float) -- total available physical ram in MB
has_ssd (bool) -- whether an SSD is available
cache_path (str) -- path on SSD that can be used for the cryosparc cache
cache_quota (int) -- the max size (in MB) to use for the cache on the SSD
cache_reserve (int) -- size (in MB) to initially reserve for the cache on the SSD
gpu_info (list) -- compiled GPU information as computed by get_gpu_info
lane (str) -- the scheduler lane to add the worker node to
title (str) -- an optional title to give to the worker node
desc (str) -- an optional description of the worker node
Returns: configuration parameters for new worker node
Return type: dict
Raises: AssertionError
add_tag_to_job(project_uid: str, job_uid: str, tag_uid: str)
Tag the given job with the given tag UID
Parameters:
project_uid (str) -- target project UID, e.g., "P3"
job_uid (str) -- target job UID, e.g., "J42"
tag_uid (str) -- target tag UID, e.g., "T1"
Returns: contains modified jobs count
Return type: dict
add_tag_to_project(project_uid: str, tag_uid: str)
Tag the given project with the given tag
Parameters:
project_uid (str) -- target project UID, e.g., "P3"
tag_uid (str) -- target tag UID, e.g., "T1"
Returns: contains modified documents count
Return type: dict
add_tag_to_session(project_uid: str, session_uid: str, tag_uid: str)
Tag the given session with the given tag
Parameters:
project_uid (str) -- target project UID, e.g., "P3"
session_uid (str) -- target session UID, e.g., "S1"
tag_uid (str) -- target tag UID, e.g., "T1"
Returns: contains modified sessions count
Return type: dict
add_tag_to_workspace(project_uid: str, workspace_uid: str, tag_uid: str)
Tag the given workspace with the given tag
Parameters:
project_uid (str) -- target project UID, e.g., "P3"
workspace_uid (str) -- target workspace UID, e.g. "W1"
tag_uid (str) -- target tag UID, e.g., "T1"
Returns: contains modified workspaces count
Return type: dict
admin_user_exists()
Returns True if there exists at least one user with admin privileges, False otherwise
Returns: Whether an admin exists
Return type: bool
archive_project(project_uid: str)
Archive given project. This means that the project can no longer be modified and jobs cannot be created or run. Once archived, the project directory may be safely moved to long-term storage.
Parameters: project_uid (str) -- target project UID, e.g., "P3"
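A sketch of archiving a project from the shell (the project UID is a placeholder):
cryosparcm cli "archive_project('P3')"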
attach_project(owner_user_id: str, abs_path_export_project_dir: str)
Attach a project by importing it from the given path and writing a new lockfile. May only be run on previously-detached projects.
Parameters:
owner_user_id (str) -- user account ID performing this operation that will take ownership of the imported project.
abs_path_export_project_dir (str) -- absolute path to directory of CryoSPARC project to attach
Returns: new project UID of attached project
Return type: str
calculate_intermediate_results_size(project_uid: str, job_uid: str, always_keep_final: bool = True, use_prt=True)
Find intermediate results and calculate their total size, then save it to the job document, all in a PostResponseThread
check_project_exists(project_uid: str)
Check that the given project exists and has not been deleted
Parameters: project_uid (str) -- unique ID of target project, e.g., "P3"
Returns: True if the project exists and hasn't been deleted
Return type: bool
check_workspace_exists(project_uid: str, workspace_uid: str)
Returns True if target workspace exists.
Parameters:
project_uid (str) -- target project UID, e.g., "P3"
workspace_uid (str) -- target workspace UID, e.g., "W4"
Returns: True if the workspace exists, False otherwise
Return type: bool
cleanup_data(project_uid: str, workspace_uid: str | None = None, delete_non_final: bool = False, delete_statuses: List[str] = [], clear_non_final: bool = False, clear_sections: List[str] = [], clear_types: List[str] = [], clear_statuses: List[str] = [])
Cleanup project or workspace data, clearing/deleting jobs based on final result status, sections, types, or status
cleanup_jobs(project_uid: str, job_uids_to_delete: List[str], job_uids_to_clear: List[str])
Cleanup jobs helper for cleanup_data
clear_intermediate_results(project_uid: str, workspace_uid: str | None = None, job_uid: str | None = None, always_keep_final: bool | None = True)
Asynchronously remove intermediate results from the given project or job
Parameters:
project_uid (str) -- target project UID, e.g., "P3"
workspace_uid (str, optional) -- clear intermediate results for all jobs in a workspace
job_uid (str, optional) -- target job UID, e.g., "J42", defaults to None
always_keep_final (bool, optional) -- not used, defaults to True
clear_job(project_uid: str, job_uid: str, nofail=False, isdelete=False)
Clear a job to get it back to building state (do not clear params or inputs)
Parameters:
project_uid (str) -- uid of the project that contains the job to clear
job_uid (str) -- uid of the job to clear
nofail (bool) -- If True, ignores errors that occur while job is clearing, defaults to False
isdelete (bool) -- Set to True when the job is about to be deleted, defaults to False
Raises: AssertionError
clear_jobs_by_section(project_uid, workspace_uid=None, sections=[])
Clear all jobs from a project belonging to specified sections in the job register, optionally scoped to a workspace
clear_jobs_by_type(project_uid, workspace_uid=None, types=[])
Clear all jobs from a project belonging to specified types, optionally scoped to a workspace
clone_job(project_uid: str, workspace_uid: str | None, job_uid: str, created_by_user_id: str, created_by_job_uid: str | None = None)
Creates a new job as a clone of the provided job
Parameters:
project_uid (str) -- project UID
workspace_uid (str | None) -- uid of the workspace, may be empty
job_uid (str) -- uid of the job to copy
created_by_user_id (str) -- the id of the user creating the clone
created_by_job_uid (str, optional) -- uid of the job creating the clone, defaults to None
Returns: the job uid of the newly created clone
Return type: str
clone_job_chain(project_uid: str, start_job_uid: str, end_job_uid: str, created_by_user_id: str, workspace_uid: str | None = None, new_workspace_title: str | None = None)
Clone jobs that directly descend from the given start job UID up to the given end job UID. Returns a dict with information about cloned jobs, or None if nothing was cloned
Parameters:
project_uid (str) -- project UID where jobs are located, e.g., "P3"
start_job_uid (str) -- starting ancestor job UID
end_job_uid (str) -- ending descendant job UID
created_by_user_id (str) -- ID of user performing this operation
workspace_uid (str, optional) -- uid of workspace to clone jobs into, defaults to None
new_workspace_title (str, optional) -- Title of new workspace to create if a uid is not provided, defaults to None
Returns: dictionary with information about created jobs and workspace or None
Return type: dict | None
clone_jobs(project_uid: str, job_uids: List[str], created_by_user_id, workspace_uid=None, new_workspace_title=None)
Clone the given list of jobs. If any jobs are related, it will try to re-create the input connections between the cloned jobs (but maintain the same connections to jobs that were not cloned)
Parameters:
project_uid (str) -- target project uid (e.g., "P3")
job_uids (list) -- list of job UIDs to clone, e.g., ["J1", "J2", "J3"]
created_by_user_id (str) -- ID of user performing this operation
workspace_uid (str, optional) -- uid of workspace to clone jobs into, defaults to None
new_workspace_title (str, optional) -- Title of new workspace to create if one is not provided, defaults to None
Returns: dictionary with information about created jobs and workspace
Return type: dict
compile_extensive_validation_benchmark_data(instance_information: dict, job_timings: dict)
Compile extensive validation benchmark data for upload
create_backup(backup_dir: str, backup_file: str)
Create a database backup in the given directory with the given file name
Parameters:
backup_dir (str) -- directory to create backup in
backup_file (str) -- filename of backup file
create_empty_project(owner_user_id: str, project_container_dir: str, title: str, desc: str | None = None, project_dir: str | None = None, export=True, hidden=False)
Creates a new project and project directory and creates a new document in the project collection
NOTE
project_container_dir is an absolute path that the user guarantees is available everywhere.
Parameters:
owner_user_id (str) -- the _id of the user that owns the new project (which is also the user that requests to create the project)
project_container_dir (str) -- the path to the "root" directory in which to create the new project directory
title (str) -- the title of the new project to create
desc (str, optional) -- the description of the new project to create, defaults to None
export (bool, optional) -- if True, outputs project details to disk for exporting to another instance, defaults to True
hidden (bool, optional) -- if True, creates the new project as hidden, defaults to False
Returns: the new uid of the project that was created
Return type: str
create_empty_workspace(project_uid, created_by_user_id, created_by_job_uid=None, title=None, desc=None, export=True)
Add a new empty workspace to the given project.
Parameters:
project_uid (str) -- target project UID, e.g., "P3"
created_by_user_id (str) -- User _id that creates this workspace
created_by_job_uid (str, optional) -- Job UID that creates this workspace, defaults to None
title (str, optional) -- Workspace title, defaults to None
desc (str, optional) -- Workspace description, defaults to None
export (bool, optional) -- If True, dumps workspace contents to disk for exporting to other CryoSPARC instances, defaults to True
Returns: UID of new workspace, e.g., "W4"
Return type: str
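A sketch of creating a workspace from the shell; the project UID, user _id and title are placeholders, and the call prints the new workspace UID:
cryosparcm cli "create_empty_workspace('P3', '<user_id>', title='Picking tests')"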
create_new_job(job_type, project_uid, workspace_uid, created_by_user_id, created_by_job_uid=None, title=None, desc=None, do_layout=True, dry_run=False, enable_bench=False, priority=None)
Create a new job
NOTE
This request runs on its own thread but acquires the "jobs" lock, so only one create can happen at a time. This means the builder functions can take as long as they need in their own thread (releasing the GIL during IO or numpy work, for example) while command can still respond to other requests that don't require this lock.
Parameters:
job_type (str) -- the name of the job to create
project_uid (str) -- uid of the project
workspace_uid (str) -- uid of the workspace
created_by_user_id (str) -- the id of the user creating the job
created_by_job_uid (str) -- uid of the job that created this job
title (str) -- optional title of the job
desc (str) -- optional description of the job
do_layout (bool) -- specifies if the job tree layout should be recalculated, defaults to True
enable_bench (bool) -- should performance be measured for this job (benchmarking)
Returns: uid of the newly created job
Return type: str
Raises: AssertionError
create_tag(title: str, type: str, created_by_user_id: str, colour: str | None = None, description: str | None = None)
Create a new tag
Parameters:
title (str) -- tag name
type (str) -- tag type such as "general", "project", "workspace", "session" or "job"
created_by_user_id (str) -- user account ID performing this action
colour (str, optional) -- tag colour, may be "black", "red", "green", etc., defaults to None
description (str, optional) -- detailed tag description, defaults to None
Returns: created tag document
Return type: dict
create_user(created_by_user_id: str, email: str, password: str, username: str, first_name: str, last_name: str, admin: bool = False)
Creates a new CryoSPARC user account
Parameters:
created_by_user_id (str) -- identity of user who is creating the new user
email (str) -- email of user to create
password (str) -- password of user to create
admin (bool, optional) -- specifies if user should have an administrator role or not, defaults to False
Returns: mongo _id of user that was created
Return type: str
delete_detached_project(project_uid: str)
Deletes large database entries such as streamlogs and gridFS for a detached project and marks the project and its jobs as deleted
delete_job(project_uid: str, job_uid: str, relayout=True, nofail=False, force=False)
Deletes a job after killing it (if running), clearing it, setting the "deleted" property, and recalculating the tree layout
Parameters:
project_uid (str) -- uid of the project that contains the job to delete
job_uid (str) -- uid of the job to delete
relayout (bool) -- specifies if the tree layout should be recalculated or not
delete_jobs(project_job_uids: List[Tuple[str, str]], relayout=True, nofail=False, force=False)
Delete the given project-job UID combinations, provided as (project UID, job UID) pairs, where PX is a project UID and JY is a job UID in that project (see the sketch after the parameter list below)
Parameters:
project_job_uids (list[tuple[str, str]]) -- project-job UIDs to delete
relayout (bool, optional) -- whether to recompute the layout of the job tree, defaults to True
nofail (bool, optional) -- if True, ignore errors when deleting, defaults to False
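A sketch of the pair format from the interactive shell (all UIDs are placeholders):
cli.delete_jobs([('P3', 'J10'), ('P3', 'J11'), ('P4', 'J2')])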
delete_project(project_uid: str, request_user_id: str, jobs_to_delete: list | None = None, workspaces_to_delete: list | None = None)
Iterate through each job and workspace associated with the project to be deleted and delete both, and then disable the project.
Parameters:
project_uid (str) -- uid of the project to be deleted
request_user_id (str) -- _id of user requesting the project to be deleted
jobs_to_delete (list) -- list of all jobs to be fully deleted
workspaces_to_delete (list) -- list of all workspaces that will be deleted by association
Returns: string confirmation that the project has been deleted
Return type: str
Raises: AssertionError -- if for some reason a workspace isn't deleted, this will be thrown
delete_project_user_access(project_uid: str, requester_user_id: str, delete_user_id: str)
Removes a user's access from a project.
Parameters:
project_uid (str) -- uid of the project to revoke access to
requester_user_id (str) -- _id of the user requesting a user to be removed from the project
delete_user_id (str) -- uid of the user to remove from the project
Raises: AssertionError
delete_user(email: str, requesting_user_email: str, requesting_user_password: str)
Remove a user from the CryoSPARC instance. Only administrators may do this
Parameters:
email (str) -- user's email
requesting_user_email (str) -- your CryoSPARC login email
requesting_user_password (str) -- your CryoSPARC password
Returns: confirmation message
Return type: str
delete_workspace(project_uid: str, workspace_uid: str, request_user_id: str, jobs_to_delete_inside_one_workspace: list | None = None, jobs_to_update_inside_multiple_workspaces: list | None = None)
Asynchronously iterate through each job associated with the workspace to be deleted and either delete or update the job, and then disables the workspace
Parameters:
project_uid (str) -- uid of the project containing the workspace to be deleted
workspace_uid (str) -- uid of the workspace to be deleted
request_user_id (str) -- uid of the user that is requesting the workspace to be deleted
jobs_to_delete_inside_one_workspace (list) -- list of all jobs to be fully deleted
jobs_to_update_inside_multiple_workspaces (list) -- list of all jobs to be updated so that the workspace uid is removed from its list of "workspace_uids"
Returns: string confirmation that the workspace has been deleted
Return type: str
detach_project(project_uid: str)
Detach a project by exporting, removing lockfile, then setting its detached property. This hides the project from the interface and allows other instances to attach and run this project.
Parameters: project_uid (str) -- target project UID to detach, e.g., "P3"
do_job(job_type: str, puid='P1', wuid='W1', uuid='devuser', params={}, input_group_connects={})
Create and run a job on the "default" node
Parameters:
job_type (str) -- type of job, e.g., "abinit"
puid (str, optional) -- project UID to create job in, defaults to 'P1'
wuid (str, optional) -- workspace UID to create job in, defaults to 'W1'
uuid (str, optional) -- user ID performing this action, defaults to 'devuser'
params (dict, optional) -- parameter overrides for the job, defaults to {}
input_group_connects (dict, optional) -- input group connections dictionary, where each key is the input group name and each value is the parent output identifier, e.g., {"particles": "J1.particles"}, defaults to {}
Returns: created job UID
Return type: str
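A sketch of creating and running an ab-initio job with one connected input; the UIDs, user ID and connection are placeholders following the conventions above:
cryosparcm cli "do_job('abinit', 'P3', 'W1', '<user_id>', params={}, input_group_connects={'particles': 'J1.particles'})"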
dump_license_validation_results()
Get a summary of license validation checks
Returns: Text description of validation checks separated by newlines
Return type: str
enqueue_job(project_uid: str, job_uid: str, lane: str | None = None, user_id: str | None = None, hostname: str | None = None, gpus: List[int] | typing_extensions.Literal[False] = False, no_check_inputs_ready: bool = False)
Add the job in the given project to the queue for the given worker lane (default lane if not specified)
Parameters:
project_uid (str) -- uid of the project containing the job to queue
job_uid (str) -- job uid to queue
lane (str, optional) -- name of the worker lane onto which to queue
hostname (str, optional) -- the hostname of the target worker node
gpus (list, optional) -- list of GPU indexes on which to queue the given job, defaults to False
no_check_inputs_ready (bool, optional) -- if True, forgoes checking whether inputs are ready (not recommended), defaults to False
Returns: job status (e.g., 'launched' if success)
Return type: str
Example:
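A sketch of queueing a job onto a named lane (UIDs and lane name are placeholders):
cryosparcm cli "enqueue_job('P3', 'J42', lane='default')"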
export_job(project_uid: str, job_uid: str)
Start export for the given job into the project's exports directory
Parameters:
project_uid (str) -- target project UID, e.g., "P3"
job_uid (str) -- target job UID to export, e.g., "J42"
export_output_result_group(project_uid: str, job_uid: str, output_result_group_name: str, result_names: List[str] | None = None)
Start export of given output result group. Optionally specify which output slots to select for export
Parameters:
project_uid (str) -- target project UID, e.g., "P3"
job_uid (str) -- target job UID, e.g., "J42"
output_result_group_name (str) -- target output, e.g., "particles"
result_names (list, optional) -- target result slots list, e.g., ["blob", "location"]; exports all if unspecified, defaults to None
export_project(project_uid: str, override=False)
Ensure the given project is ready for transfer to another instance. Call detach_project once this is complete.
Parameters:
project_uid (str) -- target project UID, e.g., "P3"
override (bool, optional) -- force run even if recently exported, defaults to False
generate_new_instance_uid(force_takeover_projects=False)
Generates a new uid for the CryoSPARC instance.
Parameters: force_takeover_projects (bool, optional) -- if True, overwrite existing project lockfiles; if False, only create lockfiles in projects that don't already have one. Defaults to False
Returns: New instance UID
Return type: str
get_all_tags()
Get a list of available tag documents
Returns: list of tags
Return type: list
get_base_template_args(project_uid, job_uid, target)
Returns template arguments for cluster commands, including: project_uid, job_uid, job_creator, cryosparc_username, project_dir_abs, job_dir_abs, job_log_path_abs, job_type
get_cryosparc_log(service_name, days=7, date=None, max_lines=None, log_name='', func_name='', level='')
Get cryosparc service logs, filterable by date, name, function, and level
Parameters:
service_name (str) -- target service name
days (int, optional) -- number of days of logs to retrieve, defaults to 7
date (str, optional) -- retrieve logs from a specific day formatted as "YYYY-MM-DD", defaults to None
max_lines (int, optional) -- maximum number of lines to retrieve, defaults to None
log_name (str, optional) -- name of internal logger type, defaults to ""
func_name (str, optional) -- name of Python function producing logs, defaults to ""
level (str, optional) -- log severity level such as "INFO", "WARNING" or "ERROR", defaults to ""
Returns: Filtered log
Return type: str
get_default_job_priority(created_by_user_id: str)
Get the default job priority for jobs queued by the given user ID
Parameters: created_by_user_id (str) -- target user account _id
Returns: the integer priority, with 0 being the highest priority
Return type: int
get_gpu_info()
Asynchronously update the available GPU information on all connected nodes
get_instance_default_job_priority()
Default priority for jobs queued by users without an explicit priority set in Admin > User Management
Returns: Default priority config variable, 0 if never set
Return type: int
get_job(project_uid: str, job_uid: str, *args, **kwargs)
Get a job object with optionally only a few fields specified in *args
Parameters:
project_uid (str) -- project UID
job_uid (str) -- uid of job
Returns: mongo result set
Return type: dict
Example:
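A sketch that fetches only a few fields of the job document (UIDs and field names are illustrative):
cryosparcm cli "get_job('P3', 'J42', 'status', 'job_type')"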
get_job_chain(project_uid: str, start_job_uid: str, end_job_uid: str)
Get a list of all jobs that are descendants of the start job and ancestors of the end job.
Parameters:
project_uid (str) -- project UID where jobs are located, e.g., "P3"
start_job_uid (str) -- starting ancestor job UID
end_job_uid (str) -- ending descendant job UID
Returns: list of jobs in the job chain
Return type: list
get_job_creator_and_username(job_uid, user=None)
Returns job creator and username strings for cluster job submission
get_job_dir_abs(project_uid: str, job_uid: str)
Get the path to the given job's directory
Parameters:
project_uid (str) -- target project UID, e.g., "P3"
job_uid (str) -- target job UID, e.g., "J42"
Returns: absolute path to job directory
Return type: str
get_job_log(project_uid: str, job_uid: str)
Get the full contents of the given job's standard output log
Parameters:
project_uid (str) -- target project UID, e.g., "P3"
job_uid (str) -- target job UID, e.g., "J42"
Returns: job log contents
Return type: str
get_job_log_path_abs(project_uid: str, job_uid: str)
Get the path to the given job's standard output log
Parameters:
project_uid (str) -- target project UID, e.g., "P3"
job_uid (str) -- target job UID, e.g., "J42"
Returns: absolute path to job log file
Return type: str
get_job_output_min_fields(project_uid: str, job_uid: str, output: str)
Get the minimum expected fields description for the given output name.
get_job_queue_message(project_uid: str, job_uid: str)
Get message stating why a job is queued but not running
Parameters:
project_uid (str) -- target project UID, e.g., "P3"
job_uid (str) -- job UID, e.g., "J42"
Returns: queue message
Return type: str
get_job_result(project_uid: str, source_result: str)
Get the output_results item that matches this source result, formatted as JXX.output_group_name.result_name
Parameters:
project_uid (str) -- id of the project
source_result (str) -- Source result
Returns: the result details from the database
Return type: dict
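A sketch from the interactive shell; the result identifier is a placeholder following the format above:
cli.get_job_result('P3', 'J42.particles.blob')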
get_job_sections()
Retrieve available jobs, organized by section
Returns: job sections
Return type: list[dict[str, Any]]
get_job_status(project_uid: str, job_uid: str)
Get the status of the given job
Parameters:
project_uid (str) -- target project UID, e.g., "P3"
job_uid (str) -- target job UID, e.g., "J42"
Returns: job status such as "building", "queued" or "completed"
Return type: str
get_job_streamlog(project_uid: str, job_uid: str)
Get a list of dictionaries representing the given job's event log
Parameters:
project_uid (str) -- target project UID, e.g., "P3"
job_uid (str) -- target job UID, e.g., "J42"
Returns: event log contents
Return type: str
get_job_symlinks(project_uid: str, job_uid: str)
Get a list of symbolic links in the given job directory
Parameters:
project_uid (str) -- target project UID, e.g., "P3"
job_uid (str) -- target job UID, e.g., "J42"
Returns: List of symlinks paths, either absolute or relative to job directory
Return type: list
get_jobs_by_section(project_uid, workspace_uid=None, sections=[], fields=[])
Get all jobs in a project belonging to the specified sections in the job register, optionally scoped to a workspace
get_jobs_by_status(status: str, count_only=False)
Get the list of jobs with the given status (or count if count_only is True)
Parameters:
status (str) -- Possible job status such as "building", "queued" or "completed"
count_only (bool, optional) -- If True, only return the integer count, defaults to False
Returns: List of job documents or count
Return type: list | int
get_jobs_by_type(project_uid, workspace_uid=None, types=[], fields=[])
Get all jobs matching the given types, optionally scoped to a workspace
get_jobs_size_by_section(project_uid: str, workspace_uid: str = None, sections: List[str] = [])
Gets the total job size for jobs belonging to the input sections of the job register
get_last_backup_complete_activity()
Get details about most recent database backup
Returns: activity document or {} if never backed up
Return type: dict
get_maintenance_mode()
Get maintenance mode status.
Returns: True if set, False otherwise
Return type: bool
get_non_final_jobs(project_uid: str, workspace_uid: str = None, fields: List[str] = [])
Gets a list of job uids of all jobs not marked as a final result or ancestor of final result
get_non_final_jobs_size(project_uid: str, workspace_uid: str = None)
Gets total size of all jobs not marked as a final result or ancestor of final result
get_num_active_licenses()
Get number of acquired licenses for running jobs
Returns: number of active licenses
Return type: int
get_project(project_uid: str, *args)
Get information about a single project
Parameters:
project_uid (str) -- the id of the project
args -- extra args (comma separated) that contain the keys (if any) to project when returning the mongodb doc
Returns: the information related to the project that's stored in the database
Return type: dict
get_project_dir_abs(project_uid: str)
Get the project's absolute directory with all environment variables in the path resolved
Parameters: project_uid (str) -- target project UID, e.g., "P3"
Raises: ValueError -- when the project does not exist
Returns: the absolute path to the project directory
Return type: str
get_project_jobs_by_status(project_uid, workspace_uid=None, statuses=[], fields=[])
Get all jobs matching the given statuses, optionally scoped to a workspace
get_project_symlinks(project_uid: str)
Get all symbolic links in the given project directory
Parameters: project_uid (str) -- target project UID, e.g., "P3"
Returns: list of symlink paths, either absolute or relative to project directory
Return type: list
get_project_title_slug(project_title: str)
Returns a slugified version of a project title
Parameters: project_title (str) -- Requested project title
Returns: URL- and file-system- safe slug of the given project title
Return type: str
get_result_download_abs_path(project_uid: str, result_spec: str)
Get the absolute path to the dataset for the given result type
Parameters:
project_uid (str) -- target project UID, e.g., "P3"
result_spec (str) -- result slot name e.g., "J3.particles.blob"
Returns: absolute path
Return type: str
get_running_version()
Get the current CryoSPARC version
Returns: e.g., v3.3.2
Return type: str
get_runtime_diagnostics()
Get runtime diagnostics for the CryoSPARC instance
Returns: dictionary of runtime diagnostics
Return type: dict
get_scheduler_lanes()
Returns a list of lanes that are registered with the master scheduler
Returns: list of information about each lane
Return type: dicts
get_scheduler_target_cluster_info(name: str)
Get cluster information for given cluster name
Parameters: name (str) -- name of cluster/lane (lane must be of type 'cluster')
Returns: json string containing cluster information
Return type: str
Raises: AssertionError
get_scheduler_target_cluster_script(name: str)
Get cluster job submission template for specified cluster
Parameters: name (str) -- name of cluster/lane (lane must be of type 'cluster')
Returns: job scheduler script template
Return type: str
Raises: AssertionError
get_scheduler_targets()
Returns a list of worker nodes that are registered with the master scheduler
Returns: list of information about each worker node
Return type: dicts
get_supervisor_log_latest(service: str, n: int = 50)
Read the last n lines from the given service log. Service must be one of the following:
"app"
"database"
"command_core"
"command_vis"
"command_rtp"
"app_api"
"app_legacy"
"supervisord"
Parameters:
service (str) -- Service to retrieve logs for
n (int, optional) -- Number of lines to retrieve, defaults to 50
Returns: logs as a string separated by new lines
Return type: str
get_system_info()
Returns system information related to the CryoSPARC application
Returns: dictionary listing information about cryosparc environment
Return type: dict
get_tag_count_by_type()
Get a dictionary where keys are tag types and values are how many tags there are of each type
Returns: dict of integers
Return type: dict
get_tag_counts(tag_uid: str)
Get a dictionary of counts containing how many entities (e.g., projects, jobs) are tagged with the given tag UID
Parameters: tag_uid (str) -- target tag UID, e.g., "T1"
Returns: counts organized by entity type
Return type: str
get_tags_by_type()
Get all tags as a dictionary organized by tag type. Each key is a type such as "general" or "job" and each value is a list of tags
Returns: dict of lists of tags
Return type: dict
get_tags_of_type(tag_type: str)
Get a list of tags with the given type
Parameters: tag_type (str) -- tag type such as "general", "project", "job", etc.
Returns: List of tags with the given type
Return type: list
get_targets_by_lane(lane_name, node_only=False)
Returns a list of worker nodes that are registered with the master scheduler and are in the given lane
get_user_default_priority(email_address: str)
Get a user's priority when launching jobs. Defaults to the instance's configured job priority if not set.
Parameters: email_address (str) -- email for the target user
Returns: the job priority, defaulting to 0 if not set
Return type: int
get_user_lanes(email_address: str)
Retrieve a list of lanes the user with the given email address is allowed to queue jobs to.
Parameters: email_address (str) -- target user account email address
Returns: list of lanes
Return type: list
get_user_tags(user_id: str)
Get a list of tags created by the given user account ID
Parameters: user_id (str) -- target user account ID
Returns: list of tags
Return type: list
get_worker_nodes()
Returns a list of worker nodes registered in the master scheduler
Returns: list of information about each worker node
Return type: dicts
get_workspace(project_uid, workspace_uid, *args)
A helper function that returns the mongodb document of the workspace requested, only including the fields passed
Parameters:
project_uid (str) -- the uid of the project that contains the workspace
workspace_uid (str) -- the uid of the workspace to retrieve
args -- extra args (comma separated) that contain the keys (if any) to project when returning the mongodb doc
Returns: the mongodb workspace document
Return type: dict
import_job(owner_user_id: str, project_uid: str, workspace_uid: str, abs_path_export_job_dir: str)
Import the given exported job directory into the given project. Exported job directory must exist in the project directory. By convention, may be added into the "imports" directory
Parameters:
owner_user_id (str) -- _id of user performing this import operation
project_uid (str) -- project UID to import into, e.g., "P3"
workspace_uid (str) -- workspace UID to import into, e.g., "W1"
abs_path_export_job_dir (str) -- path to exported job directory. Must be inside the project directory.
import_jobs(jobs_manifest, abs_path_export_project_dir, new_project_uid, owner_user_id, notification_id)
Imports jobs using the job manifest
Parameters:
jobs_manifest (str) -- the job data loaded from a job_manifest.json file
abs_path_export_project_dir (str) -- the import project directory
new_project_uid (str) -- uid of the project to import the jobs into
owner_user_id (str) -- the id of the user importing the jobs
notification_id (str) -- the import project notification uid that will have its progress meter updated as import completes
import_project(owner_user_id: str, abs_path_export_project_dir: str)
Import any project directory that was previously exported by CryoSPARC
Parameters:
owner_user_id (str) -- the mongo object id ("_id") of the user requesting to import the project
abs_path_export_project_dir (str) -- the absolute project directory containing the jobs to import
import_workspaces(workspaces_doc_data, abs_path_export_project_dir, new_project_uid, owner_user_id, notification_id)
Imports workspaces and live sessions
Parameters:
workspaces_doc_data (str) -- workspace data loaded from workspaces.json file
abs_path_export_project_dir (str) -- the import project directory
new_project_uid (str) -- uid of the project to import the workspaces into
owner_user_id (str) -- the id of the user importing the workspaces
notification_id (str) -- import project notification uid that will have its progress meter updated as import completes
is_admin(user_id: str)
Returns True if the given user account ID has admin privileges
Parameters: user_id (str) -- _id property of the target user
Returns: Whether the user is an admin
Return type: bool
job_add_to_workspace(project_uid: str, job_uid: str, workspace_uid: str)
Adds a job to the specified workspace
Parameters:
project_uid (str) -- the id of the project
job_uid (str) -- the id of the job to add
workspace_uid (str) -- the id of the workspace
Returns: the number of modified documents
Return type: int
job_cart_create(project_uid: str, workspace_uid: str, created_by_user_id: str, output_result_groups: list, new_job_type: str)
Given a list of output result groups and a job type, create the new job, and connect the output result groups to the input slots of the newly created job.
Parameters:
project_uid (str) -- project ID, e.g., P3
workspace_uid (str) -- workspace ID, e.g., W1
created_by_user_id (str) -- _id of user
output_result_groups (list) -- output result groups to connect to the input groups of the new job
new_job_type (str) -- the job type to create
job_clear_param(project_uid: str, job_uid: str, param_name: str)
Reset the given parameter to its default value.
Parameters:
project_uid (str) -- target project UID, e.g., "P3"
job_uid (str) -- target job UID, e.g., "J42"
param_name (str) -- target parameter name, e.g., "refine_symmetry"
Returns: whether the job has any build errors
Return type: bool
job_clear_streamlog(project_uid: str, job_uid: str)
Delete all entries from the given job's event log
Parameters:
project_uid (str) -- target project UID, e.g., "P3"
job_uid (str) -- target job UID, e.g., "J42"
Returns: delete result, including affected document count
Return type: dict
job_connect_group(project_uid: str, source_group: str, dest_group: str)
Connect the given source output group to the target input group. Each group must be formatted as <job uid>.<group name>
Parameters:
project_uid (str) -- current project UID, e.g., "P3"
source_group (str) -- source output group, e.g., "J1.movies"
dest_group (str) -- destination input group, e.g., "J2.exposures"
Returns: whether the job has any errors
Return type: bool
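A sketch that connects a movie output to an exposure input, using the placeholder groups from the parameter descriptions above:
cryosparcm cli "job_connect_group('P3', 'J1.movies', 'J2.exposures')"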
job_connect_result(project_uid: str, source_result: str, dest_slot: str)
Connect the given source output slot to the target input slot, where the main group (e.g., particles) remains but a set of fields (e.g., particles.blob) is added or replaced.
Parameters:
project_uid (str) -- target project UID, e.g., "P3"
source_result (str) -- output slot to connect, e.g., "J1.particles.blob"
dest_slot (str) -- input slot to connect to, e.g., "J2.particles.0.blob"
Returns: whether the job has any build errors
Return type: bool
job_connected_group_clear(project_uid: str, dest_group: str, connect_idx: int)
Clear the given job input group. Group must be formatted as <job uid>.<group name>
Parameters:
project_uid (str) -- current project UID, e.g., "P3"
dest_group (str) -- Group to clear, e.g., "J2.exposures"
connect_idx (int) -- Connection index to clear when multiple outputs are connected to the same input. Set to 0 to clear the first input.
Returns: whether the job has any build errors
Return type: bool
job_connected_result_clear(project_uid: str, dest_slot: str, passthrough=False)
Clear a given input slot from a job, where the main group (e.g., particles) is left intact but a set of fields (e.g., particles.blob) is removed
Parameters:
project_uid (str) -- target project UID, e.g., "P3"
dest_slot (str) -- input slot to clear, e.g., "J2.particles.0.blob"
passthrough (bool, optional) -- set to True if the input slot was passed through from its parent output (though not strictly necessary), defaults to False
Returns: whether the job has any build errors
Return type: bool
job_find_ancestors(project_uid: str, job_uid: str)
Find all jobs that provided inputs to the given job by following the given job's inputs backward up the job tree.
Parameters:
project_uid (str) -- project UID where jobs reside, e.g., "P3"
job_uid (str) -- uid of job, e.g., "J42"
Returns: sorted list of job UIDs
Return type: list
job_find_descendants(project_uid: str, job_uid: str)
Find all jobs that the given job provided outputs to by following the given job's outputs forward down the job tree.
Parameters:
project_uid (str) -- project UID where jobs reside, e.g., "P3"
job_uid (str) -- uid of job, e.g., "J42"
Returns: sorted list of job UIDs
Return type: list
job_has_build_errors(project_uid: str, job_uid: str)
Whether the given job has any build errors
Parameters:
project_uid (str) -- target project UID, e.g., "P3"
job_uid (str) -- target job UID, e.g., "J42"
Returns: True or False
Return type: bool
job_import_replace_symlinks(project_uid: str, job_uid: str, prefix_cut: str, prefix_new: str)
Update symbolic links to imported data in the given job directory when the original data has been moved.
Parameters:
project_uid (str) -- target project uid, e.g., "P3"
job_uid (str) -- uid of target job, e.g. "J42"
prefix_cut (str) -- Old path prefix of external data, e.g., "/path/to/dataold"
prefix_new (str) -- New path prefix where external data has been moved, e.g., "/path/to/datanew"
Returns: number of replaced symbolic links
Return type: int
job_rebuild(project_uid: str, job_uid: str)
Re-run the builder for the given job to re-generate parameters and input slots
Parameters:
project_uid (str) -- target project UID
job_uid (str) -- target job UID
job_remove_from_workspace(project_uid: str, job_uid: str, workspace_uid: str)
Removes a job from the specified workspace
Parameters:
project_uid (str) -- the id of the project
job_uid (str) -- the id of the job to remove
workspace_uid (str) -- the id of the workspace
Returns: the number of modified documents
Return type: int
job_send_streamlog(project_uid: str, job_uid: str, message: str, error: bool = False, flags: List[str] = [], imgfiles: List[EventLogAsset] = [])
Add the given message to the target job's event log
Parameters:
project_uid (str) -- target project UID, e.g., "P3"
job_uid (str) -- target job uid, e.g., "J42"
message (str) -- Message to log
error (bool, optional) -- Whether to show as error, defaults to False
flags (list[str], optional) -- Additional event flags
imgfiles (list[EventLogAsset]) -- Uploaded GridFS files to attach to event
Returns: Created mongo event ID
Return type: str
job_set_param(project_uid: str, job_uid: str, param_name: str, param_new_value: Any)
Set the given job parameter to the given value
Parameters:
project_uid (str) -- target project UID, e.g., "P3"
job_uid (str) -- target job UID, e.g., "J42"
param_name (str) -- target parameter name, e.g., "random_seed"
param_new_value (any) -- new parameter value
Returns: whether the job has any build errors
Return type: bool
kill_job(project_uid, job_uid, killed_by_user_id=None)
Kill the given running job
Parameters:
project_uid (str) -- uid of the project that contains the job to kill
job_uid (str) -- job uid to kill
killed_by_user_id (str) -- ID of user that killed the job, optional
Raises: AssertionError
list_projects()
Get list of all projects available
Returns: all projects available in the database
Return type: list
list_users()
Show a table of all CryoSPARC user accounts
list_workspaces(project_uid=None)
List all workspaces inside a given project (or all projects if not specified)
Parameters: project_uid (str, optional) -- target project UID, e.g., "P1", defaults to None
Returns: list of workspaces in the project, or in all projects if not specified
Return type: list
make_job(job_type: str, project_uid: str, workspace_uid: str, user_id: str, created_by_job_uid: str | None = None, title: str | None = None, desc: str | None = None, params: dict = {}, input_group_connects: dict = {}, enable_bench: bool = False, priority: int | None = None, do_layout: bool = True)
Create a new job with the given type in the given project/workspace
To see all available job types, see cryosparc_compute/jobs/register.py (look for the value of the contains keys).
To see what parameters are available for a job and what values are available, reference the build.py file in cryosparc_compute/jobs that pertains to the desired job type.
Parameters:
job_type (str) -- Type of job
project_uid (str) -- project ID, e.g., P3
workspace_uid (str) -- workspace ID, e.g., W1
user_id (str) -- ID of user
created_by_job_uid (str, optional) -- UID of the parent job that created this, defaults to None
title (str, optional) -- Descriptive title for what this job is for, defaults to None
desc (str, optional) -- Detailed description, defaults to None
params (dict, optional) -- Parameter settings for the job, defaults to {}
input_group_connects (dict, optional) -- Connected input groups, where each key is an input group name and each value has format JXX.output_group_name, defaults to {}
enable_bench (bool, optional) -- enable benchmarking stats for this job, defaults to False
priority (int, optional) -- job priority, defaults to None (use default priority)
do_layout (bool, optional) -- re-compute the workspace and project tree view after creating this job, defaults to True
Returns: ID of new job
Return type: str
Example:
The following makes a 3-class ab-initio job with particles connected from a Select 2D Classes job:
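A sketch of such a call; the job type string, parameter name, UIDs, user ID and output group name are assumptions and should be checked against the register and build files noted above:
cryosparcm cli "make_job(job_type='homo_abinit', project_uid='P3', workspace_uid='W1', user_id='<user_id>', params={'abinit_K': 3}, input_group_connects={'particles': 'J42.particles_selected'})"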
parse_template_vars(target)
Parses and returns a list of template variable names from the submission script and cluster commands in a cluster target
Parameters: target -- a cluster target
Returns: list of template variable names
Return type: list[str]
propose_clone_job_chain(project_uid: str, start_job_uid: str, end_job_uid: str)
Deprecated - Old function name for get_job_chain() which was used only for cloning jobs
refresh_job_types()
Reload the available input and parameter types on jobs created prior to a CryoSPARC update. Run this following a CryoSPARC update to ensure jobs run with the correct parameters.
Returns: list of the results of the mongodb query
Return type: dicts
remove_scheduler_lane(name: str)
Removes the specified lane and any targets assigned under the lane in the master scheduler
NOTE
This will remove any worker node associated with the specified lane.
Parameters: name (str) -- the name of the lane to remove
remove_scheduler_target_cluster(name: str)
Removes the specified cluster/lane and any targets assigned under the lane in the master scheduler
NOTE
This will remove any worker node associated with the specified cluster/lane.
Parameters: name (str) -- the name of the cluster/lane to remove
Returns: "True" if successful
Return type: bool
remove_scheduler_target_node(hostname: str)
Removes a target worker node from the master scheduler
Parameters: hostname (str) -- the hostname of the target worker node to remove
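A sketch of removing a worker by hostname (the hostname is a placeholder):
cryosparcm cli "remove_scheduler_target_node('worker1.example.com')"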
remove_tag(tag_uid: str)
Delete the given tag and remove it from all connected entities
Parameters: tag_uid (str) -- target tag UID, e.g., "T1"
Returns: contains deleted count
Return type: dict
remove_tag_from_job(project_uid: str, job_uid: str, tag_uid: str)
remove_tag_from_job(project_uid: str, job_uid: str, tag_uid: str)
Remove the given tag from the job
Parameters:
project_uid (str) -- target project UID, e.g., "P3"
job_uid (str) -- target job UID, e.g., "J42"
tag_uid (str) -- target tag UID, e.g., "T1"
Returns: contains modified jobs count
Return type: dict
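For example (UIDs are illustrative):
cryosparcm cli "remove_tag_from_job('P3', 'J42', 'T1')"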
remove_tag_from_project(project_uid: str, tag_uid: str)
remove_tag_from_project(project_uid: str, tag_uid: str)
Remove the given tag from the given project
Parameters:
project_uid (str) -- target project UID, e.g., "P3"
tag_uid (str) -- target tag UID, e.g., "T1"
Returns: contains modified project count
Return type: dict
remove_tag_from_session(project_uid: str, session_uid: str, tag_uid: str)
remove_tag_from_session(project_uid: str, session_uid: str, tag_uid: str)
Remove the given tag from the session
Parameters:
project_uid (str) -- target project UID, e.g., "P3"
session_uid (str) -- target session UID, e.g., "S5"
tag_uid (str) -- target tag UID, e.g., "T1"
Returns: contains modified sessions count
Return type: dict
remove_tag_from_workspace(project_uid: str, workspace_uid: str, tag_uid: str)
remove_tag_from_workspace(project_uid: str, workspace_uid: str, tag_uid: str)
Remove the given tag from the workspace
Parameters:
project_uid (str) -- target project UID, e.g., "P3"
workspace_uid (str) -- target workspace UID, e.g., "W1"
tag_uid (str) -- target tag UID, e.g., "T1"
Returns: contains modified workspaces count
Return type: dict
request_delete_project(project_uid: str, request_user_id: str)
request_delete_project(project_uid: str, request_user_id: str)
Confirm what jobs and workspaces will be deleted when requesting to delete a project. Does not perform the deletion (see delete_project).
Parameters:
project_uid (str) -- uid of the project to be deleted
request_user_id (str) -- _id of user requesting the project to be deleted
Returns: If the project does not have any jobs or workspaces related to it, "disables" the project and returns a string confirmation.
Return type: str
Returns: If the project has jobs or workspaces associated with it, returns 2 lists. The first list contains all non-deleted jobs that exist in the project, and the second list contains all workspaces within the project to be deleted.
Return type: tuple
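For example, a hedged sketch (the user _id is a placeholder):
# returns either a confirmation string or the (jobs, workspaces) lists described above
cryosparcm cli "request_delete_project('P3', '64e1c9f3a1b2c3d4e5f6a7b8')"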
request_delete_workspace(project_uid: str, workspace_uid: str, request_user_id: str)
request_delete_workspace(project_uid: str, workspace_uid: str, request_user_id: str)
Confirm what jobs will be deleted when requesting to delete a workspace. Call delete_workspace with the results to perform the deletion.
Parameters:
project_uid (str) -- uid of the project containing the workspace to be deleted
workspace_uid (str) -- uid of the workspace to be deleted
request_user_id (str) -- _id of the user requesting the workspace to be deleted
Returns: If the workspace does not have any jobs related to it, "disables" the workspace and returns a string confirmation.
Return type: str
Returns: If the workspace has jobs associated with it, returns 2 lists. The first list includes jobs that belong only to the given workspace and can be deleted, and the second list includes jobs that are part of other workspaces and should not be deleted.
Return type: tuple
request_reset_password(email)
request_reset_password(email)
Generate a password reset token for a user with the given email. The token will appear in the Admin > User Management interface.
Parameters: email (str) -- email address of target user account
reset_password(email, newpass)
reset_password(email, newpass)
Reset a cryosparc user's password
Parameters:
email (str) -- the user's email address
newpass (str) -- the user's new password
Returns: the number of modified documents
Return type: str
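For example (email and password are placeholders):
cryosparcm cli "reset_password('user@example.com', 'a-new-strong-password')"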
run_external_job(project_uid: str, job_uid: str, status: typing_extensions.Literal[running, waiting] = 'waiting')
run_external_job(project_uid: str, job_uid: str, status: typing_extensions.Literal[running, waiting] = 'waiting')
Special run method which marks an External job as running or waiting.
Parameters:
project_uid (str) -- Project UID of target job
job_uid (str) -- Job UID
status (str, optional) -- Status to run with, defaults to "waiting"
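For example, to mark an external job as running (UIDs are illustrative):
cryosparcm cli "run_external_job('P3', 'J42', 'running')"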
save_extensive_validation_benchmark_data(instance_information: dict, job_timings: dict)
save_extensive_validation_benchmark_data(instance_information: dict, job_timings: dict)
Save extensive validation benchmark data to database for visualisation in UI
save_performance_benchmark_references()
save_performance_benchmark_references()
On every startup, this function is called to repopulate the benchmark_references collection.
set_cluster_job_custom_vars(project_uid, job_uid, cluster_job_custom_vars)
set_cluster_job_custom_vars(project_uid, job_uid, cluster_job_custom_vars)
Set a job's custom variables for cluster submission
set_instance_banner(active: bool, title: str | None = None, body: str | None = None)
set_instance_banner(active: bool, title: str | None = None, body: str | None = None)
Set an instance banner as active or inactive. Updates title and body if provided.
Parameters:
active (bool) -- True/False to activate/deactivate
title (str, optional) -- Banner title, defaults to None
body (str, optional) -- Banner body, defaults to None
Returns: title and body
Return type: dict
set_instance_default_job_priority(priority)
set_instance_default_job_priority(priority)
Set the default priority for jobs queued by users without an explicit priority set in Admin > User Management
Parameters: priority (int) -- Non-negative priority number
set_job_final_result(project_uid: str, job_uid: str, is_final_result: bool)
set_job_final_result(project_uid: str, job_uid: str, is_final_result: bool)
Sets job final result flag and updates flags for all jobs in the project
set_login_message(active: bool, title: str | None = None, body: str | None = None)
set_login_message(active: bool, title: str | None = None, body: str | None = None)
Set a login message as active or inactive. Updates title and body if provided.
Parameters:
active (bool) -- True/False to activate/deactivate
title (str, optional) -- Login message title, defaults to None
body (str, optional) -- Login message body, defaults to None
Returns: title and body
Return type: dict
set_maintenance_mode(maintenance_mode: bool)
set_maintenance_mode(maintenance_mode: bool)
Set maintenance mode status.
Parameters: maintenance_mode (bool) -- True to enable, False to disable
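For example, to enable maintenance mode:
cryosparcm cli "set_maintenance_mode(True)"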
set_project_owner(project_uid: str, user_id: str)
set_project_owner(project_uid: str, user_id: str)
Updates a project's owner
Parameters:
project_uid (str) -- target project UID to update, e.g., "P3"
user_id (str) -- the owner's user id
set_project_param_default(project_uid: str, param_name: str, value: Any)
set_project_param_default(project_uid: str, param_name: str, value: Any)
Set a default value for a given parameter name globally for the given project
Parameters:
project_uid (str) -- target project UID, e.g., "P3"
param_name (str) -- target parameter name, e.g., "compute_use_ssd"
value (Any) -- target parameter default value
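For example, to default SSD caching off for all jobs in a project (project UID is illustrative):
cryosparcm cli "set_project_param_default('P3', 'compute_use_ssd', False)"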
set_scheduler_target_node_cache(hostname: str, cache_reserve: int | None = None, cache_quota: int | None = None)
set_scheduler_target_node_cache(hostname: str, cache_reserve: int | None = None, cache_quota: int | None = None)
Sets the cache reserve and cache quota for a target worker node
Parameters:
hostname (str) -- the hostname of the target worker node
cache_reserve (int, optional) -- the size (in MB) to reserve on the SSD for the cryosparc cache
cache_quota (int, optional) -- the max size (in MB) to use on the SSD for the cryosparc cache
Raises: AssertionError
set_scheduler_target_node_lane(hostname, lane)
set_scheduler_target_node_lane(hostname, lane)
Sets the lane of a target worker node
Parameters:
hostname (str) -- the hostname of the target worker node
lane (str) -- the name of the lane to assign to the target worker node
Returns: target worker node's updated configurations
Return type: list
Raises: AssertionError
set_scheduler_target_property(hostname: str, key: str, value: Any)
set_scheduler_target_property(hostname: str, key: str, value: Any)
Set a property for the target worker node
Parameters:
hostname (str) -- the hostname of the target worker node
key (str) -- the key of the property whose value is being modified
value (Any) -- the actual value to set for the property
Returns: information about all targets
Return type: list
Raises: AssertionError
set_user_allowed_prefix_dir(user_id: str, allowed_prefix: str)
set_user_allowed_prefix_dir(user_id: str, allowed_prefix: str)
Sets directories that users are allowed to query from the file browser
Parameters:
user_id (str) -- the mongo id of the user to update
allowed_prefix (str) -- the path of the directory the user can query inside (must start with "/", and must be an absolute path)
Returns: True if successful
Return type: bool
Raises: AssertionError
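A hedged sketch (the user _id and path are placeholders):
cryosparcm cli "set_user_allowed_prefix_dir('64e1c9f3a1b2c3d4e5f6a7b8', '/data/cryoem')"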
set_user_default_priority(email_address: str, priority: int)
set_user_default_priority(email_address: str, priority: int)
Set a user's priority when launching jobs. This is equivalent to changing the Default Job Priority in the Admin > User Management interface.
Parameters:
email_address (str) -- target user account email
priority (int) -- default job priority to assign to the user's jobs
set_user_lanes(email_address: str, assigned_lanes: List[str])
set_user_lanes(email_address: str, assigned_lanes: List[str])
Only allow a user account with the given email address to queue to the given lanes.
Parameters:
email_address (str) -- target user account email
assigned_lanes (list) -- list of lane names to assign
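For example, to restrict a user to a single lane (email and lane name are illustrative):
cryosparcm cli "set_user_lanes('user@example.com', ['default'])"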
set_user_state_var(user_id: str, key: str, value: Any, set_on_insert_only: bool = False)
set_user_state_var(user_id: str, key: str, value: Any, set_on_insert_only: bool = False)
Updates a user's state variable
Parameters:
user_id (str) -- the user's id
key (str) -- the name of the key to update or insert
value (Any) -- the actual value to insert under the key in the state variable
set_on_insert_only (bool, optional) -- specifies if setOnInsert should be used (if the update op results in an insertion of a document, this will assign the specified values to the fields in the doc), defaults to False
Returns: confirmation that the update happened successfully
Return type: bool
start_worker_test(project_uid: str, test_type: str = 'all', targets: list | None = None, verbose: bool = True)
start_worker_test(project_uid: str, test_type: str = 'all', targets: list | None = None, verbose: bool = True)
Launch a test to ensure CryoSPARC can correctly queue to all or only the given list of targets
Parameters:
project_uid (str) -- target project UID to launch the test in
test_type (str, optional) -- 'all', 'launch', 'ssd' or 'gpu', defaults to 'all'
targets (list, optional) -- list of targets to test, omit to test all, defaults to None
verbose (bool, optional) -- if True, show extended log information, defaults to True
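For example, to test only GPU functionality on all targets (project UID is illustrative):
cryosparcm cli "start_worker_test('P3', test_type='gpu')"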
take_over_project(project_uid, force=False)
take_over_project(project_uid, force=False)
Write a lockfile to an existing project so that no CryoSPARC instances outside of the current one may modify it.
Parameters:
project_uid (str) -- target project UID, e.g., "P3"
force (bool, optional) -- If True, (over-)write the lock file even if a lock file already exists, defaults to False
Raises: Exception -- If lock file cannot be written
Returns: True if takeover succeeded
Return type: bool
take_over_projects(force=False)
take_over_projects(force=False)
Write lockfiles to all existing projects so that no CryoSPARC instances outside of the current one may modify them.
Parameters: force (bool, optional) -- Take over all projects even if they already have a lock file from a different instance, defaults to False
test_authentication(project_uid, job_uid)
test_authentication(project_uid, job_uid)
Test if a worker running a job can correctly authenticate with command_core
test_connection(sleep_time=0)
test_connection(sleep_time=0)
Check the connection to the command_core service. Returns True if the connection succeeded.
Parameters: sleep_time (float, optional) -- How long to sleep for, in seconds, defaults to 0
Returns: True if connection succeeded
Return type: bool
unarchive_project(project_uid: str, abs_path_to_project_dir: str)
unarchive_project(project_uid: str, abs_path_to_project_dir: str)
Reverse archive operation.
Parameters:
project_uid (str) -- target project UID, e.g., "P3"
abs_path_to_project_dir (str) -- Project directory to unarchive from
unset_project_param_default(project_uid: str, param_name: str)
unset_project_param_default(project_uid: str, param_name: str)
Clear the per-project default value for the given parameter name.
Parameters:
project_uid (str) -- target project UID, e.g., "P3"
param_name (str) -- target parameter name, e.g., "compute_use_ssd"
update_all_job_sizes(asynchronous=True, sleep_time=0)
update_all_job_sizes(asynchronous=True, sleep_time=0)
Recompute the folder sizes of all jobs
Parameters:
asynchronous (bool, optional) -- if True, returns immediately and computes sizes in the background, defaults to True
sleep_time (float, optional) -- if asynchronous is True, waits the given number of seconds before proceeding, defaults to 0
update_final_result_statuses(project_uid: str)
update_final_result_statuses(project_uid: str)
Update jobs in a project to have the correct flags set when marked as a final result or as an ancestor of a job marked as a final result
update_job(project_uid: str, job_uid: str, attrs: dict, operation='$set')
update_job(project_uid: str, job_uid: str, attrs: dict, operation='$set')
Update a specific job's document with the provided attributes
Parameters:
project_uid (str) -- project UID
job_uid (str) -- uid of job
attrs (dict) -- the attributes to modify in the job document
operation (str, optional) -- the operation to perform on the document, defaults to '$set'
update_parents_and_children_for_project(project_uid)
update_parents_and_children_for_project(project_uid)
Restore tree view if broken following an import
Parameters: project_uid (str) -- target project UID, e.g., "P3"
update_project(project_uid: str, attrs: dict, operation='$set', export=True)
update_project(project_uid: str, attrs: dict, operation='$set', export=True)
Update a project's attributes
Parameters:
project_uid (str) -- the id of the project to update
attrs (dict) -- the key to update and its value
operation (str) -- the type of mongodb operation to perform
Example:
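A minimal sketch; the attribute name and value are illustrative:
# sets the project's title using the default $set operation
cryosparcm cli "update_project('P3', {'title': 'Renamed project'})"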
update_project_directory(project_uid: str, new_project_dir: str)
update_project_directory(project_uid: str, new_project_dir: str)
Safely updates the project directory of a project given a new directory. Checks that the directory exists and is readable and writable.
Parameters:
project_uid (str) -- uid of the project to update
new_project_dir (str) -- the new project directory
update_project_root_dir(project_uid: str, new_project_dir_container: str)
update_project_root_dir(project_uid: str, new_project_dir_container: str)
Updates the root directory of a project (creates a new project (PXXX) folder within the new root directory and updates the project document)
NOTE
The root project directory passed must not already contain a folder with the same project (PXXX) folder name.
Parameters:
project_uid (str) -- uid of the project to update
new_project_dir_container (str) -- the new "root" path where the project (PXXX) folder is to be created
update_project_size(project_uid: str, use_prt: bool = True)
update_project_size(project_uid: str, use_prt: bool = True)
Calculates the size of the project. Similar to running du -sL inside the project dir
Parameters: project_uid (str) -- Unique ID of project to update, e.g., "P3"
update_session(project_uid: str, session_uid: str, attrs: dict, operation='$set')
update_session(project_uid: str, session_uid: str, attrs: dict, operation='$set')
Similar to update_workspace, but for sessions
Parameters:
project_uid (str) -- target project UID, e.g., "P3"
session_uid (str) -- target session UID, e.g., "S5"
attrs (dict) -- Attributes to update
operation (str, optional) -- mongo operation type, defaults to '$set'
update_tag(tag_uid: str, title: str | None = None, colour: str | None = None, description: str | None = None)
update_tag(tag_uid: str, title: str | None = None, colour: str | None = None, description: str | None = None)
Update the title, colour and/or description of the given tag UID
Parameters:
tag_uid (str) -- target tag UID, e.g., "T1"
title (str, optional) -- new value for title, defaults to None
colour (str, optional) -- new value of colour, defaults to None
description (str, optional) -- new value of description, defaults to None
Returns: updated tag document
Return type: dict
update_user(email: str, password: str, username: str | None = None, first_name: str | None = None, last_name: str | None = None, admin: bool | None = None)
update_user(email: str, password: str, username: str | None = None, first_name: str | None = None, last_name: str | None = None, admin: bool | None = None)
Updates a cryosparc user's details. Email and password are required, other params will only be set if they are not empty.
Parameters:
email (str) -- the user's email address
password (str) -- the user's password
username (str, optional) -- new username of the user
first_name (str, optional) -- new given name of the user
last_name (str, optional) -- new surname of the user
admin (bool, optional) -- whether the user should be admin or not
Returns: confirmation message indicating whether or not the update was successful
Return type: str
update_workspace(project_uid: str, workspace_uid: str, attrs: dict, operation='$set', export=True)
update_workspace(project_uid: str, workspace_uid: str, attrs: dict, operation='$set', export=True)
Update properties for the given workspace
Parameters:
project_uid (str) -- target project UID, e.g., "P3"
workspace_uid (str) -- target workspace UID, e.g., "W4"
attrs (dict) -- Attributes to update
operation (str, optional) -- mongo operation type, defaults to '$set'
export (bool, optional) -- Whether to dump this workspace document to disk for export to other instances, defaults to True
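For example (UIDs and the attribute are illustrative):
cryosparcm cli "update_workspace('P3', 'W4', {'title': 'Particle curation'})"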
upload_extensive_validation_benchmark_data(instance_information: dict, job_timings: dict)
upload_extensive_validation_benchmark_data(instance_information: dict, job_timings: dict)
Upload extensive validation benchmark data
validate_enqueue_job(project_uid, job_uid, interactive, lightweight, lane, hostname)
validate_enqueue_job(project_uid, job_uid, interactive, lightweight, lane, hostname)
Validate a job's queueing configuration
validate_license()
validate_license()
Check whether the active license ID is valid. If the license is invalid, use dump_license_validation_results() to get additional details.
Returns: True if valid, False otherwise.
Return type: bool
validate_project_creation(project_title: str, project_container_dir: str)
validate_project_creation(project_title: str, project_container_dir: str)
Validate project title and container directory for a project creation request.
Parameters:
project_title (str) -- title of the project
project_container_dir (str) -- container directory of the project
Returns: dict with a slug of the project name, validity, and a message if the request was invalid
Return type: dict
verify_cluster(name: str)
verify_cluster(name: str)
Ensure cluster has been properly configured by executing a generic 'info' command
Parameters: name (str) -- name of cluster/lane (lane must be of type 'cluster')
Returns: the result of the command execution
Return type: str
Raises: AssertionError
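For example (the cluster name is illustrative):
cryosparcm cli "verify_cluster('hpc-cluster')"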
wait_job_complete(project_uid: str, job_uid: str, timeout: int | float = 5)
wait_job_complete(project_uid: str, job_uid: str, timeout: int | float = 5)
Hold the request and prevent it from returning until the given job's status is "completed", the given timeout is reached, or the global command client request timeout is reached (default 300 seconds)
Parameters:
project_uid (str) -- target project UID, e.g., "P3"
job_uid (str) -- target job UID, e.g., "J42"
timeout (float, optional) -- how long to wait in seconds, defaults to 5
Returns: the job status when the timeout is reached
Return type: str
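For example, to wait up to 60 seconds for a job to complete (UIDs are illustrative):
cryosparcm cli "wait_job_complete('P3', 'J42', timeout=60)"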
get_email_by_id(user_id: str)
get_email_by_id(user_id: str)
Get the registered email address for the user account with the given ID
Parameters: user_id (str) -- _id property of the target user account
Returns: first email property in the database, or safe default value if not found
Return type: str
get_id_by_email(email: str)
get_id_by_email(email: str)
Get the _id property for the user account with the given email address
Parameters: email (str) -- target user email address
Raises: ValueError -- If user is not found
Returns: _id property of the target user
Return type: str
get_id_by_email_password(email: str, password: str)
get_id_by_email_password(email: str, password: str)
Retrieve the ID for a user given their email and password. Raises error if matching email/password combo is not found.
Parameters:
email (str) -- Target user email address
password (str) -- Target user password
Returns: ID of requesting user
Return type: str
get_user_id(user: str)
get_user_id(user: str)
Get a user's ID by one of its unique identifiers, including email, username and the ID itself. Returns None if user ID is not found or given identifier is not unique.
get_username_by_id(user_id: str)
get_username_by_id(user_id: str)
Get the registered user name for the user account with the given ID
Parameters: user_id (str) -- _id property of the target user account
Returns: name property in the database
Return type: str