Call methods described in this module directly as arguments to cryosparcm cli, or via the cli object in cryosparcm icli.
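For illustration, a method such as get_job (documented below) can be invoked in either form. A sketch, assuming a running instance with project "P3" and job "J42":

```
# From the shell:
cryosparcm cli "get_job('P3', 'J42', 'job_type')"

# From within cryosparcm icli:
cli.get_job('P3', 'J42', 'job_type')
```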
add_project_user_access(project_uid: str, requester_user_id: str, add_user_id: str)
Allows the owner of a project (or an admin) to grant another user access to view and edit the project
Parameters:
project_uid (str) -- uid of the project to grant access to
requester_user_id (str) -- _id of the user requesting a new user to be added to the project
add_user_id (str) -- uid of the user to add to the project
Raises: AssertionError
add_scheduler_lane(name: str, lanetype: str, title: str | None = None, desc: str = '')
Adds a new lane to the master scheduler
Parameters:
name (str) -- name of the lane
lanetype (str) -- type of lane ("cluster" or "node")
title (str) -- optional title of the lane
desc (str) -- optional description of the lane
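A minimal sketch of adding a node-type lane from within icli (the lane name and title are illustrative):

```
cli.add_scheduler_lane(name="workstations", lanetype="node", title="Lab Workstations")
```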
add_scheduler_target_cluster(name, worker_bin_path, script_tpl, send_cmd_tpl='{{ command }}', qsub_cmd_tpl='qsub {{ script_path_abs }}', qstat_cmd_tpl='qstat -as {{ cluster_job_id }}', qstat_code_cmd_tpl=None, qdel_cmd_tpl='qdel {{ cluster_job_id }}', qinfo_cmd_tpl='qstat -q', transfer_cmd_tpl='cp {{ src_path }} {{ dest_path }}', cache_path=None, cache_quota_mb=None, cache_reserve_mb=10000, title=None, desc=None)
Add a cluster to the master scheduler
Parameters:
name (str) -- name of cluster
worker_bin_path (str) -- absolute path to 'cryosparc_package/cryosparc_worker/bin' on the cluster
script_tpl (str) -- script template string
send_cmd_tpl (str) -- send command template string
qsub_cmd_tpl (str) -- queue submit command template string
qstat_cmd_tpl (str) -- queue stat command template string
qdel_cmd_tpl (str) -- queue delete command template string
qinfo_cmd_tpl (str) -- queue info command template string
transfer_cmd_tpl (str) -- transfer command template string (currently unused)
cache_path (str) -- path on SSD that can be used for the cryosparc cache
cache_quota_mb (int) -- the max size (in MB) to use for the cache on the SSD
cache_reserve_mb (int) -- size (in MB) to set aside for uses other than CryoSPARC cache
title (str) -- an optional title to give to the cluster
desc (str) -- an optional description of the cluster
Returns: configuration parameters for new cluster
Return type: dict
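As a sketch, a SLURM-style script_tpl might look like the following. The {{ ... }} placeholders follow the same Jinja-style convention as the command templates above, but the exact set of variables available to script templates is an assumption here, as are the cluster name and paths:

```python
# Hypothetical SLURM submission script template for add_scheduler_target_cluster.
# The {{ ... }} placeholders are rendered by the scheduler at submission time;
# which variables exist is an assumption in this sketch.
script_tpl = """#!/bin/bash
#SBATCH --job-name=cryosparc_{{ project_uid }}_{{ job_uid }}
#SBATCH --cpus-per-task={{ num_cpu }}
#SBATCH --gres=gpu:{{ num_gpu }}

{{ run_cmd }}
"""

# The cluster could then be registered from icli, e.g.:
# cli.add_scheduler_target_cluster(
#     name="slurm-cluster",
#     worker_bin_path="/opt/cryosparc/cryosparc_worker/bin",
#     script_tpl=script_tpl,
#     qsub_cmd_tpl="sbatch {{ script_path_abs }}",
# )
```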
add_scheduler_target_node(hostname, ssh_str, worker_bin_path, num_cpus, cuda_devs, ram_mb, has_ssd, cache_path=None, cache_quota=None, cache_reserve=10000, monitor_port=None, gpu_info=None, lane='default', title=None, desc=None)
Adds a worker node to the master scheduler
Parameters:
hostname (str) -- the hostname of the target worker node
ssh_str (str) -- the ssh connection string of the worker node
worker_bin_path (str) -- the absolute path to 'cryosparc_package/cryosparc_worker/bin' on the worker node
num_cpus (int) -- total number of CPU threads available on the worker node
cuda_devs (list) -- indices of the CUDA-capable devices (GPUs) on the worker node, as a list (e.g., list(range(4)) for four GPUs)
ram_mb (float) -- total available physical ram in MB
has_ssd (bool) -- if an ssd is available or not
cache_path (str) -- path on SSD that can be used for the cryosparc cache
cache_quota (int) -- the max size (in MB) to use for the cache on the SSD
cache_reserve (int) -- size (in MB) to initially reserve for the cache on the SSD
gpu_info (list) -- compiled GPU information as computed by get_gpu_info
lane (str) -- the scheduler lane to add the worker node to
title (str) -- an optional title to give to the worker node
desc (str) -- an optional description of the worker node
Returns: configuration parameters for new worker node
Return type: dict
Raises: AssertionError
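Note that cuda_devs takes a list of device indices rather than a count. A sketch of registering a node from icli (the hostname, paths, and hardware figures are illustrative):

```python
# Four GPUs are expressed as their indices [0, 1, 2, 3], not the count 4.
cuda_devs = list(range(4))

# The node could then be registered from icli, e.g.:
# cli.add_scheduler_target_node(
#     hostname="worker1",
#     ssh_str="cryosparcuser@worker1",
#     worker_bin_path="/opt/cryosparc/cryosparc_worker/bin",
#     num_cpus=32,
#     cuda_devs=cuda_devs,
#     ram_mb=128000,
#     has_ssd=True,
#     cache_path="/scratch/cryosparc_cache",
# )
```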
add_tag_to_job(project_uid: str, job_uid: str, tag_uid: str)
Tag the given job with the given tag UID
Parameters:
project_uid (str) -- target project UID, e.g., "P3"
job_uid (str) -- target job UID, e.g., "J42"
tag_uid (str) -- target tag UID, e.g., "T1"
Returns: contains modified jobs count
Return type: dict
add_tag_to_project(project_uid: str, tag_uid: str)
Tag the given project with the given tag
Parameters:
project_uid (str) -- target project UID, e.g., "P3"
tag_uid (str) -- target tag UID, e.g., "T1"
Returns: contains modified documents count
Return type: dict
add_tag_to_session(project_uid: str, session_uid: str, tag_uid: str)
Tag the given session with the given tag
Parameters:
project_uid (str) -- target project UID, e.g., "P3"
session_uid (str) -- target session UID, e.g., "S1"
tag_uid (str) -- target tag UID, e.g., "T1"
Returns: contains modified sessions count
Return type: dict
add_tag_to_workspace(project_uid: str, workspace_uid: str, tag_uid: str)
Tag the given workspace with the given tag
Parameters:
project_uid (str) -- target project UID, e.g., "P3"
workspace_uid (str) -- target workspace UID, e.g. "W1"
tag_uid (str) -- target tag UID, e.g., "T1"
Returns: contains modified workspaces count
Return type: dict
admin_user_exists()
Returns True if there exists at least one user with admin privileges, False otherwise
Returns: Whether an admin exists
Return type: bool
archive_project(project_uid: str)
Archive given project. This means that the project can no longer be modified and jobs cannot be created or run. Once archived, the project directory may be safely moved to long-term storage.
Parameters: project_uid (str) -- target project UID, e.g., "P3"
attach_project(owner_user_id: str, abs_path_export_project_dir: str)
Attach a project by importing it from the given path and writing a new lockfile. May only be run on previously-detached projects.
Parameters:
owner_user_id (str) -- user account ID performing this operation; this user takes ownership of the imported project
abs_path_export_project_dir (str) -- absolute path to directory of CryoSPARC project to attach
Returns: new project UID of attached project
Return type: str
calculate_intermediate_results_size(project_uid: str, job_uid: str, always_keep_final: bool = True, use_prt=True)
Find intermediate results, calculate their total size, and save it to the job document, all in a PostResponseThread
check_project_exists(project_uid: str)
Check that the given project exists and has not been deleted
Parameters: project_uid (str) -- unique ID of target project, e.g., "P3"
Returns: True if the project exists and hasn't been deleted
Return type: bool
check_workspace_exists(project_uid: str, workspace_uid: str)
Returns True if target workspace exists.
Parameters:
project_uid (str) -- target project UID, e.g., "P3"
workspace_uid (str) -- target workspace UID, e.g., "W4"
Returns: True if the workspace exists, False otherwise
Return type: bool
cleanup_data(project_uid: str, workspace_uid: str | None = None, delete_non_final: bool = False, delete_statuses: List[str] = [], clear_non_final: bool = False, clear_sections: List[str] = [], clear_types: List[str] = [], clear_statuses: List[str] = [])
Cleanup project or workspace data, clearing/deleting jobs based on final result status, sections, types, or status
cleanup_jobs(project_uid: str, job_uids_to_delete: List[str], job_uids_to_clear: List[str])
Cleanup jobs helper for cleanup_data
clear_intermediate_results(project_uid: str, workspace_uid: str | None = None, job_uid: str | None = None, always_keep_final: bool | None = True)
Asynchronously remove intermediate results from the given project or job
Parameters:
project_uid (str) -- target project UID, e.g., "P3"
workspace_uid (str, optional) -- clear intermediate results for all jobs in a workspace, defaults to None
job_uid (str, optional) -- target job UID, e.g., "J42", defaults to None
always_keep_final (bool, optional) -- not used, defaults to True
clear_job(project_uid: str, job_uid: str, nofail=False, isdelete=False)
Clear a job to return it to the building state (params and inputs are not cleared)
Parameters:
project_uid (str) -- uid of the project that contains the job to clear
job_uid (str) -- uid of the job to clear
nofail (bool) -- If True, ignores errors that occur while job is clearing, defaults to False
isdelete (bool) -- Set to True when the job is about to be deleted, defaults to False
Raises: AssertionError
clear_jobs_by_section(project_uid, workspace_uid=None, sections=[])
Clear all jobs from a project belonging to specified sections in the job register, optionally scoped to a workspace
clear_jobs_by_type(project_uid, workspace_uid=None, types=[])
Clear all jobs from a project belonging to specified types, optionally scoped to a workspace
clone_job(project_uid: str, workspace_uid: str | None, job_uid: str, created_by_user_id: str, created_by_job_uid: str | None = None)
Creates a new job as a clone of the provided job
Parameters:
project_uid (str) -- project UID
workspace_uid (str | None) -- uid of the workspace, may be empty
job_uid (str) -- uid of the job to copy
created_by_user_id (str) -- the id of the user creating the clone
created_by_job_uid (str, optional) -- uid of the job creating the clone, defaults to None
Returns: the job uid of the newly created clone
Return type: str
clone_job_chain(project_uid: str, start_job_uid: str, end_job_uid: str, created_by_user_id: str, workspace_uid: str | None = None, new_workspace_title: str | None = None)
Clone jobs that directly descend from the given start job UID up to the given end job UID. Returns a dict with information about cloned jobs, or None if nothing was cloned
Parameters:
project_uid (str) -- project UID where jobs are located, e.g., "P3"
start_job_uid (str) -- starting ancestor job UID
end_job_uid (str) -- ending descendant job UID
created_by_user_id (str) -- ID of user performing this operation
workspace_uid (str, optional) -- uid of workspace to clone jobs into, defaults to None
new_workspace_title (str, optional) -- title of new workspace to create if a uid is not provided, defaults to None
Returns: dictionary with information about created jobs and workspace or None
Return type: dict | None
clone_jobs(project_uid: str, job_uids: List[str], created_by_user_id, workspace_uid=None, new_workspace_title=None)
Clone the given list of jobs. If any jobs are related, it will try to re-create the input connections between the cloned jobs (but maintain the same connections to jobs that were not cloned)
Parameters:
project_uid (str) -- target project uid (e.g., "P3")
job_uids (list) -- list of job UIDs to clone (e.g., ["J1", "J2", "J3"])
created_by_user_id (str) -- ID of user performing this operation
workspace_uid (str, optional) -- uid of workspace to clone jobs into, defaults to None
new_workspace_title (str, optional) -- title of new workspace to create if one is not provided, defaults to None
Returns: dictionary with information about created jobs and workspace
Return type: dict
compile_extensive_validation_benchmark_data(instance_information: dict, job_timings: dict)
Compile extensive validation benchmark data for upload
create_backup(backup_dir: str, backup_file: str)
Create a database backup in the given directory with the given file name
Parameters:
backup_dir (str) -- directory to create backup in
backup_file (str) -- filename of backup file
create_empty_project(owner_user_id: str, project_container_dir: str, title: str, desc: str | None = None, project_dir: str | None = None, export=True, hidden=False)
Creates a new project and project directory and creates a new document in the project collection
NOTE
project_container_dir is an absolute path that the user guarantees is available everywhere.
Parameters:
owner_user_id (str) -- the _id of the user that owns the new project (which is also the user that requests to create the project)
project_container_dir (str) -- the path to the "root" directory in which to create the new project directory
title (str) -- the title of the new project to create
desc (str, optional) -- the description of the new project to create, defaults to None
export (bool, optional) -- if True, outputs project details to disk for exporting to another instance, defaults to True
hidden (bool, optional) -- if True, marks the new project as hidden, defaults to False
Returns: the new uid of the project that was created
Return type: str
create_empty_workspace(project_uid, created_by_user_id, created_by_job_uid=None, title=None, desc=None, export=True)
Add a new empty workspace to the given project.
Parameters:
project_uid (str) -- target project UID, e.g., "P3"
created_by_user_id (str) -- User _id that creates this workspace
created_by_job_uid (str, optional) -- Job UID that creates this workspace, defaults to None
title (str, optional) -- Workspace title, defaults to None
desc (str, optional) -- Workspace description, defaults to None
export (bool, optional) -- If True, dumps workspace contents to disk for exporting to other CryoSPARC instances, defaults to True
Returns: UID of new workspace, e.g., "W4"
Return type: str
create_new_job(job_type, project_uid, workspace_uid, created_by_user_id, created_by_job_uid=None, title=None, desc=None, do_layout=True, dry_run=False, enable_bench=False, priority=None)
Create a new job
NOTE
This request runs on its own thread but locks on the "jobs" lock, so only one create can happen at a time. This means the builder functions below can take as long as they need in their own threads (releasing the GIL during IO, numpy work, etc.) while command continues to respond to other requests that don't require this lock.
Parameters:
job_type (str) -- the name of the job to create
project_uid (str) -- uid of the project
workspace_uid (str) -- uid of the workspace
created_by_user_id (str) -- the id of the user creating the job
created_by_job_uid (str, optional) -- uid of the job that created this job, defaults to None
title (str) -- optional title of the job
desc (str) -- optional description of the job
do_layout (bool) -- specifies if the job tree layout should be recalculated, defaults to True
enable_bench (bool) -- should performance be measured for this job (benchmarking)
Returns: uid of the newly created job
Return type: str
Raises: AssertionError
create_tag(title: str, type: str, created_by_user_id: str, colour: str | None = None, description: str | None = None)
Create a new tag
Parameters:
title (str) -- tag name
type (str) -- tag type such as "general", "project", "workspace", "session" or "job"
created_by_user_id (str) -- user account ID performing this action
colour (str, optional) -- tag colour, may be "black", "red", "green", etc., defaults to None
description (str, optional) -- detailed tag description, defaults to None
Returns: created tag document
Return type: dict
create_user(created_by_user_id: str, email: str, password: str, username: str, first_name: str, last_name: str, admin: bool = False)
Creates a new CryoSPARC user account
Parameters:
created_by_user_id (str) -- identity of user who is creating the new user
email (str) -- email of user to create
password (str) -- password of user to create
username (str) -- username of user to create
first_name (str) -- first name of user to create
last_name (str) -- last name of user to create
admin (bool, optional) -- specifies if the user should have an administrator role, defaults to False
Returns: mongo _id of user that was created
Return type: str
delete_detached_project(project_uid: str)
Deletes large database entries such as streamlogs and gridFS for a detached project and marks the project and its jobs as deleted
delete_job(project_uid: str, job_uid: str, relayout=True, nofail=False, force=False)
Deletes a job after killing it (if running), clearing it, setting the "deleted" property, and recalculating the tree layout
Parameters:
project_uid (str) -- uid of the project that contains the job to delete
job_uid (str) -- uid of the job to delete
relayout (bool) -- specifies if the tree layout should be recalculated or not
delete_jobs(project_job_uids: List[Tuple[str, str]], relayout=True, nofail=False, force=False)
Delete the given project-job UID combinations, provided as a list of (project UID, job UID) pairs, where "PX" is a project UID and "JY" is a job UID in that project; each pair may be given as a tuple or a list.
Parameters:
project_job_uids (list[tuple[str, str]]) -- project-job UIDs to delete
relayout (bool, optional) -- whether to recompute the layout of the job tree, defaults to True
nofail (bool, optional) -- if True, ignore errors when deleting, defaults to False
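A sketch of the expected argument shape; per the List[Tuple[str, str]] annotation, each entry pairs a project UID with a job UID in that project (the UIDs shown are illustrative):

```python
# Each pair is (project UID, job UID); jobs from multiple projects may be mixed.
project_job_uids = [("P1", "J1"), ("P1", "J2"), ("P2", "J5")]

# From icli, e.g.:
# cli.delete_jobs(project_job_uids, relayout=True)
```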
delete_project(project_uid: str, request_user_id: str, jobs_to_delete: list | None = None, workspaces_to_delete: list | None = None)
Iterate through each job and workspace associated with the project to be deleted and delete both, and then disable the project.
Parameters:
project_uid (str) -- uid of the project to be deleted
request_user_id (str) -- _id of user requesting the project to be deleted
jobs_to_delete (list) -- list of all jobs to be fully deleted
workspaces_to_delete (list) -- list of all workspaces that will be deleted by association
Returns: string confirmation that the project has been deleted
Return type: str
Raises: AssertionError -- raised if a workspace fails to be deleted
delete_project_user_access(project_uid: str, requester_user_id: str, delete_user_id: str)
Removes a user's access from a project.
Parameters:
project_uid (str) -- uid of the project to revoke access to
requester_user_id (str) -- _id of the user requesting a user to be removed from the project
delete_user_id (str) -- uid of the user to remove from the project
Raises: AssertionError
delete_user(email: str, requesting_user_email: str, requesting_user_password: str)
Remove a user from the CryoSPARC instance. Only administrators may do this
Parameters:
email (str) -- user's email
requesting_user_email (str) -- your CryoSPARC login email
requesting_user_password (str) -- your CryoSPARC password
Returns: confirmation message
Return type: str
delete_workspace(project_uid: str, workspace_uid: str, request_user_id: str, jobs_to_delete_inside_one_workspace: list | None = None, jobs_to_update_inside_multiple_workspaces: list | None = None)
Asynchronously iterate through each job associated with the workspace to be deleted, either deleting or updating each job, and then disable the workspace
Parameters:
project_uid (str) -- uid of the project containing the workspace to be deleted
workspace_uid (str) -- uid of the workspace to be deleted
request_user_id (str) -- uid of the user that is requesting the workspace to be deleted
jobs_to_delete_inside_one_workspace (list) -- list of all jobs to be fully deleted
jobs_to_update_inside_multiple_workspaces (list) -- list of all jobs to be updated so that the workspace uid is removed from its list of "workspace_uids"
Returns: string confirmation that the workspace has been deleted
Return type: str
detach_project(project_uid: str)
Detach a project by exporting, removing lockfile, then setting its detached property. This hides the project from the interface and allows other instances to attach and run this project.
Parameters: project_uid (str) -- target project UID to detach, e.g., "P3"
do_job(job_type: str, puid='P1', wuid='W1', uuid='devuser', params={}, input_group_connects={})
Create and run a job on the "default" node
Parameters:
job_type (str) -- type of job, e.g, "abinit"
puid (str, optional) -- project UID to create job in, defaults to 'P1'
wuid (str, optional) -- workspace UID to create job in, defaults to 'W1'
uuid (str, optional) -- user ID performing this action, defaults to 'devuser'
params (dict, optional) -- parameter overrides for the job, defaults to {}
input_group_connects (dict, optional) -- input group connections dictionary, where each key is the input group name and each value is the parent output identifier, e.g., {"particles": "J1.particles"}, defaults to {}
Returns: created job UID
Return type: str
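The input_group_connects mapping wires each input group of the new job to a parent output. A sketch using the documented {"particles": "J1.particles"} form (the job type, UIDs, and output names are illustrative):

```python
# Keys are input group names on the new job;
# values are "<parent job UID>.<output name>".
input_group_connects = {"particles": "J1.particles"}

# From icli, e.g., to create and run an ab-initio reconstruction job:
# cli.do_job("abinit", puid="P1", wuid="W1", uuid="devuser",
#            input_group_connects=input_group_connects)
```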
dump_license_validation_results()
Get a summary of license validation checks
Returns: Text description of validation checks separated by newlines
Return type: str
enqueue_job(project_uid: str, job_uid: str, lane: str | None = None, user_id: str | None = None, hostname: str | None = None, gpus: List[int] | typing_extensions.Literal[False] = False, no_check_inputs_ready: bool = False)
Add the job in the given project to the queue for the given worker lane (default lane if not specified)
Parameters:
project_uid (str) -- uid of the project containing the job to queue
job_uid (str) -- job uid to queue
lane (str, optional) -- name of the worker lane onto which to queue, defaults to None
user_id (str, optional) -- user account ID performing this action, defaults to None
hostname (str, optional) -- the hostname of the target worker node, defaults to None
gpus (list, optional) -- list of GPU indices on which to queue the given job, defaults to False
no_check_inputs_ready (bool, optional) -- if True, forgoes checking whether inputs are ready (not recommended), defaults to False
Returns: job status (e.g., 'launched' if success)
Return type: str
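A sketch of queuing a job (the lane name, hostname, and GPU indices are illustrative):

```
# From the shell, queue onto a named lane:
cryosparcm cli "enqueue_job('P3', 'J42', lane='default')"

# From icli, target a specific node and GPUs:
cli.enqueue_job('P3', 'J42', hostname='worker1', gpus=[0, 1])
```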
export_job(project_uid: str, job_uid: str)
Start export for the given job into the project's exports directory
Parameters:
project_uid (str) -- target project UID, e.g., "P3"
job_uid (str) -- target job UID to export, e.g., "J42"
export_output_result_group(project_uid: str, job_uid: str, output_result_group_name: str, result_names: List[str] | None = None)
Start export of given output result group. Optionally specify which output slots to select for export
Parameters:
project_uid (str) -- target project UID, e.g., "P3"
job_uid (str) -- target job UID, e.g., "J42"
output_result_group_name (str) -- target output, e.g., "particles"
result_names (list, optional) -- target result slots list, e.g., ["blob", "location"]; exports all if unspecified, defaults to None
export_project(project_uid: str, override=False)
Ensure the given project is ready for transfer to another instance. Call detach_project
once this is complete.
Parameters:
project_uid (str) -- target project UID, e.g., "P3"
override (bool, optional) -- force run even if recently exported, defaults to False
generate_new_instance_uid(force_takeover_projects=False)
Generates a new uid for the CryoSPARC instance.
Parameters: force_takeover_projects (bool, optional) -- If True, overwrite existing lockfiles. If False, only create lockfiles in projects that don't already have one. Defaults to False
Returns: New instance UID
Return type: str
get_all_tags()
Get a list of available tag documents
Returns: list of tags
Return type: list
get_base_template_args(project_uid, job_uid, target)
Returns template arguments for cluster commands, including: project_uid, job_uid, job_creator, cryosparc_username, project_dir_abs, job_dir_abs, job_log_path_abs, and job_type
get_cryosparc_log(service_name, days=7, date=None, max_lines=None, log_name='', func_name='', level='')
Get cryosparc service logs, filterable by date, name, function, and level
Parameters:
service_name (str) -- target service name
days (int, optional) -- number of days of logs to retrieve, defaults to 7
date (str, optional) -- retrieve logs from a specific day, formatted as "YYYY-MM-DD", defaults to None
max_lines (int, optional) -- maximum number of lines to retrieve, defaults to None
log_name (str, optional) -- name of internal logger type, defaults to ""
func_name (str, optional) -- name of Python function producing logs, defaults to ""
level (str, optional) -- log severity level such as "INFO", "WARNING" or "ERROR", defaults to ""
Returns: Filtered log
Return type: str
get_default_job_priority(created_by_user_id: str)
Get the default job priority for jobs queued by the given user ID
Parameters: created_by_user_id (str) -- target user account _id
Returns: the integer priority, with 0 being the highest priority
Return type: int
get_gpu_info()
Asynchronously update the available GPU information on all connected nodes
get_instance_default_job_priority()
Default priority for jobs queued by users without an explicit priority set in Admin > User Management
Returns: Default priority config variable, 0 if never set
Return type: int
get_job(project_uid: str, job_uid: str, *args, **kwargs)
Get a job document, optionally restricted to the fields named in *args
Parameters:
project_uid (str) -- project UID
job_uid (str) -- uid of job
Returns: mongo result set
Return type: dict
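A sketch of fetching only selected fields; extra positional arguments name the fields to return (the field names shown are illustrative):

```
# From icli, fetch just the job's type and status:
cli.get_job('P3', 'J42', 'job_type', 'status')
```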