Guide: Migrating your CryoSPARC Instance
A guide to moving CryoSPARC from one location to another.
There may come a time when you want to move your CryoSPARC instance from one location to another. This may mean moving between folders, across network storage locations, or even to a different host machine entirely. There are four main areas we will focus on:
1. The paths of any raw particle, micrograph, or movie data imported into CryoSPARC
2. All CryoSPARC project directories
3. The CryoSPARC database and its (new) location
4. The identities/hostnames of compute nodes or the master node and the CryoSPARC binaries
Each of the four areas above can be handled in isolation; combined, they amount to a full migration of your CryoSPARC instance.
We will use a combination of the shell and an interactive Python session to complete this migration. You will need access to the master node on which the CryoSPARC system is hosted.
It is also recommended that you create a database backup before starting anything. More details are at: Setup, Configuration and Management
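A backup can be created with the cryosparcm backup command; the target directory below is only an example:
cryosparcm backup --dir=/data/cryosparc_backups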
When raw data is imported into a CryoSPARC project, the data is not copied into the project directory; instead, symlinks pointing to the original data files are created inside the import job directories. Read 7. Imported data and symlinks in project directories for more details.
If you're moving data that was brought into CryoSPARC with an "Import Particles", "Import Micrographs" or "Import Movies" job, you will need to repair these jobs. When CryoSPARC imports these three types of data, it creates a symlink to each file inside the job's imported directory. These symlinks may break if the original path to the file no longer exists. You can check the status of the symlinks by running ls -l inside the imported directory of the job. Note: the "Import Templates" and "Import Volumes" jobs copy the specified files directly into the job directory, so they do not need repair.
CryoSPARC v4.0+
Start up an interactive Python session
cryosparcm icli
Use the cli to find all the symlinks for an entire project or a single job:
>>> cli.get_project_symlinks('P1')
or
>>> cli.get_job_symlinks('P1', 'J3')
[{'exists': True,
'link_path': '/bulk8/data/dev_nwong_projects/P1/J2/imported/004525579726026751633_14sep05c_c_00003gr_00014sq_00011hl_00003es.frames.tif',
'link_target': '/bulk8/data/dev_nwong_projects/testdata/empiar_10025_subset/14sep05c_c_00003gr_00014sq_00011hl_00003es.frames.tif'},
…]
where
- link_path is the path to the symlink file
- link_target is the file the symlink points to
- exists indicates whether the target file exists
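To see only the links that need repair, you can filter the same output for entries whose target no longer exists (an empty list means nothing is broken):
>>> [s for s in cli.get_project_symlinks('P1') if not s['exists']]
[]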
CryoSPARC ≤v3.3
Start up an interactive Python session
cryosparcm icli
Execute a MongoDB query for all potentially affected "import" jobs
>>> jobs = list(db.jobs.find({'deleted': False, 'job_type': {'$in': ['import_particles', 'import_micrographs', 'import_movies']}}, {'_id': 0, 'project_uid': 1, 'uid': 1, 'job_type': 1}))
>>> print(jobs)
[{'job_type': 'import_movies', 'project_uid': 'P1', 'uid': 'J3'},
{'job_type': 'import_movies', 'project_uid': 'P2', 'uid': 'J3'},
{'job_type': 'import_movies', 'project_uid': 'P1', 'uid': 'J41'},
{'job_type': 'import_particles', 'project_uid': 'P1', 'uid': 'J42'},
{'job_type': 'import_movies', 'project_uid': 'P2', 'uid': 'J29'},
{'job_type': 'import_particles', 'project_uid': 'P2', 'uid': 'J51'},
{'job_type': 'import_movies', 'project_uid': 'P2', 'uid': 'J55'},
{'job_type': 'import_movies', 'project_uid': 'P2', 'uid': 'J64'},
...]
From this point, you can take a look into each listed job's imported directory:
>>> cli.get_job_dir_abs('P1', 'J3')
'/data/cryosparc_projects/P1/J3'
>>> !ls -l /data/cryosparc_projects/P1/J3/imported
total 99
lrwxrwxrwx 1 cryosparcuser cryosparcuser 83 Jan 31 2018 14sep05c_00024sq_00003hl_00002es.frames.mrc -> /data/EMPIAR/10025/data/14sep05c_raw_196/14sep05c_00024sq_00003hl_00002es.frames.mrc
lrwxrwxrwx 1 cryosparcuser cryosparcuser 83 Jan 31 2018 14sep05c_00024sq_00003hl_00005es.frames.mrc -> /data/EMPIAR/10025/data/14sep05c_raw_196/14sep05c_00024sq_00003hl_00005es.frames.mrc
lrwxrwxrwx 1 cryosparcuser cryosparcuser 83 Jan 31 2018 14sep05c_00024sq_00004hl_00002es.frames.mrc -> /data/EMPIAR/10025/data/14sep05c_raw_196/14sep05c_00024sq_00004hl_00002es.frames.mrc
lrwxrwxrwx 1 cryosparcuser cryosparcuser 83 Jan 31 2018 14sep05c_00024sq_00006hl_00003es.frames
Use the command
cli.job_import_replace_symlinks(project_uid, job_uid, prefix_cut, prefix_new)
where prefix_cut is the beginning of the link target you'd like to cut (e.g. /data/EMPIAR) and prefix_new is what you'd like to replace it with (e.g. /data). This function loops through every file inside the job directory, finds all symlinks, and modifies them only if they start with prefix_cut. The function returns the number of links it modified.
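For example, repairing a single job might look like this; the prefixes match the example paths above, and the returned count is illustrative:
>>> cli.job_import_replace_symlinks('P1', 'J3', '/data/EMPIAR', '/data')
20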
Below, the same call is used in a loop to repair all jobs across all projects at once.
>>> failed_jobs = []
>>> for job in jobs:
...     try:
...         print("Repairing %s %s" % (job['project_uid'], job['uid']))
...         modified_count = cli.job_import_replace_symlinks(job['project_uid'], job['uid'], '/data/EMPIAR', '/data')
...         print("Finished. Modified %d links." % modified_count)
...     except Exception as e:
...         failed_jobs.append((job['project_uid'], job['uid']))
...         print("Failed to repair %s %s: %s" % (job['project_uid'], job['uid'], str(e)))
...
>>> failed_jobs
[]
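Once the loop completes, you can verify a repaired job with a quick standard-library check that no broken symlinks remain in its imported directory (this is a sketch, not part of the cli API):
>>> import os
>>> imported = os.path.join(cli.get_job_dir_abs('P1', 'J3'), 'imported')
>>> [f for f in os.listdir(imported)
...  if os.path.islink(os.path.join(imported, f))
...  and not os.path.exists(os.path.join(imported, f))]
[]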
The information in this section applies to CryoSPARC ≤v3.3. For instructions on moving CryoSPARC project directories in v4.0+, see Use Case: Moving a project directory from one storage location to another
If you're moving the projects and their jobs to a new location, you will need to point CryoSPARC to the new directory where the projects reside. Jobs inside CryoSPARC are referenced by their location relative to their project directory, so you only need to specify a new location for the project directory, rather than for each job.
cryosparcm cli "update_project('PXX', {'project_dir' : '/new/abs/path/PXX'})"
- Where 'PXX' is the project UID and '/new/abs/path/PXX' is the new directory.
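To confirm the change took effect, you can read the directory back. This assumes your CryoSPARC version exposes get_project_dir_abs, the project-level analogue of the get_job_dir_abs call used above:
cryosparcm cli "get_project_dir_abs('PXX')"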
To update many projects at once, start up an interactive Python session
cryosparcm icli
Execute a MongoDB query to list all project directories
>>> projects = list(db['projects'].find({}, {'uid': 1, 'project_dir': 1, '_id': 0}))
>>> projects
[{'project_dir': '/data/cryosparc_projects/P1', 'uid': 'P1'},
{'project_dir': '/data/cryosparc_projects/P2', 'uid': 'P2'},
{'project_dir': '/data/cryosparc_projects/P3', 'uid': 'P3'},
{'project_dir': '/data/cryosparc_projects/P4', 'uid': 'P4'},
...]
Use the command
update_project(project_uid, attrs, operation='$set')
where attrs is a dictionary whose keys correspond to the fields in the project document to update. In the following example, update_project is used in a loop to modify all project directory paths.
>>> failed_projects = []
>>> new_parent_dir = '/cryoem/cryosparc_projects'
>>> import os
>>> for project in projects:
...     new_project_dir = os.path.join(new_parent_dir, os.path.basename(project['project_dir']))
...     try:
...         print("Modifying project directory for %s: %s --> %s" % (project['uid'], project['project_dir'], new_project_dir))
...         cli.update_project(project['uid'], {'project_dir': new_project_dir})
...     except Exception as e:
...         failed_projects.append(project['uid'])
...         print("Failed to update %s: %s" % (project['uid'], str(e)))
...
>>> failed_projects
[]
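As a sanity check, you can re-run the earlier query and confirm each project_dir now points at the new parent directory (the output below is illustrative):
>>> list(db['projects'].find({}, {'uid': 1, 'project_dir': 1, '_id': 0}))
[{'project_dir': '/cryoem/cryosparc_projects/P1', 'uid': 'P1'},
...]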
The CryoSPARC database doesn't have to live in the same location as the CryoSPARC binaries. To move it, copy the database to its new location and specify the new path in the cryosparc_master/config.sh file.
Stop CryoSPARC
cryosparcm stop
Copy the database to its new location
rsync -r --links /data/cryosparc/cryosparc_database/* /cryoem/cryosparc/cryosparc_database
Navigate to the cryosparc_master directory
cd /data/cryosparc/cryosparc_master
Modify config.sh to contain the new directory path to the database
nano config.sh
# modify the line below
export CRYOSPARC_DB_PATH="/cryoem/cryosparc/cryosparc_database"
Start CryoSPARC again
cryosparcm start
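After restarting, you can confirm which database path the instance picked up, since cryosparcm status prints the instance's configuration variables:
cryosparcm status | grep CRYOSPARC_DB_PATH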
Assuming the CryoSPARC instance is located on a shared storage layer, if you want to host it on another machine (i.e. server1:39000 -> server2:39000), you just need to specify the new hostname to the CryoSPARC master. Read further if the new machine doesn't have access to the same file system.
NOTE: Skip this step if you're using a shared filesystem (e.g. a remote storage server that is mounted on all machines). This means the CryoSPARC binaries, database and project directories are already accessible on the new machine.
First, follow all previous parts of this guide in order. Read this entire guide thoroughly before starting anything. You will need your old instance started and working, so make sure you still have access to it. Then, follow the install guide to install CryoSPARC normally, using the migrated database path. After this step, you are done.
On the old machine, stop CryoSPARC
cryosparcm stop
Navigate to the cryosparc_master directory
cd /data/cryosparc/cryosparc_master
Modify config.sh to list the new master node
nano config.sh
#modify the line below
export CRYOSPARC_MASTER_HOSTNAME="newnode"
On the new machine, start CryoSPARC. Ensure you start it as the same user that ran the old instance.
cryosparcm start
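Once started, cryosparcm status should report the instance's processes as running, and the web interface should be reachable at the new hostname; the check below assumes the default base port of 39000:
cryosparcm status
curl http://newnode:39000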