Troubleshooting
Overview of common issues and advice on how to resolve them.
Unless otherwise noted:
- Log in to the workstation or remote node where `cryosparc_master` is installed.
- Use the same non-root UNIX user account that runs the CryoSPARC process and was used to install CryoSPARC.
- Run all commands on this page in a terminal running `bash`.

In v4.0+, you can download error reporting information from within the application. For more details, see: Guide: Download Error Reports
"Couldn't connect to host" "Could not resolve host" {"success": false} "tar: This does not look like a tar archive" "Version mismatch! Worker and master versions are not the same. Please update." "An unexpected error has occurred."
Steps
1. Check that your `LICENSE_ID` environment variable is set correctly with this command:
```
echo $LICENSE_ID
```
Ensure the output exactly matches the CryoSPARC License ID issued to you over email.
2. Check your machine's connection to CryoSPARC's license servers at get.cryosparc.com with this `curl` command:
```
curl https://get.cryosparc.com/checklicenseexists/$LICENSE_ID
```
You should see the message `{"success": true}`. If instead you see `{"success": false}`, your license is not valid, so please check that it has been entered correctly. If you see an error message like "Couldn't connect to host" or "Could not resolve host", check your Internet connection and firewall, or ensure your IT department has the `get.cryosparc.com` license server domain whitelisted.
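Both checks can be combined into a single command; a minimal sketch, assuming a `bash` shell in which `LICENSE_ID` is exported:
```bash
# Aborts with a clear error if LICENSE_ID is unset, then queries the license server
curl "https://get.cryosparc.com/checklicenseexists/${LICENSE_ID:?is not set}"
```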
This can happen following a fresh install or recent update.
Steps
1. In a command line, run `cryosparcm status`.
2. Check that the output looks like this:
```
----------------------------------------------------------------------------
CryoSPARC System master node installed at
/home/cryosparcuser/cryosparc/cryosparc_master
Current CryoSPARC version: v4.0.0
----------------------------------------------------------------------------

CryoSPARC process status:

app                              RUNNING   pid 1223898, uptime 0:51:41
app_api                          RUNNING   pid 1224512, uptime 0:51:39
app_api_dev                      STOPPED   Not started
app_legacy                       STOPPED   Not started
app_legacy_dev                   STOPPED   Not started
command_core                     RUNNING   pid 1218914, uptime 0:51:56
command_rtp                      RUNNING   pid 1221639, uptime 0:51:48
command_vis                      RUNNING   pid 1220983, uptime 0:51:49
database                         RUNNING   pid 1217182, uptime 0:52:00

----------------------------------------------------------------------------
License is valid
----------------------------------------------------------------------------

global config variables:

export CRYOSPARC_LICENSE_ID="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
export CRYOSPARC_MASTER_HOSTNAME="localhost"
export CRYOSPARC_DB_PATH="/home/cryosparcuser/cryosparc/cryosparc_database"
export CRYOSPARC_BASE_PORT=39000
export CRYOSPARC_INSECURE=false
export CRYOSPARC_CLICK_WRAP=true
```
3. Check that all items under "CryoSPARC process status" that do not end in `_dev` or `_legacy` are `RUNNING`. If any are not, run `cryosparcm restart`.
4. If any non-`_dev`/non-`_legacy` components have a status other than `RUNNING` (such as `STOPPED` or `EXITED`), check their log for errors. For example, this command checks for errors on the `database` process:
```
cryosparcm log database
```
(Press Ctrl + C, then `q`, to stop logging.)
5. If the web interface is inaccessible, check firewall settings to ensure CryoSPARC's base port number (default 39000) is exposed for network access (see the sketch after this list).
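A quick reachability check from another machine can confirm the port is exposed; an illustrative sketch, assuming the master is reachable as the hypothetical hostname `cryosparc-master` and uses the default base port:
```bash
# Any HTTP response (even an error page) means the port is reachable;
# a timeout or "connection refused" points to a firewall or network issue
curl http://cryosparc-master:39000
```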
Any error messages here could indicate specific configuration issues and may require re-installing CryoSPARC.
If at any point you see `No command 'cryosparcm' found` or `command not found: cryosparcm`:
1. Check that you are on the master node or workstation where `cryosparc_master` is installed.
2. Run `echo $PATH` and check that it contains `<installation directory>/cryosparc_master/bin`:
```
$ echo $PATH
/home/cryosparcuser/cryosparc/cryosparc_master/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
```
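If the directory is missing from `PATH`, you can append it in your shell startup file; an illustrative sketch, assuming the installation path from the example above (substitute your own):
```bash
# Add the cryosparc_master/bin directory to PATH for future bash sessions
echo 'export PATH="/home/cryosparcuser/cryosparc/cryosparc_master/bin:$PATH"' >> ~/.bashrc
source ~/.bashrc
```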
This error message occurs if the email address field does not match any existing users in your CryoSPARC instance. Use the CryoSPARC command line to verify the details of your user account and change the email address or password if needed.
1. Run the following command in your terminal: `cryosparcm listusers`
2. If an email address is incorrect (e.g., misspelled or with an extra space at the beginning or end), modify it in the database:
   - Log into the MongoDB shell: `cryosparcm mongo`
   - Once in the MongoDB shell, enter the following (replace the incorrect and correct email addresses):
```
db.users.update({ 'emails.0.address': '<incorrect email>' }, { $set: { 'emails.0.address': '<correct email>' } })
```
   - Exit the MongoDB shell with `exit`.
3. If you don't remember your password, reset it with the following command (replace with your email address and new password):
```
cryosparcm resetpassword --email "<email address>" --password "<new password>"
```
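To confirm the changes took effect, list the user accounts again:
```bash
cryosparcm listusers   # the corrected email address should now appear
```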
An incomplete shutdown of CryoSPARC is likely to interfere with subsequent attempts to start CryoSPARC and/or CryoSPARC software updates. Incomplete shutdowns can occur for various reasons, including, but not limited to:
- unclean shutdown of the computer that runs `cryosparc_master` processes
- failed coordination of services by `cryosparc_master`'s `supervisord` process
Follow this sequence to ensure a complete shutdown of CryoSPARC.

For CryoSPARC instances that were not configured as a systemd service, run the command `cryosparcm stop`. Do not use `cryosparcm stop` for CryoSPARC instances that are controlled by systemd; for such instances, use the appropriate `systemctl stop` command.

Confirm that the basic shutdown did not "leave behind" any CryoSPARC-related processes. If the basic shutdown was successful, a suitable `ps` command should not show any processes for the CryoSPARC instance in question, but processes may be shown if
- a glitch occurred during the basic shutdown, or
- the computer hosts multiple CryoSPARC instances.
To illustrate what kind of processes one might encounter, here is an example command and its output for a running CryoSPARC v4.3.1 instance:
```
$ ps -w -U user1 -opid,ppid,start,cmd | grep -e cryosparc -e mongo | grep -v grep
2879033       1 10:36:48 python /u/user1/sw/cryosparc-prod/v4/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/bin/supervisord -c /u/user1/sw/cryosparc-prod/v4/cryosparc_master/supervisord.conf
2879640 2879033 10:36:53 mongod --auth --dbpath /u/user1/sw/cryosparc-prod/v4/cryosparc_db_61561 --port 61561 --oplogSize 64 --replSet meteor --wiredTigerCacheSizeGB 4 --bind_ip_all
2880256 2879033 10:36:56 python -c import cryosparc_command.command_core as serv; serv.start(port=61562)
2880901 2879033 10:37:02 python -c import cryosparc_command.command_vis as serv; serv.start(port=61563)
2881272 2879033 10:37:03 python -c import cryosparc_command.command_rtp as serv; serv.start(port=61565)
2882326 2879033 10:37:10 /u/user1/sw/cryosparc-prod/v4/cryosparc_master/cryosparc_app/api/nodejs/bin/node ./bundle/main.js
```
This is a simple example. More complex configurations, such as a host with multiple active CryoSPARC instances, may require different `ps` options and/or `grep` patterns.

The `ps` output may include processes that belong to non-CryoSPARC applications or to CryoSPARC instances other than the instance that you wish to shut down. Parent process identifiers and port numbers in the listed commands can help in attributing processes to a common parent `supervisord` process. Carefully confirm the purpose and identity of any process before termination.

For the example above, it should be sufficient to `kill` the `supervisord` process using the process identifier shown by the `ps` command:
```
kill 2879033
```
and wait a few seconds for the `supervisord` process's children to be terminated automatically. Finally, using another `ps` command with suitable options, re-confirm that all relevant processes have in fact been terminated.

An intact CryoSPARC instance manages the creation and deletion of socket files for `mongod` and `supervisord`, like
```
/tmp/mongodb-39001.sock
/tmp/cryosparc-supervisor-263957c4ac4e8da90abc3d163e3c073c.sock
```
Filenames differ between CryoSPARC instances, for example based on `$CRYOSPARC_DB_PORT`. Socket files should be deleted only under specific circumstances, subject to the precautions given below.
Never delete socket files before confirming that associated processes have been terminated as described in the previous step.
The computer may store socket files that belong to non-CryoSPARC applications or to CryoSPARC instances other than the CryoSPARC instance that you wish to shutdown. Such socket files may have names similar to the files you wish to delete. Carefully confirm the purpose and identity of each file before any deletion.
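Once you have confirmed that the associated processes are gone and that the files belong to the instance in question, the stale socket files can be removed; an illustrative sketch using the example filenames above (yours will differ):
```bash
# Only after the ps check above shows no remaining CryoSPARC processes
rm /tmp/mongodb-39001.sock
rm /tmp/cryosparc-supervisor-263957c4ac4e8da90abc3d163e3c073c.sock
```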
Follow the steps in this section when you see error messages that look like this:
"License is invalid." "License signature invalid." "Could not find local license file. Please re-establish your connection to the license servers." "Local license file is expired. Please re-establish your connection to the license servers." "Token is invalid. Another CryoSPARC instance is running with the same license ID."
Steps
Run `cryosparcm licensestatus`. This should result in "License is valid". If you see this error:
```
ServerError: Authentication failed
```
your license ID is not configured correctly. Check the `CRYOSPARC_LICENSE_ID` entry in `cryosparc_master/config.sh`.

If you see this error:
```
WARNING: Could NOT verify active license
```
the output includes a list of checks, the last of which will indicate a failure. Depending on which check failed, do one of the following:
- Check your Internet connection.
- Check your machine's connection to CryoSPARC's license servers at get.cryosparc.com with this `curl` command (substitute `<license>` with your unique license ID):
```
curl https://get.cryosparc.com/checklicenseexists/<license>
```
Look for the message `{"success": true}`. If instead you see `{"success": false}`, your license is not valid, so please check that it has been entered correctly. If you see an error message like "Couldn't connect to host" or "Could not resolve host", check your Internet connection and firewall, or ensure your IT department has the `get.cryosparc.com` license server domain whitelisted.

If you see a license ID conflict such as "Another cryoSPARC instance is running with the same license ID.", confirm that no other CryoSPARC instance is using your license ID, then run `cryosparcm start`.
Follow these steps when the CryoSPARC web interface is up and running normally and jobs may be created but do not run. These error messages may indicate this issue:
- "list index out of range"
- "Could not resolve hostname ... Name or service not known"

A job that never changes from `Queued` or `Launched` status may also indicate this.

Steps
1. Ensure at least one worker is connected to the master. See the Installation page for details. Visit Manage > Resources to see what lanes are available.
2. (For master/worker setups) Check that SSH is configured between the master and worker machines (see the sketch after this list).
3. Check the `command_core` log to find any application error messages:
```
cryosparcm log command_core
```
(Press Ctrl + C on the keyboard, followed by `q`, to exit when finished.)
4. Refresh job types and reload CryoSPARC:
```
cryosparcm cli "refresh_job_types()"
cryosparcm cli "reload()"
```
5. Restart CryoSPARC: `cryosparcm restart`
6. Clear the job and re-run it.
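A quick way to verify SSH connectivity for step 2; an illustrative sketch, assuming a hypothetical worker hostname `worker1` and the CryoSPARC UNIX user:
```bash
# Should print the worker's hostname without prompting for a password;
# a password prompt means SSH keys are not set up between master and worker
ssh cryosparcuser@worker1 hostname
```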
This indicates that CryoSPARC started the job process but the job encountered an internal error immediately after.

Check the job's Event Log for errors. If there are none, check the standard output log, either from the interface under Job > Metadata > Job Log, or from the command line with `cryosparcm joblog PX JY`, substituting `PX` and `JY` for the project and job IDs, respectively.

Typically the errors here occur when the worker process cannot connect back to the master. Ensure there is a stable network connection between all machines involved. Ensure CryoSPARC was installed correctly and re-install if necessary.
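For example, to view the log of a job with hypothetical IDs (project P3, job J42):
```bash
cryosparcm joblog P3 J42   # IDs are illustrative; use your own project/job IDs
```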
When a job fails, its job card in the interface turns red and the bottom of the job log includes an error message with the text `Traceback (most recent call last)`.

Common failure reasons include:
- Invalid or unspecified input slots
- Invalid or unspecified required parameters, including file/folder paths
- Incorrectly set up GPU (e.g., running a job on a node without enough GPUs or CUDA drivers not installed)
- Another process taking up memory on a GPU
- Cache not set up correctly for a worker
- Lost connection to `cryosparc_master`

Example of failed jobs in a CryoSPARC workspace

Example of an error log entry at the bottom of a Failed job
Common job failure error messages:
"AssertionError: Child process with PID ... has terminated unexpectedly!" Job is unresponsive - no heartbeat received in 30 seconds.
Common error messages that indicate incorrectly configured GPU drivers:
"cuInit failed: unknown error" "no CUDA-capable device is detected" "cuMemHostAlloc failed: OS call failed or operation not supported on this OS" "cuCtxCreate failed: invalid device ordinal kernel.cu ... error: identifier "__shfl_down_sync" is undefined
Common error messages that indicate not enough GPU memory:
"cuMemAlloc failed: out of memory" "cuArrayCreate failed: out of memory" "cufftAllocFailed"
Steps
1. Ensure a supported version of the CUDA toolkit is installed and running on the workstation or each worker node.
2. Check the GPU configuration on the workstation or node where the job runs. Log into that machine, navigate to the CryoSPARC installation directory, and run the `cryosparcw gpulist` command:
```
cd /path/to/cryosparc_worker
bin/cryosparcw gpulist
```
3. Run `nvidia-smi` to check that no other processes are using GPU memory; CryoSPARC-related processes appear with process name "python" (see the sketch after this list). If you don't recognize the processes using GPU memory, run `kill <PID>`, substituting `<PID>` with the value under the Processes PID column.
Example output of the nvidia-smi command, showing CUDA 10.2 and a CryoSPARC python process using ~2GB on GPU 0
4. Check the Event Log: select the Job card in the CryoSPARC interface and press the Spacebar on your keyboard to see the log. Scroll down to the bottom and look for the failure reasons in red.
5. Clear the job and select the "Build" status badge on the job card to enter the Job Builder.
6. If the job failed with a GPU-related error and multiple GPUs are available, try running the job on a different GPU. Press Queue, switch to the "Run on specific GPU" Queue type and select one or more GPUs.
7. If the job failed with `AssertionError: Non-optional inputs from the following input groups and their slots are not connected`, clear the job, enter the Builder and expand any input groups connected to the job. Missing required slots appear with the text "Empty" and "Required".
8. Check the job parameters: to learn about specific parameters, hover over or touch the parameter names in the Job Builder to see a description of what they do.
On-hover description of the "Negative Stain Data" parameter for the "Import Movies" job
9. Find the target job type in this guide's Job Reference for more detailed descriptions of expected input slots and parameters: All Job Types in CryoSPARC.
10. Reduce the box size of extracted particles. Some jobs need to fit thousands of particles in GPU memory at a time, and larger box sizes exceed GPU memory limits. Either extract with a smaller box size or use the Downsample Particles job.
11. Refresh job types and reload CryoSPARC:
```
cryosparcm cli "refresh_job_types()"
cryosparcm cli "reload()"
```
12. Look for extended error information with the `cryosparcm joblog` command (press Ctrl + C on the keyboard to exit when finished).
13. Check the network connection from the worker machine to the master.
14. On occasion, a job fails due to an error in the CryoSPARC code (a bug). The CryoSPARC team regularly releases updates and patches with bug fixes. Check for and install the latest update or patch. If you find a new bug, see the Additional Help section for advice.
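For step 3, a compact way to list the processes currently holding GPU memory (standard `nvidia-smi` query flags):
```bash
# CSV list of PID, process name, and GPU memory used, per compute process
nvidia-smi --query-compute-apps=pid,process_name,used_memory --format=csv
```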
Due to their large sizes, cryo-EM datasets can take a long time to process with sub-optimal hardware or parameters. Here are some facilities that CryoSPARC provides for increasing speed/performance.
- Connect workers with SSD cache enabled. This speeds up processing for extracted particles during 2D Classification, ab-initio reconstruction, refinement and more. Ensure the "Cache particle images on SSD" parameter is enabled under "Compute settings" for the target particle-processing job.
- Some jobs (motion correction, CTF estimation, 2D classification) can run on multiple GPUs. If your hardware supports it, increase the number of GPUs to parallelize over.

2D Classification jobs support particle SSD caching and parallelizing over multiple GPUs.

- Extracted particles with large box sizes (relative to their pixel size) take a long time to process. Consider Fourier-cropping (or "binning") the extracted particle blobs with the Downsample Particles job.
- Minimize the number of processes using system resources on the workstation or worker nodes.
- Check for zombie processes on worker machines. The process is similar to the steps under "Another CryoSPARC instance is running with the same license ID" in the License error or license not found section.
- Cancel the job, then clear and re-queue it.
File "/u/cryosparc/cryosparc_worker/cryosparc_compute/jobs/runcommon.py", line 1711, in run_with_except_hook
run_old(*args, **kw)
File "cryosparc_worker/cryosparc_compute/engine/cuda_core.py", line 129, in cryosparc_compute.engine.cuda_core.GPUThread.run
File "cryosparc_worker/cryosparc_compute/engine/cuda_core.py", line 130, in cryosparc_compute.engine.cuda_core.GPUThread.run
File "cryosparc_worker/cryosparc_compute/engine/engine.py", line 997, in cryosparc_compute.engine.engine.process.work
File "cryosparc_worker/cryosparc_compute/engine/engine.py", line 106, in cryosparc_compute.engine.engine.EngineThread.load_image_data_gpu
File "cryosparc_worker/cryosparc_compute/engine/gfourier.py", line 33, in cryosparc_compute.engine.gfourier.fft2_on_gpu_inplace
File "/u/cryosparc/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.7/site-packages/skcuda/fft.py", line 102, in __init__
capability = misc.get_compute_capability(misc.get_current_device())
File "/u/cryosparc/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.7/site-packages/skcuda/misc.py", line 254, in get_current_device
return drv.Device(cuda.cudaGetDevice())
File "/u/cryosparc/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.7/site-packages/skcuda/cudart.py", line 767, in cudaGetDevice
cudaCheckStatus(status)
File "/u/cryosparc/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.7/site-packages/skcuda/cudart.py", line 565, in cudaCheckStatus
raise e
skcuda.cudart.cudaErrorInsufficientDriver
```
This error usually indicates the NVIDIA driver you're using isn't compatible with the GPU and CUDA Toolkit version installed. This issue can be fixed by installing the latest NVIDIA Driver from the driver download page.
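To check which driver version is currently installed before updating, you can query it directly (a standard `nvidia-smi` query flag):
```bash
# Prints the installed NVIDIA driver version for each GPU
nvidia-smi --query-gpu=driver_version --format=csv,noheader
```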
```
Traceback (most recent call last):
File "cryosparc_worker/cryosparc_compute/run.py", line 72, in cryosparc_compute.run.main
File "/home/cryosparc/cryosparc/cryosparc_worker/cryosparc_compute/jobs/jobregister.py", line 371, in get_run_function
runmod = importlib.import_module(".."+modname, __name__)
File "/home/cryosparc/cryosparc/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.7/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
File "<frozen importlib._bootstrap>", line 983, in _find_and_load
File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 1050, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "cryosparc_worker/cryosparc_compute/jobs/class2D/run.py", line 13, in init cryosparc_compute.jobs.class2D.run
File "/home/cryosparc/cryosparc/cryosparc_worker/cryosparc_compute/engine/__init__.py", line 8, in <module>
from .engine import * # noqa
File "cryosparc_worker/cryosparc_compute/engine/engine.py", line 9, in init cryosparc_compute.engine.engine
File "cryosparc_worker/cryosparc_compute/engine/cuda_core.py", line 4, in init cryosparc_compute.engine.cuda_core
File "/home/cryosparc/cryosparc/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.7/site-packages/pycuda/driver.py", line 62, in <module>
from pycuda._driver import * # noqa
ImportError: /home/cryosparc/cryosparc/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.7/site-packages/pycuda/_driver.cpython-37m-x86_64-linux-gnu.so: undefined symbol: _ZSt28__throw_bad_array_new_lengthv
```
This error arises when running GPU jobs (Patch Motion Correction, 2D Classification) on CryoSPARC v3.3.2 or v3.4.0 on Ubuntu 22+.

This issue may be fixed by adding two variables (`CFLAGS="-static-libstdc++"` and `CXXFLAGS="-static-libstdc++"`) to the environment before recompiling the PyCUDA module using `cryosparcw newcuda`:
```
$ export CFLAGS="-static-libstdc++"
$ export CXXFLAGS="-static-libstdc++"
$ cryosparc_worker/bin/cryosparcw newcuda ~/cryosparc/cuda-11.8.0
```
In rare circumstances, jobs with "Cache particle images on SSD" enabled will not complete, exhibiting one of the following scenarios:
- The job logs "SSD cache : cache successfully synced" and never proceeds further
- The job logs "SSD cache : requested files are locked for past Xs" and never proceeds further, even when there are no other jobs accessing the SSD cache
- The job fails with an HTTP timeout error during the SSD caching step
These typically occur in long-active CryoSPARC instances with many projects and files on the SSD. There are several strategies to reduce these occurrences.
SSDs naturally degrade over time, and the likelihood of failure increases with heavy usage. Use a tool such as `smartctl` to check the SSD. If enough errors have accumulated, the SSD may have to be replaced.

CryoSPARC automatically removes files from the SSD that have not been accessed in a while (more than 30 days by default) each time the SSD cache system runs. If the SSD is very heavily used in particle-processing jobs or by other external tools, leaving more free space available may extend its lifetime. This is possible with one or both of these strategies:
- Reconnect the worker with the `--ssdreserve` flag set (default 10GB, i.e., 10000MB) to ensure CryoSPARC always leaves the given amount of free space on the SSD (it will clean out old files to stay above the threshold).
- Set the `CRYOSPARC_SSD_CACHE_LIFETIME_DAYS` environment variable in `cryosparc_master/config.sh` to clean up unused files on the SSD more frequently. The default value is 30 days (see the example below).
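For instance, to shorten the cache lifetime to one week, the variable could be set as follows in `cryosparc_master/config.sh` (the value 7 is illustrative; 30 days is the default):
```bash
# Remove cached files not accessed within the last 7 days
export CRYOSPARC_SSD_CACHE_LIFETIME_DAYS=7
```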
`cryosparc_master` uses its MongoDB database to coordinate SSD caching between multiple workers running in parallel. If a job fails unexpectedly during the SSD caching step, this can leave database records in an inconsistent state that prevents other jobs from proceeding. To address this, try the following steps:
1. Ensure no jobs are running in CryoSPARC.
2. In a terminal, enter `cryosparcm mongo` to enter the interactive database prompt.
3. Enter the following command to check how many records are in an inconsistent state:
```
db.cache_files.find({status: {$nin: ['hit', 'miss']}}).length()
```
4. If the result is not 0 (zero), enter the following command to fix them:
```
db.cache_files.updateMany({status: {$nin: ['hit', 'miss']}}, {$set: {status: 'miss'}})
```
5. Exit the database prompt with Ctrl + D.
6. Try re-running the problematic jobs.
If the previous options have no effect, fully reset the caching system with the following steps:
1. Ensure no jobs are running in CryoSPARC.
2. For each connected worker machine:
   - Navigate to the SSD cache directory containing CryoSPARC's cache files (e.g., `/scratch/`). This path was configured at installation time.
   - Look for a directory named `instance_<master hostname>:<master port + 1>`, e.g., `instance_localhost:39001`.
   - Delete this directory and all its contents (see the sketch after this list).
3. In a terminal, enter `cryosparcm mongo` to enter the interactive database prompt.
4. Enter the following command to clear out the cache records:
```
db.cache_files.deleteMany({})
```
5. Exit the database prompt with Ctrl + D.
6. Try re-running the problematic jobs.
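A sketch of the deletion in step 2, assuming the cache directory is `/scratch/` and the master uses the default base port 39000 (making the directory suffix 39001); adjust the path, hostname, and port to your configuration:
```bash
# Remove the per-instance cache directory on a worker machine
rm -rf "/scratch/instance_localhost:39001"
```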
If you encounter a problem with the user interface, e.g., one or more elements of a page are not loading, etc., you can use the following steps to obtain debugging information.
1. Open the browser console.
   - In Chrome, Firefox, Edge, and Safari this can be done by right-clicking the page to open the browser context menu and selecting the Inspect option (Inspect Element in Safari).
   - This will open a "DevTools" panel used for inspecting and debugging in the browser. The panel includes a number of tabs at the top used to display different views. When opened via the context menu, the current view will be the Elements tab. Click on the Console tab directly beside the Elements tab to view the web console. This is where errors, warnings, and general information about the page's JavaScript code can be observed.
   - In order to keep the console clean in production, we disable our development logs. Enable these logs by pasting the command `window.localStorage.setItem('cryosparc_debug', true);` into the browser console and pressing the Enter key. `undefined` will be logged below this command if it was submitted correctly.
   - Now reload the page and all of the development console logs and errors will be visible.
2. Save console output. Please save the console output as a `.log` file, including the type of browser (Chrome, Firefox, Edge, Safari, etc.) in the file name. The filename should be formatted as `console_{browser}.log`, e.g., `console_chrome.log`.
   - Chrome or Edge: Right-click anywhere in the console panel to open the context menu and select the Save As... option. This will allow you to save the entire output as a `.log` file.
   - Firefox: Right-click on a console message to open the context menu and select the Save all Messages to File option. This saves the entire output as a `.txt` file by default (or optionally as a `.log` file).
   - Safari: Click and drag the cursor over all items in the console output so that they are all highlighted blue. Then right-click on any of the highlighted items to open the context menu and select the Save Selected option to save the entire output as a `.txt` file by default (or optionally as a `.log` file).
3. Save network output. Navigate to the Network tab in the DevTools by selecting it from the tabs in the top bar of the panel. If the Network tab is not shown, it is likely hidden in the overflow menu, which appears when there is not enough space to display all of the tab options. Click the overflow menu button, represented by two right chevrons (>>), and select the Network option. Please include the type of web browser (Chrome, Firefox, Edge, or Safari) in the name of the `.har` file you are saving. The filename should be formatted as `network_{browser}.har`, e.g., `network_chrome.har`.
   - Chrome or Edge: Make sure the red record button at the top left of the panel is activated; it appears as a grey circle when inactive and a red circle when active. The Preserve log checkbox must also be selected. Reload the page so the network panel is populated with all requests made and received by the browser, then right-click on any of the items in the network request table and select Save all as HAR with content from the context menu.
   - Firefox: Right-click on any of the items in the network request table and select Save All As HAR from the context menu.
   - Safari: Right-click on any of the items in the network request table and select Export HAR from the context menu.
For topics not covered above, get additional help through the CryoSPARC Discussion Forum:
If no related discussions exist, please create a new post. Review our Troubleshooting Guidelines for items to include in your post: