CryoSPARC Guide

Tutorial: 3D Flexible Refinement

An in-depth guide to 3D Flexible Refinement (3DFlex) in CryoSPARC.
3DFlex (BETA) is available in CryoSPARC v4.1+.


3D Flexible Refinement (3DFlex) is a motion-based deep generative model for continuous heterogeneity. It can model non-rigid motion and flexibility of a protein molecule across its conformational landscape, and can use the motion model to combine signal from particle images in different conformations to improve refinement resolution in flexible regions.
The 3DFlex model represents the flexible 3D structure of a protein as deformations of a single canonical 3D density map V. Under the model, a single particle image is associated with a low-dimensional latent coordinate z that encodes the conformation for the particle in the image. A neural flow generator network f_θ converts the latent coordinate into the flow field u and a convection operator then deforms the canonical density to generate a convected map W. This map can then be projected along the particle viewing direction determined by the pose φ, CTF-corrupted, and compared against the experimental image.
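The forward model above can be sketched in a few lines of NumPy. This is an illustrative toy only: the flow generator below is a fixed-translation stand-in for the real neural network f_θ, and all names and sizes are hypothetical. It shows the order of operations: latent coordinate → flow field → convection → projection.

```python
import numpy as np

# Toy sketch of the 3DFlex forward model (illustrative; the real flow
# generator is a neural network trained inside CryoSPARC).
rng = np.random.default_rng(0)

N = 16                                   # box size in voxels (made up)
V = rng.random((N, N, N))                # canonical density map
z = np.array([0.3, -0.7])                # latent conformation coordinate (K=2)

def flow_generator(z, shape):
    """Stand-in for f_theta: maps the latent coordinate to a per-voxel
    3D displacement field u (here just a global shift driven by z)."""
    u = np.zeros(shape + (3,))
    u[..., 0] = z[0]                     # x-displacement driven by z_1
    u[..., 1] = z[1]                     # y-displacement driven by z_2
    return u

def convect(V, u):
    """Deform the canonical map by the flow field: each output voxel
    samples V at its displaced position (nearest-neighbour, clamped)."""
    idx = np.indices(V.shape).transpose(1, 2, 3, 0).astype(float)
    src = np.clip(np.round(idx - u), 0, np.array(V.shape) - 1).astype(int)
    return V[src[..., 0], src[..., 1], src[..., 2]]

W = convect(V, flow_generator(z, V.shape))   # convected (deformed) map
projection = W.sum(axis=2)                   # project along one axis
# In the real model the projection is then CTF-corrupted and compared
# against the experimental particle image.
```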
Complete details of the architecture and training of 3DFlex can be found in the bioRxiv preprint.
3DFlex (BETA) is included in CryoSPARC v4.1. This tutorial shows how to run the new job types in CryoSPARC used for creating, training, and using a 3DFlex model. It also covers some of the practical aspects of using the algorithm such as parameter tuning and customizing inputs. Much of the content is covered in a tutorial video below.

Example Results

This video shows results of 3DFlex on a dataset of 102,500 particle images of a tri-snRNP spliceosome particle (EMPIAR-10073). 3DFlex is run with a K=5-dimensional latent space, and different regions of the space correspond to different parts of the particle's conformational landscape. This video shows the output of the 3DFlex generative model as latent coordinates are varied along three axes (coordinates 1, 3, and 5). These dimensions encode non-rigid motion of the head region of the protein, where different parts and subunits move and bend relative to each other.
3DFlex applied to 58,433 particle images of a translocating ribosome (EMPIAR-10792). Traversing the latent space shows that 3DFlex has learned coordinated motion of multiple parts (e.g., large and small subunits, elongation factor, etc.) including the overall ratcheting motion of the ribosome. For this result, a segmentation was used to specify a tetrahedral mesh topology allowing adjacent subunits to deform separately (see Mesh Generation below).
3DFlex applied to 113,511 particle images of the SARS-CoV-2 spike protein (EMPIAR-10516). 3DFlex is run with a K=3-dimensional latent space and has learned a combination of motions of the RBD and NTD domains. The up-RBD in particular undergoes a lot of motion which limits its resolution in rigid refinement. In contrast, flexible refinement improves the resolution of the up-RBD. This result also used a segmentation to enable the adjacent RBD and NTD domains to deform separately (see Mesh Generation below).
This video shows results of 3DFlex on a dataset of 200,000 particle images of a TRPV1 ion channel (EMPIAR-10059). 3DFlex is run with a K=2-dimensional latent space. The video shows the output of the 3DFlex generative model as latent coordinates are varied along each of the two dimensions. The first dimension reveals inward and outward coordinated bending of opposite flexible subunits in the soluble domain. The second dimension reveals twisting of the subunits around the pore axis.
This video shows a comparison between the reconstructed density map from a conventional refinement and flexible refinement using 3DFlex for the TRPV1 ion channel. Map quality and local resolutions are substantially improved in the peripheral helices. Notably, local focused refinement using a mask around the flexible part cannot improve the reconstruction compared to a conventional refinement, because the flexible parts are non-rigid and too small for individual pose alignment.
3DFlex applied to 84,266 particle images of an αVβ8 integrin (EMPIAR-10345). 3DFlex, using two latent dimensions, learns large bending motions of the flexible arm of the integrin particle, as well as flexibility in the bound Fabs.

Installing 3DFlex

3DFlex job types are available in CryoSPARC v4.1, but by default the new dependencies required for 3DFlex are not installed. To set up a CryoSPARC worker to run 3DFlex, follow these instructions:

Job Types

The 3DFlex workflow in CryoSPARC involves five new job types. These jobs are described in more detail in the tutorial video below.
  • 3D Flex Data Prep: Prepares particles for use in 3DFlex training and reconstruction
  • 3D Flex Mesh Prep: Takes in a consensus (rigid) refinement density map and, optionally, a segmentation, and generates a tetrahedral mesh for 3DFlex. See Mesh Generation below.
  • 3D Flex Training: Uses a mesh and prepared particles (at a downsampled resolution) to train a 3DFlex model. Parameters control the number of latent dimensions, size of the model, and training hyperparameters. This job outputs checkpoints during training.
  • 3D Flex Generator: Takes in a checkpoint from training and generates volume series from it, to show what the model is learning about the motion of the particle. This job can be run while training is ongoing to see progress along the way. This job can also optionally take in a high-resolution density map (e.g., from 3D Flex Reconstruction) and will upscale the deformation model and apply deformations to the high resolution map.
  • 3D Flex Reconstruction: Takes in a checkpoint from training as well as prepared high-resolution particles and performs high-resolution refinement using L-BFGS under the 3DFlex model. This is the stage at which improvements to density in high-res regions are computed. Outputs two half-maps that can be used for FSC validation, sharpening, and other downstream tasks.

Tutorial Video

Please watch the following tutorial video that covers usage of 3DFlex. It explains details of the job types, parameter tuning, and other considerations. Most of these details are not currently in written form in the documentation so we encourage users to watch the entire video.

Mesh Generation

As discussed in the preprint, regularization of deformations is critical for a method like 3DFlex. Without strong regularization, the deep generative model can easily overfit to noise in the data and learn unrealistic deformations. 3DFlex uses a tetrahedral mesh (similar to Finite Element Methods) to represent deformation, and applies a rigidity prior that encourages the model to avoid non-rigidity unless it is well supported by the data.
In 3DFlex, we define a tetrahedral mesh (or tetramesh) using:
  • a set of vertices
  • a set of tetra cells, each connecting four vertices
  • a “tetra index map”: an NxNxN array of indices indicating, for each voxel, which tetra cell that voxel belongs to.
The tetramesh is defined during the setup of a model. During training, the flow generator outputs a deformation field as a set of deformations of each vertex of the tetramesh, and the convection operator uses the tetra index map to determine how to convect the canonical density based on the movement of the mesh vertices.
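As a sketch of how these three data structures interact, the following toy example (hypothetical layout and values; the real mesh classes live inside CryoSPARC) defines a single tetra cell and convects one voxel by barycentric interpolation of the vertex displacements:

```python
import numpy as np

# Minimal, hypothetical tetramesh: one cell whose 4 vertices span the box.
N = 8
vertices = np.array([[0., 0., 0.],
                     [7., 0., 0.],
                     [0., 7., 0.],
                     [0., 0., 7.]])          # vertex positions (4 x 3)
cells = np.array([[0, 1, 2, 3]])             # each cell = 4 vertex indices
tetra_index_map = np.zeros((N, N, N), int)   # every voxel -> cell 0 here

def barycentric(p, tet):
    """Barycentric coordinates of point p inside tetrahedron tet (4x3)."""
    T = (tet[:3] - tet[3]).T                 # 3x3 edge matrix
    w = np.linalg.solve(T, p - tet[3])
    return np.append(w, 1.0 - w.sum())

# During training the flow generator outputs a displacement per mesh
# vertex; here we fake a small displacement of vertex 1.
vertex_flow = np.zeros_like(vertices)
vertex_flow[1] = [0.5, 0.0, 0.0]

# Convection: a voxel's displacement is the barycentric interpolation
# of the displacements of its cell's four vertices.
p = np.array([2.0, 1.0, 1.0])                # a voxel centre
cell = cells[tetra_index_map[2, 1, 1]]       # look up the voxel's cell
w = barycentric(p, vertices[cell])           # interpolation weights
displacement = w @ vertex_flow[cell]         # voxel displacement (3,)
```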

Mesh Topology

By default, the 3D Flex Mesh Prep job will automatically generate a regular tetrahedral mesh of a specified coarseness, which typically yields good results; however, the 3DFlex method works with any mesh geometry. The mesh topology can be adjusted to introduce additional inductive bias into the model. This is particularly useful for resolving motion of adjacent domains that move differently from each other.
For example, for the SARS-CoV-2 spike protein we obtained good results with a mesh constructed using a sub-mesh for each RBD and NTD domain, fused to a sub-mesh for the central trimer of S2 domains (Figure 11). To construct such a mesh, we provided coarse boundaries between adjacent RBD and NTD domains to the 3D Flex Mesh Prep job, along with the desired topology of the mesh (i.e. which parts are connected to which other parts). The job then automatically generates sub-meshes and fuses them together to form a complete mesh.
Please watch the following tutorial video for details about how to use the 3D Flex Mesh Prep job to adjust mesh topology. The 3D Flex Mesh Prep job supports input of .seg files generated by UCSF Chimera’s Segger tool. This is the easiest way to denote coarse boundaries between segments. The job also supports input of custom .mrc files you create that label each voxel with a segment number.
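As a sketch of what such a custom segmentation volume looks like, the following hypothetical example labels the voxels of a box with segment numbers using two made-up spherical regions; the resulting array could then be written to an .mrc file (e.g. with the mrcfile package) for input to the 3D Flex Mesh Prep job.

```python
import numpy as np

# Hypothetical segmentation volume: each voxel carries a segment number
# (0 = background). Box size and segment shapes are made up.
N = 64
x, y, z = np.indices((N, N, N)) - N // 2     # voxel coords about the centre

segmentation = np.zeros((N, N, N), dtype=np.int16)
segmentation[x**2 + y**2 + z**2 < 20**2] = 1          # central "core" segment
segmentation[(x - 24)**2 + y**2 + z**2 < 10**2] = 2   # peripheral "domain"

# This labelled volume can be saved as an .mrc file (for example with
# the mrcfile package) and supplied to 3D Flex Mesh Prep.
```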
The use of custom mesh topology provides helpful inductive bias but does not provide 3DFlex with information about the direction nor types of molecular motions present in the data. Rather, 3DFlex must still learn a non-linear non-rigid deformation from scratch across all mesh nodes jointly during training.
Whether using a regular or custom mesh, there is substantial latitude in specifying the mesh. Where motions are smooth, the size and shape of mesh elements and their precise locations are not critical since they only serve to ensure the deformation is smooth, and the flow generator is able to displace the mesh elements (including changing their size or shape) during deformation. Likewise for custom meshes, the separation of subdomains does not need to be “exact” as the canonical voxel density values and structure within each region of the mesh are still learned from the data by 3DFlex.

Rigidity Weights

Along with the mesh topology, 3DFlex also defines rigidity weights for the mesh. The rigidity weight for each cell denotes the relative strength of the rigidity prior that should be applied to that cell. The overall strength of the prior is also a parameter (set at training time) but the relative rigidity is part of the mesh definition. For example, empty space between two subunits should not be very rigid and should be able to compress/expand, allowing the subunits to move apart, while high-density core parts of a subunit are more likely to remain rigid during deformation. By default, the 3D Flex Mesh Prep job will automatically generate rigidity weights based on the amount of density within each cell in the input consensus (rigid) refinement map.
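A density-based weighting scheme along these lines can be sketched as follows. This is illustrative only; the actual weighting computed by 3D Flex Mesh Prep may differ, and the map and index map here are random placeholders.

```python
import numpy as np

# Illustrative density-based rigidity weighting: cells that cover more
# density receive higher relative rigidity; near-empty cells receive less.
rng = np.random.default_rng(1)
N, n_cells = 16, 5
density = rng.random((N, N, N))                    # placeholder consensus map
tetra_index_map = rng.integers(0, n_cells, (N, N, N))  # placeholder cell labels

# Mean density inside each tetra cell's voxels.
cell_density = np.array([density[tetra_index_map == c].mean()
                         for c in range(n_cells)])

# Normalise to relative weights in (0, 1].
rigidity_weights = cell_density / cell_density.max()
```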
It is also possible to modify rigidity weights or provide custom rigidity weights to 3DFlex. See this example cryosparc-tools notebook.

Fully Custom Meshes

It is possible to create and input fully custom meshes for 3DFlex using cryosparc-tools. This example notebook includes more details about how a mesh is defined and how to provide your own vertices, cells, tetra index map, and rigidity weights.

Parameter Tuning

Several parameters of the 3DFlex algorithm must be tuned for each dataset in order to give the best results. Details about parameter tuning are in the tutorial video.
The important parameters to tune are:
  • 3D Flex Mesh Prep:
    • Base num. tetra cells controls the fineness of the tetramesh. Finer meshes allow for more detailed motion but reduce the regularization; with poor-quality data or small particles, this can lead to overfitting.
    • Segmentation and Rigidity weighting: see Mesh Generation above.
  • 3D Flex Training
    • Number of latent dims: usually best to start with 2, and increase if the data appears to have more complex motions (and sufficient signal to resolve them).
    • Number of hidden units can be reduced to e.g., 32 to limit the capacity of the flow generator model for cases with simpler motion or where overfitting is a concern.
    • Rigidity (lambda) controls the overall strength of the rigidity prior. This should be tuned carefully through empirical tests. When too high, the model will ignore more detailed motions in the data. When too low, the model may learn unrealistic motions due to noise in the data.
    • Noise injection stdev controls the noise injected during latent inference. Higher values encourage more smoothness of the latent conformational landscape (i.e., nearby latent positions will encode similar conformations) but higher values also reduce precision in latent inference, potentially limiting how precisely flexible parts are aligned.
    • Latent centering strength controls the strength of a prior that tries to ensure that latent coordinates are generally centered in the latent space and stay within the range (-1.5, 1.5). Tune this per dataset if you see that latent coordinates are all close to zero or are all hitting the edge of the (-1.5, 1.5) domain. It does not impact the results or capacity of the model and is simply a nuisance parameter.


Limitations

3DFlex is an advance in modelling continuous heterogeneity, but it does have several limitations. The most important are listed here:
  • Compositional heterogeneity. Being a motion model, 3DFlex currently does not have a way to cleanly represent compositional heterogeneity. It is able to move density around, but cannot delete or add density (in contrast to density-based methods like 3DVA, cryoDRGN, etc.). As such, when presented with data that contain compositional heterogeneity, it may produce strange effects. For example, a domain that is partially occupied in the data may be modelled by a deformation that “expands” that domain over a wide space, thereby causing the density to drop, making it appear as though the domain has been erased. This is obviously not ideal behavior, and the 3DFlex model will waste capacity modelling this compositional change rather than conformational changes. Improving 3DFlex in compositional cases is an area of development. Currently we suggest using 3D Classification and Heterogeneous Refinement jobs to ensure that discrete compositional heterogeneity is separated as much as possible before inputting particles into 3DFlex.
  • Intricate motions. Though 3DFlex does well in modelling motion even of relatively small parts of a particle, it is not yet capable of modelling highly intricate motions such as side chain or loop motion. These motions are far smaller than the setup of 3DFlex (e.g., using a tetramesh) can allow to be modelled. Furthermore, small motions and conformational changes are unlikely to even be statistically detectable in single particle data unless those motions happen in tandem with other larger changes in the molecule.
  • Intermediate states with no data. 3DFlex is strongly biased to modelling motion, and so when presented with data with discrete heterogeneity, it will likely learn a model that maps the multiple discrete states together under deformations that unite them. However, if the data is discrete, there will not be any signal about the actual conformational states of intermediate positions between the discrete endpoints of motion. 3DFlex will still model these transitions, but it will only be guided by its rigidity prior for intermediate states that are not actually seen in the data.
  • Interpretation of latent space. The interpretation of 3DFlex is also an interesting area for future work. It is unclear how one should relate the continuous probability distribution of particle images in the 3DFlex latent space to a physically meaningful notion of energy via a Boltzmann distribution. This is because the non-linear capacity of the flow generator means that relative distances and volumes (and hence probability density) in the latent space are arbitrary.

Computational Considerations

3DFlex is relatively computationally demanding. It is GPU accelerated.
  • 3DFlex currently loads all particles into CPU memory at training time and reconstruction time. This means you must have sufficient CPU RAM to fit the entire dataset (at the training box size for train time, and at the high resolution box size for reconstruction time). During 3D Flex Data Prep, you can limit the number of particles as well.
  • 3DFlex does not yet use the CryoSPARC particle caching system. It reads particles directly from project directories into CPU RAM.
  • GPU memory use is relatively limited during training time, but at reconstruction time the GPU must be able to fit at least 2x the size of a volume at the full resolution box size. We have not yet finely profiled memory usage so it may be more.
  • Speed of 3DFlex training (and reconstruction) is primarily driven by two factors: the number of latent dimensions and the number of voxels (i.e. the volume) inside the solvent mask. Training time will increase approximately linearly with both of these factors. Therefore, to speed up training, downsampling to a smaller size (while still retaining enough resolution for training to pick up secondary structure, etc.) is very helpful. Similarly, the solvent mask should not be made overly loose (though it should also be loose enough not to cut off any density in flexible regions that are not well resolved in the consensus rigid density).
  • Performance appears to be more strongly affected by GPU performance than other CryoSPARC job types. We have not yet extensively characterized performance but newer/faster GPUs appear to provide substantial benefits.
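As a back-of-envelope check on the CPU RAM requirement mentioned above, a particle stack stored as float32 occupies roughly n_particles × box_size² × 4 bytes. The particle counts and box sizes below are hypothetical examples, not recommendations.

```python
# Rough estimate of CPU RAM needed to hold a particle stack in memory,
# assuming single-channel float32 images (4 bytes per pixel).
def particle_stack_gb(n_particles, box_size, bytes_per_value=4):
    """Approximate size in GiB of n_particles images of box_size^2 pixels."""
    return n_particles * box_size**2 * bytes_per_value / 1024**3

# Hypothetical dataset: 200,000 particles.
train_gb = particle_stack_gb(200_000, 160)   # downsampled training box
recon_gb = particle_stack_gb(200_000, 440)   # full-resolution box
```

Note how strongly the reconstruction-time box size dominates: roughly the square of the box-size ratio more RAM is needed at full resolution than at the training resolution.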