Tutorial: 3D Flexible Refinement
An in-depth guide to 3D Flexible Refinement (3DFlex) in CryoSPARC.
3DFlex (BETA) is available in CryoSPARC v4.1+.
3D Flexible Refinement (3DFlex) is a motion-based deep generative model for continuous heterogeneity. It can model non-rigid motion and flexibility of a protein molecule across its conformational landscape, and can use the motion model to combine signal from particle images in different conformations to improve refinement resolution in flexible regions.
The 3DFlex model represents the flexible 3D structure of a protein as deformations of a single canonical 3D density map V. Under the model, a single particle image is associated with a low-dimensional latent coordinate z that encodes the conformation for the particle in the image. A neural flow generator network f_θ converts the latent coordinate into the flow field u and a convection operator then deforms the canonical density to generate a convected map W. This map can then be projected along the particle viewing direction determined by the pose φ, CTF-corrupted, and compared against the experimental image.
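As a mental model, the following toy NumPy sketch walks through the same pipeline: a latent coordinate z is mapped by a small flow generator to per-vertex displacements, the canonical map V is convected along the resulting flow, and the convected map is projected. Everything here (the tiny MLP, the inverse-distance convection, the axis-aligned projection, all array sizes) is a simplified stand-in for illustration and is not CryoSPARC's implementation.
```python
# Toy sketch of the 3DFlex generative pipeline (illustration only).
import numpy as np

rng = np.random.default_rng(0)
N = 32                                     # box size of the canonical map V
V = rng.standard_normal((N, N, N))         # canonical density (random stand-in)
K = 2                                      # latent dimensionality
M = 64                                     # number of mesh vertices
verts = rng.uniform(0, N - 1, (M, 3))      # tetramesh vertex positions (voxels)

# Toy "flow generator" f_theta: latent z -> displacement u of each mesh vertex.
W1, b1 = 0.1 * rng.standard_normal((K, 32)), np.zeros(32)
W2, b2 = 0.1 * rng.standard_normal((32, M * 3)), np.zeros(M * 3)

def flow_generator(z):
    h = np.maximum(z @ W1 + b1, 0.0)       # one hidden ReLU layer
    return (h @ W2 + b2).reshape(M, 3)     # per-vertex displacements

def convect(V, u):
    """Crude convection: interpolate vertex displacements to a dense flow by
    inverse-distance weighting, then resample the canonical density."""
    grid = np.stack(np.meshgrid(*[np.arange(N)] * 3, indexing="ij"), axis=-1)
    pts = grid.reshape(-1, 3).astype(float)
    d = np.linalg.norm(pts[:, None, :] - verts[None, :, :], axis=-1) + 1e-6
    w = 1.0 / d
    w /= w.sum(axis=1, keepdims=True)
    flow = w @ u                            # per-voxel displacement
    src = np.clip(np.rint(pts - flow), 0, N - 1).astype(int)
    return V[src[:, 0], src[:, 1], src[:, 2]].reshape(N, N, N)

z = rng.standard_normal(K)                  # latent coordinate of one particle
W_conv = convect(V, flow_generator(z))      # convected map W
projection = W_conv.sum(axis=0)             # toy projection along one axis
# In the real model the projection follows the particle pose, is CTF
# corrupted, and is compared to the experimental image; V and f_theta are
# trained jointly under the rigidity prior.
```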
Complete details of the architecture and training of 3DFlex can be found in the bioRxiv preprint here.
3DFlex (BETA) is included in CryoSPARC v4.1. This tutorial shows how to run the new job types in CryoSPARC used for creating, training, and using a 3DFlex model. It also covers some of the practical aspects of using the algorithm such as parameter tuning and customizing inputs. Much of the content is covered in a tutorial video below.
All 3D Flex requirements are installed with CryoSPARC v4.4+. Skip this section unless you are running v4.1–v4.3.
3DFlex job types are available in CryoSPARC v4.1+ but in v4.1–v4.3, the new dependencies required for 3DFlex are not installed. To ensure a CryoSPARC worker can run 3DFlex, please see the following instructions:
The 3DFlex workflow in CryoSPARC involves five new job types. These jobs are described in more detail in the tutorial video below.
3D Flex Data Prep: Prepares particles for use in 3DFlex training and reconstruction. Note: in CryoSPARC versions prior to v4.4, this job outputs pre-computed CTF values for use by downstream jobs. In v4.4+, the job no longer outputs full-resolution CTF values and the downstream jobs now compute CTF values (including higher order aberrations) on the fly. This change reduces disk space and CPU RAM requirements substantially and allows for higher resolution reconstructions.
3D Flex Mesh Prep: Takes in a consensus (rigid) refinement density map, plus optionally a segmentation, and generates a tetrahedral mesh for 3DFlex. See Mesh Generation below.
3D Flex Training: Uses a mesh and prepared particles (at a downsampled resolution) to train a 3DFlex model. Parameters control the number of latent dimensions, size of the model, and training hyperparameters. This job outputs checkpoints during training.
3D Flex Generator: Takes in a checkpoint from training and generates volume series from it, to show what the model is learning about the motion of the particle. This job can be run while training is ongoing to see progress along the way. This job can also optionally take in a high-resolution density map (e.g., from 3D Flex Reconstruction) and will upscale the deformation model and apply deformations to the high resolution map.
3D Flex Reconstruction: Takes in a checkpoint from training as well as prepared high-resolution particles and performs high-resolution refinement using L-BFGS under the 3DFlex model. This is the stage at which improvements to density in high-res regions are computed. Outputs two half-maps that can be used for FSC validation, sharpening, and other downstream tasks.
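The five jobs above are normally built and connected in the CryoSPARC web interface, but they can also be created programmatically with cryosparc-tools. The sketch below shows the general pattern; the job type strings ("flex_prep", "flex_meshprep", "flex_train") and input/output group names used here are assumptions for illustration only and should be checked against your instance (e.g., in the job builder) before use.
```python
# Hedged cryosparc-tools sketch of chaining 3DFlex jobs from a script.
from cryosparc.tools import CryoSPARC

cs = CryoSPARC(
    license="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
    host="localhost",
    base_port=39000,
    email="user@example.com",
    password="password",
)
project = cs.find_project("P1")

prep = project.create_job("W1", "flex_prep")        # 3D Flex Data Prep (type string assumed)
prep.connect("particles", "J10", "particles")       # consensus refinement particles
prep.queue("default")

mesh = project.create_job("W1", "flex_meshprep")    # 3D Flex Mesh Prep (type string assumed)
mesh.connect("volume", "J11", "volume")             # consensus density map
mesh.queue("default")

train = project.create_job("W1", "flex_train")      # 3D Flex Training (type string assumed)
train.connect("particles", prep.uid, "particles")
train.connect("flex_mesh", mesh.uid, "flex_mesh")
train.queue("default")
```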
Please watch the following tutorial video that covers usage of 3DFlex. It explains details of the job types, parameter tuning, and other considerations. Most of these details are not currently in written form in the documentation so we encourage users to watch the entire video.
The tetrahedral mesh is an important concept in 3D Flexible Refinement. We cover it in significantly more detail on the dedicated guide page.
As discussed in the preprint, regularization of deformations is critical for a method like 3DFlex. Without strong regularization, the deep generative model can easily overfit to noise in the data and learn unrealistic deformations. 3DFlex uses a tetrahedral mesh (similar to Finite Element Methods) to represent deformation, and applies a rigidity prior that encourages the model to avoid non-rigidity unless it is well supported by the data.
In 3DFlex, we define a tetrahedral mesh (or tetramesh) using:
a set of vertices
a set of tetra cells, each connecting four vertices
a “tetra index map”: an N×N×N array of indices indicating, for each voxel, which tetra cell that voxel belongs to.
The tetramesh is defined during the setup of a model. During training, the flow generator outputs a deformation field as a set of deformations of each vertex of the tetramesh, and the convection operator uses the tetra index map to determine how to convect the canonical density based on the movement of the mesh vertices.
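To make the three components concrete, here is a tiny illustrative sketch of what a tetramesh amounts to as data. The array names and toy values are descriptive only (real meshes come from 3D Flex Mesh Prep or cryosparc-tools, and CryoSPARC's internal field names may differ).
```python
# Illustrative-only sketch of the three pieces that define a 3DFlex tetramesh.
import numpy as np

N = 16                                        # box size of the density grid

# 1) vertices: (M, 3) coordinates of mesh nodes, in voxel units
vertices = np.array([[2.0, 2.0, 2.0],
                     [12.0, 2.0, 2.0],
                     [2.0, 12.0, 2.0],
                     [2.0, 2.0, 12.0],
                     [12.0, 12.0, 12.0]])

# 2) tetra cells: (T, 4) integer indices into `vertices`, four per cell
cells = np.array([[0, 1, 2, 3],
                  [1, 2, 3, 4]])

# 3) tetra index map: (N, N, N) integers giving, for each voxel, the cell it
#    belongs to (-1 here marks solvent voxels outside the mesh)
tetra_index_map = np.full((N, N, N), -1, dtype=np.int32)
tetra_index_map[2:8, 2:8, 2:8] = 0            # toy assignment
tetra_index_map[8:13, 8:13, 8:13] = 1

# During training, the flow generator predicts a displacement for every row of
# `vertices`; the convection operator looks up each voxel's cell in
# `tetra_index_map` and moves that voxel's density according to the
# (interpolated) displacements of the cell's four vertices.
per_vertex_displacement = np.zeros_like(vertices)   # e.g., output of f_theta
```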
By default, the 3D Flex Mesh Prep job will automatically generate a regular tetrahedral mesh of specified coarseness, and this typically yields good results, but the 3DFlex method works with any mesh geometry. The mesh topology can be adjusted to introduce additional inductive bias into the model. This is particularly useful for resolving motion of adjacent domains that move differently from each other.
For example, for the SARS-CoV-2 spike protein we obtained good results with a mesh constructed using a sub-mesh for each RBD and NTD domain, fused to a sub-mesh for the central trimer of S2 domains (Figure 11). To construct such a mesh, we provided coarse boundaries between adjacent RBD and NTD domains to the 3D Flex Mesh Prep job, along with the desired topology of the mesh (i.e. which parts are connected to which other parts). The job then automatically generates sub-meshes and fuses them together to form a complete mesh.
Please watch the following tutorial video for details about how to use the 3D Flex Mesh Prep job to adjust mesh topology. The 3D Flex Mesh Prep job supports input of .seg files generated by UCSF Chimera’s Segger tool, which is the easiest way to denote coarse boundaries between segments. The job also supports input of your own custom .mrc files that label each voxel with a segment number.
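If you prefer to build a custom segmentation volume yourself, a minimal sketch using the mrcfile package might look like the following. The half-and-half labelling is a placeholder only; a real segmentation should follow domain boundaries (e.g., exported from Segger), and the expected background value and segment numbering convention should be checked against the 3D Flex Mesh Prep inputs.
```python
# Minimal sketch: write a voxel-wise segment-label volume as an .mrc file.
# Assumes the `mrcfile` package; the labelling below is a placeholder only.
import numpy as np
import mrcfile

with mrcfile.open("consensus_map.mrc") as m:          # consensus refinement map
    shape = m.data.shape
    voxel_size = m.voxel_size

seg = np.zeros(shape, dtype=np.float32)               # 0 = unassigned / solvent
seg[:, :, : shape[2] // 2] = 1                        # segment 1 (placeholder split)
seg[:, :, shape[2] // 2 :] = 2                        # segment 2 (placeholder split)

with mrcfile.new("custom_segmentation.mrc", overwrite=True) as out:
    out.set_data(seg)
    out.voxel_size = voxel_size                       # preserve the pixel size
```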
The use of custom mesh topology provides helpful inductive bias but does not provide 3DFlex with information about the direction or types of molecular motions present in the data. Rather, 3DFlex must still learn a non-linear, non-rigid deformation from scratch across all mesh nodes jointly during training.
Whether using a regular or custom mesh, there is substantial latitude in specifying the mesh. Where motions are smooth, the size and shape of mesh elements and their precise locations are not critical since they only serve to ensure the deformation is smooth, and the flow generator is able to displace the mesh elements (including changing their size or shape) during deformation. Likewise for custom meshes, the separation of subdomains does not need to be “exact” as the canonical voxel density values and structure within each region of the mesh are still learned from the data by 3DFlex.
Along with the mesh topology, 3DFlex also defines rigidity weights for the mesh. The rigidity weight for each cell denotes the relative strength of the rigidity prior that should be applied to that cell. The overall strength of the prior is also a parameter (set at training time), but the relative rigidity is part of the mesh definition. For example, empty space between two subunits should not be very rigid and should be able to compress/expand, allowing the subunits to move apart, while high-density core parts of a subunit are more likely to remain rigid during deformation. By default, the 3D Flex Mesh Prep job will automatically generate rigidity weights based on the amount of density within each cell in the input consensus (rigid) refinement map.
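Conceptually, the default density-based weighting works along the lines of the following sketch (an assumed scheme for illustration, not CryoSPARC's exact formula): cells containing more consensus density get higher relative rigidity, and nearly empty cells get less.
```python
# Sketch: relative rigidity per tetra cell from the consensus density
# (illustrative scheme only, not CryoSPARC's exact formula).
import numpy as np

def rigidity_weights_from_density(density, tetra_index_map, n_cells, floor=0.1):
    """Mean consensus density per cell, normalized so the densest cell is 1.0
    and no cell falls below `floor`."""
    weights = np.zeros(n_cells)
    for c in range(n_cells):
        in_cell = tetra_index_map == c
        weights[c] = density[in_cell].mean() if in_cell.any() else 0.0
    weights /= weights.max() + 1e-12
    return np.maximum(weights, floor)
```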
It is also possible to modify rigidity weights or provide custom rigidity weights to 3DFlex. See this example cryosparc-tools notebook.
It is possible to create and input fully custom meshes for 3DFlex using cryosparc-tools. This example notebook includes more details about how a mesh is defined and how to provide your own vertices, cells, tetra index map, and rigidity weights.
Several parameters of the 3DFlex algorithm must be tuned for each dataset in order to give the best results. Details about parameter tuning are in the tutorial video.
The important parameters to tune are:
3D Flex Mesh Prep:
Base num. tetra cells: Controls the fineness of the tetramesh. Finer meshes allow for more detailed motion but reduce the regularization, and with poor-quality data or small particles can lead to overfitting.
Segmentation and Rigidity weighting: See Mesh Generation above.
3D Flex Training:
Number of latent dims: Usually best to start with 2, and increase if the data appears to have more complex motions (and sufficient signal to resolve them).
Number of hidden units: Can be reduced (e.g., to 32) to limit the capacity of the flow generator model in cases with simpler motion or where overfitting is a concern.
Rigidity (lambda): Controls the overall strength of the rigidity prior. This should be tuned carefully through empirical tests: when too high, the model will ignore more detailed motions in the data; when too low, the model may learn unrealistic motions due to noise in the data.
Noise injection stdev: Controls the noise injected during latent inference. Higher values encourage more smoothness of the latent conformational landscape (i.e., nearby latent positions will encode similar conformations), but they also reduce precision in latent inference, potentially limiting how precisely flexible parts are aligned.
Latent centering strength: Controls the strength of a prior that encourages latent coordinates to be centered in the latent space and to stay within the range (-1.5, 1.5). Tune this per dataset if latent coordinates are all close to zero or all hit the edge of the (-1.5, 1.5) domain. It does not impact the results or capacity of the model and is simply a nuisance parameter.
3D Flex Reconstruction:
Max BFGS iterations: Set to 20 by default. This can be increased for large box sizes or very high resolutions. In some cases, the FSC curve after 3D Flex Reconstruction does not drop to zero at high resolution or appears clearly artefactual, which is an indication that the BFGS optimization has not fully converged; in these cases, increasing this parameter to 40 can help.
Load all particles in RAM: A new option in CryoSPARC v4.4 that is off by default, meaning that particle images are read from the project directory or from SSD cache during iterations of reconstruction rather than being pre-loaded into CPU RAM at the start of the job. Keeping this parameter off substantially reduces the CPU RAM requirements of the job, allowing for larger box size reconstructions. Turning it on may improve speed.
Cache particle images on SSD: A new option in CryoSPARC v4.4 that is on by default, causing particle images to be cached on SSD at the start of the job. Turning this off causes particles to be read directly from project directories instead of being copied to the cache.
3DFlex is an advance in modelling continuous heterogeneity but it does have several limitations. The most important are listed here:
Compositional heterogeneity. Being a motion model, 3DFlex currently does not have a way to cleanly represent compositional heterogeneity. It can move density around, but cannot delete or add density (the converse of density-based methods like 3DVA, cryoDRGN, etc., which can add or remove density but do not model motion explicitly). As such, when presented with data that contain compositional heterogeneity, it may produce strange effects. For example, a domain that is partially occupied in the data may be modelled by a deformation that “expands” that domain over a wide space, causing the density to drop and making it appear as though the domain has been erased. This is obviously not ideal behavior, and the 3DFlex model will waste capacity modelling this compositional change rather than conformational changes. Improving 3DFlex in compositional cases is an area of development. Currently, we suggest using 3D Classification and Heterogeneous Refinement jobs to ensure that discrete compositional heterogeneity is separated as much as possible before inputting particles into 3DFlex.
Intricate motions. Though 3DFlex does well in modelling motion even of relatively small parts of a particle, it is not yet capable of modelling highly intricate motions such as side chain or loop motion. These motions are far smaller than the setup of 3DFlex (e.g., using a tetramesh) can allow to be modelled. Furthermore, small motions and conformational changes are unlikely to even be statistically detectable in single particle data unless those motions happen in tandem with other larger changes in the molecule.
Intermediate states with no data. 3DFlex is strongly biased to modelling motion, and so when presented with data with discrete heterogeneity, it will likely learn a model that maps the multiple discrete states together under deformations that unite them. However, if the data is discrete, there will not be any signal about the actual conformational states of intermediate positions between the discrete endpoints of motion. 3DFlex will still model these transitions, but it will only be guided by its rigidity prior for intermediate states that are not actually seen in the data.
Interpretation of latent space. The interpretation of the 3DFlex latent space is also an interesting area for future work. It is unclear how one should relate the continuous probability distribution of particle images in the 3DFlex latent space to a physically meaningful notion of energy via a Boltzmann distribution. This is because the non-linear capacity of the flow generator means that relative distances and volumes (and hence probability density) in the latent space are arbitrary.
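For reference, the naive relation in question is the standard Boltzmann inversion of an observed population density; the caveat above is that, because the flow generator can reparameterize the latent space arbitrarily, the density p(z) (and hence any ΔG derived from it) is not physically well defined without further assumptions.
```latex
% Naive Boltzmann inversion of a latent population density p(z),
% relative to a reference conformation z_0:
\Delta G(z) = -k_B T \,\ln \frac{p(z)}{p(z_0)}
```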
3DFlex is relatively computationally demanding. It is GPU accelerated.
Memory:
GPU memory use is relatively limited during training, but at reconstruction time the GPU must be able to fit at least 2x the size of a volume at the full-resolution box size. We have not yet finely profiled memory usage, so actual usage may be higher.
CPU memory in CryoSPARC v4.4+:
3DFlex loads all particles into CPU memory at training time. This means you must have sufficient CPU RAM to fit the entire dataset (at the training box size). During 3D Flex Data Prep, you can limit the number of particles. During 3D Flex Reconstruction, particles are read from SSD cache by default and therefore do not need to all fit in CPU RAM.
CPU memory in CryoSPARC prior to v4.4:
3DFlex loads all particles into CPU memory at training time and reconstruction time. This means you must have sufficient CPU RAM to fit the entire dataset (at the training box size for train time, and at the high resolution box size for reconstruction time). During 3D Flex Data Prep, you can limit the number of particles as well.
3DFlex does not yet use the CryoSPARC particle caching system. It reads particles directly from project directories into CPU RAM at the start of processing.
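As a rough guide, the CPU RAM needed to hold a particle stack in memory can be estimated as number of particles × box² × 4 bytes (assuming single-precision images and ignoring metadata and other overhead, so treat this as a lower bound):
```python
# Back-of-the-envelope RAM estimate for an in-memory particle stack
# (assumes 4 bytes per pixel; actual usage will be somewhat higher).
n_particles = 300_000
box = 160                                      # training box size in pixels
bytes_needed = n_particles * box * box * 4
print(f"{bytes_needed / 1024**3:.1f} GiB")     # ~28.6 GiB in this example
```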
Speed:
The speed of 3DFlex training (and reconstruction) is primarily driven by two factors: the number of latent dimensions and the number of voxels (i.e., the volume) inside the solvent mask. Training time increases approximately linearly with both of these factors. Therefore, to speed up training, downsampling to a smaller box size (while still retaining enough resolution for training to pick up secondary structure, etc.) is very helpful. Similarly, the solvent mask should not be made overly loose (though it should be loose enough not to cut off any density in flexible regions that are not well resolved in the consensus rigid density).
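For instance, because the number of voxels inside the mask scales roughly with the cube of the box size, modest downsampling buys a large speedup, as the quick calculation below illustrates (assuming cost per iteration is roughly proportional to the masked voxel count, as described above):
```python
# Relative voxel count (and, roughly, per-iteration cost) vs. a 256-pixel box.
for box in (256, 192, 128):
    print(box, round((box / 256) ** 3, 3))
# 256 -> 1.0, 192 -> 0.422, 128 -> 0.125
```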
Performance appears to be more strongly affected by GPU performance than other CryoSPARC job types. We have not yet extensively characterized performance but newer/faster GPUs appear to provide substantial benefits.