The last two decades have seen stunning advances in imaging and analysis of the human brain. These include advances in microscopic (e.g., CLARITY, SWITCH), mesoscopic (e.g., polarized light imaging (PLI), optical coherence tomography (OCT)), and macroscopic (e.g., MRI) imaging. While our tools for understanding brain structure have advanced at each spatial scale, the ability to link across scales has lagged. In particular, no current tools enable general-purpose linking of micro- and mesoscopic information to the spatial scales accessible with in vivo structural and functional imaging. The advent of Deep Learning (DL) holds the promise of general-purpose algorithms capable of learning local and global context in images, providing a key framework for such connections as well as an array of derivative applications such as registration, prediction, and automated diagnosis. TRD1 will advance the application of these cutting-edge analytic tools to allow inference on in vivo human brain images using information contained in data acquired at much finer spatial scales.
We will first focus on improvements to mesoscopic ex vivo OCT and MR imaging that will enable us to directly visualize cortical and white matter (WM) features that define architectonic boundaries and influence the signal measured by in vivo MRI, such as cortical laminae, vessels, fibers, and neurons; this includes implementing techniques for noise reduction. We will also integrate microscopic multi-photon imaging capable of using second- and third-harmonic generation to directly visualize myelinated fibers beneath the cut face of the tissue, avoiding the cutting-induced, slice-specific distortions in human brain-sized specimens that severely limit registration across scales. Next, we will develop a set of tools for Bayesian reconstruction of histological slices into 3D volumes, using OCT as a micron-scale, undistorted coordinate system. This will enable molecular features, which mesoscopic MR imaging does not directly detect, to be transferred to MRI-based coordinate systems for use in in vivo analysis. Finally, we will bring the promise of DL to cortical surface-based analysis by integrating deep networks into a novel surface-based representation that removes the confounding variability of cortical folding patterns and provides a natural “gauge” coordinate system for building cortical convolutional neural networks (CCNNs). In conjunction with the imaging and histological reconstruction developed above, the CCNNs will be used to segment vascular trees and small WM fascicles within cortical gray matter, providing important data to other TRDs seeking to incorporate this information into their modeling and analysis. They will also support collaborative projects to segment Focal Cortical Dysplasias (FCDs), predict myelin and task-based fMRI maps, and remove the distortions induced by tissue clearing, enabling accurate transfer of micro- and mesoscopic information to in vivo atlases.
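The core idea behind Bayesian slice-to-volume reconstruction can be illustrated with a toy sketch: each histological slice carries an unknown distortion (here reduced to a simple in-plane translation), and the maximum a posteriori (MAP) estimate combines an image-match likelihood against the undistorted OCT reference with a smoothness prior linking neighboring slices. Everything in this sketch, including the synthetic data, the sum-of-squared-differences likelihood, and the exhaustive coordinate-descent optimizer, is an illustrative assumption, not the actual TRD1 pipeline, which must handle nonlinear, slice-specific deformations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for an undistorted OCT reference volume: 8 slices of 40x40 texture.
n_slices, H, W = 8, 40, 40
ref = rng.standard_normal((n_slices, H, W))

# Simulated "histology": each slice carries an unknown in-plane translation,
# a crude stand-in for cutting-induced, slice-specific distortion.
true_shifts = rng.integers(-3, 4, size=(n_slices, 2))
hist = np.stack([np.roll(ref[i], tuple(true_shifts[i]), axis=(0, 1))
                 for i in range(n_slices)])

def neg_log_posterior(shifts, hist, ref, lam=0.01):
    """Data term: SSD of each re-aligned slice against the OCT reference.
    Prior: quadratic penalty on shift differences between neighboring slices."""
    data = 0.0
    for i, (dy, dx) in enumerate(shifts):
        aligned = np.roll(hist[i], (-int(dy), -int(dx)), axis=(0, 1))
        data += np.sum((aligned - ref[i]) ** 2)
    prior = lam * np.sum(np.diff(np.asarray(shifts, dtype=float), axis=0) ** 2)
    return data + prior

# MAP estimation by coordinate descent over a small integer search window.
est = np.zeros((n_slices, 2), dtype=int)
for sweep in range(3):
    for i in range(n_slices):
        best_cost, best_shift = np.inf, (0, 0)
        for dy in range(-4, 5):
            for dx in range(-4, 5):
                cand = est.copy()
                cand[i] = (dy, dx)
                cost = neg_log_posterior(cand, hist, ref)
                if cost < best_cost:
                    best_cost, best_shift = cost, (dy, dx)
        est[i] = best_shift

print("recovery error:", np.abs(est - true_shifts).max())  # prints "recovery error: 0"
```

The prior term is what distinguishes joint Bayesian reconstruction from independent slice-by-slice registration: it propagates information between neighboring slices, which matters when an individual slice is too damaged or featureless to be registered on its own.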