figure b

Biography: Prof. Aydogan Ozcan is the Chancellor’s Professor and the Volgenau Chair for Engineering Innovation at UCLA and an HHMI Professor with the Howard Hughes Medical Institute. He leads the Bio- and Nano-Photonics Laboratory at the UCLA School of Engineering and is also the Associate Director of the California NanoSystems Institute. Dr. Ozcan is an elected Fellow of the National Academy of Inventors (NAI), holds >45 issued/granted patents and >20 pending patent applications, and is the author of one book and the co-author of >700 peer-reviewed publications in major scientific journals and conferences. Dr. Ozcan is the founder and a member of the Board of Directors of Lucendi Inc., Hana Diagnostics, Pictor Labs, as well as Holomic/Cellmic LLC, which was named a Technology Pioneer by the World Economic Forum in 2015. Dr. Ozcan is also a Fellow of the American Association for the Advancement of Science (AAAS), SPIE (the international society for optics and photonics), the Optical Society of America (OSA), the American Institute for Medical and Biological Engineering (AIMBE), the Institute of Electrical and Electronics Engineers (IEEE), the Royal Society of Chemistry (RSC), the American Physical Society (APS) and the Guggenheim Foundation. He has received major awards including the Presidential Early Career Award for Scientists and Engineers, the International Commission for Optics Prize, the SPIE Biophotonics Technology Innovator Award, the Rahmi M. Koc Science Medal, the SPIE Early Career Achievement Award, the Army Young Investigator Award, the NSF CAREER Award, the NIH Director’s New Innovator Award, the Navy Young Investigator Award, the IEEE Photonics Society Young Investigator Award and Distinguished Lecturer Award, the National Geographic Emerging Explorer Award, The Grainger Foundation Frontiers of Engineering Award from the National Academy of Engineering, and MIT’s TR35 Award for his seminal contributions to computational imaging, sensing and diagnostics. Dr. Ozcan is also listed as a Highly Cited Researcher by the Web of Science (Clarivate) and serves as the co-Editor-in-Chief of eLight.

While conducting exciting, cutting-edge applied research on photonics and optics, we are also training the next generation of engineers, scientists and entrepreneurs through our research programs. Some of our trainees have started their own labs in the United States, China and other parts of the world; some have gone to industry, where they lead their own teams; and some have founded companies.

After this 2017 LSA publication, our team has further expanded these results in many unique ways. In an Optica paper3 published in 2018, we demonstrated an innovative application of deep learning to significantly extend the imaging depth of a hologram. We demonstrated a CNN-based approach that simultaneously performs auto-focusing and phase recovery to significantly extend the depth of field (DOF) in holographic image reconstruction. For this, a CNN is trained using pairs of randomly de-focused back-propagated holograms and their corresponding in-focus, phase-recovered images. After this training phase, the CNN takes a single back-propagated hologram of a 3D sample as input to rapidly achieve phase recovery and reconstruct an in-focus image of the sample over a significantly extended DOF. Furthermore, this deep learning-based auto-focusing and phase-recovery method is non-iterative and significantly improves the time complexity of holographic image reconstruction from O(nm) to O(1), where n refers to the number of individual object points or particles within the sample volume, and m represents the discrete focusing search space within which each object point or particle needs to be individually focused.
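
As a rough illustration of how such training pairs can be assembled, the sketch below back-propagates a hologram to a randomly perturbed (de-focused) distance using the standard angular-spectrum method and pairs it with the in-focus field; the hologram array, wavelength, pixel size and defocus range are hypothetical placeholders, and the actual phase-recovered ground truth and CNN of ref. 3 are not reproduced here.

```python
import numpy as np

def angular_spectrum_propagate(field, dz, wavelength, pixel_size):
    """Free-space propagation of a complex field by a distance dz (angular-spectrum method)."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pixel_size)
    fy = np.fft.fftfreq(ny, d=pixel_size)
    FX, FY = np.meshgrid(fx, fy)
    # Keep only propagating spatial frequencies; evanescent components are suppressed.
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * dz * kz) * (arg > 0)
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Hypothetical acquisition parameters and a placeholder in-line hologram (intensity image).
wavelength, pixel_size = 532e-9, 1.12e-6           # meters (example values)
hologram = np.random.rand(512, 512)                # stands in for a measured hologram
z_focus = 300e-6                                   # assumed object-to-sensor distance

# Training input: back-propagation to a randomly perturbed (de-focused) distance.
dz_error = np.random.uniform(-50e-6, 50e-6)
defocused = angular_spectrum_propagate(np.sqrt(hologram), -(z_focus + dz_error),
                                       wavelength, pixel_size)
# Training target: the in-focus image (here simply the in-focus back-propagation;
# ref. 3 uses an iteratively phase-recovered ground truth instead).
in_focus = angular_spectrum_propagate(np.sqrt(hologram), -z_focus, wavelength, pixel_size)

train_pair = (np.stack([defocused.real, defocused.imag]),   # network input (2 channels)
              np.stack([in_focus.real, in_focus.imag]))     # network target (2 channels)
```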

Another result that, in my opinion, represents a breakthrough at the intersection of deep learning and holography was published in LSA in 2019; our team had first posted this work on arXiv in 2018. In this publication4, we introduced the use of a deep neural network to perform cross-modality image transformation from a digitally back-propagated hologram corresponding to a given depth within the sample volume into an image that is equivalent to a bright-field microscope image acquired at the same depth. Because a single hologram is used to digitally propagate to different sections of the sample and virtually generate bright-field-equivalent images of each section, this approach bridges the volumetric imaging capability of digital holography with the speckle- and artifact-free image contrast of bright-field microscopy. Through its training, the deep neural network learns the statistical image transformation between a holographic imaging system and an incoherent bright-field microscope; intuitively, it brings together “the best of both worlds” by fusing the advantages of holographic and incoherent bright-field imaging modalities.

For this holographic-to-bright-field image transformation, we used a generative adversarial network (GAN), which was trained using pollen samples imaged with an in-line holographic microscope along with a bright-field incoherent microscope (used as the ground truth). After the training phase, which needs to be performed only once, the generator network blindly takes as input a new hologram (never seen by the network before) to infer its bright-field-equivalent image at any arbitrary depth within the sample volume. We experimentally demonstrated the success of this powerful cross-modality image transformation between holography and bright-field microscopy, where the network output images were free from speckle and various other interferometric artifacts observed in holography, matching the contrast and DOF of bright-field microscopy images that were mechanically focused onto the same planes within the 3D sample. We also demonstrated that the deep network correctly colorizes the output image, using an input hologram acquired with a monochrome sensor and narrow-band illumination, matching the color distribution of the bright-field image.
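
For readers who want a concrete picture of such a training loop, here is a minimal, generic conditional-GAN sketch in PyTorch; the shallow convolutional generator/discriminator, loss weighting and random mini-batch below are illustrative stand-ins and do not reproduce the actual architecture or training details of ref. 4.

```python
import torch
import torch.nn as nn

# Placeholder networks: the actual generator/discriminator of ref. 4 are much deeper (U-Net style).
generator = nn.Sequential(
    nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1))                      # 2-channel complex field in, RGB image out
discriminator = nn.Sequential(
    nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1))

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-4)
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

def training_step(backprop_hologram, brightfield_target, adv_weight=0.01):
    """One adversarial step: back-propagated hologram (real/imag channels) vs. bright-field ground truth."""
    # --- discriminator update ---
    fake = generator(backprop_hologram).detach()
    d_loss = bce(discriminator(brightfield_target), torch.ones(brightfield_target.size(0), 1)) + \
             bce(discriminator(fake), torch.zeros(fake.size(0), 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # --- generator update: pixel-wise fidelity plus an adversarial term ---
    fake = generator(backprop_hologram)
    g_loss = l1(fake, brightfield_target) + \
             adv_weight * bce(discriminator(fake), torch.ones(fake.size(0), 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return float(g_loss)

# Example call with random stand-ins for one mini-batch of image pairs.
loss = training_step(torch.randn(4, 2, 128, 128), torch.rand(4, 3, 128, 128))
```

In practice, the random tensors above would be replaced by co-registered pairs of back-propagated holograms and bright-field images of the same fields of view, as described in the text.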

This deep learning-enabled image transformation between holography and bright-field microscopy replaces the need to mechanically scan a volumetric sample, as it benefits from the digital wave-propagation framework of holography to virtually scan through the sample, where each of these digitally propagated fields is transformed into a bright-field microscopy-equivalent image, exhibiting the spatial and color contrast as well as the DOF expected from an incoherent microscope.

figure d

Prof. Aydogan Ozcan in the lab.

Our previous studies on diffractive optical networks have demonstrated the generalization capability of these multi-layer diffractive designs to new, unseen image data. For example, using a 5-layer diffractive network architecture, all-optical blind-testing accuracies of >98% and >90% have been reported for the classification of images of handwritten digits (MNIST dataset) and fashion objects (Fashion-MNIST dataset) encoded in the amplitude and phase channels of the input object plane, respectively. For more complex and much harder-to-classify image datasets such as CIFAR-10, a substantial improvement in the inference performance of diffractive networks was demonstrated using ensemble learning6, published in LSA in 2021. After independently training >1250 individual diffractive networks that were diversely designed with a variety of passive input filters, a pruning algorithm was applied to select an optimized ensemble of diffractive networks that collectively improved the image classification accuracy. Through this pruning strategy, an ensemble of N = 14 diffractive networks collectively achieved a blind-testing accuracy of 61.14% on the classification of CIFAR-10 test images, providing an inference improvement of 16.6% compared to the average performance of the individual diffractive networks within the ensemble, demonstrating the “wisdom of the crowd”.
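
The pruning idea can be illustrated with a simple greedy forward-selection over held-out class scores, as in the minimal sketch below; the random scores stand in for the outputs of independently trained diffractive networks, and this is not the exact pruning algorithm used in ref. 6.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins: class scores of 50 independently trained classifiers on 1,000 validation images
# (10 classes, e.g., CIFAR-10); in ref. 6 these would come from >1250 diffractive networks.
n_models, n_val, n_classes = 50, 1000, 10
labels = rng.integers(0, n_classes, n_val)
scores = rng.random((n_models, n_val, n_classes))
scores[np.arange(n_models)[:, None], np.arange(n_val), labels] += 0.3   # make each model weakly informative

def ensemble_accuracy(member_ids):
    """Accuracy of the ensemble that averages the class scores of the selected members."""
    avg = scores[list(member_ids)].mean(axis=0)
    return float((avg.argmax(axis=1) == labels).mean())

# Greedy forward selection: repeatedly add the model that most improves validation accuracy.
selected, best = [], 0.0
for _ in range(14):                                   # target ensemble size, e.g., N = 14
    gains = [(ensemble_accuracy(selected + [m]), m) for m in range(n_models) if m not in selected]
    acc, m = max(gains)
    if acc <= best:
        break                                         # stop once no remaining candidate helps
    selected, best = selected + [m], acc

print(f"ensemble of {len(selected)} models, validation accuracy = {best:.3f}")
```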

Successful experimental demonstrations of these all-optical image classification systems have been reported using 3D-printed diffractive layers that conduct inference by modulating the incoming object wave at terahertz (THz) wavelengths. In addition to statistical inference, the same optical information processing framework of diffractive networks has also been utilized to design deterministic optical components for, e.g., ultra-short pulse shaping7, spectral filtering and wavelength division multiplexing8. In these latter examples, the input field was known a priori and was fixed, i.e., unchanged, and the task of the diffractive network was to perform a desired, deterministic transformation for a given/known optical input.

Despite the lack of nonlinear optical elements in these previous implementations, diffractive optical networks have been shown to offer significant advantages in terms of (1) inference/classification accuracy, (2) diffraction efficiency, and (3) output signal contrast when the number of successive diffractive layers in the network design is increased. We published an important work in LSA detailing some of these characteristics of diffractive optical networks, titled “All-Optical Information Processing Capacity of Diffractive Surfaces”9.

Diffractive optical networks have also been extended to harness broadband input radiation to design spectrally encoded single-pixel machine vision systems, where unknown input objects were classified and reconstructed through a single-pixel detector10. This single-pixel machine vision framework achieved >96% blind-testing accuracy for the optical classification of handwritten digits (MNIST dataset) based on the spectral power detected at ten distinct wavelengths, each assigned to one digit/class. In addition to the optical classification of objects through spectral encoding of data classes, a shallow electronic neural network with two hidden layers was trained (after the diffractive network training) to rapidly reconstruct the images of the classified objects based on their diffracted power spectra detected by a single-pixel spectroscopic detector. Using only 10 inputs, one for each wavelength, this shallow network successfully reconstructed the images of the unknown input objects, even if they were (rarely) incorrectly classified by the trained diffractive network. Considering that each image of a handwritten digit is composed of ~800 pixels, this shallow image reconstruction network, with an input vector size of 10, performs a form of image decompression to successfully decode the task-specific spectral encoding of the diffractive network.
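
A minimal sketch of such a shallow decoder is given below: two hidden layers map the ten detected spectral power values to a 28 × 28 image; the layer widths, training loop and random data are illustrative placeholders rather than the trained models of ref. 10.

```python
import torch
import torch.nn as nn

# Shallow decoder: 10 spectral power readings -> two hidden layers -> 28x28 = 784 image pixels.
decoder = nn.Sequential(
    nn.Linear(10, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 28 * 28), nn.Sigmoid())

optimizer = torch.optim.Adam(decoder.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Placeholder training data: in ref. 10 the inputs are the powers detected at ten wavelengths
# behind the trained diffractive network, and the targets are the corresponding object images.
spectral_powers = torch.rand(512, 10)
object_images = torch.rand(512, 28 * 28)

for epoch in range(20):
    pred = decoder(spectral_powers)
    loss = loss_fn(pred, object_images)
    optimizer.zero_grad(); loss.backward(); optimizer.step()

# Inference: a single 10-element spectrum is "decompressed" into a 784-pixel image.
reconstruction = decoder(torch.rand(1, 10)).reshape(28, 28)
```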

In one of our latest papers published in LSA11, we also report the design of diffractive surfaces to all-optically perform arbitrary complex-valued linear transformations between an input and output field-of-view. Stated differently, we demonstrated in this recent work the universal approximation capability of diffractive networks for all-optically synthesizing any arbitrarily selected linear transformation with independent phase and amplitude channels. Our methods and conclusions in this recent LSA work can be broadly applied to any part of the electromagnetic spectrum to design all-optical processors using spatially engineered diffractive surfaces to universally perform an arbitrary complex-valued linear transformation. Therefore, these results have wide implications and can be used to design and investigate various coherent optical processors formed by diffractive surfaces such as metamaterials, plasmonic or dielectric-based metasurfaces, as well as flat optics-based designer surfaces that can form information processing networks to execute a desired computational task between an input and output aperture.
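
Conceptually, the end-to-end operation of such a diffractive processor is a cascade of fixed propagation operators and trainable diagonal (complex modulation) matrices; the toy 1D example below simply composes such a cascade into a single complex-valued matrix, with random unitary stand-ins for the propagation kernels and random phase values in place of optimized diffractive layers.

```python
import numpy as np

rng = np.random.default_rng(1)
n_pixels, n_layers = 64, 5

# Fixed free-space propagation between consecutive planes (here: random unitary stand-ins;
# a physically accurate model would use the Rayleigh-Sommerfeld/angular-spectrum kernel).
def random_unitary(n):
    q, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
    return q

propagations = [random_unitary(n_pixels) for _ in range(n_layers + 1)]

# Trainable part: one complex modulation value per diffractive "pixel" on each layer,
# i.e., a diagonal matrix per layer (phase-only in this sketch).
phases = [rng.uniform(0, 2 * np.pi, n_pixels) for _ in range(n_layers)]
modulations = [np.diag(np.exp(1j * p)) for p in phases]

# End-to-end linear operator A relating the input field to the output field:
# A = P_{L+1} D_L P_L ... D_1 P_1
A = propagations[0]
for D, P in zip(modulations, propagations[1:]):
    A = P @ D @ A

output_field = A @ (rng.normal(size=n_pixels) + 1j * rng.normal(size=n_pixels))
print(A.shape, np.abs(output_field).max())
```

In an actual diffractive design, the diagonal modulation values are the trainable degrees of freedom that are optimized so that the composed operator approximates the desired complex-valued transformation.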

figure e

Prof. Ozcan talking with Prof. Olav Solgaard (left) and Prof. Tianhong Cui (right) at the 2017 Light Conference in Changchun.

In a follow-up work, our team published another very important result13 in Nature Methods, demonstrating super-resolution imaging in fluorescence microscopy, beating the diffraction limit of light through cross-modality image transformations from, e.g., confocal microscopy to STED, or TIRF to SIM. Perhaps one of the most surprising results that we have obtained along this line of research was on the 3D virtual refocusing of fluorescence microscopy images using deep learning14, also published in Nature Methods. In this work, we introduced a new deep learning-enabled framework (termed Deep-Z) that statistically learns and harnesses hidden spatial features in a fluorescence image to virtually propagate a single fluorescence image onto user-defined 3D surfaces within a fluorescent sample volume. This is achieved without any mechanical scanning, additional optical hardware/components, or a trade-off of imaging resolution, field of view or speed. Stated differently, we introduced, for the first time, a digital propagation framework that learns (through image data alone, without any assumptions or theoretical models) the spatial features in fluorescence images that uniquely encode the 3D fluorescence wave-propagation information in an intensity-only 2D recording, without additional hardware.

There are various powerful features of Deep-Z that make it transformative for fluorescence imaging at all scales and across the various disciplines that use fluorescence. A unique capability of the Deep-Z framework is that it enables the digital propagation of a fluorescence image of a 3D surface onto another 3D surface, user-defined through the pixel mapping of the corresponding digital propagation matrix (DPM). An analog of the same capability exists in holography under the paraxial approximation. In this sense, the Deep-Z framework brings the computational 3D image propagation advantage of holography and other coherent imaging modalities into fluorescence microscopy. Such a unique capability can be useful, among many applications, for, e.g., the simultaneous auto-focusing of different parts of a fluorescence image after the image capture, the measurement or assessment of aberrations introduced by the optical system, as well as the correction of such aberrations by applying a desired non-uniform DPM. To exemplify the power of this additional degree of freedom enabled by Deep-Z, we experimentally demonstrated the correction of the planar tilting and curvature of different samples after the acquisition of a single 2D fluorescence image per object.
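
To make the DPM concept concrete, the sketch below builds a uniform DPM and a tilted-plane DPM and appends the latter to a placeholder fluorescence image as a second input channel; the image size, axial range, units and channel convention are arbitrary examples, and the normalization used by the actual Deep-Z implementation is not reproduced here.

```python
import numpy as np

ny, nx = 512, 512
fluorescence_image = np.random.rand(ny, nx).astype(np.float32)   # placeholder 2D input image

# Uniform DPM: refocus the whole field of view to a single plane, e.g., +8 um.
dpm_uniform = np.full((ny, nx), 8.0, dtype=np.float32)

# Non-uniform DPM: a tilted target surface ramping from -5 um to +5 um across the x axis,
# which lets the network digitally "flatten" a tilted sample plane.
tilt = np.linspace(-5.0, 5.0, nx, dtype=np.float32)
dpm_tilted = np.broadcast_to(tilt, (ny, nx)).copy()

# Network input: the image and the DPM stacked as two channels (values in um in this sketch).
network_input = np.stack([fluorescence_image, dpm_tilted])        # shape (2, ny, nx)
```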

Yet another unique feature of the Deep-Z framework is that it permits cross-modality digital propagation of fluorescence images. Here, the neural network is trained with gold-standard label images obtained by a different fluorescence microscopy modality, teaching the generator network to digitally propagate an input image onto another plane within the sample volume while matching the image of the same plane acquired by a different fluorescence imaging modality than that of the input image. We term this related framework Deep-Z+. To demonstrate a proof of concept of this unique capability, we trained Deep-Z+ with input and label images that were acquired with a wide-field fluorescence microscope and a confocal microscope, respectively, to blindly generate, at the output of this cross-modality network, digitally propagated images of an input fluorescence image that match the confocal microscopy images of the same sample sections, i.e., performing axial sectioning (similar to the contrast of a confocal microscope) and digital propagation of a fluorescence image at the same time.

After its training, Deep-Z remains fixed, and its non-iterative inference requires no parameter estimation or search. The inference is performed rapidly, as it outputs the desired digitally propagated fluorescence image without any changes to the optical microscopy set-up, or a trade-off of its spatial resolution or field of view. As such, it allows rapid volumetric imaging (limited only by the detector speed) without any axial scanning or hardware modifications. In addition to fluorescence microscopy, the Deep-Z framework might be applied to other incoherent imaging modalities, and in fact it provides a bridge that closes the gap between coherent and incoherent microscopes by enabling computational 3D imaging of a volume using a single 2D incoherent image.

Approximately two years ago, my lab published a paper in Nature Biomedical Engineering that introduced a deep learning-based method to “virtually stain” autofluorescence images of unlabeled histological tissue sections, eliminating the need for chemical staining15. This technology was developed to leverage the speed and computational power of deep learning to improve upon century-old histochemical staining techniques, which can be slow, laborious and expensive. In this seminal paper we showed that this virtual staining technology, using deep neural networks, is capable of generating highly accurate stains across a wide variety of tissue and stain types. It has the potential to revolutionize the field of histopathology by reducing the cost of tissue staining, while making it much faster, less destructive to the tissue and more consistent/repeatable.

Since the publication of our paper, we have had a number of exciting developments moving the technology forward. We have continued to find new applications for this unique technology, using the computational nature of the technique to generate stains that would be impossible to create using traditional histochemical staining. For example, we have made use of what we refer to as a “digital staining matrix”, which allows us to generate and digitally blend multiple stains using a single deep neural network by specifying, at the pixel level, which stain should be performed. Not only can this unique framework be used to perform multiple different stains on a single tissue section, it can also be used to create micro-structured stains, digitally staining different areas of label-free tissue with different stains. Furthermore, this digital staining matrix enables these stains to be blended together by setting the encoding matrix to be a mixture of the possible stains. This technology can be used to ensure that pathologists receive the most relevant information possible from the various virtual stains being performed. This work16 was published in LSA in 2020 and has opened up the path for a very exciting new opportunity: stain-to-stain transformations, which enable transforming existing images of a tissue biopsy stained with one type of stain into many other types of stains17, almost instantaneously. Published in Nature Communications, this stain-to-stain transformation process takes less than one minute per tissue sample, as opposed to several hours or even more than a day when performed by human experts. This speed differential enables faster preliminary diagnoses that require special stains, while also providing significant savings in costs.
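
A minimal sketch of this per-pixel stain encoding is shown below, assuming three example stain channels and a simple region layout; the stain assignments, blending ratios and channel convention are illustrative only and are not taken from the implementation of ref. 16.

```python
import numpy as np

ny, nx, n_stains = 1024, 1024, 3                # e.g., channel 0, 1, 2 = three different stain types
autofluorescence = np.random.rand(ny, nx).astype(np.float32)      # placeholder label-free input image

# Per-pixel one-hot (or fractional) encoding of the requested stain.
stain_matrix = np.zeros((n_stains, ny, nx), dtype=np.float32)
stain_matrix[0, :, : nx // 2] = 1.0             # left half: stain 0
stain_matrix[1, :, nx // 2 :] = 1.0             # right half: stain 1

# Blended region: a 50/50 digital mixture of stains 0 and 2 in a central square.
cy, cx, r = ny // 2, nx // 2, 128
stain_matrix[:, cy - r : cy + r, cx - r : cx + r] = 0.0
stain_matrix[0, cy - r : cy + r, cx - r : cx + r] = 0.5
stain_matrix[2, cy - r : cy + r, cx - r : cx + r] = 0.5

# Network input: the autofluorescence image plus the stain-encoding channels.
network_input = np.concatenate([autofluorescence[None], stain_matrix])   # shape (1 + n_stains, ny, nx)
```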

Motivated by the transformative potential of our virtual staining technology, we have also begun the process of its commercialization and founded Pictor Labs, a new Los Angeles-based startup. Pictor in Latin means “painter”, and at Pictor Labs we virtually “paint” the microstructure of tissue samples using deep learning. In 2020, we successfully raised seed funding from venture capital firms including M Ventures (a subsidiary of Merck KGaA) and Motus Ventures, as well as private investors.

Through Pictor Labs, we aim to revolutionize the histopathology staining workflow using this virtual staining technology; by building a cloud computing-based platform that facilitates histopathology through AI, we will enable tissue diagnoses and help clinicians manage patient care. I am very excited to have this unique opportunity to bring our cutting-edge academic research into the commercialization phase, and I look forward to more directly impacting human health over the coming years using this transformative virtual staining technology.

figure g

Prof. Ozcan receiving the World Technology Award on Health & Medicine, presented by the World Technology Network in association with TIME, CNN, AAAS, Science, Technology Review, Fortune and Kurzweil (2012).

These innovative measurement technologies have led to >45 issued patents, several of which have been licensed by different companies including Honeywell, GE, Northrop Grumman (Litton), Arcelik, NOW Diagnostics and my own start-ups, targeting multi-billion-dollar markets, with products used in >10 countries; they also earned one of my own companies a “Technology Pioneer” Award from the World Economic Forum in 2015.

My work on mobile microscopy has entered introductory-level biology textbooks published by, e.g., National Geographic and Cengage, and is being used by numerous academic groups worldwide, including in developing countries, also through my lab’s extensive collaborations with >25 labs.

My lab was one of the first teams to utilize the cellphone as a platform for advanced measurements, microscopy and sensing covering various applications. For example, we were the first group to image and count individual viruses and individual DNA molecules using mobile phone-based microscopes. We were one of the first groups to utilize the smartphone as a platform for quantitative sensing, for example, the quantification of lateral flow tests. Our mobile diagnostic test readers are still being used in industry through a licensee of one of our patents. Lucendi, a start-up that I co-founded, has commercialized a mobile imaging flow cytometer for water quality analysis, including, for example, toxic algal blooms. In fact, this seminal work18 was also published in LSA.

As another example, our team introduced the first point-of-care sensor that is designed by machine learning and that runs and makes decisions based on a neural network19,20. This is a vertical flow assay that can probe >80 immunoreactions in parallel. We have shown its efficacy for the detection of early-stage Lyme disease patients based on IgG and IgM panels (profiling the immunity of the patient), published in ACS Nano. We are also considering a similar approach for COVID-19, which is especially important for understanding, e.g., the efficacy of vaccines and when a booster shot is needed.
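
As a concrete, if simplified, picture of the decision network, the sketch below maps the ~80 immunoreaction spot signals of such a vertical flow assay to a binary seropositive/seronegative output; the architecture, random training data and feature normalization are placeholders and do not reproduce the networks of refs. 19, 20.

```python
import torch
import torch.nn as nn

n_spots = 80                                     # parallel immunoreaction spots on the assay

classifier = nn.Sequential(
    nn.Linear(n_spots, 32), nn.ReLU(),
    nn.Linear(32, 16), nn.ReLU(),
    nn.Linear(16, 1))                            # logit for seropositive vs. seronegative

optimizer = torch.optim.Adam(classifier.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

# Placeholder training set: normalized colorimetric signals of each spot plus binary labels.
signals = torch.rand(200, n_spots)
labels = torch.randint(0, 2, (200, 1)).float()

for epoch in range(50):
    logits = classifier(signals)
    loss = loss_fn(logits, labels)
    optimizer.zero_grad(); loss.backward(); optimizer.step()

# Inference on a new assay readout.
probability = torch.sigmoid(classifier(torch.rand(1, n_spots))).item()
```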

figure h

Prof. Ozcan receiving the Popular Mechanics Breakthrough Award.

figure i

Prof. Ozcan explaining his mobile microscopy technology to Michael Bloomberg.

Think of your best work in a given year; that would be an excellent fit for eLight. Please consider submitting your best work to eLight.

figure j

Prof. Ozcan is the inaugural recipient of the SPIE Biophotonics Technology Innovator Award.

Running a large engineering lab with a broad focus is a lot of effort, but to succeed in all of these areas you need to build interdisciplinary teams with a culture of sharing and learning from each other. My team is very diverse, covering many areas of engineering, and everyone can learn from everyone else. At the same time, we have many interdisciplinary collaborators. This makes our publications high-impact and relevant, meaning we solve problems that really matter, at the intersection of impact and novelty.

In summary: never stop learning. Be open-minded to new directions. Learn from your students, team members and colleagues. And understand that everyone can be wrong, including yourself. No one is perfect, and understanding your weaknesses is important.

figure k

Ozcan Group former postdocs Dr. Sungkyu Seo (top) and Dr. Euan McLeod (bottom) won the Chancellor’s Award for Postdoctoral Research at UCLA. Dr. Seo is currently a Professor at Korea University, and Dr. McLeod is an Associate Professor at the College of Optical Sciences, University of Arizona.

Researchers do not work for awards; we are motivated by our curiosity and our passion for “change”, that is, making the world better and advancing our understanding of the universe through the impact of our science and engineering. Having said this, these awards can sometimes motivate researchers to continue in their careers and help them accelerate their progress, reminding them of the importance of their research and its broader impact. In this sense, I find the Rising Stars awards very important and timely.

figure l

2020 Rising Stars of Light online.

Light correspondent

Tingting Sun is an Assistant Professor with the Light Publishing Group at the Changchun Institute of Optics, Fine Mechanics and Physics (CIOMP), Chinese Academy of Sciences (CAS). She received her Doctor of Engineering degree from the University of Chinese Academy of Sciences in 2016. She currently serves as an Academic Editor for Light: Science & Applications, a leading journal of the Excellence Program, and is the Editor-in-Chief Assistant of eLight, a journal within the LSA family. She is also a senior talent jointly trained by the International Cooperation Bureau of CAS. Coming from the frontline of scientific research, she has a strong research background: she has led two research projects as the project leader, participated in many major scientific projects, published many SCI- and EI-indexed academic papers, and applied for two national invention patents.

figure p