
Deep Learning Takes 2D Material Microscopy Images Further

The reduced dimensionality and heterostructures of 2D materials make them promising candidates for the fabrication of photonic and optical devices. The electrical, mechanical, and optical properties of 2D materials depend on their layered structure.


Study: Deep-Learning-Based Microscopic Imagery Classification, Segmentation, and Detection for the Identification of 2D Semiconductors. Image Credit: spainter_vfx/Shutterstock.com

An article recently published in the journal Advanced Theory and Simulations presented a deep-learning-based method for the rough identification of 2D material thickness. The microscopy datasets were processed by three deep-learning architectures for classification, segmentation, and detection.

The 2D microscopy images were then augmented with different optical contrast variations to evaluate the robustness of the models. Furthermore, the deep learning models were optimized and evaluated for identifying mono- to multilayered molybdenum disulfide (MoS2) flakes grown on silica/silicon (SiO2/Si) substrates via chemical vapor deposition (CVD).

Application of Deep Learning for Identification of 2D Materials

The fabrication of nanophotonic, nano-optic, and quantum devices has become easier with 2D materials, given that these materials have outstanding mechanical, optical, and electrical properties alongside their reduced, nanoscale dimensionality. Furthermore, their ability to form heterostructures by stacking 2D flakes with specific layer numbers has facilitated their integration into sensing and emission applications. Moreover, the thickness of these 2D materials determines their applications.

Atomic force microscopy (AFM), transmission spectroscopy, photoluminescence imaging, spectroscopic ellipsometry, optical contrast, reflection, and Raman spectroscopy are the commonly used analytical techniques to measure the thickness of the 2D flakes.

Of these techniques, the optical contrast method involves manual observation of 2D materials under a microscope, which is time-consuming. From a computer vision perspective, however, identifying atomic layer numbers from optical contrast can be framed as classification, segmentation, and detection tasks in 2D material image processing.

Deep-learning techniques are widely used for image processing tasks on raw data obtained from microscopy or spectroscopy. Although deep learning has been explored for automated identification from microscopic images, a systematic study across computer vision tasks has not yet been carried out for 2D materials.

While the classification model predicts the flake category, a semantic segmentation model generates a pixel-wise segmentation map of flake categories, and a detection model both classifies and localizes individual flakes.
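The three task types differ chiefly in the shape of their outputs. A minimal sketch of the contrast (the image size, category count, and record layout here are illustrative assumptions, not values from the paper):

```python
import numpy as np

# Hypothetical setup: a 512x512 microscope image and 4 thickness
# categories (e.g., substrate, monolayer, bilayer, multilayer).
H, W, NUM_CLASSES = 512, 512, 4

# Classification: one score per category for the whole image.
classification_output = np.zeros(NUM_CLASSES)

# Semantic segmentation: one category label per pixel.
segmentation_output = np.zeros((H, W), dtype=np.int64)

# Detection: one record per localized flake, combining a bounding box,
# a category label, and (for Mask R-CNN) a per-instance mask.
detection_output = [
    {"bbox": (10, 20, 64, 80), "category": 1, "mask": np.zeros((H, W), dtype=bool)},
]
```

Detection is thus the richest of the three outputs: it subsumes classification (the category label) and adds per-flake localization.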

Previous attempts at training networks on data collected from 2D materials identified their rough thickness, but the underlying datasets were not exploited further. The same datasets can, however, be used to systematically study different network architectures and investigate the correlation between optical contrast variations and network accuracy.

Identification of 2D Materials via Deep Learning-based Neural Network Models

In the present study, bright-field microscopy images were used to identify the atomic layer numbers of 2D semiconductors via computer vision tasks. Three neural network architectures were employed, one per task: DenseNet for classification, U-Net for segmentation, and Mask R-CNN for detection. Moreover, gamma contrast sampling strategies were used to augment the data for training the three models.
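Gamma adjustment is a common way to generate such optical contrast variations for augmentation. A minimal sketch, assuming 8-bit RGB images (the function names and gamma values are illustrative, not the paper's exact sampling indices):

```python
import numpy as np

def gamma_adjust(image, gamma):
    """Apply gamma correction to an image with values in [0, 255].

    Gamma < 1 brightens the image, gamma > 1 darkens it, which mimics
    optical contrast variation between imaging conditions.
    """
    normalized = image.astype(np.float64) / 255.0
    return (np.power(normalized, gamma) * 255.0).astype(np.uint8)

def augment_with_gamma(image, gammas=(0.8, 1.0, 1.2)):
    """Return one gamma-adjusted copy of the image per sampling value."""
    return [gamma_adjust(image, g) for g in gammas]
```

Training on several gamma-adjusted copies of each image exposes the network to contrast conditions it would otherwise only meet at test time.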

The prediction performances of the three models revealed that, for the DenseNet and Mask R-CNN architectures, the multilabel classification and detection models trained with the a2, a3, and c2 sampling indices outperformed those trained with the a1 and c1 indices of the original datasets. For the U-Net architecture, on the other hand, the segmentation model trained with the b1 sampling indices of the original datasets did not differ significantly from the models trained with the b2 and b3 sampling indices of the augmented datasets.

The complexity of the datasets increased with the optical contrast variations, making it more difficult to distinguish among categories of 2D materials. However, processing the 2D material microscopy images with the three deep learning approaches provided a solution for 2D material identification. Furthermore, dataset statistics and model performances were analyzed using red, green, and blue (RGB) histograms of optical contrast differences and the International Commission on Illumination (CIE) 1931 color space. Finally, the pre-trained models were integrated into a graphical user interface (GUI).
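Per-channel RGB histograms of this kind can be computed directly from the image arrays. A minimal sketch, assuming 8-bit RGB input (this is an illustrative reimplementation, not the authors' analysis code):

```python
import numpy as np

def rgb_histograms(image, bins=256):
    """Compute a per-channel intensity histogram for an 8-bit RGB image.

    Comparing flake-region histograms against the bare-substrate
    background is one way to quantify optical contrast differences.
    """
    return {
        channel: np.histogram(image[..., i], bins=bins, range=(0, 256))[0]
        for i, channel in enumerate(("red", "green", "blue"))
    }
```

Histograms computed over flake regions with different layer numbers can then be overlaid to see how well the categories separate in each color channel.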

Conclusion

In summary, the present work implemented three deep learning approaches, DenseNet, U-Net, and Mask R-CNN, for the classification, segmentation, and detection of microscopy images of 2D materials to realize automated mapping of atomic layers. Here, MoS2 flakes were grown on SiO2/Si substrates with a 270-nanometer-thick oxide layer.

The three deep learning approaches were trained and tested on the original and augmented datasets using two different optical contrast variation strategies: the same or different gamma contrast sampling indices for each operational channel.

Applying optical contrast variations to augment the original datasets did not produce significant differences among categories, as shown by the CIE 1931 color space analysis. To overcome this drawback, the testing and training processes can be adjusted with smaller and larger variation ranges, respectively.

Reference

Dong, X., Li, H., Yan, Y., Cheng, H., Zhang, H. X., Zhang, Y., Le, T. D., et al. (2022). Deep-Learning-Based Microscopic Imagery Classification, Segmentation, and Detection for the Identification of 2D Semiconductors. Advanced Theory and Simulations. https://doi.org/10.1002/adts.202200140


Written by

Bhavna Kaveti

Bhavna Kaveti is a science writer based in Hyderabad, India. She has a Masters in Pharmaceutical Chemistry from Vellore Institute of Technology, India, and a Ph.D. in Organic and Medicinal Chemistry from Universidad de Guanajuato, Mexico. Her research work involved designing and synthesizing heterocycle-based bioactive molecules, where she had exposure to both multistep and multicomponent synthesis. During her doctoral studies, she worked on synthesizing various linked and fused heterocycle-based peptidomimetic molecules that are anticipated to have a bioactive potential for further functionalization. While working on her thesis and research papers, she explored her passion for scientific writing and communications.

Citations

Kaveti, Bhavna. (2022, July 19). Deep Learning Takes 2D Material Microscopy Images Further. AZoNano. https://www.azonano.com/news.aspx?newsID=39429.
