Unveiling and swift diagnosing chronic wound healing with artificial intelligence assistance

Jiliang Deng, Guoliang Shi, Zhihang Ye, Quan Xiao, Xiaoting Zhang, Lei Ren, Fangyu Yang, Miao Wang

Citation: Jiliang Deng, Guoliang Shi, Zhihang Ye, Quan Xiao, Xiaoting Zhang, Lei Ren, Fangyu Yang, Miao Wang. Unveiling and swift diagnosing chronic wound healing with artificial intelligence assistance[J]. Chinese Chemical Letters, 2025, 36(3): 110496. doi: 10.1016/j.cclet.2024.110496


  • Chronic wounds pose a significant healthcare challenge, subjecting patients to prolonged physical and psychological distress [1]. The elderly and individuals with concurrent conditions such as diabetes are particularly vulnerable to impaired wound healing, which significantly compromises their quality of life [2-4]. Current approaches to assessing and treating chronic wounds rely on visual observation of wound characteristics and the subsequent empirical selection of dressings. However, wound healing is a dynamic and intricate physiological process, and effective monitoring, including the detection of potential malignancy, requires tissue biopsies [5-7]. At present, pathological analysis of tissue sections is a time-intensive and laborious process [8,9]. Conventional histological staining workflows involve various chemical reagents, including xylene, a known carcinogen, posing risks to human health while consuming excessive resources [10]. Pathology analysis presents its own challenges, demanding significant time and effort from clinicians to interpret histopathology images and track the progression of inflammation and tissue repair. Thus, achieving a balance between expediency and accuracy in histopathologic diagnosis remains a formidable task [11].

    Various AI-based approaches have emerged to streamline the workflow of disease diagnosis [12-14]. The integration of AI algorithms into biochemical sensors offers promising applications by effectively assisting in the diagnosis of a wide range of diseases [15]. Machine learning enables peptidome analysis for comprehensive individualized monitoring of membranous nephropathy [16]. Additionally, machine learning analysis of patient urine biomarkers has shown good performance in aiding the diagnosis of Parkinson's disease [17]. Virtual staining techniques, leveraging deep learning, can transform microscopic images of unstained tissue samples into replicas of chemically stained sections, yielding results comparable to manual staining [18-22]. Notably, these techniques can replicate standard stains such as H&E, Masson's trichrome, and Jones silver stain [23,24]. Additionally, for valuable pathology samples, simulating multiple staining outcomes on a single tissue slice has become feasible, addressing the need to illustrate diverse pathological details within one slice [25-27]. Artificial intelligence methods are also increasingly being used to diagnose and treat chronic wounds. For instance, machine learning-based self-sterilizing microneedle sensor patches can monitor wound healing in real time and provide effective antimicrobial therapy [28]. Some studies have utilized deep learning to extract the shape and color features of wounds for wound image segmentation [29-31]. Notably, recent research by Maknuna et al. demonstrated the utility of deep learning in automating pathological analysis and quantitatively characterizing scar tissue, effectively identifying and classifying pathological features such as collagen fibers and hair follicles in dermal tissue from H&E-stained images [32]. Similarly, Jones et al. trained a convolutional neural network (CNN) capable of segmenting regions of pathological features in H&E-stained sections of wound tissue [33]. However, these methods provide only a preliminary assessment of wound healing, raising concerns about their diagnostic reliability. Additionally, given the diverse classifications of pathological features associated with chronic wounds, the applicability and reliability of deep learning-based virtual staining and pathological feature analysis in this context remain uncertain. There is currently no comprehensive, intelligent software that integrates virtual staining with automated pathology analysis to offer clinicians rapid and accessible ancillary diagnostic results. Addressing this gap may require the development of effective chronic wound pathology databases and the enhancement of algorithmic analytical capabilities to facilitate the identification and classification of pathological features in chronic wound tissue images.

    In this study, we established an animal model of chronic wounds, synthesized gel dressings containing a variety of wound healing components, and engineered robust deep learning architectures aimed at facilitating virtual H&E staining of wound histopathology images and categorizing pathological features within H&E-stained images (Fig. 1). A comparison of our network architecture with both the pixel-to-pixel (pix2pix) algorithm and the BCI algorithm from antecedent research reveals superior image quality achieved by our algorithmic model. Moreover, the staining efficacy of our methodology underwent blind evaluation by three board-certified pathologists. Their discernment of histopathological features within images generated by our virtual staining technique corroborated a high degree of concordance between histologically stained images and their virtual counterparts. Subsequently, leveraging the U-Net architecture, we implemented algorithmic enhancements empowering our network model to further delineate H&E-stained images for histopathologic classification. These findings demonstrate the network model's proficiency in accurately segmenting pathological features into eight distinct categories: epidermis, dermis, adipose tissue, granulation tissue, necrotic tissue, hair follicles, muscle tissue, and interstitial tissue. Through rigorous validation, we substantiated the feasibility and efficacy of the learning model for virtual histopathological staining and pathological evaluation of wounds characterized by delayed healing. By amalgamating virtual staining capabilities with the precise delineation of pathological features, we have constructed an intelligent digital assessment framework tailored for chronic wounds, thereby advancing the frontier of wound care diagnostics.

    Figure 1

    Figure 1.  The workflow for constructing the artificial intelligence-assisted chronic wound healing diagnosis model. (a) Synthesis of functional hydrogel dressings. This step includes the oxidation of sodium alginate (SA) followed by a Schiff base reaction with carboxymethyl chitosan (CMCS) to produce hydrogels, referred to as CSOA. Various pro-healing biological components were incorporated into CSOA to yield hydrogels with distinct functional properties. (b) Collection of a chronic wound pathology database in mice. Different functional hydrogel dressings were applied to a mouse model infected with Staphylococcus aureus. Wound tissues were periodically excised, and images of the same tissue sections were captured both before and after H&E staining. (c) Development of a virtual staining technique. A generative adversarial network (GAN) model was utilized to convert images of unstained mouse wound tissue sections into images that mimic H&E-stained sections. This step began with the segmentation of pathology images to ensure that the textures of image patches in pre- and post-stained sections matched and could serve as training data for the deep learning network. The trained GAN model then transformed unstained pathology images into virtually stained H&E images upon input. (d) Pathological feature extraction and classification using a convolutional neural network. This model was trained using features from regions of interest in manually labeled pathology images. Given an image of an H&E-stained section, the model automatically classifies its various pathological features.

    The synthesis of hydrogels via the Schiff base reaction between oxidized sodium alginate (OSA) and carboxymethyl chitosan (CMCS) [34] is depicted in Fig. S1a (Supporting information). Before gelation, the mixture remains liquid and flowable, as shown in Fig. S1b (Supporting information); the cessation of flow indicates the onset of hydrogel formation. During the gelation process, solutions of nanosilver, collagen, and their combination were added to produce the CSOA/Ag, CSOA/Col, and CSOA/Col/Ag composite hydrogel dressings, respectively. Scanning electron microscopy images of the freeze-dried hydrogels, presented in Fig. S1c (Supporting information), reveal a stable three-dimensional network structure, signifying successful cross-linking between CMCS and OSA. The infrared spectroscopy results (Fig. S2a in Supporting information) display a distinct broad absorption peak of CMCS at 3427 cm−1, attributed to the stretching vibrations of polysaccharide components (-OH coupled with -N-H). Additionally, the symmetric stretching vibration of the C=O bond was observed at 1384 cm−1. Notably, the peak of OSA at 1732 cm−1, associated with the asymmetric stretching of C=O, shifted to 1650 cm−1 in the CSOA hydrogel. This shift reflects the Schiff base reaction between the amino groups of CMCS and the aldehyde groups of OSA, which produces the characteristic stretching vibration peak of the imine bond (-N=C-) and confirms the cross-linking reaction. Swelling tests, detailed in Fig. S2b (Supporting information), demonstrate that the CSOA hydrogel exhibits a swelling rate of 320% within 12 h. The incorporation of collagen and nanosilver notably moderated this swelling, particularly collagen, which reduced the swelling rate to approximately 220%. Rheological analysis (Figs. S2c and d in Supporting information) documented a consistent rise in both the storage modulus (G′) and loss modulus (G′′). Initially, G′′ exceeds G′, indicating that the reactants are in a liquid state. As the reaction progresses, G′′ and G′ become equal, signifying the gelation point. Beyond this intersection, G′ surpasses G′′, marking the transition to a solid state as gelation completes. This rheological transition, from a low-viscosity state to a high-strength gel, underscores the stable mechanical properties of the final hydrogel. Biocompatibility testing, illustrated in Fig. S3 (Supporting information), involved immersing the hydrogels in DMEM medium supplemented with 10% bovine serum. The extracts, tested at varying concentrations (100%, 50%, 25%, and 0%), were co-cultured with HUVECs and evaluated for cell viability using a CCK-8 assay, with optical density measured at 450 nm. The lowest cell viability, 86.6%, was observed for the CSOA/Col hydrogel at 100% extract concentration; even this value indicates good cytocompatibility, supporting the hydrogels' potential as safe adjuncts for wound healing applications.

    Fig. S4a (Supporting information) illustrates that after 14 days, the CSOA/Col/Ag hydrogel group showed minimal wound debris, in stark contrast to the control group, which displayed significantly larger wounds with some crusting. Quantitative analysis of wound closure at various intervals, as shown in Fig. S4b (Supporting information), indicated that on day 3 the CSOA/Col/Ag hydrogel group exhibited a healing rate of approximately 38%, surpassing the control group's 24% and highlighting the accelerated healing. By day 7, all wounds had markedly decreased in size, with the CSOA/Col/Ag hydrogel group achieving a closure rate of approximately 83%, notably higher than that of the other groups. Moreover, by day 14, this group achieved near-complete healing (98.6%), while the other groups displayed larger scars and lower healing rates: 86% for CSOA, 95% for CSOA/Col, and 96% for CSOA/Ag. A wound trace diagram, as outlined in Fig. S4c (Supporting information), was prepared to visually characterize the wound closure process. These findings suggest that the CSOA/Ag hydrogel accelerates the reduction of inflammation but results in an uneven wound surface, whereas the CSOA/Col hydrogel promotes smoother wound repair despite a slower resolution of inflammation. The CSOA/Col/Ag hydrogel notably enhances wound healing owing to the synergistic effects of its components. Our experiments demonstrated that the hydrogels influence the wound healing process and impart different pathological characteristics to the wounds. This aspect is crucial for developing a deep learning training database and helps validate the effectiveness of the deep learning model in identifying wound pathology features, ultimately optimizing the usage of wound dressings.

    To further substantiate the therapeutic efficacy of our synthetic hydrogel dressings on wound healing, histological analyses were conducted on wound tissues collected at 3, 7, and 14 days post-treatment across the treatment groups. The tissues were fixed, embedded, and sectioned in sequence for analysis. Hematoxylin and eosin (H&E) staining was employed, with the resulting micrographs displayed in Fig. S5 (Supporting information), illustrating the histopathological features of the wound sections across the four experimental groups. On day 3, a quantitative assessment of inflammatory cell infiltration revealed a diminished inflammatory response in the test groups compared to the control, likely attributable to the hydrogels' pro-healing properties. Extracellular matrix (ECM) deposition was notably sparse in the control group but more pronounced in the test groups, a result of the antibacterial and anti-inflammatory properties of nanosilver (Ag) and the fibroblast-recruiting capability of collagen (Col). By day 7, dense collagen deposition was observed in both the CSOA/Col and CSOA/Col/Ag hydrogel groups, with some immature skin appendages, such as hair follicles and sebaceous glands, already evident. In contrast, skin appendages were less apparent in the CSOA/Ag group despite the absence of significant inflammation, which is likely explained by collagen's enhanced tissue repair properties. By day 14, all treatment groups exhibited varying degrees of re-epithelialization, indicating ongoing skin repair. Post-treatment, inflammation subsided more rapidly in the CSOA/Ag hydrogel group, although the regeneration of skin appendages was not as robust as in the CSOA/Col and CSOA/Col/Ag groups. The histopathological analysis of H&E-stained sections thus revealed significant differences in pathological features among the groups.

    Fig. S6 (Supporting information) presents a representative whole slide image (WSI) of the same tissue section both before and after Hematoxylin and Eosin (H&E) staining. The WSI reveals extensive and detailed pathological information, which surpasses what is typically observable through conventional light microscopy. Following the segmentation of the WSI, a comprehensive dataset suitable for deep learning analysis was generated. Excluding a few samples deemed of poor quality, approximately 120 unstained and H&E-stained panoramic pathology images were collected in this study. Fig. S7 (Supporting information) details the examination of the WSIs at increased magnification, which highlights distinct pathological features across various experimental groups during the corresponding periods. These features align with those described in our previous histological analyses, thereby confirming the diversity and validity of the image databases we have compiled.

    The dataset, consisting of 1206 high-resolution images, was divided into training, validation, and test sets at a ratio of approximately 12:1:1. Given the high resolution of these images (exceeding 200 K), the images were pre-screened using QuPath software to ensure efficient deep learning training. This step also involved identifying and coarsely aligning paired images of the same tissue sections before and after staining. However, since the generative adversarial network requires training images no larger than 1024 pixels × 1024 pixels, the images underwent a "random crop" [35,36] preprocessing step. As illustrated in Fig. 2a, data preprocessing began with this "random crop". The details are shown in Fig. S8a (Supporting information), where a 1024-pixel square image (position A) was used as a reference to generate additional samples by panning 512 pixels horizontally and vertically to positions B and C, respectively. This overlapping sampling method continued until a comprehensive training dataset was constructed. To augment sample diversity, each panning step required a calculation of the intersection over union (IOU) between the original and panned frames according to Eq. 1:

    $$\mathrm{IOU} = \frac{\mathrm{interarea}}{\mathrm{area1} + \mathrm{area2} - \mathrm{interarea}} \tag{1}$$

    Figure 2

    Figure 2.  The main framework of the proposed virtual histology staining method for unstained mouse wound tissue using the conditional generative adversarial network. (a) Data processing workflow. (b) Training of the algorithm. Dark-field images of unstained mouse wound tissue sections are fed into the generator network to generate synthetic staining images (top). The input for the discriminator consists of the "H&E-stained images" (bottom), and its output indicates the probability that the image is real. The discriminator aims to correctly distinguish between real and synthetic images. The objective loss function is designed so that the two networks improve adversarially during training.

    where 'area1' and 'area2' represent the areas of the initial and panned frames, 'interarea' denotes their overlapping region, and 'IOU' is the intersection-to-union ratio, thereby ensuring robust dataset generation. The affine transformation [37] requires the selection of more than four points, ideally located at the edges of each corresponding image for optimal spacing (Fig. S8b in Supporting information). The coordinates of these points are then substituted into Eq. 2. The maximum interior rectangle algorithm [38] was implemented by calculating the four vertices of a rectangular frame within each image and extracting the unstained regions based on these vertices. This approach enabled the generation of training images with paired textures (Fig. S8b).

    $$\begin{bmatrix} x' \\ y' \end{bmatrix} = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} + \begin{bmatrix} t_{x} \\ t_{y} \end{bmatrix} \tag{2}$$
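    To make the preprocessing concrete, the sketch below illustrates the overlapping-crop sampling with the IOU check of Eq. 1 and the point-based affine alignment of Eq. 2. This is a minimal Python example assuming OpenCV and NumPy; the function names, the landmark-point inputs, and the crop-acceptance policy are illustrative rather than the authors' actual code.

```python
# Illustrative sketch (not the authors' code) of the crop-and-align step:
# overlapping 1024 x 1024 crops panned in 512-pixel steps, an IOU check per
# Eq. 1, and affine alignment (Eq. 2) from >= 4 manually chosen landmarks.
import cv2
import numpy as np

TILE, STRIDE = 1024, 512  # crop size and panning step described in the text

def iou(box_a, box_b):
    """Intersection over union of two (x, y, w, h) crop frames (Eq. 1)."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    interarea = ix * iy
    return interarea / (aw * ah + bw * bh - interarea)

def overlapping_crops(img, tile=TILE, stride=STRIDE):
    """Yield overlapping square crops by panning horizontally and vertically."""
    h, w = img.shape[:2]
    for y in range(0, h - tile + 1, stride):
        for x in range(0, w - tile + 1, stride):
            yield (x, y, tile, tile), img[y:y + tile, x:x + tile]

def align_pair(unstained, stained, pts_unstained, pts_stained):
    """Warp the unstained image onto the stained one via an affine transform
    (Eq. 2) estimated by least squares from corresponding landmark points."""
    M, _ = cv2.estimateAffine2D(np.float32(pts_unstained),
                                np.float32(pts_stained))
    h, w = stained.shape[:2]
    return cv2.warpAffine(unstained, M, (w, h))
```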

    This experiment utilizes an optimized conditional generative adversarial network (cGAN) algorithm, with the training architecture depicted in Fig. 2b. The generative network (G) (Fig. S9 in Supporting information) adopts the ResNet-9block architecture, a variant of the residual network designed to train deep models on extensive datasets while avoiding the vanishing-gradient problem, thus enhancing model performance. The discriminative network (D) employs the PatchGAN loss function for loss assessment. The training objective is to minimize the loss of the generative network (G) while maximizing the loss of the discriminative network (D). After training, the cGAN outputs a virtually H&E-stained image in response to the input of a dark-field image of an unstained tissue section. The L1 loss and adversarial generation loss formulas are shown in Eqs. 3 and 4, consistent with Ref. [36]. In the equations presented, x denotes the input samples, y represents the labeled samples, and z signifies the noise samples. L1(G) stands for the L1 loss (mean absolute error) of the generated images compared with the labeled samples, while LcGAN(G, D) denotes the adversarial generation loss. The calibration of paired samples ensures texture correspondence; the model therefore focuses on achieving stylistic, color, and label consistency across outputs. Bilinear interpolation is utilized to downsample the image labels, capturing detailed texture information across diverse receptive field scales; the resulting Gi-loss label images are illustrated in Fig. S8c (Supporting information). The loss function for single-scale receptive fields corresponds to the L1 loss, while multi-scale receptive fields employ the formula detailed in Eq. 5. The comprehensive loss function is defined in Eq. 6.

    $$\mathcal{L}_{L1}(G) = \mathbb{E}_{x,y,z}\left[\left\lVert y - G(x,z) \right\rVert_{1}\right] \tag{3}$$

    $$\mathcal{L}_{cGAN}(G,D) = \mathbb{E}_{x,y}\left[\log D(x,y)\right] + \mathbb{E}_{x,z}\left[\log\left(1 - D(x, G(x,z))\right)\right] \tag{4}$$

    $$\mathcal{L}_{ms}(G) = \sum_{i} w_{i}\, \mathbb{E}_{x,y,z}\left[\left\lVert y_{i} - G_{i}(x,z) \right\rVert_{1}\right] \tag{5}$$

    where y_i and G_i(x,z) denote the label image and the generated image downsampled to the i-th receptive field scale, and w_i is the weight of that scale.

    $$G^{*} = \arg\min_{G}\max_{D}\ \mathcal{L}_{cGAN}(G,D) + \lambda\, \mathcal{L}_{ms}(G) \tag{6}$$
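    For illustration, the loss terms above can be sketched in PyTorch as follows. The number of scales, the per-scale weights, and λ (set to 100, the pix2pix default) are assumptions; only the overall structure, a PatchGAN adversarial term plus an L1 term over bilinearly downsampled images, follows the description in the text.

```python
# A minimal PyTorch sketch of Eqs. 3-6; scale factors, weights, and lambda
# are illustrative assumptions, not the paper's exact settings.
import torch
import torch.nn.functional as F

def multiscale_l1(fake, real, scales=(1, 2, 4), weights=(1.0, 0.5, 0.25)):
    """Eq. 5: weighted L1 losses over bilinearly downsampled image scales."""
    loss = fake.new_zeros(())
    for s, w in zip(scales, weights):
        f, r = fake, real
        if s > 1:
            f = F.interpolate(f, scale_factor=1 / s, mode="bilinear",
                              align_corners=False)
            r = F.interpolate(r, scale_factor=1 / s, mode="bilinear",
                              align_corners=False)
        loss = loss + w * F.l1_loss(f, r)
    return loss

def generator_loss(D, unstained, fake, real, lam=100.0):
    """Eq. 6 from G's side: adversarial term (Eq. 4) plus weighted L1 term."""
    pred_fake = D(torch.cat([unstained, fake], dim=1))  # PatchGAN logits
    adv = F.binary_cross_entropy_with_logits(
        pred_fake, torch.ones_like(pred_fake))
    return adv + lam * multiscale_l1(fake, real)

def discriminator_loss(D, unstained, fake, real):
    """Eq. 4 from D's side: score real pairs as 1 and generated pairs as 0."""
    pred_real = D(torch.cat([unstained, real], dim=1))
    pred_fake = D(torch.cat([unstained, fake.detach()], dim=1))
    return 0.5 * (
        F.binary_cross_entropy_with_logits(
            pred_real, torch.ones_like(pred_real))
        + F.binary_cross_entropy_with_logits(
            pred_fake, torch.zeros_like(pred_fake)))
```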

    As depicted in Fig. 3, our optimized network facilitates the virtual staining of skin tissue sections, generating images with visual features that closely resemble traditional histologically stained sections. Erythrocyte exudates are rendered in bright red, fibrous connective tissues in light pink, granulation tissues containing fibroblasts and inflammatory cells with purple-blue nuclei, and muscle tissues in pink. Further details on the virtual staining, including representations of interstitial and fibrous connective tissues, are provided in Fig. S10 (Supporting information). The color and tissue appearance in the virtual images closely match those observed in bright-field images of histologically stained sections. Despite these promising results, the virtual staining technique still requires further refinement, particularly in accurately delineating nuclear details. Potential factors contributing to this limitation include an inadequate sample size, resolution constraints of the scanning microscope, and the need for improvements in the software and algorithmic frameworks.

    Figure 3

    Figure 3.  Comparison of virtual staining results with H&E staining images. The top panel displays autofluorescence images of unstained tissue sections, the middle panel presents real H&E staining images, and the bottom panel showcases virtual staining images. Scale bar: 50 µm.

    To assess the fidelity of the virtually stained H&E images, a blind evaluation was conducted by three board-certified pathologists who were unaware of the virtual staining method. They rated the quality of cells, tissue, and overall staining on a four-point scale from 1 (poor) to 4 (perfect). Table 1 presents the results, comparing real H&E staining images (HS) with virtual staining images (VS). Using the pathologists' average scores on real stained images as the benchmark, the scores for virtual stained images reached 71.1% for cellular features, 75.4% for tissue structures, and 77.8% for the overall assessment. These results indicate that pathologists could identify histopathological features in virtually stained images with a proficiency close to that achieved with real stained images.

    Table 1

    Table 1.  Blind evaluation of virtual and histological H&E staining in wound.

    Further, to evaluate the efficacy of our model for virtual H&E staining, we compared its performance with that of the pixel-to-pixel and BCI algorithms reported in previous studies [39-41]. Fig. 4 illustrates the outcomes of virtual H&E staining using the different network architectures. The images derived using the pixel-to-pixel method with the Unet256 generator generally appeared lighter and exhibited notable local distortions, whereas the images from the ResNet-9block variant were consistently overly dark and plagued by numerous artifacts unrelated to the image content; these localized errors are indicated by yellow arrows in Fig. 4. The BCI algorithm improved color and graphic reproduction but still introduced significant noise. Our algorithm, in contrast, markedly reduced these contaminants, producing images that closely mimic the visual quality of actual H&E-stained images in terms of color and texture. Although the images generated by our algorithm still contain minor noise, they can nevertheless be analyzed for various pathological features. To objectively assess image quality, we employed the peak signal-to-noise ratio (PSNR) and the structural similarity index measure (SSIM) as evaluation metrics [42]. As shown in Table S1 (Supporting information), our algorithm achieves a PSNR of 20.265 and an SSIM of 0.634, surpassing the competing network architectures and thus demonstrating enhanced performance after optimization. Table S2 (Supporting information) elaborates on the efficacy of our multi-scale receptive field model, highlighting the advantage of employing bilinear interpolation to downsample image labels, which facilitates improved monitoring of texture information across scales.
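    For reference, both metrics can be computed with scikit-image, as in the short sketch below; this is a generic example assuming 8-bit RGB arrays of equal shape, not the evaluation script used in this study.

```python
# Generic PSNR/SSIM computation for a virtually stained image against the
# real H&E-stained image of the same section (assumes uint8 RGB arrays;
# channel_axis requires scikit-image >= 0.19).
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_pair(virtual_img: np.ndarray, real_img: np.ndarray):
    """Return (PSNR, SSIM) with the real stained image as the reference."""
    psnr = peak_signal_noise_ratio(real_img, virtual_img, data_range=255)
    ssim = structural_similarity(real_img, virtual_img,
                                 channel_axis=-1, data_range=255)
    return psnr, ssim
```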

    Figure 4

    Figure 4.  Comparative analysis of virtual staining performance across different network architectures. Yellow arrows in the figure indicate poorly localized features. Scale bar: 50 µm.

    To enable our deep learning network to autonomously classify H&E-stained wound histopathology images, meticulous manual labeling of all collected pathology section images was required. As detailed in Fig. S11 (Supporting information), panoramic H&E-stained pathology images were processed using ASAP software, wherein regions of interest (ROIs) were delineated using eight distinctly colored lines. In this preparatory step, the images were categorized into distinct histological components: epidermis, dermis, hair follicle, adipose tissue, muscle, granulation tissue, mesenchyme, and necrotic tissue, thus compiling a dataset for the network. A schematic diagram of this methodology is shown in Fig. S12 (Supporting information): a pathologist manually marks the ROIs on an H&E-stained image, which then serves as input data for the deep learning algorithm. Each pixel of the input image is classified into one of the eight predefined ROI classes, so that an H&E-stained image can be represented as a matrix of the same dimensions as the image itself, with each element indicating the classification outcome for the corresponding pixel. Assuming the input image dimensions are W × H pixels with three color channels, the input can be represented as a W × H × 3 matrix, while the network output is formatted as a W × H × 8 matrix. The neural network utilized for pathology image recognition and classification was based on a modified U-Net architecture [43,44], designed to accommodate larger images and to operate efficiently on a single graphics processing unit (GPU). The distribution ratio of training, validation, and test data is approximately 10:1:1, and the test data, comprising additional stained micrographs, is kept distinctly segregated from the training and validation datasets. This segmentation process is detailed in Fig. S13 (Supporting information). Each mini-batch processed by the network contains 16 samples. Training employs a stochastic gradient descent with momentum (SGDM) optimizer [45], with an initial learning rate of 0.001, a momentum of 0.9, and a decay rate of 0.00001. Training begins with a 'warm-up' phase of 100 steps, during which the learning rate increases linearly from 0 to 0.001; thereafter the learning rate follows a standard decay schedule. The model employs binary cross-entropy as its loss function, expressed as Eq. 7, where yi represents the value of an element in the target matrix, pi denotes the corresponding element of the network's output matrix, and N signifies the total number of elements in the target matrix. The training regimen extends over 5000 epochs, during which the average validation loss for each epoch is computed; the model with the lowest loss value is selected as the outcome.

    $$\mathcal{L} = -\frac{1}{N}\sum_{i=1}^{N}\left[ y_{i}\log p_{i} + \left(1 - y_{i}\right)\log\left(1 - p_{i}\right) \right] \tag{7}$$
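    A minimal PyTorch sketch of this training configuration is given below. The optimizer hyperparameters, warm-up length, and loss follow the description above; the helper name, the interpretation of the decay rate as weight decay, and the omitted post-warm-up decay schedule are illustrative assumptions.

```python
# Sketch of the segmentation training setup: SGDM (lr 0.001, momentum 0.9),
# a 100-step linear warm-up, and the per-element binary cross-entropy of
# Eq. 7 over the W x H x 8 output. `model` is assumed to be the modified
# U-Net emitting 8-channel logits; the 1e-5 decay is assumed to be weight decay.
import torch
from torch import nn

def make_training_setup(model: nn.Module, warmup_steps: int = 100):
    optimizer = torch.optim.SGD(model.parameters(), lr=0.001,
                                momentum=0.9, weight_decay=1e-5)
    # Linear warm-up from 0 to the base learning rate; the subsequent
    # standard decay schedule is omitted here for brevity.
    warmup = torch.optim.lr_scheduler.LambdaLR(
        optimizer, lr_lambda=lambda step: min(1.0, (step + 1) / warmup_steps))
    # Binary cross-entropy applied element-wise to the 8-channel logits,
    # matching Eq. 7 averaged over all N = W * H * 8 elements.
    criterion = nn.BCEWithLogitsLoss()
    return optimizer, warmup, criterion
```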

    As depicted in Fig. 5, the results demonstrate that after network training, the algorithm refines the delineation of pathological features through averaging calculations. In the visualized pathology-feature maps output by the network, the ambiguous regions between different pathological features are greatly reduced by the averaging calculation. In addition, pathological information missed in the labeled images (e.g., unlabeled hair follicles) appeared in the network's output images. The images generated by the algorithm thus exhibit improved accuracy in classifying pathological features compared with manually labeled images, showing a seamless integration of the various pathological features that closely resembles the distribution observed in real tissue. Fig. S14 (Supporting information) illustrates additional classification results, elucidating the algorithm's efficacy in optimizing the boundaries between different pathological features. This outcome underscores our ability to accurately and swiftly analyze intact histopathological sections of H&E-stained wounds using deep learning networks.

    Figure 5

    Figure 5.  Outcomes of automatic classification of H&E-stained images. 'H&E' denotes the original H&E-stained image, 'User' represents the manually labeled image utilized as input data for the deep learning network, and 'Network' indicates the final output classification result of the network.

    In summary, this study demonstrates the application of a deep learning-based virtual staining method and an image segmentation and classification technique for the analysis of chronic wound tissue sections. Using our optimized network models, we successfully transformed dark-field micrographs of unstained mouse skin tissue sections into corresponding H&E-stained images. Moreover, the trained network facilitates the intelligent analysis of wound healing, enabling rapid segmentation and quantitative analysis of pathological features in H&E-stained section images. Despite these advances, our study still has certain limitations. The current results support the feasibility of using rodent histopathological analysis to classify and characterize pathological features, but real clinical data are essential to enhance the clinical translational potential of our model. Moreover, this research was confined to a mouse model infected with Staphylococcus aureus; numerous other chronic wounds, such as diabetic ulcers, vascular ulcers, and pressure ulcers, exhibit distinct histopathological characteristics and call for a broader array of wound dressings. Additionally, the diversity of functional wound dressings synthesized in our experiments was limited, and the introduction of other functional components, such as epidermal growth factor and vascular endothelial growth factor, will be necessary to better accommodate varying wound types. As the application of AI in histopathology remains emergent, our research not only spearheads a new paradigm for the intelligent monitoring of chronic wounds and rapid clinical decision support but also lays the scientific foundation for a digital intelligent medical information platform. Future research directions include optimizing algorithms, enhancing model interpretability and generalizability, and amassing large clinical pathological slide datasets for model training to boost clinical translational efficacy.

    The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

    Jiliang Deng: Writing – original draft. Guoliang Shi: Software. Zhihang Ye: Software. Quan Xiao: Software. Xiaoting Zhang: Data curation. Lei Ren: Supervision. Fangyu Yang: Supervision. Miao Wang: Supervision.

    This work was supported by the Fundamental Research Funds for the Central Universities (No. 20720230037), the National Natural Science Foundation of China (No. 52273305), Natural Science Foundation of Fujian Province of China (No. 2023J05012), State Key Laboratory of Vaccines for Infectious Diseases, Xiang An Biomedicine Laboratory (Nos. 2023XAKJ0103071, 2023XAKJ0102061), and Natural Science Foundation of Xiamen, China (No. 3502Z20227010).

    Supplementary material associated with this article can be found, in the online version, at doi:10.1016/j.cclet.2024.110496.


    [1] M. Olsson, K. Järbrink, U. Divakar, et al., Wound Repair Regen. 27 (2019) 114–125. doi: 10.1111/wrr.12683
    [2] V. Veličković, D. Janković, Int. Wound J. 20 (2023) 792–798. doi: 10.1111/iwj.13924
    [3] D. Ramachandram, J.L. Ramirez-GarciaLuna, R.D.J. Fraser, et al., JMIR mHealth uHealth 10 (2022) e36977. doi: 10.2196/36977
    [4] K. Las Heras, M. Igartua, E. Santos-Vizcaino, et al., J. Control. Release 328 (2020) 532–550. doi: 10.1016/j.jconrel.2020.09.039
    [5] R.G. Frykberg, J. Banks, Adv. Wound Care 4 (2015) 560–582. doi: 10.1089/wound.2015.0635
    [6] H.W. Rogers, M.A. Weinstock, S.R. Feldman, et al., JAMA Dermatol. 151 (2015) 1081–1086. doi: 10.1001/jamadermatol.2015.1187
    [7] L. Chow, K.L. Yick, Y. Sun, et al., Int. J. Bioprint. 7 (2021) 327. doi: 10.2307/jj.21995585.20
    [8] J. Panuncialman, S. Hammerman, P. Carson, et al., Wound Repair Regen. 18 (2010) 21–25. doi: 10.1111/j.1524-475X.2009.00559.x
    [9] E.C. Murzaku, S. Hayan, B.K. Rao, J. Am. Acad. Dermatol. 71 (2014) 393–395. doi: 10.1016/j.jaad.2014.03.048
    [10] M.T. McCann, J.A. Ozolek, C.A. Castro, et al., IEEE Signal Process. Mag. 32 (2015) 78–87. doi: 10.1109/MSP.2014.2346443
    [11] A. Asif, K. Rajpoot, S. Graham, et al., J. Pathol. 260 (2023) 564–577. doi: 10.1002/path.6168
    [12] M. Dabas, D. Schwartz, D. Beeckman, et al., Adv. Wound Care 12 (2023) 205–240. doi: 10.1089/wound.2021.0144
    [13] P. Mamoshina, A. Vieira, E. Putin, et al., Mol. Pharm. 13 (2016) 1445–1454. doi: 10.1021/acs.molpharmaceut.5b00982
    [14] F. Fereidouni, A. Todd, Y. Li, et al., Biomed. Opt. Express 10 (2019) 6516–6530. doi: 10.1364/boe.10.006516
    [15] Z. Zhou, T. Xu, X. Zhang, TrAC Trends Anal. Chem. 173 (2024) 117613. doi: 10.1016/j.trac.2024.117613
    [16] Z. Xu, R. Chen, C. Hao, et al., Chin. Chem. Lett. 35 (2024) 108975. doi: 10.1016/j.cclet.2023.108975
    [17] X. Wang, X. Hao, J. Yan, et al., Chin. Chem. Lett. 34 (2023) 108230. doi: 10.1016/j.cclet.2023.108230
    [18] Y. Rivenson, H. Wang, Z. Wei, et al., Nat. Biomed. Eng. 3 (2019) 466–477. doi: 10.1038/s41551-019-0362-y
    [19] X. Meng, X. Li, X. Wang, Comput. Math. Methods Med. 2021 (2021) 4244157.
    [20] G. Zhang, H. Hui, B. Ning, et al., Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, 2021, pp. 3492–3495.
    [21] B. Zeng, Y. Lin, Y. Wang, et al., Proceedings of Medical Image Computing and Computer Assisted Intervention (MICCAI 2022), Springer Nature Switzerland, Cham, 2022, pp. 232–241.
    [22] B. Bai, X. Yang, Y. Li, et al., Light Sci. Appl. 12 (2023) 57. doi: 10.1038/s41377-023-01104-7
    [23] K. de Haan, Y. Zhang, J.E. Zuckerman, et al., Nat. Commun. 12 (2021) 4884. doi: 10.1038/s41467-021-25221-2
    [24] D. Li, H. Hui, Y. Zhang, et al., Mol. Imaging Biol. 22 (2020) 1301–1309. doi: 10.1007/s11307-020-01508-6
    [25] D. Lancet, I. Pecht, Biochemistry 16 (1977) 5150–5157. doi: 10.1021/bi00642a031
    [26] R. Zhang, Y. Cao, Y. Li, et al., Med. Image Anal. 80 (2022) 102520. doi: 10.1016/j.media.2022.102520
    [27] Y. Zhang, K. de Haan, Y. Rivenson, et al., Light Sci. Appl. 9 (2020) 78. doi: 10.1093/geroni/igaa057.255
    [28] J. Xiao, Z. Zhou, G. Zhong, et al., Adv. Funct. Mater. 34 (2024) 2315067. doi: 10.1002/adfm.202315067
    [29] B.J. Erickson, P. Korfiatis, Z. Akkus, et al., Radiographics 37 (2017) 505–515. doi: 10.1148/rg.2017160130
    [30] H.S.R. Rajula, G. Verlato, M. Manchia, et al., Medicina 56 (2020) 455. doi: 10.3390/medicina56090455
    [31] R. Yamashita, M. Nishio, R.K.G. Do, et al., Insights Imaging 9 (2018) 611–629. doi: 10.1007/s13244-018-0639-9
    [32] L. Maknuna, H. Kim, Y. Lee, et al., Diagnostics 12 (2022).
    [33] J.D. Jones, K.P. Quinn, J. Investig. Dermatol. 141 (2021) 1367–1370. doi: 10.1016/j.jid.2020.10.010
    [34] L.H. Fan, X.R. Pan, Y. Zhou, et al., J. Appl. Polym. Sci. 122 (2011) 2331–2337. doi: 10.1002/app.34041
    [35] B. Cheng, W. Wu, D. Tao, et al., IEEE Trans. Instrum. Meas. 69 (2020) 6795–6806. doi: 10.1109/tim.2020.2976420
    [36] P. Isola, J.Y. Zhu, T. Zhou, A.A. Efros, Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 5967–5976.
    [37] X. Xue, K. Zhang, K.C. Tan, et al., IEEE Trans. Cybern. 52 (2020) 6217–6231.
    [38] Y.T. Luo, C.F. Chen, S.S. Yeh, Int. J. Comput. Integr. Manuf. 5 (2022) 706–724. doi: 10.1080/0951192x.2021.2022761
    [39] P.L. Vidal, J. de Moura, J. Novo, et al., Biomed. Signal Process. 79 (2023) 104098. doi: 10.1016/j.bspc.2022.104098
    [40] G. Duan, Y. Cao, W. Guo, et al., Proceedings of the 2023 16th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI), 2023, pp. 1–5.
    [41] S. Liu, C. Zhu, F. Xu, et al., Proceedings of the Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2022, pp. 1814–1823.
    [42] R. Janapati, V. Dalal, N. Govardhan, et al., IOP Conf. Ser. Mater. Sci. Eng. 981 (2020) 032019. doi: 10.1088/1757-899x/981/3/032019
    [43] O. Ronneberger, P. Fischer, T. Brox, Proceedings of Medical Image Computing and Computer-Assisted Intervention (MICCAI 2015), Springer International Publishing, Cham, 2015, pp. 234–241.
    [44] J. Ho, A. Jain, P. Abbeel, Adv. Neural Inf. Process. Syst. 33 (2020) 6840–6851.
    [45] J. Mills, J. Hu, G. Min, et al., IEEE Trans. Comput. 72 (2023) 1804–1814. doi: 10.1109/tc.2022.3212631

Article history
  • Published: 2025-03-15
  • Received: 2024-07-08
  • Accepted: 2024-09-23
  • Revised: 2024-09-18
  • Available online: 2024-10-17