Objective To propose a dual-domain CBCT reconstruction framework (DualSFR-Net) based on generative projection interpolation to reduce artifacts in sparse-view cone beam computed tomography (CBCT) reconstruction.

Methods The proposed DualSFR-Net consists of a generative projection interpolation module, a domain transformation module, and an image restoration module. The generative projection interpolation module comprises a sparse projection interpolation network (SPINet) based on a generative adversarial network and a full-view projection restoration network (FPRNet): SPINet interpolates the sparse-view projection data to synthesize full-view projection data, and FPRNet further restores the synthesized full-view projections. The domain transformation module introduces FDK reconstruction and forward projection operators that carry both the forward pass and gradient backpropagation between the projection and image domains. The image restoration module contains an image restoration network (FIRNet) that refines the domain-transformed images to eliminate residual artifacts and noise.

Results Validation experiments on a dental CT dataset demonstrated that DualSFR-Net reconstructs high-quality CBCT images under sparse-view sampling protocols. Quantitatively, compared with the best existing methods, DualSFR-Net improved PSNR by 0.6615 dB and 0.7658 dB and increased SSIM by 0.0053 and 0.0134 under the 2-fold and 4-fold sparse-view protocols, respectively.

Conclusion The proposed generative projection interpolation-based dual-domain sparse-view CBCT reconstruction method effectively suppresses streak artifacts, improves image quality, and enables efficient joint training of the dual-domain networks for sparse-view CBCT reconstruction.
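For readers who think in code, the following PyTorch-style sketch illustrates how the three modules described in the Methods could be chained. It is a minimal illustration only, assuming tiny stand-in convolutional blocks for SPINet/FPRNet/FIRNet and user-supplied differentiable FDK and forward-projection operators; it is not the paper's actual DualSFR-Net implementation.

    # Minimal dual-domain pipeline sketch (hypothetical stand-ins, not the paper's code).
    import torch
    import torch.nn as nn

    def conv_block(ch: int = 1) -> nn.Sequential:
        # Tiny placeholder for the paper's sub-networks (not the real architectures).
        return nn.Sequential(
            nn.Conv2d(ch, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, ch, 3, padding=1),
        )

    class DualDomainSketch(nn.Module):
        def __init__(self, fdk_op, fp_op):
            super().__init__()
            self.spinet = conv_block()   # projection interpolation (GAN generator in the paper)
            self.fprnet = conv_block()   # full-view projection restoration
            self.firnet = conv_block()   # image-domain artifact/noise removal
            self.fdk_op = fdk_op         # differentiable FDK reconstruction operator (assumed supplied)
            self.fp_op = fp_op           # differentiable forward projector; would enter consistency losses, not this forward pass

        def forward(self, sparse_proj_upsampled):
            # 1) Projection domain: interpolate and restore the full-view sinogram.
            proj_full = self.spinet(sparse_proj_upsampled)
            proj_full = proj_full + self.fprnet(proj_full)      # residual refinement
            # 2) Domain transformation: FDK maps projections to the image domain;
            #    a differentiable operator lets gradients flow back to SPINet/FPRNet.
            img = self.fdk_op(proj_full)
            # 3) Image domain: remove residual streaks and noise.
            img_refined = img + self.firnet(img)
            return proj_full, img_refined

    # Toy usage with identity placeholders standing in for the real operators.
    if __name__ == "__main__":
        model = DualDomainSketch(fdk_op=lambda p: p, fp_op=lambda x: x)
        proj = torch.randn(1, 1, 64, 64)     # pretend sinogram (angles x detector bins)
        proj_full, img = model(proj)
        print(proj_full.shape, img.shape)

In an actual implementation the identity lambdas would be replaced by differentiable FDK reconstruction and cone-beam forward-projection operators matching the scanner geometry; the joint training described in the Conclusion relies on those operators being differentiable end to end.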