be considered a viable option for super-resolution, denoising, and bandwidth enhancement of the sinograms produced by TOA systems. These findings are relevant to the biomedical field, as they demonstrate the potential of machine learning models to improve the quality of breast images obtained from TOA systems, which could help increase the accuracy and reliability of medical diagnoses. In future work, it would be interesting to explore the use of transformer neural networks for this task [35], [36]. In addition, an exhaustive hyperparameter sweep could be implemented, including varying the number of convolutional layers in the RRDBs, a test that could not be carried out in this work due to GPU memory limitations.
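The hyperparameter sweep proposed above could be pre-filtered by a parameter budget, so that configurations exceeding the available GPU memory are never trained. The following is a minimal stdlib-only sketch; the function names, the grid values, and the parameter-count heuristic for an RRDB-style dense block are illustrative assumptions, not the architecture actually used in this work:

```python
import itertools

def rrdb_param_count(channels, growth, n_layers, n_blocks):
    # Rough parameter count for n_blocks RRDB-style dense blocks:
    # each dense layer is a 3x3 conv whose input concatenates all
    # previous feature maps, followed by a 1x1 fusion conv.
    per_block = 0
    for i in range(n_layers):
        per_block += (channels + i * growth) * growth * 9 + growth  # 3x3 conv + bias
    per_block += (channels + n_layers * growth) * channels + channels  # 1x1 fusion
    return per_block * n_blocks

def feasible_grid(max_params, grid):
    # Enumerate hyperparameter combinations whose parameter count fits
    # the budget (a crude proxy for GPU memory consumption).
    keys = list(grid)
    feasible = []
    for values in itertools.product(*(grid[k] for k in keys)):
        cfg = dict(zip(keys, values))
        if rrdb_param_count(**cfg) <= max_params:
            feasible.append(cfg)
    return feasible

grid = {
    "channels": [32, 64],
    "growth": [16, 32],
    "n_layers": [3, 4, 5],  # the RRDB depth the sweep would vary
    "n_blocks": [8, 16],
}
configs = feasible_grid(max_params=1_000_000, grid=grid)
```

Each surviving configuration would then be trained and scored (e.g., by PSNR or SSIM on a validation split), turning the memory constraint into an explicit filter rather than a runtime failure.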
ACKNOWLEDGMENTS
This work was funded by the Universidad de Buenos Aires (UBACYT 20020190100032BA), CONICET (PIP 11220200101826CO), and Agencia I+D+i (PICT 2018-04589, PICT 2020-01336).
REFERENCES
[1] R. A. Kruger, W. L. Kiser, D. R. Reinecke, G. A. Kruger, and
K. D. Miller, “Thermoacoustic molecular imaging of small animals,”
Molecular imaging, vol. 2, no. 2, p. 15353500200303109, 2003.
[2] X. Wang, Y. Xu, M. Xu, S. Yokoo, E. S. Fry, and L. V. Wang,
“Photoacoustic tomography of biological tissues with high cross-
section resolution: Reconstruction and experiment,” Medical physics,
vol. 29, no. 12, pp. 2799–2805, 2002.
[3] X. Wang, Y. Pang, G. Ku, X. Xie, G. Stoica, and L. V. Wang,
“Noninvasive laser-induced photoacoustic tomography for structural
and functional in vivo imaging of the brain,” Nature biotechnology,
vol. 21, no. 7, pp. 803–806, 2003.
[4] P. Beard, “Biomedical photoacoustic imaging,” Interface focus, vol. 1,
no. 4, pp. 602–631, 2011.
[5] I. Steinberg, D. M. Huland, O. Vermesh, H. E. Frostig, W. S.
Tummers, and S. S. Gambhir, “Photoacoustic clinical imaging,”
Photoacoustics, vol. 14, pp. 77–98, 2019.
[6] M. Mehrmohammadi, S. Joon Yoon, D. Yeager, and S. Y. Emelianov,
“Photoacoustic imaging for cancer detection and staging,” Current
Molecular Imaging (Discontinued), vol. 2, no. 1, pp. 89–105, 2013.
[7] P. Hai, Y. Qu, Y. Li, L. Zhu, L. Shmuylovich, L. A. Cornelius, and
L. V. Wang, “Label-free high-throughput photoacoustic tomography
of suspected circulating melanoma tumor cells in patients in vivo,”
Journal of biomedical optics, vol. 25, no. 3, p. 036002, 2020.
[8] L. V. Wang and J. Yao, “A practical guide to photoacoustic tomogra-
phy in the life sciences,” Nature methods, vol. 13, no. 8, pp. 627–638,
2016.
[9] R. A. Kruger, R. B. Lam, D. R. Reinecke, S. P. Del Rio, and R. P.
Doyle, “Photoacoustic angiography of the breast,” Medical physics,
vol. 37, no. 11, pp. 6096–6100, 2010.
[10] G. Ku and L. V. Wang, “Deeply penetrating photoacoustic tomog-
raphy in biological tissues enhanced with an optical contrast agent,”
Optics letters, vol. 30, no. 5, pp. 507–509, 2005.
[11] A. Fatima, K. Kratkiewicz, R. Manwar, M. Zafar, R. Zhang, B. Huang,
N. Dadashzadeh, J. Xia, and K. M. Avanaki, “Review of cost
reduction methods in photoacoustic computed tomography,” Photoa-
coustics, vol. 15, p. 100137, 2019.
[12] N. Awasthi, S. K. Kalva, M. Pramanik, and P. K. Yalavarthy, “Vector
extrapolation methods for accelerating iterative reconstruction meth-
ods in limited-data photoacoustic tomography,” Journal of biomedical
optics, vol. 23, no. 7, 2018.
[13] N. Awasthi, R. Pardasani, S. K. Kalva, M. Pramanik, and P. K.
Yalavarthy, “Sinogram super-resolution and denoising convolutional
neural network (srcn) for limited data photoacoustic tomography,”
arXiv preprint arXiv:2001.06434, 2020.
[14] W. Choi, D. Oh, and C. Kim, “Practical photoacoustic tomography:
realistic limitations and technical solutions,” Journal of Applied
Physics, vol. 127, no. 23, p. 230903, 2020.
[15] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley,
S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,”
Advances in neural information processing systems, vol. 27, 2014.
[16] N. Awasthi, G. Jain, S. K. Kalva, M. Pramanik, and P. K. Yalavarthy,
“Deep neural network-based sinogram super-resolution and band-
width enhancement for limited-data photoacoustic tomography,” IEEE
transactions on ultrasonics, ferroelectrics, and frequency control,
vol. 67, no. 12, pp. 2660–2673, 2020.
[17] A. Hauptmann and B. T. Cox, “Deep learning in photoacoustic
tomography: current approaches and future directions,” Journal of
Biomedical Optics, vol. 25, no. 11, 2020.
[18] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S.
Corrado, A. Davis, J. Dean, M. Devin et al., “Tensorflow: Large-
scale machine learning on heterogeneous distributed systems,” arXiv
preprint arXiv:1603.04467, 2016.
[19] A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan,
T. Killeen, Z. Lin, N. Gimelshein, L. Antiga et al., “Pytorch: An
imperative style, high-performance deep learning library,” Advances
in neural information processing systems, vol. 32, 2019.
[20] C. B. Shaw, J. Prakash, M. Pramanik, and P. K. Yalavarthy, “Least
squares qr-based decomposition provides an efficient way of comput-
ing optimal regularization parameter in photoacoustic tomography,”
Journal of Biomedical Optics, vol. 18, no. 8, 2013.
[21] J. Prakash, A. S. Raju, C. B. Shaw, M. Pramanik, and P. K.
Yalavarthy, “Basis pursuit deconvolution for improving model-based
reconstructed images in photoacoustic tomography,” Biomedical op-
tics express, vol. 5, no. 5, pp. 1363–1377, 2014.
[22] A. Sarno, G. Mettivier, F. di Franco, A. Varallo, K. Bliznakova,
A. M. Hernandez, J. M. Boone, and P. Russo, “Dataset of patient-
derived digital breast phantoms for in silico studies in breast computed
tomography, digital breast tomosynthesis, and digital mammography,”
Medical Physics, vol. 48, no. 5, pp. 2682–2693, 2021.
[23] S. Gutta, M. Bhatt, S. K. Kalva, M. Pramanik, and P. K. Yalavarthy,
“Modeling errors compensation with total least squares for limited
data photoacoustic tomography,” IEEE Journal of Selected Topics in
Quantum Electronics, vol. 25, no. 1, pp. 1–14, 2017.
[24] I. Goodfellow, Y. Bengio, and A. Courville, Deep learning. MIT
press, 2016.
[25] O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional
networks for biomedical image segmentation,” in Medical Image
Computing and Computer-Assisted Intervention–MICCAI 2015: 18th
International Conference, Munich, Germany, October 5-9, 2015,
Proceedings, Part III 18. Springer, 2015, pp. 234–241.
[26] Google Developers. Descending into ML: Training and loss — machine
learning. [Online]. Available: https://developers.google.com/machine-
learning/crash-course/descending-into-ml/training-and-loss
[27] X. Wang, L. Xie, C. Dong, and Y. Shan, “Real-esrgan: Training real-
world blind super-resolution with pure synthetic data,” in Proceedings
of the IEEE/CVF International Conference on Computer Vision, 2021,
pp. 1905–1914.
[28] X. Wang, K. Yu, S. Wu, J. Gu, Y. Liu, C. Dong, Y. Qiao, and
C. Change Loy, “Esrgan: Enhanced super-resolution generative ad-
versarial networks,” in Proceedings of the European conference on
computer vision (ECCV) workshops, 2018.
[29] T. Miyato, T. Kataoka, M. Koyama, and Y. Yoshida, “Spectral
normalization for generative adversarial networks,” arXiv preprint
arXiv:1802.05957, 2018.
[30] J. Johnson, A. Alahi, and L. Fei-Fei, “Perceptual losses for real-time
style transfer and super-resolution,” in Computer Vision–ECCV 2016:
14th European Conference, Amsterdam, The Netherlands, October 11-
14, 2016, Proceedings, Part II 14. Springer, 2016, pp. 694–711.
[31] C. Ledig, L. Theis, F. Huszar, J. Caballero, A. P. Aitken, A. Tejani,
J. Totz, Z. Wang, and W. Shi, “Photo-realistic single image
super-resolution using a generative adversarial network,” arXiv
preprint arXiv:1609.04802, 2016.
[32] Laerd Statistics. Pearson product-moment correlation. [On-
line]. Available: https://statistics.laerd.com/statistical-guides/pearson-
correlation-coefficient-statistical-guide.php
[33] MathWorks Help Center. psnr. [Online]. Available: https://nl.mathworks.
com/help/images/ref/psnr.html
[34] U. Sara, M. Akter, and M. S. Uddin, “Image quality assessment
through fsim, ssim, mse and psnr—a comparative study,” Journal of
Computer and Communications, vol. 7, no. 3, pp. 8–18, 2019.
[35] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N.
Gomez, Ł. Kaiser, and I. Polosukhin, “Attention is all you need,”
Advances in neural information processing systems, vol. 30, 2017.
[36] C. Yao, S. Jin, M. Liu, and X. Ban, “Dense residual transformer for
image denoising,” Electronics, vol. 11, no. 3, p. 418, 2022.
Revista elektron, Vol. 7, No. 1, pp. 7-18 (2023)
http://elektron.fi.uba.ar