
toacoustic computed tomography,” Phys. Rev. E, vol. 71, p. 016706,
2005.
[15] P. Burgholzer, J. Bauer-Marschallinger, H. Grün, M. Haltmeier, and
G. Paltauf, “Temporal back-projection algorithms for photoacous-
tic tomography with integrating line detectors,” Inverse Problems,
vol. 23, no. 6, pp. S65–S80, Nov 2007.
[16] A. Rosenthal, V. Ntziachristos, and D. Razansky, “Acoustic inversion
in optoacoustic tomography: A review,” Current Medical Imaging
Reviews, vol. 9, pp. 318–336, 2013.
[17] C. Tian, M. Pei, K. Shen, S. Liu, Z. Hu, and T. Feng, “Impact
of system factors on the performance of photoacoustic tomography
scanners,” Phys. Rev. Applied, vol. 13, p. 014001, 2020.
[18] A. Rosenthal, D. Razansky, and V. Ntziachristos, “Fast semi-analytical
model-based acoustic inversion for quantitative optoacoustic tomo-
graphy,” IEEE Transactions on Medical Imaging, vol. 29, no. 6, pp.
1275–1285, Jun 2010.
[19] L. Hirsch, M. G. González, and L. Rey Vega, “On the robustness of
model-based algorithms for photoacoustic tomography: Comparison
between time and frequency domains,” Review of Scientific Instru-
ments, vol. 92, no. 11, p. 114901, Nov 2021.
[20] G. Paltauf, P. R. Torke, and R. Nuster, “Modeling photoacoustic ima-
ging with a scanning focused detector using Monte Carlo simulation
of energy deposition,” Journal of Biomedical Optics, vol. 23, no. 12,
2018.
[21] A. Rosenthal, V. Ntziachristos, and D. Razansky, “Model-based op-
toacoustic inversion with arbitrary-shape detectors,” Medical Physics,
vol. 38, no. 7, pp. 4285–4295, 2011.
[22] A. D. Cezaro and F. T. D. Cezaro, “Regularization approaches for
quantitative photoacoustic tomography using the radiative transfer
equation,” Journal of Computational and Applied Mathematics, vol.
292, pp. 1–14, 2015.
[23] S. R. Arridge, M. M. Betcke, B. T. Cox, F. Lucka, and B. E. Treeby,
“On the adjoint operator in photoacoustic tomography,” Inverse Pro-
blems, vol. 32, no. 11, p. 115012, Oct 2016.
[24] M. G. González, M. Vera, and L. J. R. Vega, “Combining band-
frequency separation and deep neural networks for optoacoustic
imaging,” Optics and Lasers in Engineering, vol. 163, p. 107471,
2023.
[25] A. Hyvärinen, “Estimation of non-normalized statistical models by
score matching,” Journal of Machine Learning Research, vol. 6,
no. 24, pp. 695–709, 2005.
[26] DRIVE, “DRIVE: Digital retinal images for vessel extraction,” 2020.
[Online]. Available: https://drive.grand-challenge.org/
[27] ARIA, “Automated retinal image analysis,” 2006. [Online]. Available:
http://www.damianjjfarnell.com/
[28] RITE, “Retinal images vessel tree extraction,” 2013. [Online].
Available: https://medicine.uiowa.edu/eye/rite-dataset
[29] STARE, “Structured analysis of the retina,” 2000. [Online]. Available:
https://cecas.clemson.edu/~ahoover/stare/
[30] A. Hatamizadeh, H. Hosseini, N. Patel, J. Choi, C. Pole, C. Hoeferlin,
S. Schwartz, and D. Terzopoulos, “RAVIR: A dataset and methodo-
logy for the semantic segmentation and quantitative analysis of retinal
arteries and veins in infrared reflectance imaging,” IEEE Journal of
Biomedical and Health Informatics, 2022.
[31] M. González, M. Vera, A. Dreszman, and L. Rey Vega, “Diffusion
assisted image reconstruction in optoacoustic tomography,” Optics
and Lasers in Engineering, vol. 178, p. 108242, 2024.
[32] K. B. Petersen and M. S. Pedersen, “The matrix cookbook,” 2008.
APPENDIX A
PROOF OF LEMMA 1
While (14) is a standard identity, identities (15) and (16) require an analysis of the factors involved. It is straightforward to see that the distribution of X|K = k is normal under the model assumptions. Its mean and covariance can be computed as:
\[
\mathbb{E}[X \mid K=k] = \mathbb{E}\big[\mathbb{E}[X \mid Y] \mid K=k\big] = A\mu_k \tag{23}
\]
\[
\mathrm{cov}(X \mid K=k) = \mathbb{E}\big[\mathrm{cov}(X \mid Y) \mid K=k\big] + \mathrm{cov}\big(\mathbb{E}[X \mid Y] \mid K=k\big) = \sigma_v^2 I + A \Lambda_k A^{T} \tag{24}
\]
In this way, (16) can be obtained via Bayes' rule. With this technique it is easy to find the joint distribution:
\[
\begin{pmatrix} X \\ Y \end{pmatrix} \bigg|\; K=k \;\sim\; \mathcal{N}\!\left( \begin{pmatrix} A\mu_k \\ \mu_k \end{pmatrix}, \begin{pmatrix} \sigma_v^2 I + A \Lambda_k A^{T} & A \Lambda_k \\ \Lambda_k A^{T} & \Lambda_k \end{pmatrix} \right) \tag{25}
\]
Equation (15) is proved using properties of multivariate normal variables [32].
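As a sanity check, the moment identities (23) and (24) can be verified by Monte Carlo simulation. The Python sketch below is illustrative and not part of the paper; the dimensions, the matrix A, and the parameters µ_k and Λ_k are arbitrary choices made only for the demonstration:

```python
import numpy as np

rng = np.random.default_rng(0)
d_y, d_x, sigma_v = 3, 4, 0.5
A = rng.standard_normal((d_x, d_y))          # arbitrary forward operator
mu_k = rng.standard_normal(d_y)              # arbitrary component mean
L = rng.standard_normal((d_y, d_y))
Lambda_k = L @ L.T + d_y * np.eye(d_y)       # a valid (positive-definite) covariance

# Sample Y | K=k ~ N(mu_k, Lambda_k), then X = A Y + v with v ~ N(0, sigma_v^2 I)
n = 200_000
Y = rng.multivariate_normal(mu_k, Lambda_k, size=n)
X = Y @ A.T + sigma_v * rng.standard_normal((n, d_x))

# Empirical moments of X | K=k versus the closed forms (23) and (24)
mean_emp = X.mean(axis=0)
cov_emp = np.cov(X, rowvar=False)
mean_th = A @ mu_k
cov_th = sigma_v**2 * np.eye(d_x) + A @ Lambda_k @ A.T

print(np.max(np.abs(mean_emp - mean_th)))   # small (Monte Carlo error)
print(np.max(np.abs(cov_emp - cov_th)))     # small (Monte Carlo error)
```

With a sample of this size the empirical mean and covariance agree with (23) and (24) up to the usual Monte Carlo fluctuation.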
APPENDIX B
PROOF OF LEMMA 2
The score function is defined as ψ(x) = ∇_x log p(x). On the one hand, by interchanging the order of differentiation and integration, it can be written as:
\[
\psi(x) = \frac{\nabla_x p(x)}{p(x)} = \frac{1}{p(x)} \int_{\mathbb{R}^{d_y}} p(y)\, \nabla_x p(x \mid y)\, dy \tag{26}
\]
In our model, X|Y = y ∼ N(Ay, σ_v² I), so the gradient can be computed as:
\[
\nabla_x p(x \mid y) = \frac{Ay - x}{\sigma_v^2}\, p(x \mid y) \tag{27}
\]
and the score function becomes
\[
\psi(x) = \frac{1}{p(x)} \int_{\mathbb{R}^{d_y}} p(y)\, \frac{Ay - x}{\sigma_v^2}\, p(x \mid y)\, dy = \frac{A\, \mathbb{E}[Y \mid X=x] - x}{\sigma_v^2} \tag{28}
\]
In this way, (17) is proved by solving (28). On the other hand, the probability density p(x) in this model can be written as:
\[
p(x) = \frac{1}{(2\pi\sigma_v^2)^{d_x/2}} \int_{\mathbb{R}^{d_y}} p(y)\, e^{-\frac{1}{2\sigma_v^2}\lVert x - Ay \rVert^2}\, dy = \frac{1}{(2\pi\sigma_v^2)^{d_x/2}}\, \mathbb{E}\!\left[ e^{-\frac{1}{2\sigma_v^2}\lVert x - AY \rVert^2} \right] \tag{29}
\]
Therefore, the score function can also be expressed as
\[
\psi(x) = \nabla_x \log \mathbb{E}\!\left[ e^{-\frac{1}{2\sigma_v^2}\lVert x - AY \rVert^2} \right] \tag{30}
\]
This completes the proof of the lemma.
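The identity (28) can also be checked numerically. The sketch below is illustrative and not part of the paper: it uses a toy discrete prior on Y (atoms y_i with weights p_i, an arbitrary choice) so that p(x) is an explicit Gaussian mixture, and compares the closed form (A·E[Y|X=x] − x)/σ_v² against a finite-difference gradient of log p(x):

```python
import numpy as np

rng = np.random.default_rng(1)
d_y, d_x, sigma_v = 2, 3, 0.7
A = rng.standard_normal((d_x, d_y))          # arbitrary forward operator

# Toy discrete prior on Y: atoms y_i with uniform weights (illustrative choice)
ys = rng.standard_normal((5, d_y))
ps = np.full(5, 0.2)

def p_x(x):
    # p(x) = sum_i p_i N(x; A y_i, sigma_v^2 I), the mixture form of (29)
    diffs = x - ys @ A.T                                  # (5, d_x)
    expo = -0.5 * np.sum(diffs**2, axis=1) / sigma_v**2
    norm = (2 * np.pi * sigma_v**2) ** (-d_x / 2)
    return np.sum(ps * norm * np.exp(expo))

def score_formula(x):
    # psi(x) = (A E[Y|X=x] - x) / sigma_v^2, posterior weights via Bayes' rule
    diffs = x - ys @ A.T
    logw = np.log(ps) - 0.5 * np.sum(diffs**2, axis=1) / sigma_v**2
    w = np.exp(logw - logw.max())
    w /= w.sum()
    Ey = w @ ys                                           # E[Y | X = x]
    return (A @ Ey - x) / sigma_v**2

# Central finite-difference approximation of grad_x log p(x)
x0 = rng.standard_normal(d_x)
eps = 1e-5
fd = np.array([(np.log(p_x(x0 + eps * e)) - np.log(p_x(x0 - eps * e))) / (2 * eps)
               for e in np.eye(d_x)])

print(np.max(np.abs(fd - score_formula(x0))))   # agreement up to finite-difference error
```

The two quantities agree to numerical precision, which is exactly the statement of (28) for this discrete prior.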
Revista elektron, Vol. 9, No. 2, pp. 76-83 (2025)
http://elektron.fi.uba.ar