In my problem, I have M observations y_i (each y_i is a vector of N elements) and a set of parameters \theta_i (a vector of 3 elements) for each observation. A deterministic model links the parameters \theta_i to the i-th observation y_i. My objective is to estimate the parameters \theta_i for every observation.
My model is built as follows:
- Each observation is independent and has its own likelihood function p(y_i | \theta_i).
- For each observation i, the parameters \theta_i have a multivariate Gaussian prior with parameters \mu and \Sigma_\mu; \mu and \Sigma_\mu are the same for every observation i.
- \mu and \Sigma_\mu follow some hyperprior p(\mu, \Sigma_\mu).
Hence, the posterior for my model is:
p(\theta_{1:M}, \mu, \Sigma_\mu \mid y_{1:M}) \propto p(\mu, \Sigma_\mu) \prod_{i=1}^{M} p(y_i \mid \theta_i) \, p(\theta_i \mid \mu, \Sigma_\mu)
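To fix ideas, here is a minimal sketch of how I set up the hierarchical part (placeholder sizes and data; I am assuming PyMC3 with theano here, and the LKJ prior is only an example choice for p(\mu, \Sigma_\mu)):

```python
import numpy as np
import pymc3 as pm

M, N = 50, 20                # hypothetical sizes, just for the sketch
y = np.random.randn(M, N)    # placeholder for my real data

with pm.Model() as model:
    # Hyperpriors p(mu, Sigma_mu); the LKJ choice is only an example
    mu = pm.Normal("mu", mu=0.0, sigma=10.0, shape=3)
    chol, _, _ = pm.LKJCholeskyCov(
        "chol", n=3, eta=2.0, sd_dist=pm.HalfNormal.dist(1.0), compute_corr=True
    )
    # One theta_i per observation i, all sharing mu and Sigma_mu
    theta = pm.MvNormal("theta", mu=mu, chol=chol, shape=(M, 3))
```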
The likelihood p(y_i | \theta_i) for each observation is ad hoc, so it is implemented through its logp with the DensityDist function.
What do you think would be the best strategy to implement the product of the likelihood functions:
- Implement it with one DensityDist and one big logp function, inside which I do the sum of \log p(y_i | \theta_i) for i = 1..M (first sketch below), or
- Define a generic function for the logp of each observation i with DensityDist, and then do the sum with the Potential function (second sketch below)?
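For the first option, this is roughly what I have in mind, continuing the model above. per_obs_logp is only a runnable stand-in for my real ad-hoc logp:

```python
import theano.tensor as tt

def per_obs_logp(y_i, theta_i):
    # Runnable stand-in for my ad-hoc logp: a deterministic model in theta_i
    # plus a Gaussian noise term; my real logp is different.
    pred = theta_i[0] + theta_i[1] * np.arange(N) + theta_i[2]
    return tt.sum(-0.5 * (y_i - pred) ** 2)

with model:
    def big_logp(y_all):
        # Sum of log p(y_i | theta_i) for i = 1..M, all inside one DensityDist;
        # theta is closed over from the model defined above.
        return tt.sum(tt.stack([per_obs_logp(y_all[i], theta[i]) for i in range(M)]))

    y_like = pm.DensityDist("y_like", big_logp, observed=y)
```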
The second solution raises another question: how do I sum the M likelihood terms within the Potential function?
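Here is my current guess at how that sum could look (same placeholder per_obs_logp as above, and as an alternative to the DensityDist version rather than in addition to it):

```python
with model:
    # One scalar log-likelihood term per observation, stacked and summed,
    # then added to the model logp through a single Potential
    logp_terms = tt.stack([per_obs_logp(y[i], theta[i]) for i in range(M)])
    pm.Potential("likelihood", tt.sum(logp_terms))
```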
Thanks!