The Best Fit Bayesian Hierarchical Generalized Linear Model Selection Using Information Complexity Criteria in the MCMC Approach


Creative Commons License

Ebrahim E., Cengiz M. A.

Journal of Mathematics, vol.2024, no.1459524, pp.1-14, 2024 (SCI-Expanded)

  • Publication Type: Article / Full Article
  • Volume: 2024, Issue: 1459524
  • Publication Date: 2024
  • DOI: 10.1155/2024/1459524
  • Journal Name: Journal of Mathematics
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED)
  • Pages: pp.1-14
  • Affiliated with Ondokuz Mayıs University: Yes

Abstract

Both the frequentist and Bayesian schools of statistics have developed statistical tools and model-selection procedures for collected data or measurements. Model-selection approaches have advanced because it is difficult to compare complicated hierarchical models in which the linear predictors vary by grouping variable and the number of model parameters is not well defined. Many regression model-selection criteria are built on the maximum likelihood (ML) point estimate of the parameters and the log-likelihood of the dataset. This paper applies the information complexity criterion (ICOMP), the Bayesian deviance information criterion (DIC), and the widely applicable information criterion (WAIC) of the BRMS to hierarchical linear models fitted to repeated-measures data, using a simulation study and two real-data examples. The Fisher information matrix for the Bayesian hierarchical model, considering both fixed and random parameters under maximum a posteriori estimation, is derived. Using Gibbs sampling and hybrid Hamiltonian Monte Carlo approaches, six different models were fitted to three application datasets, and the best-fitting candidate models were identified for each dataset under the two MCMC approaches. The Bayesian hierarchical (mixed-effects) linear model with random intercepts and random slopes, estimated with the Hamiltonian Monte Carlo method, best fits the two application datasets. Information complexity (ICOMP) is a better indicator of the best-fitting models than DIC and WAIC. In addition, the information complexity criterion showed that hierarchical models estimated with gradient-based Hamiltonian Monte Carlo fit best and converge faster than those estimated with the gradient-free Gibbs sampling method.
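As a rough illustration of how criteria like those compared in the abstract are computed from MCMC output, the sketch below (a hypothetical NumPy example, not the paper's code) evaluates WAIC from a matrix of pointwise log-likelihood draws and Bozdogan's entropic complexity term C1, which ICOMP adds to the deviance; the simulated log-likelihood matrix and the covariance matrix are illustrative assumptions.

```python
import numpy as np

def waic(log_lik):
    """WAIC from an (S draws x N observations) log-likelihood matrix:
    lppd   = sum_i log( mean_s exp(log_lik[s, i]) )
    p_waic = sum_i var_s( log_lik[s, i] )
    WAIC   = -2 * (lppd - p_waic)
    """
    S = log_lik.shape[0]
    # log-mean-exp over draws, computed stably
    lppd = np.sum(np.logaddexp.reduce(log_lik, axis=0) - np.log(S))
    p_waic = np.sum(np.var(log_lik, axis=0, ddof=1))
    return -2.0 * (lppd - p_waic)

def c1_complexity(cov):
    """Bozdogan's C1 entropic complexity of a covariance matrix:
    C1(Sigma) = (s/2) log(trace(Sigma)/s) - (1/2) log det(Sigma).
    It is zero when all eigenvalues are equal and grows with their spread."""
    s = cov.shape[0]
    _, logdet = np.linalg.slogdet(cov)
    return 0.5 * s * np.log(np.trace(cov) / s) - 0.5 * logdet

# Toy demonstration with simulated posterior draws (illustrative values only)
rng = np.random.default_rng(0)
log_lik = rng.normal(-1.0, 0.1, size=(1000, 50))  # S=1000 draws, N=50 obs
print("WAIC:", waic(log_lik))

cov = np.array([[2.0, 0.3],
                [0.3, 1.0]])  # stand-in for an estimated inverse Fisher information
print("C1  :", c1_complexity(cov))
# ICOMP(IFIM) would then be -2*logL(theta_hat) + 2*C1(inverse Fisher information)
```

The C1 term penalizes correlation and unequal scaling among parameter estimates, which is how ICOMP accounts for model complexity beyond a simple parameter count.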