keywords: Multiple imputation, maximum likelihood, missingness, listwise deletion
In this study, the multiple imputation and maximum likelihood methods of imputing missing data in a randomised complete block design are compared, with the aim of identifying a more efficient technique for imputing missing data. The Multiple Imputation (MI) method imputes missing values repeatedly in order to account for the variability due to imputation, while the Maximum Likelihood (ML) method (the EM algorithm) first estimates the variances, covariances and means from listwise deletion; these estimates are then used to obtain regression coefficients from which the missing values are estimated. The data were collected from the Department of Animal Health and Production Technology, NVRI Vom, and came from an experiment on the effect of temperature and storage length on the protein content of table eggs. The MI and ML methods were compared at 4, 5, 10 and 15 missing observations, with m = 20, 30 and 40 imputations, using SPSS version 25 for the analysis. The ML method performed better than the MI method at four (4) missing observations, except at m = 40 imputations; at all other levels of missing observations, the MI method performed better than the ML method. It was concluded that ML is more efficient when the number of missing observations is small, although MI can perform equally efficiently in that situation when the number of imputations is greatly increased. The MI method performs markedly better than the ML method when the missing observations exceed 10% of the total number of observations.
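The two procedures described above can be sketched in code. The following is an illustrative toy example in Python/NumPy, not the authors' SPSS analysis: the data values, the noise scale used for the imputation draws, and the simple averaging of the m completed datasets are all assumptions made for demonstration. The EM step fills each gap with its conditional (regression-based) mean, matching the description of the ML method; the MI step repeats the fill with random variation to reflect imputation uncertainty.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-column dataset with a few missing entries
# (illustrative numbers only, not the NVRI Vom egg data).
X_raw = np.array([
    [12.1, 11.8],
    [11.5, 11.2],
    [10.9, np.nan],
    [10.2, 9.9],
    [np.nan, 9.1],
    [9.0, 8.6],
], dtype=float)

def em_impute(X, n_iter=50):
    """EM for a multivariate normal: estimate means and covariances
    despite missing entries, then fill each gap with its conditional
    mean (the regression-based estimate the abstract describes)."""
    X = X.copy()
    miss = np.isnan(X)
    # Start from the observed column means.
    col_means = np.nanmean(X, axis=0)
    X[miss] = np.take(col_means, np.where(miss)[1])
    for _ in range(n_iter):
        mu = X.mean(axis=0)
        S = np.cov(X, rowvar=False, bias=True)
        for i in range(X.shape[0]):
            m = miss[i]
            if not m.any():
                continue
            o = ~m
            # Conditional mean of missing given observed
            # (a linear regression on the observed values).
            S_oo = S[np.ix_(o, o)]
            S_mo = S[np.ix_(m, o)]
            X[i, m] = mu[m] + S_mo @ np.linalg.solve(S_oo, X[i, o] - mu[o])
    return X

def multiple_impute(X, m=20):
    """Crude MI sketch: perturb the conditional means with normal noise
    (scale is an assumed fraction of each column's s.d.) and average
    the m completed datasets."""
    filled = em_impute(X)
    miss = np.isnan(X)
    sd = np.sqrt(np.diag(np.cov(filled, rowvar=False, bias=True)))
    draws = []
    for _ in range(m):
        Xi = filled.copy()
        Xi[miss] += rng.normal(0.0, 0.5 * sd[np.where(miss)[1]])
        draws.append(Xi)
    return np.mean(draws, axis=0)

X_em = em_impute(X_raw)
X_mi = multiple_impute(X_raw, m=20)
```

In this sketch the ML route yields a single completed dataset, while the MI route produces m completed datasets whose results are pooled; the study's comparison at m = 20, 30 and 40 corresponds to varying the `m` argument.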