Competence data typically contain missing responses due to omitted or not-reached items. If not appropriately accounted for, missing responses may result in biased estimates of item and person parameters. Although promising approaches for handling missing responses have been developed, they have not been implemented in large-scale studies. Recently, model-based IRT approaches have been developed that offer several advantages over previous strategies. These approaches, however, have not yet been extended to fully address the demands of large-scale assessments. First, the assumptions underlying these models have not been tested for plausibility in empirical data. Second, each model accounts for only one kind of missing response, whereas in practice both kinds usually occur together. Third, the models focus on a single competence domain and a single measurement occasion, whereas large-scale assessments usually require scaling several competence domains and, in longitudinal designs, several measurement occasions simultaneously. Finally, there is evidence that missing responses depend not only on ability but also on demographic and personality variables, yet the proposed models account only for the dependency on ability. In this project we will test and extend these models to address the issues arising in empirical studies and provide guidelines on how to deal with different kinds of missing responses in large-scale studies.
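To illustrate why missing responses that depend on ability (or on the item response itself) can bias estimates, the following is a minimal simulation sketch. It is a hypothetical example, not one of the project's models: it assumes Rasch-generated responses and an invented omission propensity in which persons omit items they would likely answer incorrectly, then compares two common naive scoring rules against the full-data score.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate Rasch responses for a hypothetical competence test.
n_persons, n_items = 2000, 20
theta = rng.normal(0.0, 1.0, n_persons)            # person abilities
b = rng.normal(0.0, 1.0, n_items)                  # item difficulties

p_correct = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))
y = (rng.random((n_persons, n_items)) < p_correct).astype(float)

# Non-ignorable omission (assumed propensity, for illustration only):
# logit P(omit) = 1 - 4 * p_correct, i.e. items a person would likely
# fail are omitted more often.
p_omit = 1.0 / (1.0 + np.exp(-(1.0 - 4.0 * p_correct)))
omitted = rng.random((n_persons, n_items)) < p_omit

true_score = y.mean(axis=1)                        # full-data proportion correct

# Naive treatment 1: score omissions as wrong (over-penalizes, since
# some omitted items would have been answered correctly).
score_as_wrong = np.where(omitted, 0.0, y).mean(axis=1)

# Naive treatment 2: ignore omissions (over-estimates, since the
# omitted items are exactly those the person would likely have failed).
n_answered = (~omitted).sum(axis=1).clip(min=1)
score_ignore = np.where(omitted, 0.0, y).sum(axis=1) / n_answered

print(f"mean true score:   {true_score.mean():.3f}")
print(f"scored as wrong:   {score_as_wrong.mean():.3f}")
print(f"omissions ignored: {score_ignore.mean():.3f}")
```

Under this assumed mechanism the full-data mean lies between the two naive estimates, which is the kind of bias a joint model of responses and omission propensity is meant to remove.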