The use of data in educational policy development, and claims of "evidence"-supported decisions, rest on specific concepts and understandings that are not always critically examined and questioned. These may create circumstances that encourage ideological discourses. Data are used for decision making and as a tool for holding institutions and professionals accountable for measured results. In this context, the "evidence" stemming from quantitative (large-scale) studies of schooling and its outcomes is often taken for granted; that is, little consideration is given to the degree to which the data rest on underlying theoretical or other assumptions about "what counts" as indicative of success or failure in schooling. Moreover, other types of evidence (such as qualitative analyses, experimental research, or stakeholder experience) are in most cases not considered equally important or reliable.

This round table will consider ways in which government management of the work of schools over the past thirty years has increasingly used detailed quantitative data-gathering and benchmarking as its tool of improvement, with examples of how such technologies have affected the life of schools, cutting across curriculum and pedagogy and in some cases leading to very public failure of an intended direction of reform. It brings together researchers on policy making and researchers on teaching:

• to discuss the public and political use of scientific evidence,
• to highlight the often untold relation between theoretical modelling and empirical evidence, and its implications for "what counts", and
• to address the question of how we can improve the chances of a broader and better-informed use of the whole range of relevant research.

Contributors will draw on approaches based on literature review, discourse analysis, conceptual analysis and ethnographic studies of school reforms.
For example, one research project is an ethnographic study of a school attempting to reform its middle high school program to improve retention and engagement; in this context, the attempt to treat students as persons is made difficult by the overall framing and culture of school activity in terms of measured quantitative indicators of outcomes. A second example draws on a study of curriculum reforms across Australia over the past 30 years.

The outcomes of the round table will include illumination of the problem of outcome indicators squeezing the life out of teaching, and of the ways in which the non-aligned agendas that are often part of school policy create difficulties for the work of schools. Furthermore, the round table will aim to highlight critical approaches to the process of educational policy development and decision making.
Publication status: Published - 2009
Event: The European Conference on Educational Research 2009 (ECER 2009) - Vienna, Austria
Duration: 28 Sep 2009 → 30 Sep 2009
Other: Theory and Evidence in European Educational Research
Hudson, B., Hopmann, S., Yates, L., & Zgaga, P. (2009). Theory and evidence in didactical research: the politics and epistemology of teaching. The European Conference on Educational Research 2009, Vienna, Austria.