Firstly, a word about the survey. Even a brief look at it suggested problems. It was, apparently, a survey produced by a professional research group, rather than by an undergraduate up against a hand-in deadline.
The information/brief notes below come from the class session. I need to read through Foddy's (1993) work (hard copy of chapter 2 provided) to understand more.
Foddy explores the key assumptions of behaviourism (a post-positivist school of thought):
- The purpose of a survey is to discover information about the world objectively, as it exists 'out there'. Behaviourists produce questions that allow us to access that view as accurately as possible - behaviourists believe you can reach this truth.
- Everyone has a common and shared understanding of the questions posed; questioning is based on the view of a common vocabulary. Language is unambiguous, and clear questions can provide accurate objective data. Whole heap of problems within here to unpack!
- Questions utilise a stimulus-response model in which they are carefully standardised to ensure that the respondent gives only one response. These are directive questions, with a closed question approach.
- Because questions are standardised and respondents provide only one response, responses to each question can be meaningfully compared; otherwise there is a problem with consistency.
Blumer (1997) states that the SI (symbolic interactionist) approach rests upon the premise that human action always takes place in a situation that confronts(?) the actor. The 'being with' is significant; different interviewers will elicit different responses from people.
The SI perspective:
- Reality is negotiated; we are interpreting the interpretation of others (secondary interpretation). Behaviourists want to control a situation by forms of words. SIs believe that this is folly.
- Attitudes are transient. Responses vary according to time and place.
- The social context is paramount; each person has a self concept and is reflexively shaped and influenced.
Analysing the questionnaire
We were asked to critique it from an SI point of view. These are my thoughts:
- directive tick boxes are used extensively
- Use of examples limits responses
- There are some free text boxes, but they constrain responses as they are only one to two lines long.
- It provides just an overview - there is no depth of understanding
- It is situational - where you complete the questionnaire may affect how you answer it (at home/work)
- questioning on the role at the start of the sheet may set the 'character' of the 'actor' in completing the work: I'm deputy head, so I must write as that role rather than my true feelings
- SIs would not use anything like this!
- questions could be interpreted in a range of ways
- acronyms were not explained
- vague, open questioning
- Categories - what's the difference between 'a little' and 'some'?
- Eliding of categorical data with ordinal data causing confusion and ambiguity (q4)
- What is good is that there are some closed questions.
Interviews may be onerous (transcription etc.) but provide richer data, through listening, interpreting and analysing.
The most interesting part of the activity for me was when the tutor provided the responses to one of the questions ('what image does educational research conjure up for you?') and asked us to draw together themes. This was quite labour intensive, but it was relatively straightforward to pull out the common threads. Of course, I say that, but I have to begin to consider how my own frames of reference can (and indeed do) affect the sorts of themes I draw out, and how I'd write an analysis based on those themes.
The themes I pulled out in this brief activity include:
- a dichotomy between those talking about 'doing' research and those talking about 'being done to'
- How research is used/misused
- questioning the relevance of research
- it's academic, not practice-based
- an imposition on the workload
- for the development of practice
- discussion of the mechanics - the techniques used
- Used to justify change for change's sake
- Deliberate manipulation of findings to suit the audience
- Research is done, but is not implemented
- The excitement of new, positive ideas
- Promoting a positive change
- acknowledging complexity - no clear answers.
This starts by suggesting that a number of important themes emerge (but... who decides that they are important? That also suggests that some are unimportant). It uses numbers within the analysis in an attempt to provide 'evidence' of scientific robustness. The word 'unsurprisingly' is used to explain that a number of respondents look at the technical side. That's rather loaded. Who is it who is/isn't going to be surprised? To be honest, based on the question asked, I'm quite surprised that people answered it with 'technical' words such as 'statistics' or 'graphs, surveys and videos'. Perhaps if the respondents had been given more room to write, more meaningful data could have been produced. The analysis then goes on to talk about the positives drawn from the findings, even though these are much fewer. There is an acknowledgement that some answered 'yes, but...', i.e. qualified their positive outlook with a negative viewpoint. The analysis then admits to the dominant negative views of research. It pulls out the most dominant negative theme as research being done by 'others', not by those on the ground.
This analysis is constructed in such a way as to emphasise the positive over the negative. Certain responses are privileged, that is, they are selected over other, discarded responses. This links back to research concealing as well as revealing. Research is a 'snapshot' which reveals the motivation of the researcher. It is a construct which shows the researchers' views of teachers' views of research. We all do it, but we need to recognise that we do. There is always 'the view from somewhere', and no impartiality, with emphasis, privileging and focussing on particular themes over others.