Aims, Focus and Methods

Aims and Focus

The aim of the research was to ascertain the factors influencing the uptake of technology in learning and teaching, and which driving and restraining forces might be at work on an individual employee in academia. These factors would then be compared with common influencing factors identified in the literature. A Problems and Benefits Hierarchy was proposed, using five rankings of ‘variables’: real, imagined, intermittent, persistent, legacy.

Primary stakeholder research would contribute to the Problems and Benefits Hierarchy via three distinct stakeholder groups. Acknowledging that a variety of academic roles may exert influence, respondent roles for Research Group 1 (RG1) were not limited to teaching staff, but also included support, research and administration staff who may have influence, albeit indirectly. A small sample of staff (n=8) was sourced and took part. Further input from a second Research Group (RG2) of other academics was gathered using informal techniques (n<20) via LinkedIn and ResearchGate.

A small group of mainly undergraduate students (n=8) from a variety of disciplines formed the third Research Group (RG3), and was used to measure key student expectations of technology utilisation in teaching and study scenarios. This was intended to provide a counterpoint to the staff perspective. The amount of input varied from student to student, and participation fluctuated, as the group was informal and there was no pressure to take part.

The literature review focused on journals and significant texts concerned with aspects of online technology use in learning and teaching, though not limited to practice in teaching scenarios.

Method

Literature Review Analysis

A decision was made to use a core selection of significant books and relevant academic papers for the literature review. Around 10 books and reports were used, plus a selection of up to 15 current papers and articles. Books, journal articles and research papers were selected in part on the basis of their currency: no source older than 2007 was used (except Hayes, 2000, on usability), because of the exponential growth in the use of technology since then. In 2007 the world internet population was less than half what it is today, and is set to double every 5.32 years (Guo-Qing et al., 2008), while smartphone penetration has topped 1 billion users since the advent of the iPhone in 2007, and is set to double by 2015 (Strategy Analytics, 2012).

All texts were also chosen on the basis of their focus on Web 2.0 or semantic technology, social media or other online applications. Other learning technologies such as interactive whiteboards, Second Life or uses of Learning Management Systems were excluded, being generally regarded as Web 1.0 or ‘older’ technology, since “the content-centric course design approach and the standard LMS are no longer meeting the student’s preferences and needs…” (Kusen & Hoic-Bozic, 2014, p. 181) and they are “closed-platform Web 1.0 type technologies conducive to teacher-driven pedagogical approaches and not […] the networked and collective learning possibilities of Web 2.0” (Brown, 2011).

To begin the work of recognising and measuring popular themes and factors commonly discussed in the literature, each theme was noted and allocated either a ‘problem’ or a ‘benefit’ label, summarising the perceived general context and tone of the theme in the literature.

Research Groups and Methods

A variety of methods were used to obtain primary data. These included multiple online questionnaires, an informal ‘secret’ (i.e. closed) Facebook group for the student research, and discussions initiated for this research in an academic LinkedIn group and a small ResearchGate group, both with self-selecting participants, i.e. taking part of their own volition, out of interest in the topic.

Stakeholder research was carried out concurrently with the writing of the review, with iterative development of questions formulated in part with reference to what was emerging from the literature.

Research Group 1: Staff*

Eight members of academic staff, a sample of academics known to the researcher (though some not personally), drawn from a variety of job roles including lecturing, administration, senior management, libraries, research, student affairs, academic development and e-learning support. International universities are included: UK, USA, Australia and Canada (one person), and China.

Research Group 2: Staff*

a) Participants from the LinkedIn discussion group ‘Higher Education Teaching and Learning’ who responded to questions posed for this research and took part of their own volition.

> See this link for the LinkedIn Discussion

b) Participants from the ResearchGate social network who responded to questions posed for this research and took part of their own volition.

> See this link for the ResearchGate Discussion

Research Group 3: Students*

Eight students (seven undergraduate, one postgraduate) from a sample of students known to the researcher, drawn from a possible group of around 50, with a wide variety of subject disciplines: social sciences, computer science, politics, life sciences. The group was multinational (many with English as a second language), including Italian, Bulgarian, Polish, Bangladeshi and English students.

*Further details of all groups are available as evidence that real people took part. These are recorded in the Participants page in the Appendices. This page is password protected for privacy purposes. Please contact me for further information.

Questionnaires

For Research Group 1 (staff), a technique of multiple short questionnaires with quick-fire questions based on a theme was used. This allowed groups of answers to generate analytics and metrics separately, and also focused the mind of the respondent clearly. It is also an effective way of limiting the time required of respondents, who know that each set of questions will take no more than a few minutes to complete. Questionnaires were based in part on iterative theme development from the literature and previous questionnaire responses, as well as themes discussed with the student group (RG3).

Six sets of questions were developed, all following themes concerning the use of technology from the point of view of a single user. This was referred to as developing the ‘Technology Profiles’ of the individuals taking part. (Please refer to Technology Profiles Questions.)

Questionnaires were delivered to respondents in pairs, that is, Sets 1 and 2, Sets 3 and 4, Sets 5 and 6. This avoided respondent ‘fatigue’, which can adversely affect more detailed or prolonged questioning of participants in many circumstances.

Social Media

Social media was chosen as a suitable medium to garner informal feedback from the student group (RG3), and to obtain first-hand responses from staff unknown to the researcher in an immediate and useful way (RG2). Overall it proved extremely successful, especially the LinkedIn group. Although the disadvantage of this approach was that participants were self-selecting, and so do not represent any sample (even a random one), it showed that not only ‘fans’ of Technology Enhanced Learning took part; indeed, several respondents were sceptical about technology and any benefits they could see deriving from it.

A Facebook ‘secret’ group (Facebook terminology for a private group, unseen by the public) was used to discuss many aspects of the topic with students. Students came and went during the discussions, so not every student commented consistently on every question. Two or three students became the core respondents in this group and, interestingly, sometimes held very different points of view (though they were not actively discussing with each other, rather responding individually to my questions and probing).

Approach to data analysis

The work has been undertaken as interpretivist research from a critical realist perspective, where systems and organisations represent a reality which is a constant, but of which there are multiple perceptions (Krauss, 2005). These perceptions are what is of most interest to this research. Following the distinction between the Real, the Actual and the Empirical (Bhaskar, 1978), this research has interpreted the Real as the policies, systems and organisational aspects of IT and e-learning; the Actual (events and behaviours) as the provision of support, chosen pedagogies, and learning and teaching practices; and the Empirical as the staff and student experiences (and perceptions of those), together with the measurement and interpretation of those experiences from the raw data.

The Problems and Benefits Hierarchy used five ranking factors (real, imagined, intermittent, persistent, legacy) which were formulated by the researcher prior to any primary data or literature review findings. The researcher, having some knowledge and experience of the field of TEL, brought this experience to bear in formulating these five rankings, in an attempt to structure a ‘weighting of importance’. The analysis used contextual and response categories, derived from the literature review and from the primary data gathered from RG1, RG2 and RG3, to interpret data into these rankings. Strata of context involved up to three stages of context categorisation.

The research, through both the literature review and the RG primary data, was in part intended to establish whether this hypothetical set of rankings was an accurate or useful way of weighting the importance of themes. The five PBH rankings were therefore not devised directly from literature findings, but were used to develop the overall structure of importance or significance from an interpretivist standpoint. First, each theme from the literature was placed as either a ‘problem’ or a ‘benefit’, based on an initial impression of its overall context. Second, a scale of ‘contextual categories’ was developed for a fuller measurement of each literature theme’s context, making it possible to place the themes into the Problems and Benefits Hierarchy (PBH) against the real, imagined, intermittent, persistent and legacy ranking factors (see Table 3, Findings). Third, a simple ‘response categorisation’ derived from RG2 and RG3 responses was further categorised into the existing contextual categories and used to add to the literature theme contextual category rankings. Both sets of categories (contextual and response) were developed from interpreting primary data and literature findings.
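To make the staged categorisation more concrete, the following is a minimal sketch in Python of how a theme might move through the three stages. The theme names, labels and rankings shown are hypothetical placeholders for illustration only, not the study’s actual data.

```python
# Minimal sketch of the three-stage PBH categorisation described above.
# All theme names, labels and rankings are hypothetical placeholders.

PBH_RANKINGS = ["real", "imagined", "intermittent", "persistent", "legacy"]

# Stage 1: each literature theme receives an initial 'problem' or 'benefit'
# label from the overall tone of its context.
theme_labels = {
    "staff workload": "problem",       # hypothetical example
    "student engagement": "benefit",   # hypothetical example
}

# Stage 2: contextual categories place each theme against one of the five
# PBH ranking factors.
contextual_rankings = {
    "staff workload": "persistent",
    "student engagement": "real",
}
assert all(r in PBH_RANKINGS for r in contextual_rankings.values())

# Stage 3: response categories from RG2/RG3, mapped into the existing
# contextual categories, supplement the literature-derived placement.
def place_in_pbh(theme):
    """Return the (label, ranking) pair locating a theme in the PBH."""
    return theme_labels[theme], contextual_rankings[theme]

for theme in theme_labels:
    label, ranking = place_in_pbh(theme)
    print(f"{theme}: {label} / {ranking}")
```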

Grounded theory and critical realism were investigated through the work of Oliver (2012), whose ‘Critical Realist Grounded Theory: A New Approach for Social Work Research’ offers valuable commentary on combining the paradigm and the methodology, and seems relevant to this project: “Constructivists have used grounded theory to make explicit the assumptions and unspoken knowledge of participants, elicit their meaning-making rather than make claims about an objective reality and develop contextualised theory for practical application”; “(a) critical realist grounded theory would draw inspiration from the hermeneutical (text interpretation) bent and fluidity of the constructivist approach”; and “(c)ritical realist grounded theory would address both the event itself and the meanings made of it” (p. 378).

Response data from Research Group 1 was used to challenge or confirm the core placing of ‘problem’ or ‘benefit’ for each of the top six themes. Additional value and understanding was also derived from knowing the individual technology profile characteristics of respondents in Research Group 1. A ‘Rogers Diffusion of Innovations (RDI) Indicator’ was assigned to each respondent; this would ideally have been matched to theme PBH contextual rankings to build further evidence for their placement interpretation in the PBH. In this study there is not enough data to achieve this effectively; however, respondent RDI is used to give a glimpse of the technical characteristics of stakeholders in university professions, as all key finding responses have been coded.
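As an illustration of how an RDI indicator might sit alongside coded responses, the sketch below uses Rogers’ standard adopter categories; the respondent identifiers, assignments and response codes are hypothetical, not taken from the study’s data.

```python
# Illustrative only: hypothetical respondent identifiers, RDI assignments
# and response codes, using Rogers' standard adopter categories.
from collections import Counter

RDI_CATEGORIES = [
    "innovator",
    "early adopter",
    "early majority",
    "late majority",
    "laggard",
]

# Each RG1 respondent carries an RDI indicator alongside their coded responses.
respondents = {
    "RG1-01": {"rdi": "early adopter", "responses": ["benefit/real"]},
    "RG1-02": {"rdi": "late majority", "responses": ["problem/persistent"]},
}

# A simple technology-profile summary: respondents per adopter category.
profile = Counter(r["rdi"] for r in respondents.values())
print(profile)  # Counter({'early adopter': 1, 'late majority': 1})
```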

The evidence was analysed in this order to build some system of triangulation, though this does not strictly prove results; rather, it builds a possible interpretation of them. See the data analysis architecture diagram below for an overview.

Fig 1: Data Analysis Architecture

To minimise subjective interpretation, categorisation was largely done by looking for simple keywords in the text to match criteria. For the initial problem or benefit placing, the criterion was an indication of the general tone of the text or commentary. For the contextual categories, the criteria included a reference to the past, an assumption, a reference to research, an expert experience, or a conjecture about data which was not evidenced, together with the strength or weakness of each context. For the response categories a similar but simpler technique was used, matched to that data.
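As a sketch of this keyword-matching approach, the fragment below shows how contextual categories could be detected in a piece of text; the keyword lists are invented placeholders, not the criteria actually used in the study.

```python
# Sketch of the simple keyword-matching categorisation described above.
# The category names follow the text; the keyword lists are invented.

CONTEXT_KEYWORDS = {
    "reference to the past": ["previously", "used to", "legacy"],
    "assumption": ["presumably", "probably", "assume"],
    "reference to research": ["study", "survey", "evidence"],
}

def match_contexts(text):
    """Return the contextual categories whose keywords appear in the text."""
    text = text.lower()
    return [
        category
        for category, keywords in CONTEXT_KEYWORDS.items()
        if any(keyword in text for keyword in keywords)
    ]

print(match_contexts("Staff presumably still rely on legacy systems."))
# -> ['reference to the past', 'assumption']
```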

Whilst it is acknowledged that the interpretive contextual categorisation applied to themes to create the initial PBH, from the literature and beyond, was not as robust as would be ideal in a larger study, it represents an attempt to make a start on such a system.