Research Group Discussion
(Research Group 1)
Rogers Diffusion of Innovations Roles
Using the Rogers Diffusion of Innovations model to refer to characteristics of users in learning and teaching scenarios is not new. Sahin’s ‘Detailed Review of Rogers Diffusion of Innovations Theory and Educational Technologies Studies based on Rogers Theory’ (2006) reviews a number of studies, with Jacobsen’s work (1998) perhaps being of most relevance to this study. She used a variety of technical and computer competencies to inform her user characteristics, some of which are surprisingly similar to those investigated in this project, though they were not known about at its start. Jacobsen’s most relevant criteria are listed below:
- Patterns of Computer Technology Use
- Computer Experience
- Generalized Self-Efficacy
- Participant Information
Jacobsen has done much subsequent work of a similar nature, which would doubtless also be of relevance to this project, though it has not been referred to here due to time constraints.
While this study uses the idea of Rogers’ Adopter Categories (innovator, early adopter, early majority, late majority, laggard), exactly which technical factors might help define those categories is not present in Rogers’ work: he defines the categories only through social or personal characteristics and traits, with no technical specifications at all. This is perhaps no longer adequate in today’s post-digital-revolution setting, and this study has attempted to build on some of Jacobsen’s work in this respect by adding technical profiling factors to the Rogers Adopter Categories.
Developing a scale to allocate a technical aspect to the ‘RDI’ (Rogers Diffusion of Innovations) indicator for each respondent in Research Group 1 (RG1) was a simple way of integrating the Technology Profile data set into the Rogers Adopter Categories. A variety of questions in the question sets involved the factors listed in the scale, so responses were used to place each RG1 respondent on it.
> Link to Technical Profile RDI Indicator work
This was an approximate exercise and would need further specification if used on a larger sample or for more in-depth analysis; however, for the purposes of this study it has proved adequate. By knowing more about the skills and perceptions of those responding to specific theme issues, more can be understood or validated about their responses. For example, if R1 is an Innovator, their responses can be interpreted in that context, but if R1 is a Late Majority adopter, the very same responses might sometimes be interpreted very differently.
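The allocation described above can be sketched in code. This is a hypothetical illustration only: the factor names (drawn from Jacobsen's criteria), the 1–5 response scale, and the category thresholds are all assumptions, not the study's actual scale.

```python
# Hypothetical sketch of an RDI indicator scale: averaged technology-profile
# responses are normalised to 0..1 and mapped onto Rogers' adopter categories.
# Factor names, scale range, and thresholds are illustrative assumptions.

CATEGORIES = [          # (lower threshold on normalised score, category)
    (0.9, "Innovator"),
    (0.7, "Early Adopter"),
    (0.5, "Early Majority"),
    (0.3, "Late Majority"),
    (0.0, "Laggard"),
]

def rdi_indicator(responses: dict[str, int], max_score: int = 5) -> str:
    """Map technology-profile responses (each scored 1..max_score) to a category."""
    mean = sum(responses.values()) / len(responses)
    normalised = (mean - 1) / (max_score - 1)  # rescale 1..max_score to 0..1
    for threshold, category in CATEGORIES:
        if normalised >= threshold:
            return category
    return "Laggard"

# Example: a respondent reporting high technical confidence across factors
r1 = {"computer_experience": 5, "self_efficacy": 4, "patterns_of_use": 5}
print(rdi_indicator(r1))  # a high-scoring profile maps to "Innovator"
```

In practice the study's placement also drew on qualitative responses, so a purely numeric mapping like this would only be a starting point.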
Question Set Responses
Interpreting and correlating RG1 responses made it possible to validate or challenge the literature theme placements in the PBH, in terms of whether they were problems or benefits. Contextual analysis was not appropriate, as the questions had been set by the researcher rather than arising from the respondents’ own contexts. Some key quotes were given a context, but not enough data was gathered in this way to analyse more widely, so this might be a further adaptation to consider for future work. Overall, the response data did shed light on what real users actually thought about those issues, and on whether the literature interpretation was accurate. This could prove significant for those involved in change management: in order to innovate practice, policy makers often aim predominantly at innovators, early adopters and the early majority, as those stakeholders are most engaged with change, in this case in technology enhanced learning and teaching. If we rely only on data compiled from unknown sets of users, e.g. the TEL report (2012) or the many literature papers which tell us nothing about their participants beyond, at most, a job role, we cannot know enough to provide adequate training approaches, technical equipment, or content production and sharing techniques.
Adding the RDI Indicator to the responses gave an understanding of who might be saying and thinking what in relation to the Problems and Benefits placed in the hierarchy, in terms of their general technical-perceptions profile. A clear example is R2 (Late Majority), who was either absent from some of the positive, future-facing aspects (the pedagogy and learning design section) or prominent in others, such as clearly negative views towards shared resources. Though this may sometimes only tell us ‘what we already know’, being able to measure such response differences against an indicator of technical efficacy may lead to more useful support provision, or change management and delivery, being offered in relation to specific needs or particular perceptions.
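The kind of pattern described above (e.g. R2's absence from some themes and prominence in others) could be surfaced by tallying coded responses per adopter category. A minimal sketch follows; the respondent rows and theme codes are invented for illustration and are not the study's data.

```python
# Hypothetical sketch: tally coded theme responses by RDI category so that
# category-level patterns (absence from one theme, prominence in another)
# become visible. All rows below are illustrative, not real study data.
from collections import defaultdict

# (respondent, RDI category, theme code, polarity)
responses = [
    ("R1", "Innovator",      "pedagogy_learning_design", "benefit"),
    ("R2", "Late Majority",  "shared_resources",         "problem"),
    ("R2", "Late Majority",  "institutional_support",    "problem"),
    ("R3", "Early Majority", "shared_resources",         "benefit"),
]

tally: dict[str, dict[str, int]] = defaultdict(lambda: defaultdict(int))
for _respondent, category, theme, polarity in responses:
    tally[category][f"{theme}:{polarity}"] += 1

for category, counts in sorted(tally.items()):
    print(category, dict(counts))
```

Even on a small sample like RG1, a table of this shape makes it easy to see which categories are silent on which themes.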
> Link to Question Set Analysis for Top 6 themes
Qualitative Data Analysis
(Research Group 2 & 3)
LinkedIn and ResearchGate
(Research Group 2)
Gathering qualitative and largely participant-instigated data was an integral part of this research, in order to capture authentic experiences and perceptions of technology in learning and teaching. As the project was largely taking place in an online environment, it seemed logical to utilise social networks to gather that data, and overall this proved very successful for a project of this size. However, LinkedIn proved a much more useful setting for professional discussion than ResearchGate, as it placed greater emphasis on expert knowledge and on referencing other research; this was not evident in the ResearchGate comments, which went no further than participants mentioning their own current research projects without reporting findings.
The findings confirmed two of the main theme areas, institutional support (top-down/bottom-up) and effectiveness (learning quality), but a third strong theme emerged from RG2 that was not very prominent in the literature: that of ‘what’s in it for me’. This equated with staff (individual) motivation in the literature themes but, unlike there, was a frequent topic in the discussion. From a personal perspective, then, individual advantage is a stronger driving force than the literature alone might acknowledge.
As it was experiential data derived from ‘real people’, and self-initiated beyond the first question that kicked off the discussion (unlike the data from RG1), it might actually be a more accurate snapshot to hold up against literature interpretations, reflecting the initial theme placings against real people’s opinions.
> Link to RG2 LinkedIn & ResearchGate Analysis
(Research Group 3)
It was more difficult to engage this group than anticipated, even though they were motivated to help, as the topic seemed uninteresting to them beyond brief commentary on the basic provision, or lack of it, in their Learning Management System. They also appeared generally unmotivated about new ideas for how technology could be used, though there was one knowledgeable suggestion worth further consideration: not always using essays, and utilising more of what the internet and multimedia might offer, as this encourages communication in a digital sphere. This inability to see the potential of technology is similar to the lack of ideas often seen among staff in relation to uses of technology for learning and teaching.
The one aspect that did come across clearly was students’ strong impression of a lack of technical skills among staff, which they generally perceived as much weaker than their own. Student expectations also seemed quite ambivalent, echoing other studies (‘A course is a course is a course’, Dziuban & Moskal, 2011): they are mostly concerned with having engaging lecturers who are passionate about their subjects and will act as great mentors, encouraging others into the field.
> Link to RG3 The Students Analysis
To read more from this section, please use the page navigation below