The research approach and methods

As the project progressed, it became clear that part of what was being researched was the method itself: how a wide variety of data types and sources, such as those used here, could be collated and measured to provide a more detailed, three-dimensional picture of the landscape surrounding technology adoption in learning and teaching. The literature research data, stakeholders' individual digital and technical characteristics, and stakeholders' experiences, opinions and perceptions were all important sources in relation to technology adoption, but they posed fairly complex challenges for analysis. As the project progressed, the analytical approaches were adapted so that all the data could be used in some way and measured as a whole.

Whilst the methods by which the data has been compiled and analysed are at this stage somewhat rudimentary, they are a first attempt to bring together these disparate sources of information and data, and to apply logic and systematic scaling to what they offer, so that measurement as a whole, using all sources, becomes possible.

Jennifer Mason (2006), in ‘Six strategies for mixing methods and linking data in social science research’, discusses an approach to mixed methods which bears a strong resemblance to that taken in this project: ‘Integrative Logic’, where “studies are designed with several or multiple components […] with a clear sense that these deal with integrated parts of a whole” and where “different methods may be deployed because each is felt to be the best suited to its own specific part of the problem being researched, and because in combination they give a better sense of the whole”. This is, in a nutshell, what is being attempted in this project. Mason notes risks and challenges surrounding the theoretical basis on which multiple data strands are analysed which are very pertinent here; as this was a pilot ‘beginning’, future research could specify its theoretical analysis approaches more explicitly, and perhaps more expertly, with the knowledge gained from this project. As Mason states: “(integrative logic) is a great deal more challenging to put into practice… […] this approach really does call for an explicit and considered theory of data integration […] problems can arise because methods, approaches, and the theories underpinning these, do not always add up to a consensual take on the social world, or what its constituent parts might be, nor how they fit together”. Jacobsen’s relevant PhD work (1998) also used mixed methods, stating: “The strength of a mixed-method, or “multi-instrument approach” (Pelto and Pelto, 1978) to educational and psychological research, lies in its “triangulation” of multiple sources of data (Jaeger, 1988; Lincoln & Guba, 1985).” She goes on to extol a variety of virtues of using both qualitative and quantitative methods for gathering data. Whether this was then referred to as Integrative Logic is not known.


Literature Review Discussion

Analysis approach

The literature review in this project took the form of a ‘current academic research paper analysis’, with an interpretivist perspective (using empirical techniques) brought to bear in an attempt to develop a system by which research in this area could be analysed, in order to understand more about the key factors hindering or promoting technology utilisation in learning and teaching contexts. The system is as yet only an early pilot of what might be developed with further work, additional journal paper analysis, and data derived from more direct sources such as technology profiling of academic staff. The topic is a popular one and will only become more relevant to higher education: even a cursory examination of academic conversations about technology enhanced learning on social media shows that it remains a ‘hot potato’, with heated exchanges on some boards and forums. In this sense the topic ‘has legs’ (Meyer & McNeal, 2011).

The three stages of literature analysis (selection process, theme occurrences and context categorisation) were felt largely to be a success, as they provided a reasonably sound basis against which the stakeholder data could then be measured; the stakeholder data largely confirmed the initial findings of the literature.

Literature Data Selection

To make the selection of literature sources more robust and repeatable, a more explicit checklist of criteria could be developed and applied to all selections, which would likely include the following:

  • Date published (in previous 5 years or less)
  • Topic areas to fall within:
    • Web 2.0 Applications in education
    • Social Media in education
    • Online Courses
    • Internet and academic workplace
    • Internet and higher education infrastructure
    • Higher education and the digital society
    • Open Educational Resources (digital)
    • Shared Digital Memory Systems and Archives
    • Pedagogies for the 21st Century
    • IPR, licensing or associated legal aspects concerned with digital spaces
  • Formal stipulation or categorisation as to global territories under review
  • Number of types of paper and topics in any given study ‘sprint’

Because a very large amount of relevant research is available from academic journals and other suitable professional publications, some way of controlling the amount of published research analysed at any one time would also need to be established. In project management this might be referred to as ‘sprints’ of work, using an Agile methodology. Sprints would work very well in this type of study, as differing approaches to analysis could be applied and then compared, iteratively enhancing the process of analysis.
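The checklist and sprint idea could be sketched in code. The following is a minimal, purely illustrative filter: the topic set, field names, review year and sprint size are all assumptions added for the example, not part of any tool actually built in this project.

```python
# Illustrative sketch of checklist-based selection in sprints.
# Topic list, field names and thresholds are hypothetical.
ACCEPTED_TOPICS = {
    "Web 2.0 Applications in education",
    "Social Media in education",
    "Online Courses",
}

def select_papers(papers, sprint_size=10, max_age_years=5):
    """Apply the date and topic criteria, then cap the sprint size."""
    current_year = 2014  # assumed year of review
    selected = [
        p for p in papers
        if current_year - p["year"] <= max_age_years
        and p["topic"] in ACCEPTED_TOPICS
    ]
    return selected[:sprint_size]

papers = [
    {"title": "A", "year": 2012, "topic": "Online Courses"},
    {"title": "B", "year": 2005, "topic": "Online Courses"},  # too old
]
print([p["title"] for p in select_papers(papers)])  # -> ['A']
```

In practice the remaining checklist items (territory, paper type counts) would become further predicates in the same filter, making each sprint's selection explicit and repeatable.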

Theme Occurrences

The themes were derived from the data itself: interpretative analysis was used to turn the terms and topics mentioned most often into themes. These were effective at bringing numbers into the analysis (the number of occurrences of a theme) and allowed each paper to be placed against the themes it was concerned with or focussing on. Though there was overlap between themes, the system was quite successful at creating the initial literature theme analysis. Central to the overlap was student centred learning, which might therefore be placed at the centre of future ways of analysing importance.
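The occurrence-counting step can be sketched mechanically, though it should be stressed that in this project the themes were derived interpretively, not by simple keyword matching. The theme names and indicator terms below are invented for illustration only.

```python
from collections import Counter

# Hypothetical themes and indicator terms, for illustration only;
# the real themes were derived interpretively from the literature.
THEMES = {
    "student centred learning": ["student centred", "learner centred"],
    "institutional support": ["institutional support", "top-down"],
}

def theme_occurrences(paper_texts):
    """Count how many papers touch each theme, and record
    which themes each paper was placed against."""
    counts = Counter()
    per_paper = []
    for text in paper_texts:
        lowered = text.lower()
        matched = {theme for theme, terms in THEMES.items()
                   if any(term in lowered for term in terms)}
        per_paper.append(matched)
        counts.update(matched)
    return counts, per_paper

counts, per_paper = theme_occurrences([
    "We promote student centred design in all modules",
    "Institutional support was largely top-down",
])
print(counts["student centred learning"])  # -> 1
```

A paper may fall under several themes at once (the `matched` set), which is exactly the overlap property discussed above.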

Venn Diagram of Theme Overlap

 Fig 1: Visual representation of student centred learning at the centre of top theme overlap

The top themes (all sixteen) were not necessarily saying anything very surprising; the most interesting finding was probably the noticeable absence of much discussion in the literature about formal accreditation as a means of encouraging TEL.

> Link to Literature Themes (frequency table)

Contextual Category Analysis

The contextual categories used to give context to theme occurrences in the literature may, in general terms, reflect the ways in which many individuals might interpret the literature research data. So, whilst the categories are not themselves very robustly developed at this stage, they might be representative of how many in higher education would react to what they think such data is telling them.

The contextual categories were again derived from the data itself, by looking at the context of a theme occurrence and assigning it a set of values concerning aspects of that context: the context’s factuality, its reasoning and its level of assumption. These values were then matched to the PBH scale, allowing each occurrence to be analysed by its level within that scale: real, imagined, intermittent, persistent or legacy factors. This was arguably the most difficult part of the analysis, and would need much more work in terms of theoretical underpinning, as well as a clear, explicit interpretation measurement system if possible. Over time this might be considered the most important aspect of this project, as it attempts to evaluate how literature (research) might be ‘interpreted’, as well as how to measure its validity in a wider picture.
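An explicit measurement system of the kind called for above might begin as something like the sketch below. The numeric scoring rules and the ordering of the PBH levels are entirely hypothetical, invented here to show what "explicit" could mean; the project itself did not specify such a mapping, which is precisely the gap noted.

```python
# Hypothetical mapping from context values to the PBH scale.
# Scoring rules and level ordering are assumptions for illustration.
PBH_LEVELS = ["imagined", "intermittent", "real", "persistent", "legacy"]

def pbh_level(factuality, reasoning, assumption):
    """Map context values (each scored 0-2) to a PBH level.

    High assumption pulls an occurrence towards 'imagined';
    high factuality and reasoning pull it towards 'legacy'.
    """
    score = factuality + reasoning - assumption  # range -2..4
    score = max(-1, min(3, score))               # clamp to usable band
    return PBH_LEVELS[score + 1]

print(pbh_level(factuality=2, reasoning=2, assumption=0))  # -> legacy
print(pbh_level(factuality=0, reasoning=0, assumption=2))  # -> imagined
```

Making the rules explicit in this way would at least make the interpretation step repeatable and open to challenge, even before any stronger theoretical underpinning is developed.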

> Link to Contextual Categories and Theme Correlation

> Link to Contextual Categories to PBH for TOP Themes


Research Group Discussion

Technical Profiles

(Research Group 1)

Rogers Diffusion of Innovations Roles

Using the Rogers Diffusion of Innovations model to describe characteristics of users in learning and teaching scenarios is not new. Sahin’s ‘Detailed Review of Rogers’ Diffusion of Innovations Theory and Educational Technology-Related Studies Based on Rogers’ Theory’ (2006) reviews a number of studies, with Jacobsen’s work (1998) perhaps being of most relevance to this study. She used a variety of technical and computer competencies to inform her user characteristics, some of which are surprisingly similar to those investigated in this project, though they were not known about at its start. Jacobsen’s most relevant criteria are listed below:

  1. Patterns of Computer Technology Use
  2. Computer Experience
  3. Generalized Self-Efficacy
  4. Participant Information

(Jacobsen, 1998)

Jacobsen has done much subsequent work of a similar nature which would doubtless also be of relevance to this project, though it has not been referred to here due to time constraints.

While this study uses Rogers’ Adopter Categories (innovator, early adopter, early majority, late majority, laggard), Rogers does not specify which technical factors might help define those categories: he defines them only through social or personal characteristics and traits, with no technical specifications at all. This is perhaps no longer adequate in today’s post-digital information revolution setting, and this study has attempted to build on some of Jacobsen’s work in this respect by adding technical profiling factors to the Rogers Adopter Categories.

The development of a scale to allocate a technical aspect to the ‘RDI’ (Rogers Diffusion of Innovations) indicator for each respondent in Research Group 1 (RG1) was a simple way of integrating the Technology Profile data set into the Rogers Adopter Categories. A variety of questions in the question sets involved factors listed in the scale, so responses were used to place each RG1 respondent on it.
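The allocation step amounts to scoring each respondent's technical-profile answers and banding the total into an adopter category. The band thresholds and score ranges below are purely illustrative assumptions; the actual scale used in this project was defined separately.

```python
# Hypothetical band boundaries for the RDI indicator; the real scale
# placed RG1 respondents using their questionnaire answers, and these
# thresholds are assumptions for illustration only.
RDI_BANDS = [
    (16, "Innovator"),
    (12, "Early Adopter"),
    (8, "Early Majority"),
    (4, "Late Majority"),
    (0, "Laggard"),
]

def rdi_indicator(technical_scores):
    """Sum a respondent's technical-profile scores (e.g. 0-4 per factor)
    and return the first Rogers category whose threshold is met."""
    total = sum(technical_scores)
    for threshold, category in RDI_BANDS:
        if total >= threshold:
            return category
    return "Laggard"

print(rdi_indicator([4, 4, 4, 4]))  # -> Innovator
print(rdi_indicator([1, 1, 0]))     # -> Laggard
```

Even this crude banding makes the approximation explicit, so that a larger-sample follow-up study could refine the thresholds rather than re-derive them.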

> Link to Technical Profile RDI Indicator work

This was an approximate exercise and would need further specification if used on a larger sample or for more in-depth analysis; however, it has proved adequate for the purposes of this study. Knowing more about the skills and perceptions of those responding to specific theme issues means more can be understood or validated in relation to their responses. For example, if R1 is an Innovator, their responses can be interpreted in that context, but if R1 is in the Late Majority, the very same responses might sometimes be interpreted very differently.

Question Set Responses

Interpreting and correlating RG1 responses to validate or challenge literature theme placements in the PBH was possible in terms of whether they were problems or benefits. Contextual analysis was not appropriate, as the questions had been set by the researcher, so context was not relevant. Some key quotes were given a context, but not enough data was gathered in this way to analyse more widely, so this might be a further adaptation to consider for future work. Overall, the response data did shed light on what real users actually thought about those issues, and on whether the literature interpretation was accurate. This could prove significant to those involved in change management: in order to innovate practice, policy makers often aim predominantly at innovators, early adopters and the early majority, as it is those stakeholders who are most engaged with change, in this case in technology enhanced learning and teaching. If we rely only on data compiled from unknown sets of users, e.g. the TEL report (2012) or many of the literature papers, which tell us nothing about those taking part beyond, at most, their job role, we cannot know enough to provide adequate training approaches, technical equipment, or content production and sharing techniques.

Adding the RDI Indicator to the responses gave an understanding of who might be saying and thinking what in relation to the Problems and Benefits placed in the hierarchy, in terms of their general technical perceptions profile. A good example is R2 (Late Majority), who was absent from some of the positive and future-facing aspects (the pedagogy and learning design section) yet prominent in others, such as clearly negative views towards shared resources. Though this may sometimes only tell us ‘what we already know’, being able to measure such response differences against an indicator of technical efficacy may lead to more useful support provision, or change management and delivery, tailored to specific needs or particular perceptions.

> Link to Question Set Analysis for Top 6 themes


Qualitative Data Analysis

(Research Group 2 & 3)

LinkedIn and ResearchGate

(Research Group 2)

Gathering qualitative and largely participant self-instigated data was an integral part of this research, providing authentic experiences and perceptions of technology in learning and teaching. As the project was largely taking place online, it seemed logical to use social networks to gather that data, and overall this proved very successful for a project of this size. However, LinkedIn proved a much more useful setting for professional discussion than ResearchGate, as it placed greater emphasis on expert knowledge and on referring to other research; this was not evident in the ResearchGate comments, which went no further than participants mentioning their own current research projects while reporting no findings.

The findings confirmed two of the main theme areas – institutional support (top-down/bottom-up) and effectiveness (learning quality) – but a third strong theme emerged from RG2 that was not very prominent in the literature: ‘what’s in it for me’. This equated to staff (individual) motivation in the literature themes but, unlike there, was a frequent topic in the discussion. From a personal perspective, then, individual advantage is a stronger driving force than the literature alone might acknowledge.

As this was experiential data derived from ‘real people’, and was self-initiated beyond the first question asked to kick off the discussion (unlike the data from RG1), it might actually be a more accurate snapshot to hold up against the literature interpretations, reflecting the initial placings against real people’s opinions.

> Link to RG2 LinkedIn & ResearchGate Analysis

The Students

(Research Group 3)

It was more difficult to engage this group than anticipated, even though they were motivated to help, as the topic seemed uninteresting to them beyond some small commentary about basic provision, or the lack of it, in their Learning Management System. They also appeared generally unmotivated by new ideas for how technology could be used, though there was one knowledgeable suggestion, worth further consideration, about not always using essays and instead drawing on more of what the internet and multimedia might offer, as this encouraged communication in a digital sphere. This failure to see the potential of technology is similar to the lack of ideas often seen in staff in relation to uses of technology for learning and teaching.

The one aspect that did come across clearly was the students’ strong impression of a lack of technical skills amongst staff, whose skills were in general perceived to be much weaker than the students’ own. Student expectations also seemed quite ambivalent, which echoes other studies (‘A course is a course is a course’, Dziuban & Moskal, 2011). Students are mostly concerned with having engaging lecturers who are passionate about their subjects and will act as great mentors, encouraging others into the field.

> Link to RG3 The Students Analysis

Key aspects relevant to Metropolitan Universities

It is somewhat difficult to establish with fixed clarity what is meant by ‘metropolitan university’ in the context of UK higher education. The term is more often used in the USA, where 46% of universities are located in ‘metropolitan’ areas (Goddard & Vallance, 2011). ‘Publicly funded’ universities might be another way of looking at this type of higher education; alternatively, one might turn to a widely used source for a definition: the Wikipedia entry for ‘Urban university’ states that P.E. Mulhollan […] defined a metropolitan university, in its simplest terms, “[as] an institution that accepts all of higher education’s traditional values in teaching, research, and professional service, but takes upon itself the additional responsibility of providing leadership to its metropolitan region by using its human and financial resources to improve the region’s quality of life”.

For the purposes of this research, then, a metropolitan university was considered to be an institution located in a large urban area, with a remit to educate its local population as well as those from farther afield. While striving for research excellence, it would likely also have strong business and knowledge partnerships with the local economy and workforce, preparing students for employment and social contribution, especially in its local area. Goddard & Vallance make several interesting connections about the importance of the renewed purposes of the ‘civic’ university, which may be a more appropriate term in the UK.

A number of themes present in the literature (especially the top six themes analysed) are significantly relevant to metropolitan universities. A variety of aspects of core importance to the existence and purpose of metropolitan universities are present, including the factors listed below:

  • Inclusivity
  • Diversity
  • Accessibility
  • Learner Differences
  • Equivalency
  • Flexibility
  • Student centered learning
  • Student developed learning
  • Personalised learning
  • Work based learning

These terms were used either to define themes themselves or as indicators of contextual category presence (which were then allocated to themes) when interpreting data, and are therefore listed here as the general factors most relevant when considering urban or community universities and colleges.

Diversity and inclusivity might be said to be at the core of metropolitan university life and purpose. For example, several of the texts refer to colleges with a remit of widening participation, or of fulfilling the requirement for much wider access to higher education (not quite the same thing), and to technology as a very significant player in the achievement of those aims (Lynch 2008; Oblinger 2013; Tate & Klein Collins 2013; Altbach et al., 2009). But diverse student populations bring considerations and issues that multiply as the student body grows more diverse, increasing the potential problems when using technology. The use of technology in learning and teaching sometimes throws up major new issues which are not present when technology is not used, principally those of accessibility.

Accessibility can involve complex considerations: digital efficacy; access requirements arising from physical or other impairments; other learner differences; any required equivalency of provision; and relevant intellectual property, data privacy and security legislation. These are often of most significance to universities with very diverse student populations, where diversity itself involves a number of factors: gender, age, work and family commitments, and other differences such as language, culture, health and disability – in other words, ‘non-traditional learners’. Full consideration of these issues in relation to uses of technology in learning and teaching would merit its own research project (or several). In this more limited context it is more suitable to acknowledge them and point to other relevant work known to this researcher, such as that by Taylor & Newton (2013), previously covered in the literature review, and the sources listed below, which would form the basis of any follow-up to this research and have variously been referenced in this project or used as background advice.

  • Tarhini, A., Hone, K. & Liu, X., 2013, User Acceptance Towards Web-based Learning Systems: Investigating the role of Social, Organizational and Individual factors in European Higher Education, UK, Procedia Computer Science 17 (2013) 189–197 (for computer self-efficacy, usability, flexibility)
  • Wattenberg, T., 2004, Beyond legal compliance: Communities of advocacy that support accessible online learning, Internet and Higher Education 7 (2004) 123–139 (accessibility online)
  • Kanwar, A. & Uvalic-Trumbic, S. (ed), 2011, A Basic Guide to Open Educational Resources (OER), UK, Commonwealth of Learning (for Open Educational Resources organisational planning concerns, policy directives and advice, intellectual property issues)
  • Beetham, H. & Sharpe, R. (ed), 2007, Rethinking Pedagogy for a Digital Age: Designing and delivering e-learning, UK, Routledge, Taylor and Francis Group (for learner differences, design and pedagogy issues for equivalency and efficacy of access)
  • Sharpe, R., et al., 2009, Learners’ Experiences of E-learning Synthesis Report: Explaining Learner Differences, UK, JISC (in-depth learner differences)
  • Dabbagh, N. & Kitsantas, A., 2011, Personal Learning Environments, social media, and self-regulated learning: A natural formula for connecting formal and informal learning, Internet and Higher Education (2011) (for personalised learning)

Issues surrounding equivalency, equity of access and legislative requirements also impact other themes noted from the literature, such as cost and policy – institutional as well as national and even international. These potentially have more impact on a metropolitan university, which may have the widest remit to educate both local and internationally diverse student populations, yet the smallest and perhaps most precarious funding. Metropolitan universities also often bear the brunt of national policy, reflecting changing ideologies as governments and national priorities change.

Student centred learning, including student developed, work based and personalised learning, may also be of greater significance to metropolitan universities, i.e. those most concerned with professional skills degrees, which often benefit from such approaches. The metropolitan university is at the forefront of these approaches, and could perhaps increase learning quality (another top six theme) through more effective use of the technologies suited to such purposes (for example Dabbagh & Kitsantas, 2011).

These (student centred) issues echo across multiple themes found in the literature review and in the other data from this research, and it is difficult to single out any one theme above another in this respect. However, the fact that at least four of the top six themes analysed are directly relevant to this area is itself demonstrative of the impact of technology on these pedagogical approaches. In contrast, data derived from the literature review and from RG2 and RG3 gave a worrying picture of student input and engagement in the learning design process. For example, Brown (2011) reported that student influence was minimal in encouraging academics to utilise web 2.0 applications; from RG2, only two comments were made about student input, one of them negative; and RG3 were not especially enthusiastic about technology use in their learning either. Whether this is fully relevant to student centred learning as a pedagogy is questionable; it does, however, show that on the ground ‘the jury is still out’.