
The research approach and methods

As the project progressed, it became clear that part of what was being researched was the methods themselves: how a wide variety of data types and sources, such as those used here, could be collated and measured in order to provide a more detailed, three-dimensional picture of technology adoption in learning and teaching. The literature research data, stakeholders' individual digital and technical characteristics, and stakeholders' experiences, opinions and perceptions were all important sources on technology adoption, but they posed fairly complex analytical challenges. As the project progressed, the analytical approaches were adapted so that all of the data could be used in some way and measured as a whole.

Whilst the methods by which the data has been compiled and analysed are at this stage somewhat primitive, they are a first attempt at bringing together this variety of disparate sources of information and data, and at applying logic and systematic scaling to what these sources offer, so that all of them can be measured as a whole.

Jennifer Mason (2006), in ‘Six strategies for mixing methods and linking data in social science research’, discusses an approach to mixed methods which bears great resemblance to that taken in this project: ‘Integrative Logic’, where “studies are designed with several or multiple components […] with a clear sense that these deal with integrated parts of a whole” and where “different methods may be deployed because each is felt to be the best suited to its own specific part of the problem being researched, and because in combination they give a better sense of the whole”. This is, in a nutshell, what is being attempted in this project. That paper notes risks and challenges surrounding the theoretical basis on which multiple data strands are analysed which are very pertinent to this project; but as this project was a pilot ‘beginning’, future research could now specify its theoretical analysis approaches more explicitly, and perhaps more expertly, with the knowledge gained here. As Mason states: “(integrative logic) is a great deal more challenging to put into practice… […] this approach really does call for an explicit and considered theory of data integration […] problems can arise because methods, approaches, and the theories underpinning these, do not always add up to a consensual take on the social world, or what its constituent parts might be, nor how they fit together”. Jacobsen’s relevant PhD work (1998) also used mixed methods, stating: “The strength of a mixed-method, or “multi-instrument approach” (Pelto and Pelto, 1978) to educational and psychological research, lies in its “triangulation” of multiple sources of data (Jaeger, 1988; Lincoln & Guba, 1985).” She goes on to extol a variety of virtues of using both qualitative and quantitative methods for gathering data. Whether this was referred to at the time as Integrative Logic is not known.


Literature Review Discussion

Analysis approach

The literature review in this project took the form of a ‘current academic research paper analysis’, bringing an interpretivist perspective (using empirical techniques) to bear in an attempt to develop a system by which research in this area could be analysed, in order to understand more about the key factors hindering or promoting technology utilisation in learning and teaching contexts. The system is as yet only an early pilot of what might be developed through further work, additional journal paper analysis, and data from more direct sources such as technology profiling of academic staff. The topic is a popular one and will only become more relevant to higher education: even a cursory examination of academic conversations on social media about technology enhanced learning shows that it remains a ‘hot potato’, with heated exchanges on some boards and forums. In this sense the topic ‘has legs’ (Meyer & McNeal, 2011).

The three stages of literature analysis (selection process, theme occurrences and context categorisation) were felt to be largely successful, as they provided a reasonably sound basis against which the stakeholder data could then be measured; the stakeholder data largely confirmed the initial findings of the literature.

Literature Data Selection

To make the selection of literature sources more robust and repeatable, an explicit checklist of criteria could be developed and applied to all selections, which would likely include the following:

  • Date published (in previous 5 years or less)
  • Topic areas to fall within:
    • Web 2.0 Applications in education
    • Social Media in education
    • Online Courses
    • Internet and academic workplace
    • Internet and higher education infrastructure
    • Higher education and the digital society
    • Open Educational Resources (digital)
    • Shared Digital Memory Systems and Archives
    • Pedagogies for the 21st Century
    • IPR, licensing or associated legal aspects concerned with digital spaces
  • Formal stipulation or categorisation as to global territories under review
  • Number of types of paper and topics in any given study ‘sprint’

Because a very large amount of relevant research is available from academic journals and other suitable professional publications, some way of controlling the amount of published research to be analysed at any one time would also need to be established. In project management this might be organised as ‘sprints’ of work, using an Agile methodology. Sprints would work very well in this type of study, as differing approaches to analysis could be applied and then compared, iteratively enhancing the process of analysis.
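The checklist and sprint idea above can be sketched in a few lines of code. This is a minimal illustration, assuming invented paper records and a subset of the topic list; none of the titles, years or field names come from the project's actual data.

```python
# Illustrative candidate papers; titles, years and topics are invented.
papers = [
    {"title": "Social media in seminars", "year": 2012,
     "topic": "Social Media in education"},
    {"title": "OER licensing overview", "year": 2005,
     "topic": "Open Educational Resources (digital)"},
    {"title": "Online course completion", "year": 2013,
     "topic": "Online Courses"},
]

# Subset of the checklist's topic areas, for illustration only.
ALLOWED_TOPICS = {
    "Web 2.0 Applications in education",
    "Social Media in education",
    "Online Courses",
    "Open Educational Resources (digital)",
}

def select_sprints(papers, current_year=2014, max_age=5, sprint_size=2):
    """Apply the date and topic criteria, then batch survivors into sprints."""
    eligible = [p for p in papers
                if current_year - p["year"] <= max_age
                and p["topic"] in ALLOWED_TOPICS]
    # Fixed-size batches give each analysis 'sprint' a controlled workload.
    return [eligible[i:i + sprint_size]
            for i in range(0, len(eligible), sprint_size)]
```

Capping `sprint_size` is what keeps each iteration comparable, so that different analysis approaches can be trialled on batches of equal size.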

Theme Occurrences

The themes were derived from the data itself: interpretative analysis was used to turn the terms and topics most often mentioned into themes. These were effective at bringing numbers into the analysis (the count of occurrences of each theme) and allowed each paper to be placed into every theme it was concerned with or focussed on. Although there was overlap between themes, the system was quite successful at producing the initial literature theme analysis. Central to the overlap was student centred learning, which might therefore be placed at the centre of future ways of analysing importance.
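The occurrence-counting step described above amounts to a simple tally. The sketch below shows one way it could be done; the theme names and paper-to-theme assignments are invented examples, not the project's data.

```python
from collections import Counter

# Hypothetical mapping of papers to the themes each one touches on.
paper_themes = {
    "paper_A": ["student centred learning", "social media"],
    "paper_B": ["student centred learning", "open resources"],
    "paper_C": ["social media"],
}

# Tally how often each theme occurs across all papers.
occurrences = Counter(theme
                      for themes in paper_themes.values()
                      for theme in themes)

# Invert the mapping so each theme lists every paper placed within it,
# allowing a paper to sit in several overlapping themes at once.
papers_by_theme = {}
for paper, themes in paper_themes.items():
    for theme in themes:
        papers_by_theme.setdefault(theme, []).append(paper)
```

`occurrences.most_common()` would then give the ranked list of top themes from which a frequency table like the one linked below could be built.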

Venn Diagram of Theme Overlap

Fig 1: Venn diagram showing student centred learning at the centre of the top theme overlap

The top themes (sixteen in all) were not necessarily saying anything surprising; the most interesting finding was probably the noticeable absence of much discussion in the literature of formal accreditation as a means of encouraging TEL.

> Link to Literature Themes (frequency table)

Contextual Category Analysis

The contextual categories used to give context to theme occurrences in the literature may, in general terms, reflect the ways in which many individuals would interpret the literature research data. So, whilst the categories are not themselves very robustly developed at this stage, they may be representative of how many in higher education would react to what they think such data is telling them.

The contextual categories were again derived from the data itself, by looking at the context of each theme occurrence and assigning it a set of values describing aspects of that context: the context's factuality, its reasoning, and its level of assumption. These values were then matched to the PBH scale, allowing each occurrence to be analysed for its level within that scale of real, imagined, intermittent, persistent or legacy factors. This was arguably the most difficult part of the analysis, and would need much more theoretical underpinning as well as, if possible, a clear and explicit interpretation measurement system. Over time this might prove the most important aspect of the project, as it attempts to evaluate how literature (research) might be ‘interpreted’, and how to measure its validity in a wider picture.
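One way the matching of context values to the PBH scale could eventually be made explicit is as a scoring function. The sketch below is a heavily simplified assumption: the 0-1 ratings, the combining formula, and in particular the ordering of the PBH bands are all invented for illustration, since the project has not yet defined an explicit mapping; this is exactly the theoretical underpinning noted above as still needed.

```python
# Assumed ordering of the PBH bands, from least to most grounded;
# the project does not define this ordering, so it is illustrative only.
PBH_BANDS = ["imagined", "intermittent", "real", "persistent", "legacy"]

def pbh_band(factuality, reasoning, assumption):
    """Map a theme occurrence's context values to a PBH band.

    Each argument is a 0-1 rating of the occurrence's context. High
    factuality and reasoning with low assumption pushes the occurrence
    toward the more grounded end of the (assumed) scale.
    """
    score = (factuality + reasoning + (1 - assumption)) / 3  # range 0..1
    index = min(int(score * len(PBH_BANDS)), len(PBH_BANDS) - 1)
    return PBH_BANDS[index]
```

Even a crude function like this would at least make the interpretation step repeatable and open to critique, which is the point of the explicit measurement system called for above.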

> Link to Contextual Categories and Theme Correlation

> Link to Contextual Categories to PBH for TOP Themes

