Conclusions & Recommendations

All research is subjectively interpreted to an extent, inasmuch as ‘reality’ can only be experienced, ‘known’ and interpreted subjectively – “(critical realism) holds that the world is characterized by a kind of duality in which (intransitive) objects […] have their own existence (and agency) outside of human knowledge and interpretation, but can only be known in their specific contents, rich textures, and nuances in and through (transitive) scientific inquiry and human interpretation/construction” (Hedlund-de Witt, 2012). This research used mixed methods, some of which may have over-relied on subjective interpretation, though an attempt was made to standardise those interpretations by applying a simple categorisation approach to all qualitative data. Some quantitative data could contribute only limited input to the overall findings; however, it was useful in providing further context.

It was felt that looking at literature themes, assigning contextual categories to each theme and then contrasting those with data derived from actual people was a useful way of thinking about how to collect, analyse and interpret such data over time, and from a wider pool of data sources, if future research were to be conducted. By challenging the likely common (mis)interpretations of current literature research (using interpretivist contextual categories) with real feedback, more might become known about what influences those individuals who take account of such data in order to develop policy, technical provision, or training and support for TEL. We might then achieve more effective and sustainable support provision. This is central to the concept of the priorities hierarchy developed in this research, with multiple types of data analysed as a whole.

The results themselves showed quite clearly the top six themes that were most discussed and of common concern or interest. Generally speaking, consideration of institutional support and learning design might be most relevant to technology uptake in learning and teaching, with the provision of allocated time and professional assistance for redesigning curricula (or ‘courses’) to make best use of technologies seen as the most useful way to encourage greater uptake of technology in teaching scenarios.

TEL support, now and in the future

It is becoming widely accepted that providing face-to-face support for academic staff, whether for ICT training or for e-learning training, is no longer sustainable (Taylor & Newton, 2013). This may be particularly true for the metropolitan university, which has less strategically allocated funding, tends to use more part-time or visiting lecturers, and may face other restrictions such as older technical infrastructure. Continuing to develop such training and support, often aimed at a generic ‘everyone’, risks the disengagement of many stakeholders (Moser, 2007), and is therefore not only unsustainable but also ineffective (though this may deserve further research). Two concepts might be considered in relation to training support provision, in the light of unsustainable current models and generic, unfocused training: the ‘zero tolerance training window’, and the idea of ‘smart training’ delivery.

The ‘zero tolerance training window’ refers here to the level of tolerance shown by the implementer of a technology application towards the users of that application, in terms of the time window allowed for training them to use it. The expression describes tolerance towards users by those implementing technological applications, not users’ tolerance of an application’s ease of use or perceived ease of use. This approach is increasingly taken by popular online platforms such as Facebook, Google and others such as Del.icio.us and, used in conjunction with comprehensive online help systems, appears to be growing in popularity as a way of handling change when technology is used to achieve tasks.

The concept of ‘smart training’ here refers to a method by which training can be recommended and delivered according to the personalised, specific requirements of a user, based in part on data gathered from their use of online applications, on profiles built up through job application data gathering, or in some similar way. It would work in the same way as systems such as Spotify, Google personalised search, or an Amazon purchase history.
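As a purely illustrative sketch, not part of the research design itself, the matching logic behind such a ‘smart training’ recommendation might resemble the following. The profile fields, module tags and scoring weights are all hypothetical assumptions introduced here for illustration.

```python
# Hypothetical sketch of a 'smart training' recommender.
# Profile fields, module tags and weights are illustrative assumptions,
# not drawn from the research data or any real institutional system.

from dataclasses import dataclass, field

@dataclass
class StaffProfile:
    job_role: str
    subject_discipline: str
    technical_efficacies: dict             # e.g. {"vle": 2, "lecture_capture": 0} on a 0-5 scale
    tools_used: set = field(default_factory=set)   # e.g. gathered from application usage logs

@dataclass
class TrainingModule:
    title: str
    skill: str           # the technical efficacy the module develops
    target_level: int    # intended starting level for that efficacy
    disciplines: set     # disciplines the module's examples are drawn from

def recommend(profile: StaffProfile, modules: list, limit: int = 3) -> list:
    """Rank modules so that 'the training finds you': favour skills the member
    of staff has not yet developed, pitched at their current level, with
    examples relevant to their discipline."""
    def score(m: TrainingModule) -> float:
        current = profile.technical_efficacies.get(m.skill, 0)
        gap = max(0, 5 - current)                      # bigger skill gap -> more useful
        level_fit = 1.0 if current >= m.target_level else 0.5
        discipline_fit = 1.5 if profile.subject_discipline in m.disciplines else 1.0
        return gap * level_fit * discipline_fit
    return sorted(modules, key=score, reverse=True)[:limit]
```

In practice the scoring would need to be far richer, but the design point is the same as that of the commercial recommenders cited above: the suggestion is derived from data already held about the individual, rather than from a generic training catalogue offered to everyone.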

‘Zero Tolerance Training’

Looking to other areas of computing, it might be of value to investigate approaches now being taken by others to teaching users about technology applications. The example of Facebook, which appears to apply what might be termed a ‘zero tolerance training window’ to its persistent user interface and functionality changes, suggests an area of new research that could further investigate the relationship between a technical application’s purpose, functionality and required skills, and its intended user groups. In the case of Facebook, ‘perceived usefulness’ appears to outweigh ‘ease of use’ (Davis, 1989): the persistent challenge of interface change, relative to previous familiarity, is accepted by the user, and learning new functions, interface layouts and settings is a constant experience with no apparent end. Users accept this condition of using the application because they deem its usefulness to outweigh the inconvenience of constant and persistent ease of use challenges.

This tells us much about new attitudes towards using ‘very useful’ technology (Google employs similar techniques for changes and additions to its services, with some caveats). Both these companies, arguably the largest on the Internet, have also created extensive online support, with often highly effective search mechanisms to help users solve problems while using the technology. Google user support forums are sometimes staffed by Google employees, though rarely in proportion to the number of users, and Facebook likewise rarely supports its user forums, perhaps again because of the sheer volume of user queries, so users are left to help each other. Facebook and Google often do not even announce changes before implementing them. Yet, after a short period of user complaint, changes are embraced and later fully accepted into the user experience as standard. The reliance here is on user ‘self-training’, not on training provided to users by others.

Therefore, it might be said that Facebook and Google have zero tolerance towards their users in relation to any period for learning new interfaces or functions. It should be noted that the time it takes users to become familiar with new interfaces appears to be getting shorter, though this may merit further research. Perceived usefulness and ease of use are demonstrably of significant importance to future training and support models for any technical application, and this would also apply to any TEL, including pedagogical aspects, which might form part of that perceived usefulness. This may prove a rich vein of future research.

Smart Training

Much existing face-to-face training is also, in most cases, offered regardless of participants’ level of technical efficacy (awareness and skills), and this is a further reason to conclude that new approaches are needed to technical and TEL support and training provision. This thinking was central to the development of the Rogers Diffusion of Innovations (RDI) indicator from RG1 data, which added technical efficacy factors to the Rogers Adopter Categories (Rogers, 1995, 2003). In essence this almost formed a separate arm of the research, though it did add value to responses given to the question sets used to define whether a theme was a problem or a benefit. What began to become apparent was that the data being compiled to form each technology profile was potentially useful in itself. If this kind of data were developed on a large enough scale, it might become possible to provide a system whereby ‘the training finds you’, working on the same basis as any smart search, smart media provision or even smart advertising delivery model.

Tate & Klein-Collins (2013) refer in some depth to new information technology systems being used in the USA to help students and others make much better informed decisions about where and what to study for their degrees: “…(T)here are also other online resources that […] match the student to career pathways and educational programs that build on the student’s existing skills and knowledge.” Their example of the Minnesota State Colleges and Universities System, with links to the VETS initiative, which shows former or future military personnel “how their military training can count for credit at Minnesota State Colleges and Universities institutions”, demonstrates the value of ‘smart’ data provision based on prior data held about the individual. Though not directly connected to training and support for TEL, it shows the power of smart data, and how delivering products or services directly relevant and applicable to the individual makes for (potentially) much more effective use of those services or products.

If this approach could be taken to delivering support to individuals for increasing their use of TEL, based on their technical efficacies, subject discipline and other relevant factors, it may be possible to provide them with much more focused and appropriate types of support. The RDI indicator is an attempt to standardise a set of technical efficacies; placed alongside Rogers’ other Adopter Category traits, and with some additional professional information concerning job role and similar details, it could form an overall Technology Profile for each employee, on which training and support needs might in part be based and delivered online as suggestions to the employee.
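To make the shape of such a Technology Profile concrete, the following is a minimal sketch only, assuming hypothetical field names, category labels and a simple suggestion rule; it is not the RDI indicator as implemented in this research.

```python
# Hypothetical sketch of a 'Technology Profile' record combining a Rogers
# adopter category with technical efficacy scores and professional details.
# Field names, labels and the suggestion rule are illustrative assumptions.

from dataclasses import dataclass
from enum import Enum

class AdopterCategory(Enum):
    INNOVATOR = "innovator"
    EARLY_ADOPTER = "early adopter"
    EARLY_MAJORITY = "early majority"
    LATE_MAJORITY = "late majority"
    LAGGARD = "laggard"

@dataclass
class TechnologyProfile:
    employee_id: str
    job_role: str
    adopter_category: AdopterCategory   # from the Rogers-based indicator
    technical_efficacies: dict          # e.g. {"vle": 3, "screencasting": 1}

def suggest_support(profile: TechnologyProfile) -> list:
    """Return online support suggestions pitched to the profile: later adopters
    with low efficacy scores are offered guided, hands-on sessions; earlier
    adopters are offered self-service material."""
    later_adopter = profile.adopter_category in (
        AdopterCategory.LATE_MAJORITY, AdopterCategory.LAGGARD)
    suggestions = []
    for skill, level in profile.technical_efficacies.items():
        if level <= 1:
            style = "guided workshop" if later_adopter else "online walkthrough"
            suggestions.append(f"{style}: getting started with {skill}")
        elif level <= 3:
            suggestions.append(f"self-paced module: extending use of {skill}")
    return suggestions
```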

Once the initial data gathering, application development and implementation costs of such a system had been met, a longer term solution might be developed over time. This might then be shared by all institutions, thereby also sharing costs, and even the support systems and sessions themselves, between institutions, in the way that shared resources for learning materials or library data repositories are discussed in The Tower and the Cloud (various, 2008), or the shared IT and library infrastructures of Columbia University and Cornell University are discussed by Oblinger in Game Changers (2013).

The Problems and Benefits Hierarchy

The Problems and Benefits Hierarchy developed from this research project was a reasonable success, in that it showed a possible way to evaluate factors of concern in relation to technology enhanced learning, to indicate to some degree whether they were perceived as problems or benefits, and to capture the contextual setting for that perception. Though rather primitive in terms of academic rigour or robust analysis mechanisms, the overall hierarchy still has worth as a concept on which to base potential further research.

The PBH possibly poses more questions than it answers, but these are useful questions: how data whose perception can vary so widely (for example, what might actually constitute a good TEL support session) might be measured and interpreted within a more fixed set of criteria. The contextual categories (both the real, imagined, intermittent, persistent and legacy categories of the PBH itself, and the ‘sub-categories’ used to inform those; see Contextual Categories to Theme Correlation) were deemed overall to be the most significant and useful aspect of the criteria used to develop the PBH. They would merit further development so as to be more robust and able to be used accurately and repeatably by individuals not directly connected with this research (i.e. independently of the researcher who developed them). Whilst other response category criteria were used initially to categorise comments from RG2 and RG3, ideally this type of qualitative discourse data would in future be categorised using only the same contextual categories as those used for the literature, which were used here as a secondary match to the response criteria. This would be more difficult, but would also help to develop the contextual categories to work more efficiently across a range of qualitative data environments.

The PBH also highlighted the question of how to quantify and interpret a theme that is both a problem and a benefit. Here the contextual categories again become significant: if the rankings within those categories were separated into problem and benefit contexts, they would define whether the theme being analysed was predominantly a problem or a benefit overall. This study was too limited in nature, and its data too limited in volume, to go deeper into this, but a larger study could, and would likely provide useful information to further inform the interpretation of the most significant problems and benefits, thereby establishing more clearly the strongest driving and restraining forces on the uptake of technology in learning and teaching.
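One way such a predominance calculation might work is sketched below. The weighting of the contextual categories is an illustrative assumption only, not the scheme used in this research, but it shows how comments tagged with the PBH categories could yield a single problem-or-benefit score for a theme.

```python
# Hypothetical sketch of scoring a theme's problem/benefit predominance from
# comments tagged with the PBH contextual categories. The weights are purely
# illustrative, e.g. a 'persistent' issue counts more than an 'imagined' one.

CATEGORY_WEIGHTS = {
    "real": 1.0,
    "imagined": 0.5,
    "intermittent": 0.75,
    "persistent": 1.25,
    "legacy": 0.75,
}

def predominance(comments: list) -> float:
    """Each comment is a (context, category) pair, where context is 'problem'
    or 'benefit'. Returns a score in [-1, 1]: negative means the theme is
    predominantly a problem, positive means predominantly a benefit."""
    problem = sum(CATEGORY_WEIGHTS[cat] for ctx, cat in comments if ctx == "problem")
    benefit = sum(CATEGORY_WEIGHTS[cat] for ctx, cat in comments if ctx == "benefit")
    total = problem + benefit
    return 0.0 if total == 0 else (benefit - problem) / total

# Example: a theme with mostly persistent problems and one real benefit.
sample = [("problem", "persistent"), ("problem", "persistent"),
          ("benefit", "real"), ("problem", "legacy")]
print(predominance(sample))  # negative value -> predominantly a problem
```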

A final conclusion is that the driving and restraining forces on TEL uptake with particular relevance to metropolitan universities may best be identified through the differences between metropolitan universities and other institutions in resources, reputation and mission statement. These factors are significant in that they largely determine funding, purpose and remit, and would be at the core of, for example, the approach taken to the diversity of the student population, the student experience, the resources available for learning, teaching, assessment and student life, and the pedagogical approaches that may or may not encourage the uses and potentials of TEL. A study undertaken only with metropolitan universities might therefore tell us more, and would first require a clear definition of what a metropolitan university is (and is not).
