The E-Learning High School Project in Jamaica and its Effects on Students' Attainment at the End of Compulsory Schooling

Granville William Pitter

Institute of Education, University College London

IN PARTIAL FULFILMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF EDUCATION

2022

Declaration

I, Granville Pitter, declare that, except where I have quoted from or referred to other works and sources, the information presented in this thesis for evaluation is entirely my own.

Word count (exclusive of appendices, the list of references, diagrams, figures and tables): 39,159 words

Abstract

Integrating ICTs into schools is an important part of educational reform worldwide. Several studies have been published about the effectiveness of e-learning and ICTs in the classroom and, by extension, in high-stakes school-leaving examinations. Following the implementation of e-learning projects in developing countries such as Jamaica, a significant number of projects used different measures to determine goal achievement. This study evaluated the effects of the E-Learning High School Project Pilot (e-LHSPP) on students' attainment at the end of compulsory schooling. A total of 68 schools, 26 pilot and 42 non-pilot schools, were included in the study. Administrative archival quantitative indirect data and documents were collected from the Caribbean Examinations Council (CXC), which administers the Caribbean Secondary Education Certificate (CSEC), the Ministry of Education, and other government agencies for the piloted subjects of English Language, Mathematics, Chemistry, Biology, and Information Technology. An evaluative research design was used: a quantitative approach analysed the indirect data, and pre-existing administrative archival documents served as the data for the document analysis. The quantitative results revealed that the e-LHSPP produced very small increases in students' performance of less than 1 average GPA point in Mathematics, Chemistry, and Information Technology in 2009, and in Chemistry and Information Technology in the spillover year 2010. The results for both years were not statistically significant, and the effect sizes for each of the subjects were small. The document analysis produced five themes: (1) technological support for success, (2) key stakeholders' involvement and outcome, (3) institutions' contribution to the e-LHSPP, (4) supervision of the project, and (5) the resources available to the e-LHSPP. The supervision of the e-LHSPP at all levels needed improvement; the ICT equipment was, for the most part, adequate, but there were shortcomings in student preparation, administrative inefficiencies between agencies, ICT integration training for teachers, and online access to educational databases.

Keywords: e-learning and ICT; Difference in Differences (DiD); administrative archival indirect data; evaluation.

Impact Statement

Using ICT in Secondary Schools/Classrooms

Over the past decade and a half, Jamaica has introduced e-learning/ICT in secondary schools to improve the academic performance of students aged 15 to 16 in high-stakes school-leaving examinations. The E-Learning High School Project (e-LHSP) began in 2006 but was delayed for over a year. The e-LHSP Pilot formed the first phase, and five subjects were piloted in 26 schools. The project sought to improve the quality of education, enhance the learning experience, and improve the level of passes in the Caribbean Examinations Council's Caribbean Secondary Education Certificate examinations.
Since the pilot-phase examination results were not originally measured or evaluated when ICT was introduced, this research aims to address that gap. Two research questions, the second with a sub-question, were proposed: 1) What are the effects on students' attainment when the e-LHSPP was implemented in schools? 2) Were the components and design of the e-LHSPP appropriate for successful piloting? and 2A) What were the reported issues affecting students' attainment during the e-LHSPP? I used the Difference in Differences (DiD) statistical technique to analyse the administrative archival quantitative indirect data, and secondary documents to provide the context for the quantitative results.

Benefits inside academia

This thesis highlights the importance of pilot studies in high-stakes school-leaving examinations. In Jamaica and other countries, a lacuna exists in the use of administrative archival quantitative indirect data and documents as data to evaluate high schools' e-learning/ICT pilot projects at the end of compulsory schooling, particularly with respect to students' academic performance. The thesis evaluated the e-LHSPP, which provided crucial lessons relating to the pitfalls, successes, and proof of the viability of the e-LHSP. Three major pitfalls related to the Critical Success Factors (CSFs): the training of teachers, administrative bungling, and the preparation of students. The schedule for teacher training in ICT skills and technology integration was unrealistic and thus ineffective because insufficient time was given for testing, measurement, evaluation, and research in the classroom before the examinations. The second major pitfall comprised procurement problems and delays in vital resources. The third major pitfall was the students' preparation for the pilot. One of the logical frameworks for the e-LHSP notes student training as a verifiable output, yet none of the official reports about the e-LHSPP mentioned students' training, which is a critical success component. The successes of the e-LHSPP were seen in technology acquisition, instructional materials acquisition (though the materials were not locally developed), and infrastructure for the pilot schools. This thesis is unique in that it provides comprehensive quantitative and contextual results for the pilot and non-pilot schools during the e-LHSPP by comparing both groups.

Benefits outside academia

It would be helpful if government agencies implemented pilot projects modelling this study's approach: DiD with linear regression to examine the effects of an intervention, with document analysis to contextualise the quantitative results. The thesis highlights the need to adequately preserve data and data-collection instruments for use in administrative archival secondary data analysis. An opportunity also exists to compare the Grade 11 school-leaving examination results with stakeholders' expectations of the e-LHSPP. This thesis is a potential reference for the e-LHSPP and similar future projects.

Acknowledgement

This project would not have been a success without the priceless contributions of my supervisor Dr. Maria Kambouri, who gave me a once-in-a-lifetime opportunity to excel beyond my expectations, and Dr. Golo Henseke; both exposed me to the areas of my studies that strengthened my abilities to write, analyse, and interpret administrative documents as data as well as quantitative indirect data. Secondly, I would like to thank the other tutors and administrative staff, both past and present, of the EdD programme for their support throughout my tenure at the Institute of Education, University College London.
I would also like to thank my family for their support and continued encouragement during my studies, and the various institutions' administrations that provided me with access to the information I needed. Special thanks to the University of Technology, Jamaica for giving me the time and assistance I needed to complete my studies.

Table of Contents

Declaration .......... 2
Abstract .......... 3
Impact Statement .......... 4
Acknowledgement .......... 6
List of Figures .......... 11
List of Tables .......... 11
Reflective Statement .......... 13
  Introduction .......... 13
  Reflections on the Assignments and Academic Development .......... 13
Chapter 1 .......... 20
  1.1 Introduction .......... 20
  1.2 Research Problem .......... 21
  1.3 Purpose of the Study .......... 22
  1.4 Jamaican Educational Context .......... 23
    1.4.1 Teacher Training .......... 25
  1.5 Professional Significance of the Study .......... 26
Chapter 2 Review of Literature .......... 28
  2.1 Introduction .......... 28
  2.2 E-Learning .......... 29
    2.2.1 E-Learning Definitions .......... 29
    2.2.2 E-Learning Benefits and Challenges .......... 32
  2.3 Theoretical Frameworks .......... 36
    2.3.1 Logical Framework Approach/Logical Framework Analysis .......... 36
    2.3.2 E-Learning Theoretical Framework .......... 38
  2.4 Analysis of E-Learning in Other Countries .......... 42
    2.4.1 ICT in Lebanon .......... 43
    2.4.2 ICT in South Africa .......... 44
    2.4.3 ICT in Argentina .......... 46
    2.4.4 ICT in Nigeria .......... 47
    2.4.5 ICT in Australia .......... 49
    2.4.6 ICT in the United Kingdom .......... 50
    2.4.7 ICT in India .......... 52
    2.4.8 ICT/e-learning in Jamaica .......... 53
  2.5 E-learning in Developing Countries .......... 54
  2.6 E-learning/ICT and Educational Outcomes .......... 56
    2.6.1 ICT Context in Kenya .......... 58
  2.7 Summary .......... 60
Chapter 3 Research Methodology .......... 62
  3.1 Overview .......... 63
  3.2 Research Design .......... 64
    3.2.1 Methods .......... 65
    3.2.2 Ethical Consideration .......... 66
  3.3 Estimating the Impact of e-LHSPP on Attainment .......... 68
    3.3.1 Dataset .......... 68
    3.3.2 Variables .......... 72
    3.3.3 Outcome Variable .......... 74
    3.3.4 Control Variables .......... 75
    3.3.5 The Analytical Strategy .......... 75
  3.4 Document Analysis Approach: Technologies and Educational Resources used in e-LHSPP .......... 79
    3.4.1 Procedure to Analyse the Documents .......... 80
Chapter 4A Quantitative Results .......... 82
  4.1 Research Question 1 .......... 82
    4.1.1 Descriptive Statistics .......... 82
    4.1.2 Mean Grade Point Average (GPA) 2009–2010 .......... 84
    4.1.3 Grade Point Average (GPA) in DiD with Linear Regression .......... 85
    4.1.4 Effect Size .......... 90
    4.1.5 Covariates .......... 92
  4.2 Summary .......... 96
Chapter 4B Document Analysis .......... 97
  Data Analysis to Answer RQ 2 and 2A .......... 97
  4.3 Research Question 2: Are the components and design appropriate for the successful piloting of the e-LHSPP? .......... 97
    4.3.1 Technological Support for Success .......... 99
    4.3.2 Key Stakeholders' Involvement and Outcome .......... 100
    4.3.3 Institutions' Contribution to the Project .......... 101
  4.4 Research Question 2A: What Were the Reported Issues Affecting Students' Attainments During the e-LHSPP? .......... 103
    4.4.1 Supervision of the Project .......... 103
    4.4.2 Resources Available to the e-LHSPP .......... 103
  4.5 Summary .......... 104
Chapter 5 Discussion .......... 105
  5.1 Introduction .......... 105
  5.2 Research Question 1: What are the effects on students' attainment when the e-LHSPP was implemented in schools? .......... 106
  5.3 Research Question 2: Are the components and design appropriate for the successful piloting of the e-LHSPP? .......... 111
    5.3.1 Technology Support for Success .......... 111
    5.3.2 Key Stakeholders' Involvement and Outcome .......... 112
    5.3.3 Institutions' Contribution to the Project .......... 114
  5.4 Research Question 2A: What are the reported issues affecting students' attainments during the e-LHSPP? .......... 115
Chapter 6 Conclusions, Implications, and Recommendations .......... 118
  6.1 Conclusions .......... 118
  6.2 Thesis Contribution .......... 125
    6.2.1 Thesis Contribution to Knowledge .......... 125
    6.2.2 Professional Contribution .......... 126
  6.3 Implication for Practice .......... 127
  6.4 Recommendations .......... 128
  6.5 Recommendations for Further Study .......... 130
  6.6 Assumptions, Limitations, and Delimitation of the Study .......... 130
  Definition of Terms .......... 131
References .......... 132
Appendices .......... 146
Appendix A Teaching Curriculum .......... 146
Appendix B Permission Letter from OEC .......... 151
Appendix C Example of Student Results .......... 152
Appendix D Parallel Trend Lines .......... 153
Appendix E Technology Resources Distributed to Pilot Schools .......... 157

List of Figures

Figure 2.1 Holistic e-learning systems theoretical framework .......... 39
Figure 2.2 Element of a pedagogical model for e-learning .......... 41
Figure 4.1 The Critical Success Factors (CSFs) of E-learning .......... 98
Figure 5.1 Funnel Model for implementing e-learning .......... 116
Figure 6.1 Strategic perspective .......... 121

List of Tables

Table 2.1 The Dimensions of E-Learning .......... 31
Table 2.2 Benefits of E-learning .......... 33
Table 2.3 Definitions of Effectiveness .......... 35
Table 2.4 Research Study Method .......... 36
Table 2.5 Logic Framework Matrix .......... 37
Table 2.6 Grades conversion to points: UCT's admission rating system for the South African School Leaving Qualification .......... 45
Table 2.7 Mean relative gain score for High and Low ICT users by subjects .......... 51
Table 3.1 Population and Sample Schools .......... 63
Table 3.2 Research Questions and Hypotheses .......... 64
Table 3.3 Evaluation Steps .......... 68
Table 3.4 CXC/CSEC Grading .......... 70
Table 3.5 Codebook: Variables and what they Measure .......... 73
Table 4.1 Research Question One, Hypotheses and Analysis .......... 82
Table 4.2 Summary Statistics for all Assessment (GPA) .......... 82
Table 4.3 Summary Statistics for Base Year 2008 GPA .......... 83
Table 4.4 GPA by subject in pilot and non-pilot schools, 2009 and 2010 .......... 84
Table 4.5 (DiD) GPA attainment Outcome English Language 2009 and 2010 .......... 86
Table 4.6 (DiD) GPA attainment Outcome Mathematics 2009 and 2010 .......... 87
Table 4.7 (DiD) GPA attainment Outcome Chemistry 2009 and 2010 .......... 88
Table 4.8 (DiD) GPA attainment Outcome Biology 2009 and 2010 .......... 89
Table 4.9 (DiD) GPA attainment Outcome Information Technology 2009 and 2010 .......... 89
Table 4.10 Effect size for all Subjects 2009 and 2010 .......... 90
Table 4.11 Effect Size per Subject for 2009 and 2010 .......... 91
Table 4.12 Summary for Locality GPA 2008 .......... 91
Table 4.13 Summary for School Type GPA 2008 .......... 92
Table 4.14 Summary for School Gender GPA 2008 .......... 92
Table 4.15 Faculty CSF for e-LHSPP .......... 100
Table 5.1 LFA Analysis of Countries .......... 109
Table 5.2 CSF Student .......... 112

Reflective Statement

Introduction

My journey through the taught modules on the EdD International programme at the University of London, Institute of Education (IOE) was a learning experience worth every moment. I must admit that there were areas during the modules that were very difficult for me, such as understanding epistemologies and their possible relation to empirical methodologies, and how these philosophical positions are associated with the analysis of data. Nevertheless, I have seen my attitude to learning, my approach, and my academic writing and research skills transition to the doctoral level, which prepared me to pursue the Institution-Focused Study (IFS). Before I entered university, I had a deep interest in Information and Communication Technology (ICT) as it relates to teaching and learning at the secondary level of the education system. During the application process to the IOE, I worked with my supervisor on focusing my interest to determine what aspects of my interest in ICT were worth pursuing. My preparation for the IFS began with the first module, and so student teachers' understanding of, and teaching using, ICT has been my focus. This became my focus not only because I am interested in the area but also because I work for the University of Technology, Jamaica (UTech), preparing teachers for the secondary level. To show my professional and academic development through and after completing the taught modules, I will reflect on my assignments and academic development, as well as on the relationship between the assignments. I will then discuss the progression across the first year of taught courses, and finally the relationship between the work on the EdD and my professional practice and development.

Reflections on the Assignments and Academic Development

The module Foundations of Professionalism (FoP) was the first module I took, in the Autumn Term of 2013. At first, I was very excited about what the module had to offer and the professional way in which it was presented. Later my excitement faded when I realised the amount of reading and preparation for classes I had to do, and that some concepts were difficult to understand. Some of these difficult areas were the different interpretations of professionalism and the lexicon of professionalism, which has several terms and words that were perplexing. I experienced a glimmer of hope when I realised that I was not the only one in the class having these challenges.
This was my first lesson at the doctoral level: "read smart" if you are going to complete your readings. Using the library resources effectively was another imperative that had to be mastered if I was to succeed at this level; this included accessing the resources from Jamaica, doing advanced searches of databases and journals, and sifting through a large amount of information in a short time. My first assignment was entitled "Professionalism at a Newly Upgraded University: In what ways are teacher educators' professional identities changing?" My initial submission was almost a disaster. I can clearly remember the first line in my tutor's comments: "this assignment is written in a way that is overly descriptive and personal for an academic piece of writing." How could I have got it so wrong, was the question I asked myself. I must admit that this was my first piece of academic writing at the doctoral level, and it was indeed structurally weak and lacked the necessary theoretical framework and arguments. It was also the first time I was exposed to this content. My tutor mentioned some good points about my essay and gave me some excellent suggestions and readings that would help me formulate the theoretical underpinnings and develop my writing technique. Some of these suggestions were to avoid unsubstantiated statements and to read Giddens (1984) and Henkel (2000), who had done excellent work in the area of structure and agency. Their work would help me develop my arguments on teacher-educator identities. During the corrections to the initial assignment, I regained my confidence as I was now able to engage with the literature in ways I could not before. My arguments were much more convincing because they were supported by other studies and grounded in a theoretical framework. The assignment caused me to reflect on my own professional identity in my field of expertise. I found that teacher educators' professional identities are developing and that the profession is in transition at my university. The final feedback was like a breath of fresh air in a smoke-filled room. I had survived with a very good grade and with my confidence restored.

I started Methods of Enquiry 1 (MoE1) with a sense of excitement because of the new knowledge I had gained in FoP and the opportunity to write a proposal in my area of interest. I had written research proposals before, but not at the doctoral level. I decided to explore "How has ICT helped student teachers develop as teachers?" This proposal would assist me in understanding how student teachers develop their pedagogies to teach ICT. I found that there were major gaps in my knowledge, and these had to do with my theoretical grounding. This module introduced me to the theoretical and conceptual issues in educational research. I started learning new terms such as epistemology, theoretical perspective, positivism, and interpretivism/constructionism. These terms were not easy to grasp, but they gave me a sense of the broad range of theoretical and practical considerations involved in designing research. I adopted the positivist paradigm, which assumes that there is an "objective reality" and that the (quantitative) methods I would use could keep my views and opinions from influencing the research outcome. The initial feedback on my second essay was excellent. It covered such areas as my style of writing, the structure of my proposal, the relevance of the literature review to the study, and the flow of my essay.
Though the areas mentioned previously were not perfectly done, they showed significant improvements in structure, flow, and focus over the FoP initial assignment. My tutor recommended that I simplify the theoretical framework and make closer linkages/connections with the research context. The comment signalled to me that I had started to develop, engage with, and apply the theoretical framework in my writing, a significant step toward academic writing and understanding research. The suggestions given to me improved the quality of my final essay, and the comments made by the examiners mainly concerned additional information that I could have included in each section. This assignment exposed me to the complexities of writing a proposal and showed me how to deal with these complexities.

The Methods of Enquiry 2 (MoE2) module was based on the assumption that MoE1 had been completed so that small-scale research could be conducted. The knowledge gained from MoE1 was invaluable and allowed me to complete the MoE2 proposal with a greater understanding, focus, and clarity about the issues. I moved on from my MoE1 title to researching "How do student teachers understand and use ICT to aid their teaching?" Student teachers' use of ICT in Jamaican schools has national importance because, upon graduation, they are expected to be part of the national strategic ICT plan in schools. I had planned to use MoE2 as the staging area to begin my IFS and hoped to use MoE2 as my pilot study for the IFS. MoE2 gave me a firm grip on writing research questions, research design, sampling, information-gathering techniques, analysis of data, reporting the results, and discussing the results. I participated in two computer-based workshops, which helped increase my understanding of analysing data using SPSS and NVivo. SPSS was the easier of the two research software packages for me to learn because I had been exposed to it before I came to the IOE. NVivo is not as user-friendly and required extensive memorising of steps, but I managed to use it along the way. I used SPSS in my IFS assignment to analyse the questions in the questionnaire. SPSS also allowed me to determine the reliability of the instrument. My experience conducting small-scale research during MoE2 was very rewarding. I am comfortable doing quantitative research, but my tutor encouraged me to conduct qualitative research instead, based on the size of the sample and the information needed. I was hesitant at first because this area was new to me. However, I reviewed the material given to me in MoE1 and MoE2, which increased my confidence to undertake the challenge of a new paradigm and conduct the research. During the planning for this first study, I learned a lot about the demands of conducting insider research and the uncertainties, complexities, and ethical considerations that accompany it. Drafting the interview question guidelines was another skill I acquired that proved valuable during the interviews. Interviewing as a methodology was added to my repertoire of knowledge, and this experience provided me with the techniques and skills to conduct interviews. The transcription of the audio files was a tedious and time-consuming process, but it was worth it because I learned to use the RCD Digital Voice Manager, software that records conversations and allowed me to listen to and transcribe them.
Among the most important skills I developed when analysing qualitative data were "thematic analysis" and "coding". These allowed me to develop categories that were of interest to the area on which the research questions were based and gave me a better understanding of the issues.

Reflection on the relationship between the assignments and progression across the taught courses

I began my studies with a clear focus on student teachers and their use of ICT in the classroom. The enquiry is therefore centred on teachers and ICT, and this can be seen throughout the assignments I have completed so far. My first assignment, in FoP, looked at the changing identities of teacher educators at a newly upgraded university. This assignment allowed me to set the background for the institution in my proposed IFS study and situate it as part of the context. It was important to establish that, at the university, teacher educators' professional identities were changing. These identity changes have implications for teacher training, including the training of teachers to use ICT. A major implication is that teachers are now called upon to do research in the area of ICT and not only to know how to use the tools provided by ICT. The MoE1 assignment was a proposal focused on how ICT helped student teachers develop as teachers. The assignment set out the societal context in which the university in this study operates and the social, economic, and political pressures that are affecting the delivery of its programmes, in particular ICT training. The assignment introduced the various stakeholders, such as the Government of Jamaica, and their interest in the proper training of teachers to use ICT. MoE1 allowed me to search the literature for theories about learning and teaching with ICT and the theoretical framework that underpins the learning and teaching pedagogies associated with the use of ICT in the classroom. Finally, the MoE2 assignment took my research interest a stage further by examining student teachers' understanding and use of ICT to aid their teaching. Four students, representing different groups, were interviewed, and the findings were unexpected. There seems to be a mismatch between the student teachers' understanding and application of ICT and the government's expectations. Their understandings were categorised as Category A, related to making teaching easier and more efficient; Category B, related to changing the learning experience of students; and Category C, related to getting and maintaining students' attention. The government, on the other hand, invested over US$50 million to improve learning, which was not mentioned in the students' responses. My IFS would seek to determine the extent of this mismatch and provide possible solutions.

The relationship between the work on the EdD and my professional practice and development

I have been a lecturer for the past twenty years, five of which were in the capacity of Programme Director for Business and Computer Studies. I was appointed Programme Director for undergraduate degrees in education on September 1, 2022. The university where I work can be considered a newly upgraded university. Only about 21% of the academic staff hold a doctorate or equivalent, and the university is encouraging academic staff to undertake doctoral studies, as a doctorate is now the basic qualification for lecturers.
I am fortunate to be among the lecturers pursuing a doctorate, and the university community is supporting me all the way. The modules that I have completed so far have helped me to understand the issues related to ICT and student teachers at the university and in the wider society. I feel more confident addressing some of the concerns at work because I can speak with some amount of authority as a doctoral student. The body of knowledge that I have acquired in reading the literature around ICT for learning and teaching, especially the theoretical frameworks, has guided me to change my approach to how I engage with student teachers in their learning and teaching pedagogies for using ICT in the classroom. Before I joined the programme at the IOE, I was thinking locally about the issues in Jamaica. Now that I have been exposed to the international arena, my perspectives on teaching and research have changed. I realise that it is important to have a comparative perspective to enrich the localised perspective that small countries may have, and to understand how international developments can impact smaller countries.

Conclusion

The beginning of the EdD was not an easy journey, and I made a few mistakes along the way, which forced me to learn fast. Now, having successfully completed all the required taught modules, I have gained confidence in my research abilities at the doctoral level, a supportive network of peers and tutors, and additional information that has allowed me to better focus my research interest. The taught courses revealed the relevance and importance of my research in several areas. Firstly, my institution's development could be improved by developing pedagogies using ICT that can be adopted by all academic staff. Concerning national development, student teachers will be better able to participate in the government's ICT programme in schools without having to be retrained. My professional development and career will be enhanced by the body of knowledge I am being exposed to and by the opportunity to present at conferences and publish. I can truly say that I have been born again into the research community of practitioners in education, with an added international flavour.

Chapter 1

1.1 Introduction

Jamaica, like many countries around the world, is seeking to improve education at the secondary level through the use of Information and Communication Technology (ICT), more specifically through the use of ICT for learning, known as e-learning. The use of ICT in schools was first set out in the ICT in Education Policy (MoE, 2013); the later ICT in Education Policy (2022) purports to provide learners with equitable educational opportunities as the country transitions to developed-country status. Jamaica is the third largest island in the Caribbean, covering about 10,990 square kilometres, and is situated in the Greater Antilles. The country is divided into 14 parishes and three counties. Classified as a middle-income country, it has the largest population in the English-speaking Caribbean. Jamaica became an independent nation on August 6, 1962, but remains a member of the British Commonwealth and adopted the Westminster-Whitehall (British) system of government. The political system is a stable multi-party democracy based on the British Westminster Model. The British socio-political culture is very evident in Jamaica and is mainly seen in government institutions. The transformation of the education sector in Jamaica, including the use of ICT, has been a major focus for successive governments over many decades.
The initial attempt to include computers in education can be traced back to the 1980s, when 10 secondary (high) schools were equipped with computer laboratories (Miller, 2004). Miller referred to this act of integration as "A case of bottom-up reform" (2004, p. 101), as education stakeholders formed alliances with these schools to introduce computers to teaching and learning. This initial enthusiasm waned, but a renewed thrust to include ICT in the education sector recommenced in 2005, when the then Minister of Commerce, Science, and Technology, Phillip Paulwell, argued that the demand for the Internet was too low because Jamaica's positionality as an oral language society diminished the use of data and written forms of communication. Accordingly, the Ministry of Commerce, Science and Technology (MCST) and the then Ministry of Education, Youth and Culture (MoEYC) were charged to collaborate on a joint project designed to improve the quality of education for students in Grades 7–11 (Paulwell, 2005). The improved "quality of education" would enhance students' grades and the demand for the Internet. Thus the E-Learning High School Project (e-LHSP) was rolled out in 2007 at an estimated cost of US$50 million (Crawford, 2006). The e-LHSP was executed by the e-Learning Jamaica Company Ltd (e-LJam), which is an agency of the MCST (now called the Ministry of Science, Energy, and Technology [MoSET]) and was incorporated in March 2005 (e-LJam, 2012). The primary goal of the e-LHSP is to "utilize ICT's to contribute to an improvement in the quality of education in high schools, to enhance the learning experience and to improve the level of passes in the school-leaving CXC/CSEC examinations" (e-LJam, 2012, p. 2). A pilot was undertaken as the first phase of the e-LHSP. This study will focus on this small-scale implementation, which will be referred to throughout the work as the E-Learning High School Project Pilot (e-LHSPP). The e-LHSPP was scheduled to roll out between the years 2006 and 2007 but was delayed by one year. Already behind schedule, the project suffered yet another delay but eventually commenced in July 2008. Meanwhile, a pilot involving 26 public schools commenced. At the time of the rollout of the e-LHSPP, there was no national ICT policy plan to guide the use of ICTs in the delivery of educational services. The e-LHSPP was an original and unique initiative at the secondary school level in Jamaica up to 2022, and its evaluation has served, and will continue to serve, as a valuable source of information for future projects of a similar nature.

1.2 Research Problem

In Jamaica, education stakeholders perceive that the use of e-learning in secondary schools can change things for the better, including improving students' attainment in their school-leaving examinations (Peart, 2011). E-learning, in its simplest definition, is the use of electronic media and devices as tools to aid teaching and learning. The claim that e-learning can improve school conditions in developing countries is supported by multiple researchers. For instance, Gholami et al. (2010) and Kozma et al. (2004) posited that without ICT resources, or with the inability to use them effectively, countries were at risk of providing low-quality education and harvesting a low return on student and social investment. Following the implementation of e-learning projects in developing countries such as Jamaica, projects are assessed by different measures to determine whether they have achieved their goals.
However, measuring the effects on students' attainment in school-leaving examinations when a government introduces e-learning in schools can be a complex and daunting task for a researcher. Governments rely on statistics to measure and evaluate the performance of the many projects they approve. However, assessments and evaluations differ because of research design, worldview, or reporting standards. For instance, an earlier evaluation of the e-LHSPP, while insightful, could not definitively confirm its success. Part of this gap could be attributed to the research design, as Morrison (2016) was unable to discuss challenges with stakeholders in the pilot phase, and the quantitative analysis comprised simple comparative and trend analyses and measures of central tendency and dispersion. The evaluation showed differences but failed to account for pre-existing differences and covariates, which are confounding variables. Those shortcomings motivated my better-informed and more sophisticated approach to the e-LHSPP. Consequently, this research addresses these gaps by applying a different statistical approach to assess the e-LHSPP in Jamaica.

1.3 Purpose of the Study

This study sought to address e-learning at the secondary level of the Jamaican education system by evaluating an e-learning pilot project not previously studied. Secondary education is a pivotal stage of human development and is described as the crucial link between primary schooling, tertiary education, and the labour market (World Bank, 2005a). The World Bank further contends that the expansion of secondary education is crucial "on the grounds of growth, poverty alleviation, equity, and social cohesion," and "is a vital part of a virtuous circle of economic growth within the context of a global economic system" (p. 17). The secondary school system is therefore the most strategically placed education sector to improve productivity and drive economic growth (Hanushek et al., 1994). Thus, the purpose of this study is to determine the effects of the e-LHSPP on students' performance in their school-leaving examinations. In this research, I will focus on data for students aged 15 to 16 for several reasons:

1. These students are preparing for high-stakes examinations, and the grades will determine whether they go on to university and access other opportunities in the world of work; school administrators should be adopting a serious approach to examination preparation.
2. School-leaving examination results are often used to rank schools and measure their success (Gayle, 2017).
3. Examination results are usually a primary indicator and measure of students' attainment of literacy and numeracy skills.
4. The focus helps in understanding what the e-LHSPP planned to achieve.

The study focused on five subjects (English Language, Mathematics, Chemistry, Biology, and Information Technology), which were the five subjects piloted during the pilot phase of the e-LHSP. Considerable attention is being placed on the secondary-level school sector because of the high youth unemployment and violent behaviours being exhibited, especially among unattached youth. This demographic is regarded as persons between the ages of 16 and 25 who are unemployed or unengaged in any social or economic activity that can improve their lives (Hayes et al., 2012). Education is seen as the major vehicle for upward social mobility in Jamaica; hence, successive governments have invested large sums of money in projects such as the e-LHSP to address the situation.
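To make the alternative statistical approach referred to above concrete, the sketch below illustrates the core logic of a Difference in Differences estimate combined with linear regression, the technique detailed in Chapter 3. It is a minimal illustration using hypothetical GPA figures and invented column names (school, pilot, post, gpa), not the thesis's actual model or data; the real analysis also includes control variables such as locality, school type, and school gender.

```python
# Minimal Difference in Differences (DiD) sketch with hypothetical data.
# The interaction term pilot:post estimates the treatment effect:
# (pilot_after - pilot_before) - (non_pilot_after - non_pilot_before).
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical school-level mean GPA: base year 2008 (post=0), pilot year 2009 (post=1)
df = pd.DataFrame({
    "school": [1, 1, 2, 2, 3, 3, 4, 4],
    "pilot":  [1, 1, 1, 1, 0, 0, 0, 0],   # 1 = pilot school, 0 = non-pilot
    "post":   [0, 1, 0, 1, 0, 1, 0, 1],   # 0 = base year, 1 = pilot year
    "gpa":    [2.0, 2.3, 1.8, 2.0, 2.1, 2.2, 1.9, 1.9],
})

# DiD computed directly from the four group means
means = df.groupby(["pilot", "post"])["gpa"].mean()
did = (means[(1, 1)] - means[(1, 0)]) - (means[(0, 1)] - means[(0, 0)])
print(f"DiD estimate from group means: {did:.3f}")

# Equivalent linear regression: gpa = b0 + b1*pilot + b2*post + b3*pilot*post + e
model = smf.ols("gpa ~ pilot * post", data=df).fit()
print(model.params["pilot:post"])  # b3 is the DiD treatment effect
```

In the regression form, covariates enter simply as additional terms in the formula (for example, "gpa ~ pilot * post + C(locality)"), which is how pre-existing differences between pilot and non-pilot schools can be accounted for, addressing the confounding noted above.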
1.4 Jamaican Educational Context

The Ministry of Education, Youth and Information (MoEYI), formerly the Ministry of Education (MoE), identifies four educational stages in Jamaica: early childhood (pre-primary), primary, secondary, and tertiary. At the time of writing this thesis, the public education institutions comprise 41 Early Childhood, 583 Primary, 97 All Age, 84 Primary and Junior High, 10 Special, 150 Secondary High, 14 Technical High, and 2 Agricultural High schools, along with 5 Community Colleges, 5 Teachers' Colleges, and 2 Universities (MoEYI, 2016b). The independent institutions comprise 247 Kindergarten/Preparatory, 42 Secondary High with Preparatory Department, 35 Secondary High, 124 Vocational High, 131 Commercial/Business Colleges, and 15 Special (MoEYI, 2016b). At the secondary level, schools are grouped into six regions: Region I (Kingston), Region II (Port Antonio), Region III (Brown's Town), Region IV (Montego Bay), Region V (Mandeville), and Region VI (Old Harbour) (MoEYI, 2016b). These regions support all schools through school supervision, assistance with guidance and counselling, financial services, school personnel and administrative services through workshops and seminars, and public relations. However, this study will focus only on students in high schools from the parishes of Kingston and St. Andrew in Region I, St. Thomas in Region II, and St. Catherine in Region VI. These regions were chosen by the e-LHSPP project team primarily because of their proximity, ease of access, and representativeness of all school types.

The Charter of Fundamental Rights and Freedom Act (2011) amended the constitution to provide Jamaican citizens with compulsory and free education at the pre-primary and primary stages of their education. The Education Regulation Act (1980) specifically states that the minimum age for admission as a student of a public education institution will be four years for pre-primary, six years for primary, and eleven years for secondary. Students will spend five years at the primary stage and leave at age 11 or 12. Students were enrolled at the secondary stage through the Grade Six Achievement Test (GSAT), which was replaced by the Primary Exit Profile (PEP) in 2018. PEP's primary goal is to assess students' knowledge and place emphasis on students' demonstration of 21st-century skills such as critical thinking, communication, collaboration, and creativity. Students are placed at one of the secondary high schools based on their choice of schools and their test score results. The legislation does not mandate a secondary stage of education in Jamaica; however, the government urges parents to continue their children's education. The secondary stage has two cycles. The first cycle ranges from Grades 7 to 9 in All Age schools, Primary and Junior High schools, and High schools, including Technical High and Independent/Private High schools (MoEYI, 2017a). The second cycle ranges from Grades 10 to 11 of these schools (except for All Age and Primary and Junior High schools) and the Agricultural, Technical, and Vocational schools (MoEYI, 2017a). The standard practice is for students in Grade 10 to begin their preparation for the Caribbean Secondary Education Certificate (CSEC) subjects, administered by the Caribbean Examinations Council (CXC), and to sit the examination in Grade 11.
Students who are successful in five or more CSEC subjects with good grades are encouraged to continue to the sixth-form/pre-university programme (Grades 12 and 13), if their school offers the programme. Students in sixth form sit the Caribbean Advanced Proficiency Examination (CAPE) at the end of Grades 12 and 13. Some tertiary institutions will accept students with CSEC, while others require, in addition to CSEC, two or more CAPE subjects. The CSEC and CAPE examinations are similar to the UK's GCSE "O" and "A" levels, respectively. Jamaica has a student-teacher ratio of about 25:1 (MoEYI, 2016a). A new addition to the current structure of the sixth form has been proposed by the present Minister of Education, Fayval Williams, to commence in September 2022. This new initiative will allow students who complete Grade 11 to continue for a further two years in an alternative programme alongside the traditional sixth-form curriculum, thus extending high school from five years to a compulsory seven-year programme. According to the Minister, the new programme should better prepare students for higher-level academics and/or entry into the workforce. The next section will provide information regarding the requirements for teacher training. These requirements are important because they show the entry-level curriculum for teachers, which can be used to highlight their readiness for the 21st-century classroom.

1.4.1 Teacher Training

The University Council of Jamaica (UCJ) was established by the Government of Jamaica through the University Council of Jamaica Act. One of the UCJ's primary functions is to approve courses of study to be pursued by candidates in tertiary institutions across Jamaica. The UCJ is the official accrediting body for all courses of study at the tertiary level. It sets the standard for teacher training; teacher training programmes wishing to gain accreditation must be certified by the UCJ (UCJ, 2017). According to the UCJ, the minimum requirement for admission to the Bachelor's Degree in Education should normally be at least 5 CSEC/CXC subjects at General Proficiency Levels 1 and 2, or 5 GCSE 'O' level subjects at grades A and B, or the equivalent. These should include English Language and Mathematics. Other entry requirements are also outlined, but these are dealt with on a case-by-case basis, especially as they relate to mature applicants. The structure of the Bachelor's Degree in Education is mandated to have a minimum of 120 credits, with an additional minimum of 15 credits for the Teaching Practicum/Internship. Each credit represents 15 hours of teaching or lecture time, with 3 hours of laboratory work counting as equal to one hour of teaching or lecture time. The Teaching Practicum/Internship is accorded one credit per 45 hours. The teaching curriculum is outlined in Appendix A, adapted from the UCJ (2017). Another institution responsible for teacher education is the Joint Board of Teacher Education (JBTE), which has provided quality assurance and professional development in teacher education since 1965 and has certified approximately 50,000 teachers across the Caribbean. The MoEYI has recognised the importance of the use of ICT by teachers in the classroom, and so the training and certification of student teachers in ICT at the tertiary level will soon become mandatory. Indeed, the process to achieve this has begun.
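As a quick check on the credit arithmetic above, the contact time implied by the UCJ structure can be worked out directly. The figures below are an illustration derived from the stated conversion rules, not an official UCJ calculation.

```latex
% Worked example of the UCJ credit structure described above
% (derived from the stated conversion rules; not an official UCJ figure).
\begin{align*}
\text{Coursework: } 120 \text{ credits} \times 15 \text{ hours/credit} &= 1800 \text{ contact hours}\\
\text{Practicum: } 15 \text{ credits} \times 45 \text{ hours/credit} &= 675 \text{ practicum hours}\\
\text{Equivalence: } 3 \text{ laboratory hours} &\equiv 1 \text{ lecture hour}
\end{align*}
```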
1.5 Professional Significance of the Study

This research will contribute to the body of work that uses grey literature to analyse and determine the effects of e-learning on students' attainment, particularly in cases where developing countries invest in e-learning in education, especially at the secondary level. The research will seek to influence government research policy concerning the availability of datasets for use in administrative archival indirect secondary analysis. This study will showcase lessons learned from the implementation of the pilot to show how e-learning can be embedded within the school system and the components required for success. Hopefully, the work will present a better understanding of the issues surrounding the results of the e-LHSPP and inform the criteria for future projects of this nature.

My role as a researcher/practitioner spans twenty years, from 2001 at a teacher training school to my current position as a senior lecturer at the University of Technology, Jamaica (UTech). During my tenure, I served as Programme Director for five years from 2009 to 2013 and was appointed again for 2022 to 2024 in the Schools of Technical and Vocational Education at UTech, Jamaica, the institution responsible for preparing teachers for the secondary level of the education sector. I joined the University of London as a doctoral student in 2013, and since then I have published articles about student teachers and ICT in Jamaica and presented papers at local academic conferences about my research. My job entails evaluating and assessing current activities in ICT education and ICT-related projects because of the need to adequately prepare students for the teaching profession. I am usually part of my university's team offering consultancy services for ICT in education. I am currently working with a team of researchers from my university on the topic Pivoting Teacher Pedagogical Practices: Implications for Pre-Service Teacher Preparation post-COVID-19. The study is to inform about changes within the Jamaican context. It is against this background that I became interested in the e-LHSP, which has affected past students and will affect current and future ones, especially during the COVID-19 pandemic when new ways of teaching are emerging. New e-learning systems are constantly being proposed, and the successful piloting of these e-learning systems will be essential for successful implementation.

Chapter 2 Review of Literature

2.1 Introduction

At the start of the 21st century, e-learning is one of the changing trends that characterise education systems worldwide. E-learning has become pivotal in the growth and development of a nation, particularly because it combines learning and technologies in ways in which ordinary people can have access to quality education. Thus, it plays a critical role in preparing both teachers and students with 21st-century skills for problem resolution and knowledge development in society. Modern technologies have allowed learners and teachers to participate in educational experiences inside and outside the classroom almost anywhere in the world. This section presents a review of existing peer-reviewed articles and grey literature surrounding this topic and the problems the research aims to resolve. To this end, general e-learning/ICT educational outcomes and programme reviews in similar countries around the time of Jamaica's e-LHSPP implementation will be analysed.
The appropriateness of the theoretical framework is also considered, along with the literature that supports its appropriateness for this research. Technological advancements in ICT have significantly changed the way teaching and learning are conducted. To understand e-learning in education, a person must have some basic understanding of IT and ICT. In its simplest definition, IT refers to the use of computers to capture, store, manipulate, retrieve, and transmit data or information, mainly in the context of business operations, and is a subset of ICT. Defining ICT can be challenging because of the diverse applications of the term and its use in education, economics, information technology, socioeconomic development, and governance (Zuppo, 2012). When ICT is used in this study, the definition will focus on its use in education. ICTs consist of the hardware, software, networks, and media for the collection, storage, processing, transmission, and presentation of information (voice, data, text, images), as well as related services (UNESCO, 2005). A working definition refers to ICT as a diverse range of technological tools and resources used to transmit, store, create, share, or exchange information (Hinostroza et al., 2014).

While many of the technologies currently being used in schools were not developed for educational purposes, they have become important tools in teaching and learning. Some of these technologies are well known and include computers (Flewitt et al., 2014), tablet computers (Enriquez, 2010), iPads (Flewitt et al., 2014), cell phones, e-readers, RSS readers, smart tablets (Cassidy et al., 2014), interactive whiteboards (Jang & Tsai, 2012), and multimedia (Ellis & Long, 2004). Regardless of their intended purposes, there is growing evidence that their use can improve educational outcomes, including academic performance, in positive ways (Schacter & Fagnano, 1999; Underwood, 2009). Underwood (2009), for instance, highlighted how ICT use in the UK and USA led to noticeable learning gains in their national curriculums. E-learning, simply, is the application of both IT and ICT to deliver learning experiences and assessments to students. Therefore, it is common to see ICT used as a broad umbrella term to refer to the different electronic teaching and learning formations in education. The next section will provide other scholarly definitions of e-learning, including benefits and challenges. It will also investigate critical factors for the successful implementation of e-learning in developing countries and explore countries that have used e-learning/ICT to improve educational outcomes.

2.2 E-Learning

2.2.1 E-Learning Definitions

A wide range of definitions of e-learning exists based on its use in different professional practices and interests (Oblinger & Hawkins, 2006). Over the years, a lack of consensus on a definition has persisted despite the prolific use of the term (Arkorful & Abaidoo, 2015). In their investigation into the definitions and characteristics of e-learning, Moubayed et al. (2018) devised four categorisations: (1) Technology-Driven Definitions, which focus on the technology aspect of e-learning; (2) Delivery-System-Oriented Definitions, which focus on the accessibility of resources; (3) Communication-Oriented Definitions, which focus on the tools of communication and the interaction between them; and (4) Educational-Paradigm Definitions, which focus on new ways of learning or improving on the existing educational model.
Other researchers maintained that e-learning covers learning, methods, processes, and the applications that support them (Rossi, 2009), albeit Jenkins and Hanson (2003) more specifically referred to e-learning as "Learning facilitated and supported through the use of information and communication technologies" (p. 4). The use of IT resources such as the Internet, intranet, satellite broadcast, and multimedia applications to deliver information in educational settings is crucial (Al-Homod & Shafi, 2013), and researchers such as Wagner et al. (2008) have been quite exhaustive in their listing. Accordingly, Wagner and colleagues referred to e-learning as "the use of the Internet, intranets/extranets, audio- and videotape, satellite broadcast, interactive TV, and CD-ROM" (p. 26). Finally, Sangra et al. (2012) undertook a project involving experts around the world to have them agree on a definition of e-learning. The experts' preliminary definition was:

E-learning is an approach to teaching and learning, representing all or part of the educational model applied, that is based on the use of electronic media and devices as tools for improving access to training, communication, and interaction and that facilitates the adoption of new ways of understanding and developing learning. (p. 152)

The varied definitions of e-learning may pose a challenge for researchers. However, all definitions arguably focus on a similar objective, which is to reach students and provide learning experiences, although the technologies used may differ in many instances. Simply put, "e-learning is the application of Information Technology in the teaching and learning process" (Madar & Willis, 2014, p. 235). Wagner et al. (2008) sought to simplify the various configurations of e-learning, positing that they can be described through the attributes outlined in Table 2.1. These attributes are classified into dimensions, and any e-learning course component will take one of the two attributes from each dimension. For example, a course could be asynchronous on the synchronicity dimension, distributed on the location dimension, collaborative on the independence dimension, and electronic on the mode dimension. Wagner et al.'s (2008) definition restricts blended learning to supplementing classroom learning; however, more recent research concurs that all types of education that include both face-to-face and online learning are called blended learning (Hrastinski, 2019).
Table 2.1
Dimensions of E-Learning

Synchronicity
  Asynchronous: content delivery occurs at a different time than receipt by the student (example: a lecture module delivered via email).
  Synchronous: content delivery occurs at the same time as receipt by the student (example: a lecture delivered via webcast).
Location
  Same place: students use an application at the same physical location as other students and/or the instructor (example: using a Group Support System (GSS) to solve a problem in a classroom).
  Distributed: students use an application at various physical locations, separate from other students and the instructor (example: using a GSS to solve a problem from distributed locations).
Independence
  Individual: students work independently from one another to complete learning tasks (example: students complete e-learning modules autonomously).
  Collaborative: students work collaboratively with one another to complete learning tasks (example: students participate in discussion forums to share ideas).
Mode
  Electronically only: all content is delivered via technology; there is no face-to-face component (example: an electronically enabled distance learning course).
  Blended: e-learning is used to supplement traditional classroom learning (example: in-class lectures enhanced with hands-on computer exercises).

From "Who is responsible for E-Learning success in higher education? A stakeholders' analysis," by N. Wagner, K. Hassanein, and M. Head, 2008, Educational Technology and Society, 11(3), p. 27.

Table 2.1 explains the configuration of the e-LHSPP, whose components were designed as follows:
1. The synchronicity dimension was both asynchronous and synchronous;
2. The location dimension was the same place, because there were connectivity issues with the Central Repository for Educational Materials;
3. The independence dimension could not be ascertained, because the reports about the e-LHSPP did not provide information on students, but the instructional methods used in schools would support both the individual and collaborative attributes; and
4. The mode dimension was blended.
The issues surrounding these dimensions will be further developed and discussed in Chapter 5; the configuration itself is sketched in code below.
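Read alongside Table 2.1, the configuration just listed can be expressed as a small data structure. The following is a minimal illustrative sketch (not part of the e-LHSPP's own documentation) that encodes Wagner et al.'s dimensions and attributes and records the e-LHSPP configuration described above:

```python
# A minimal sketch of Wagner et al.'s (2008) e-learning dimensions and
# the e-LHSPP configuration described in the text; for illustration only.
from enum import Enum

class Synchronicity(Enum):
    ASYNCHRONOUS = "asynchronous"
    SYNCHRONOUS = "synchronous"

class Location(Enum):
    SAME_PLACE = "same place"
    DISTRIBUTED = "distributed"

class Independence(Enum):
    INDIVIDUAL = "individual"
    COLLABORATIVE = "collaborative"

class Mode(Enum):
    ELECTRONICALLY_ONLY = "electronically only"
    BLENDED = "blended"

# Sets are used where the pilot supported both attributes of a dimension;
# independence carries both attributes because the reports did not settle
# the question either way.
e_lhspp_configuration = {
    "synchronicity": {Synchronicity.ASYNCHRONOUS, Synchronicity.SYNCHRONOUS},
    "location": {Location.SAME_PLACE},
    "independence": {Independence.INDIVIDUAL, Independence.COLLABORATIVE},
    "mode": {Mode.BLENDED},
}

for dimension, attributes in e_lhspp_configuration.items():
    print(dimension, sorted(a.value for a in attributes))
```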
E-learning has evolved over the years, with related concepts such as Computer Assisted Instruction (CAI), Computer Based Education (CBE), Computer Assisted Learning (CAL), and Computer Assisted Education (CAE) in use before e-learning itself received recognition (Aparicio et al., 2016). As technology became more advanced and the World Wide Web more ubiquitous, many institutions started using the Internet for Web Based Training (WBT), thus ushering in the era of e-learning.

2.2.2 E-Learning Benefits and Challenges

Governments worldwide have sought to capitalise on the benefits of e-learning and have invested millions of dollars in projects they believe can meet the growing needs of education (Qureshi et al., 2012). Hence, this issue is not unique to Jamaica. The high expectations of improved learning outcomes have proved elusive, although there are some peripheral benefits, with many lessons learned in different ways. In the case of Jamaica, the massive investment in the e-LHSP was recorded as US$176,000 per school at the secondary level (Crawford, 2011). Given this outlay, it is important to highlight some of the perceived benefits of e-learning to provide context for the Jamaican government's persistence in adopting e-learning systems in schools.

E-learning has been accepted as an alternative or complement to the traditional way of education and is seen by 21st-century educators as providing a more efficient way to teach and for students to learn (McConnell, 2006). The e-LHSPP was commenced on this premise, that is, as a successful way for students to learn. Jamaica's three main goals for the e-LHSPP have been identified as (a) improving the quality of education; (b) enhancing the learning experience; and (c) ensuring high levels of passes in the CSEC examinations (Peart, 2011). Some of the major benefits of e-learning are summarised in Table 2.2.

Table 2.2
Benefits of E-learning

Individualised instruction: provides for learner-centred needs, abilities, learning styles, interests, and the resources at students' hands; allows self-pacing for slow or quick learners (Algahtani, 2011; Behera, 2013).
Easy access: provides easy access to huge amounts of data at any time, place, and distance (Algahtani, 2011; Behera, 2013; Khan, 2005; Smedley, 2010).
Interactivity: improved interactivity between teacher and student during lesson delivery (Wagner et al., 2008).
Enabled communication: students can communicate and dialogue with teachers and students anywhere in the world (Al-Adwan & Smedley, 2012; Singh, 2001; Zeitoun, 2008).
Evaluation and feedback: teachers can give students instantaneous feedback for tests and other assessments (Behera, 2013; Brown et al., 2001).

E-learning provides many benefits to secondary schools, including improvement in the quality of education, albeit with some challenges. Most of these challenges relate to disparities between technological and educational perspectives. Some researchers contend that the major challenge to e-learning is the changed interaction between students and teachers and the limited collaboration among peers. In an article by Young (1997), one interviewee, Mary Bugan, complained of being "afraid that administrators see technology as a cheap, quick fix for a complex problem" (p. 27). Another contributor, Janice Newson, an associate professor of sociology, stated that technology created for the classroom can discourage critical thinking when students move quickly through screens full of information, and that 'bugs' and glitches can waste valuable teaching time. Jones et al. (2009) also advanced statements about e-learning challenges, observing:

Firstly, e-learning students require a high level of information communication technology competence, motivation and self-discipline. Secondly, students need to be informed regarding the nature of the experience. Thirdly, university admissions systems must include an assessment of the candidate's ICT competence, motivation and consequent suitability for undertaking an on-line course. Finally, induction programmes must meet student needs in terms of academic level, flexibility and content. (p. 1)

E-learning can also affect communication skills: students may have excellent academic knowledge yet be unable to impart that knowledge to others (Arkorful & Abaidoo, 2015). E-learning might also be inappropriate for some disciplines, as Arkorful and Abaidoo (2015) argued that medicine and engineering require practical skills of doctors and engineers, respectively. In developing countries such as Jamaica, the lack of equipment, or of maintenance for it, is a costly and serious issue.
This was evident in the case of the Tablet in Schools Project in Jamaica, which was adversely affected by poor maintenance (Onyefulu et al., 2015). Nevertheless, the benefits of e-learning outweigh the challenges, as is evident in the proliferation of e-learning applications worldwide.

One of the major benefits the government anticipated from the e-LHSPP was its effectiveness. Accordingly, sourcing articles that determined the best approach to defining and measuring effectiveness was a focus of this review. To this end, Noesgaard and Orngreen's (2015) study sought to answer two research questions related to effectiveness:
1. How is the effectiveness of e-learning defined?
2. How is the effectiveness of e-learning measured?
To arrive at the answers, they conducted a structured analysis of library databases to obtain a broad foundation of high-quality papers. After the analysis, they identified 19 different ways to define effectiveness (41% of the articles examined), as can be seen in Table 2.3. Based on the contents of the table, the most frequent definition of e-learning effectiveness identified was "Learning outcome" (56%; 29/52). The learning outcome was defined by Noesgaard and Orngreen (2015) as participants' acquisition of new understandings as a result of the e-learning initiative, often measured by pretest-posttest experimentation.

Table 2.3
Definitions of Effectiveness (number of papers)

Definition                                 Higher education   Work-related learning   Total
Number of papers                           52                 40                      92
Learning outcome                           29                 9                       38
Transfer (application to practice)         3                  15                      18
Perceived learning, skills or competency   11                 6                       17
Attitude                                   8                  3                       11
Satisfaction                               8                  3                       11
Skills acquired                            5                  5                       10
Usage of product                           4                  5                       9
Learning retention                         4                  4                       8
Completion                                 0                  5                       5
Motivation and engagement                  3                  2                       5
Organisational results                     0                  5                       5
Application to simulated work practice     0                  4                       4
Self-efficacy                              0                  4                       4
Confidence                                 1                  2                       3
Cost-effectiveness                         1                  2                       3
Connectedness                              1                  1                       2
Few errors                                 2                  0                       2
Raised awareness                           0                  2                       2
Success of (former) participants           1                  0                       1
Undefined effectiveness                    10                 2                       12

From "The effectiveness of e-learning: An explorative and integrative review of the definitions, methodologies and factors that promote e-learning effectiveness," by S. S. Noesgaard and R. Orngreen, 2015, The Electronic Journal of e-Learning, 13(4), p. 281.

Noesgaard and Orngreen concluded that the quantitative approach, through pre- and post-testing, was the most common way to measure effectiveness, as can be seen in Table 2.4.

Table 2.4
Research Study Methods

                                 Mixed   Qualitative   Quantitative
All abstracts with…              9       5             37
Comparative studies applying…    0       1             18

From "The effectiveness of e-learning: An explorative and integrative review of the definitions, methodologies and factors that promote e-learning effectiveness," by S. S. Noesgaard and R. Orngreen, 2015, The Electronic Journal of e-Learning, 13(4), p. 282.

Noesgaard and Orngreen (2015) also concluded that most studies investigating e-learning defined effectiveness as "Learning outcome" and that the research approach was mainly quantitative. Accordingly, this research measures learning outcomes and defines effectiveness in a similar way, using a quantitative pre- and post-testing approach to measure the effects of the e-LHSPP on students' academic performance. The next section explains the two theoretical frameworks that guided this study. The first was the Logical Framework Approach or Logical Framework Analysis (LFA).
This was instructive for the analysis of the results and provided a structure for analysing ICT/e-learning projects in other countries. The second theoretical framework enabled a broad understanding of e-learning systems. The literature served as a guide to the e-learning components under investigation, the areas for data collection, and the connections among the components necessary for the analysis.

2.3 Theoretical Frameworks

2.3.1 Logical Framework Approach/Logical Framework Analysis

The Logical Framework Analysis (LFA) was developed by the United States Agency for International Development (USAID) in the late 1960s for the specific task of assisting in the planning, management, and evaluation of projects (Coleman, 1987). The LFA guides the systematic and logical analysis of the interrelated elements that create a well-designed project (The World Bank, 2005b). The LFA is a 4 x 4 matrix in which the rows show the levels of project objectives and what is needed to achieve them, and the columns show how the achievement of the objectives will be verified (see Table 2.5).

Table 2.5
Logic Framework Matrix

Goal. Objectively verifiable indicators: measurement of goal achievement. Means of verification: sources of information; methods used. Assumptions: assumptions affecting the Purpose-Goal linkage.
Purpose. Objectively verifiable indicators: end-of-project status. Means of verification: sources of information; methods used. Assumptions: assumptions affecting the Output-Purpose linkage.
Output. Objectively verifiable indicators: magnitude of outputs; planned completion date. Means of verification: sources of information; methods used. Assumptions: assumptions affecting the Input-Output linkage.
Input. Objectively verifiable indicators: nature and level of resources necessary; cost; planned starting date. Means of verification: sources of information; methods used. Assumptions: initial assumption(s) about the project.

From "Logical framework approach to the monitoring and evaluation of agricultural and rural development projects," by G. Coleman, 1987, Project Appraisal, 2(4), p. 252.

The four columns in the LFA matrix highlight the requirements for a successful project. In the first column, the narrative summary describes the intervention, and the goal links the project to the programme (Coleman, 1987). In the case of Jamaica, for example, there was a government programme for secondary schools to improve academic performance, realised through a project called the e-LHSP. Jamaica's LFA, found in the Feasibility Study for the e-Learning Project (MCST & MoEYC, 2005), sets out the goal, purpose, output, and input for its e-LHSP. The goal of the e-LHSP is to "Contribute to the creation of an educated and knowledge-based society that is internationally competitive" (p. 28). The narrative summary also identifies the purpose of the project and the inputs required to produce the outputs (Coleman, 1987). Based on its LFA, the e-LHSP's purpose is to "Improve the quality of education in the high schools." According to The World Bank (2005b), the second column in Table 2.5 identifies the performance indicators and targets that can be verified for each level of the narrative summary; the third column pinpoints the source of the data that can verify the performance indicators at each level; and the fourth column describes the assumed conditions that should hold for the project to be successful. The purpose, output, and input rows relate to the specific project undertaken. In Jamaica's LFA, one of the objectively verifiable indicators is the pass rate in the regional CXC CSEC examinations, and the corresponding means of verification is the CXC CSEC examination results.
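To make the matrix concrete, the two logframe rows quoted above can be represented as a simple data structure. This is a minimal illustrative sketch, not a reproduction of the Feasibility Study's logframe: only the narrative summaries, the indicator, and the means of verification come from the text, while the assumption entries simply echo the generic wording of Table 2.5.

```python
# A minimal sketch of the e-LHSP logframe rows quoted in the text above.
# Only the narrative summaries, the indicator, and the means of verification
# come from the document; the assumption fields echo Table 2.5's generic text.
from dataclasses import dataclass

@dataclass
class LogframeRow:
    level: str              # Goal, Purpose, Output, or Input
    narrative_summary: str  # column 1: what the row describes
    indicators: str         # column 2: objectively verifiable indicators
    verification: str       # column 3: means of verification
    assumptions: str        # column 4: conditions assumed to hold

e_lhsp_logframe = [
    LogframeRow(
        level="Goal",
        narrative_summary=("Contribute to the creation of an educated and "
                           "knowledge-based society that is internationally "
                           "competitive"),
        indicators="Measurement of goal achievement",
        verification="Sources of information; methods used",
        assumptions="Assumptions affecting the Purpose-Goal linkage",
    ),
    LogframeRow(
        level="Purpose",
        narrative_summary="Improve the quality of education in the high schools",
        indicators="Pass rate in the regional CXC CSEC examinations",
        verification="CXC CSEC examination results",
        assumptions="Assumptions affecting the Output-Purpose linkage",
    ),
]

for row in e_lhsp_logframe:
    print(f"{row.level}: {row.narrative_summary}")
```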
2.3.2 E-Learning Theoretical Framework

Figure 2.1 shows the holistic e-learning systems theoretical framework. This framework was chosen because it combines the two main components, learning and technology. An understanding of both components is important for an in-depth understanding of how the cognitive processes involved in learning are enabled by technology. The framework has its genesis in cognitive load theory, which expanded into multimedia learning and was subsequently developed by educators such as Richard Mayer and Allan Paivio. Researchers such as Aparicio et al. (2016) developed the concepts further and designed a guide for e-learning studies. In Figure 2.1, Aparicio and colleagues set out the main information systems dimensions for e-learning systems. As can be seen, the holistic e-learning systems theoretical framework uses the main information systems dimensions, which are people, technologies, and services. In other words, "e-learning is about connecting people, technologies and services, to fulfil educational objectives" (Arafat et al., 2019, p. 479).

Figure 2.1. Holistic e-learning systems theoretical framework. From "An e-learning theoretical framework," by M. Aparicio, F. Bacao, and T. Oliveira, 2016, Educational Technology & Society, 19(1), p. 302.

The e-learning systems theoretical framework's dimensions are connected in the shape of a triangle, with an inner triangle showing how the e-learning systems are connected to the dimensions. The first dimension, people, extends into the theoretical framework as e-learning systems stakeholders. The people who are affected by e-learning are considered the stakeholders (Wagner et al., 2008); they are classified in the framework as customers, suppliers, professional associations, boards, and shareholders. The second dimension, services, is expanded into the theoretical framework as e-learning activities, which, according to the framework, are divided into pedagogical models and instructional strategies. The pedagogical models of e-learning, according to Behar (2011), can be understood as a system of theoretical principles that represents, explains, and guides the way the curriculum is approached, strengthening pedagogical practices and the interaction of teachers, students, and the object of study. The instructional strategies facilitate learning, and if a teacher desires positive outcomes the instructional strategies should be the most appropriate ones for the subject (Adunola, 2011). The third dimension, technologies, is expanded into the theoretical framework as e-learning technologies and is divided into content, communication, and collaboration. E-learning technologies are considered a digital medium that facilitates the exchange of information between teachers (knowledge source) and their students (recipients) (Hsieh & Cho, 2011). The holistic e-learning systems theoretical framework proposes that people, the e-learning systems stakeholders, use e-learning systems anywhere and at any time to achieve educational outcomes. E-learning technologies provide the tools necessary for the stakeholders to communicate directly or indirectly using collaboration tools. The services enable e-learning activities to integrate the pedagogical models and instructional strategies that are appropriate for the interaction between stakeholders using e-learning technologies.
The holistic e-learning systems theoretical framework is widely cited in e-learning research publications. For example, Arafat et al. (2019) used the framework to highlight e-learning as "informal (situated) learning as a general phenomenon" (p. 479). I used the holistic e-learning systems theoretical framework to assist in the development of this study and to provide a pathway through it. Accordingly, I classified the stakeholder groups and their connection to the e-LHSPP technology components as follows:

People (stakeholders). Stakeholders are those concerned with e-learning in Jamaica. The framework provides a comprehensive list, of which one category, customers, is represented by students at the secondary level of the education system in the regions under investigation. The students' school-leaving examination results form the basis for determining the effects of the e-LHSPP; standardised test scores are perceived as an index of students' improvement, or lack of improvement, in their school (Marston et al., 1983). Another stakeholder category, suppliers, consists of teachers, content providers, accreditation bodies, educational institutions, and technology providers, all of which provided vital information for this study. The teachers prepared the students for the examinations and were involved in producing the e-learning material for the e-LHSPP. The CXC, the accreditation body for the Caribbean school-leaving examinations, administered the standardised CSEC tests to the schools. Then there were the educational institutions, namely the pilot and non-pilot high schools used in the study, and finally e-LJam, the agency through which the e-LHSPP took place. This shows the linkages involved in the inner workings of the supplier category. The final stakeholder category used in this study was the board, which in this case was the MoEYI, which provided the policy framework for the e-LHSPP. Keramati et al. (2011) recognised the social factor of government (rules and administrative instructions) as one of the important readiness factors in determining the effects of e-learning. Government documents were used to obtain a better understanding of the government's role in the context of the intervention.

Services (e-learning activities). Services relate to the e-learning activities based on pedagogical models, which can be open and/or distributed learning, learning communities, communities of practice, and knowledge-building communities. To determine which pedagogical model was used in the e-LHSPP, Behar's (2011) pedagogical model elements, shown in Figure 2.2, were a helpful guide.

Figure 2.2. Elements of a pedagogical model for e-learning. From "Constructing pedagogical models for E-learning," by P. A. Behar, 2011, International Journal of Advanced Corporate Learning, 4(3), p. 18.

The organisational aspects consisted of materials from a variety of sources needed to discover the goals of the programme, the roles of stakeholders, and reporting relationships. The content aspects related to the subjects that were piloted, the instructional material developed to assist students in preparing for their examinations, and the educational software supplied through the project. The methodological aspects allowed me to investigate the techniques, procedures, and e-learning resources used in the classroom.
They also allowed me to collect and analyse the administrative data and to select the most appropriate evaluation technique to measure the effects. Finally, the technological aspects focused on the e-learning infrastructure available to the schools, to ascertain its functionality and suitability for implementation.

Technologies (e-learning technologies). E-learning technologies facilitate the use of content and connect people regardless of their location and time. Two main areas are highlighted in the framework: communication and collaboration. Rogers (2003) defined communication as "a process in which participants create and share information with one another in order to reach a mutual understanding" (p. 18) and a channel as "the means by which a message gets from the source to the receiver" (p. 204). Communication provided the space in a virtual environment for people to collaborate; the space can be a discussion area, a forum, or a chat using asynchronous technologies. The use of computers was not new to Jamaica before 2006. What was new, however, was the scale and scope of the e-learning initiative, the new technologies and pedagogies, the people involved, and the coordinated, centralised effort to allow all public schools to participate in and benefit from the initiative.

In the next section, I analyse e-learning/ICT interventions in other countries using the LFA, determining what they did, whether their projects worked, their inputs and outputs, and their results, including failure or success.

2.4 Analysis of E-Learning in Other Countries

Several countries around the world have been introducing e-learning/ICT in schools to improve students' attainment. The use of ICT in the classroom as a means to improve students' attainment in secondary school is well established (Wagner, 2005). In this section, the reviewed studies primarily focus on how countries pursued this objective, interrogating their methods, analyses, and overall results. The studies examined took place in Lebanon, South Africa, Argentina, Nigeria, Australia, the United Kingdom, India, and Jamaica. They were instructive in demonstrating what was done in each jurisdiction and the lessons learned to inform future studies. The LFA was used as a guide to analyse the various projects and determine their results.

2.4.1 ICT in Lebanon

Nasser (2008) conducted a study of private and public Lebanese schools in the academic year 2005/2006, one year before the officially scheduled start of the e-LHSPP in Jamaica. The Lebanese study sought to evaluate the effectiveness of ICT on students' school performance. The study surveyed schools using predefined sets of ICT artifacts to understand the relationship between input (ICT) and output in student achievement. The ICT artifacts included computers, printers, servers, and email access. The focus here, however, is on the variables measuring students' performance and on how the study was conducted. The secondary data for the project came from one source, the Centre of Educational Development and Research (CEDR), a public research centre in the Ministry of Education in Lebanon. All Lebanese students sit four basic strands for the baccalaureate: humanities, sociology and economics, general sciences, and life sciences. According to Nasser (2008), ICTs are considered a resource, raising the question of whether differences in the resources of schools are associated with student achievement.
He posits that ICT use tends to reproduce a resource differential, and he was therefore able to assess the difference between public and private schools in terms of student output. The analysis of interest here is the comparison between private and public schools and between levels of ICT on student performance. The analysis revealed the percentage of students from both private and public schools who received a passing grade in the baccalaureate. Information was gathered on the quantity of ICT artifacts in schools and on personal computers (PCs) per student, with PC use divided into two groups, high PC levels and low PC levels. Two main analyses were done. The first used the baccalaureate national exams to compare performance based on PC usage level crossed with school type (private/public), passed through a 2x2 factorial Analysis of Variance (ANOVA) on the percentage passing the baccalaureate exam; the factorial design was applied to each of the four strands separately to ascertain the relation between PC use and the school output measure. The results revealed no significant differences in any baccalaureate strand between high and low PC levels per student; in other words, the difference between low and high PC usage was not significant. The second analysis crossed ICT artifact levels per student with school type (public/private), again using a 2x2 ANOVA. These results also revealed that "neither computers nor ICT as an aggregate sum of all the 8 artifacts produced a significant main effect on the percentage of passing in the baccalaureate exam in all four tracks based on whether the school is private or public" (Nasser, 2008, p. 72). This was so even though private schools had a higher ratio of computers and ICT artifacts to students. The study concluded that there was no ICT effect on students passing the baccalaureate in secondary schools.
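The 2x2 factorial design Nasser describes can be sketched briefly. The following is a minimal illustrative example with hypothetical pass rates (the CEDR data are not reproduced here), showing how the two factors and their interaction would be tested:

```python
# A minimal sketch of the 2x2 factorial ANOVA described above: pass rate
# by PC-usage level (high/low) crossed with school type (private/public).
# The pass-rate figures are hypothetical, for illustration only.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

schools = pd.DataFrame({
    "pass_pct": [62, 58, 65, 60, 55, 57, 63, 59],
    "pc_level": ["high", "high", "high", "high", "low", "low", "low", "low"],
    "school_type": ["private", "public", "private", "public",
                    "private", "public", "private", "public"],
})

# Two-way ANOVA with interaction: main effects for PC level and school
# type, plus their interaction, on the percentage passing one strand.
model = ols("pass_pct ~ C(pc_level) * C(school_type)", data=schools).fit()
print(sm.stats.anova_lm(model, typ=2))
```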
2.4.2 ICT in South Africa

In their study conducted in the Education Management and Development Centre (EMDC) East education district of Cape Town, South Africa, Smith and Hardman (2014) sought to ascertain the impact of computer immersion on school leavers' senior certificate mathematics scores. The intervention, known as the Khanya Project, supplied computers, computer laboratories, numeracy and literacy software, ICT teacher training, and technical support to schools. The rationale for the study was the crisis South Africa faced in mathematics education at both the primary and secondary levels from 2001 to 2013. The researchers compared the performance of two groups of students: one group (experimental) was immersed in the use of computers and software in mathematics as early as 2001, while the other (control) started using ICT in 2006. The assumption was that the experimental group would be more immersed in ICT, which would compensate for low-capacity teachers, and the hypothesis was that computers would positively impact student performance. Their conceptual framework drew on Vygotsky's notion of learning tools, in this case computer software and hardware that impact cognitive development.

Many research questions were tendered, but the main one was: "Have Matric Mathematics results in EMDC East high schools improved since the beginning of the Khanya intervention?" This question gave me an understanding of another way to analyse ICT/e-learning effects on students' attainment. The quantitative approach used by Smith and Hardman involved secondary data categorised and accessed via the Western Cape Education Department (WCED) database and Khanya. The data collected from the WCED included Matric Mathematics results for each school in the sample, divided into higher and standard grades; the data collected from Khanya included a list of all the EMDC East schools, the rollout phases, and the installation dates of the computer laboratories and software. To calculate the mean student score (MSS), Smith and Hardman used a system adopted from the University of Cape Town (UCT) that assigned symbols to the Matric results and converted grades to points, as shown in Table 2.6.

Table 2.6
Grade Conversion to Points: UCT's Admission Rating System for the South African School Leaving Qualification

Academic level    A   B   C   D   E   F
Higher Grade      8   7   6   5   4   3
Standard Grade    6   5   4   3   2   1

Table 2.6 was used to convert grades to points and to calculate the average score for each group: for each year and phase of the ICT intervention, the number of A grades, B grades, C grades, and so on received by the students in each group was multiplied by the corresponding UCT points; the products were summed and the total divided by the total number of students who wrote Matric Mathematics, yielding the mean student score. In response to the research question of interest here, they compared Matric Mathematics results before (2003) and after (2007) the Khanya intervention for all 11 schools using a paired samples t-test. These 11 schools were representative of the entire population of schools with long-term exposure to ICT. The paired samples t-test was conducted to evaluate the impact of the Khanya intervention on the MSS. Their analysis revealed no statistically significant change in the MSS before Khanya (M = 0.955; SD = 0.382) and after Khanya (M = 0.827; SD = 0.486), t(10) = 0.958, p = 0.361. The effect size, calculated using Cohen's (1988) criteria, was 0.084, indicating a very small effect. The results indicate that the Khanya intervention did not bring about a significant improvement in the overall Matric Mathematics results for the EMDC East high schools.
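The MSS calculation and the paired comparison can be sketched as follows. The grade counts and per-school scores below are hypothetical stand-ins for the Khanya data, and the paired-data version of Cohen's d (mean difference over the standard deviation of the differences) is one common variant, not necessarily the exact formula Smith and Hardman used:

```python
# A minimal sketch of the mean-student-score (MSS) calculation from
# Table 2.6 and the paired-samples t-test described above. All numbers
# here are hypothetical; they are not the Khanya data.
from scipy import stats

# UCT admission points for higher-grade symbols (Table 2.6).
HIGHER_GRADE_POINTS = {"A": 8, "B": 7, "C": 6, "D": 5, "E": 4, "F": 3}

def mean_student_score(grade_counts):
    """Sum of (points x count) divided by the number of students."""
    total_points = sum(HIGHER_GRADE_POINTS[g] * n for g, n in grade_counts.items())
    return total_points / sum(grade_counts.values())

print(mean_student_score({"A": 2, "B": 5, "C": 10, "F": 3}))  # example school

# Hypothetical per-school MSS for 11 schools before (2003) and after (2007).
before = [0.9, 1.2, 0.7, 1.1, 0.8, 1.0, 1.3, 0.6, 0.9, 1.0, 1.0]
after  = [0.8, 1.1, 0.6, 1.0, 0.9, 0.8, 1.2, 0.7, 0.8, 0.9, 0.8]

t_stat, p_value = stats.ttest_rel(before, after)  # paired-samples t-test

# Cohen's d for paired data: mean difference / SD of the differences.
diffs = [b - a for b, a in zip(before, after)]
mean_diff = sum(diffs) / len(diffs)
sd_diff = (sum((d - mean_diff) ** 2 for d in diffs) / (len(diffs) - 1)) ** 0.5
print(t_stat, p_value, mean_diff / sd_diff)
```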
2.4.3 ICT in Argentina

In 2010, Argentina implemented its ICT and education programme, Conectar Igualdad ("Connecting Equality"), to improve the quality of education in secondary schools. The programme was expected to guarantee ICT access and use by distributing laptop computers to all students and teachers at secondary schools, special education schools, and government-run teacher training institutions. Alderete and Formichella (2016) sought to identify improvements in academic achievement accruing to the students who benefited from the Conectar Igualdad programme compared with their non-beneficiary peers. They opted for a quasi-experimental design because individuals were not randomly assigned to the treatment group and characteristics specific to the schools could not be isolated.

Secondary data were obtained from the PISA test, administered every three years since 2000 to assess the competencies of 15-year-old students in various countries. The study drew on the 2012 data set, which contained information gathered after the implementation of the Conectar Igualdad programme. A total of 1,922 students were selected, of whom 949 entered the analysis: 557 in the treatment group, who had participated in the Conectar Igualdad programme, and 392 in the control group. The researchers selected the propensity score matching (PSM) technique of Rosenbaum and Rubin (1983), which they considered a well-known methodology for impact evaluation with treatment and control groups. PSM is a method for addressing selection bias and moving toward more causal estimates. Alderete and Formichella (2016) summarised PSM as follows. The probability that a student will receive treatment is estimated and expressed as the student's score, and the sample is divided into the treatment group and the control group. Each individual in the treatment group is then matched with an individual in the control group with a similar score (probability or propensity); an individual in the control group may be matched with more than one individual in the treatment group. The difference in the levels of academic achievement in each pair and the average difference for the sample as a whole are calculated, and the result is known as the average treatment effect (ATE). Finally, the standard error of the difference between each pair allows a t-test against the null hypothesis of a null ATE; the result determines whether the hypothesis is rejected or accepted.

Results from the study revealed a statistically significant difference in average academic achievement between the students who participated in the Conectar Igualdad programme (received the treatment) and those who did not. Though significant, the average size of the effect was small. There were limitations to this study: the researchers did not document the rate of computer usage in the treatment group or the teachers' competencies. It is not enough to count the number of computers available or the degree of access to ICT; the quality of the time spent using them is important.
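The matching procedure just summarised can be sketched in a few steps. This is a minimal illustrative implementation on simulated data, matching each treated student to the nearest-propensity control (which estimates the effect on the treated, one common PSM variant), and it omits the refinements, such as matching-quality checks, that a real evaluation would need:

```python
# A minimal sketch of propensity score matching (PSM) as summarised above,
# on simulated data: estimate propensity scores, match treated students to
# the nearest-propensity controls, then average the paired differences.
import numpy as np
from scipy import stats
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
n = 400
X = rng.normal(size=(n, 2))            # covariates (e.g., SES, prior score)
p_treat = 1 / (1 + np.exp(-X[:, 0]))   # selection depends on covariates
treated = rng.binomial(1, p_treat)
y = 0.3 * treated + X @ np.array([0.5, 0.2]) + rng.normal(size=n)  # assumed 0.3 effect

# 1. Estimate each student's propensity score P(treated | X).
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# 2. Match each treated student to the control with the nearest score;
#    a control may be matched to more than one treated student.
trt_idx = np.flatnonzero(treated == 1)
ctrl_idx = np.flatnonzero(treated == 0)
nn = NearestNeighbors(n_neighbors=1).fit(ps[ctrl_idx].reshape(-1, 1))
_, match = nn.kneighbors(ps[trt_idx].reshape(-1, 1))
matched_ctrl = ctrl_idx[match.ravel()]

# 3. Average the paired achievement differences and t-test against zero.
diffs = y[trt_idx] - y[matched_ctrl]
t_stat, p_value = stats.ttest_1samp(diffs, 0.0)
print(diffs.mean(), t_stat, p_value)
```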
2.4.4 ICT in Nigeria

A study in Nigeria sought to discover the effects of ICT on secondary school students' academic performance in Christian Religious Studies (CRS). According to Ikwuka and Henry (2017), the subject is important in that it "tries to foster peace and unity among diverse cultures in the society as well as enhance growth and development in general" (p. 377). The problem faced by educators was the poor performance of students, as reported by the West Africa Examination Council, which was attributed to the lack of ICT use and primarily to the constant use of the conventional (traditional) method of teaching. Two research questions were put forward, but the focus here is on the one of importance to this study: "Is there any difference in the academic performance of SS2 (Grade 11) students who were taught CRS using ICT and those taught with conventional method?" Their hypothesis was: "There is no significant difference in the academic performance of students who were taught CRS with ICT and those taught with conventional method" (p. 379). The researchers used a quasi-experimental pretest-posttest control group design on intact classes. A sample of 73 SS2 students was selected from two of the 14 public secondary schools, and the two schools from which the two intact classes were chosen were selected using the simple random sampling technique. The experimental group consisted of 35 students and the control group of 38. The two intact classes were first pre-tested with the Christian Religious Studies Achievement Test, and their means and standard deviations were calculated: mean scores of 13.14 and 12.68 for the experimental and control groups, with standard deviations of 2.28 and 2.40, respectively. The study lasted six weeks, the normal school timetable period allotted for CRS. The experimental group was taught using an ICT instructional package and the control group with the conventional method of teaching. The items used for the pre-test were rearranged and used for the post-test, and a t-test was applied to test the hypothesis at the 0.05 level of significance. The post-test comparison between the experimental and control groups revealed a t-value of 3.14, greater than the critical t-value of 1.93 at the .05 significance level. The researchers rejected the hypothesis and concluded that students taught CRS with ICT performed better and made higher achievement gains than those taught with the conventional method.

This study was instructive for the gains made by the experimental group. The experimental group scored an average of 13.14 on the pre-test and 16.20 on the post-test, a mean gain of 3.06. The control group scored an average of 12.61 on the pre-test and 14.79 on the post-test, a mean gain of 2.18. The difference between the two groups' gains works out to 0.88 in favour of the treatment group, as the worked calculation below shows. One of the major recommendations put forward by the researchers to the government was to provide sufficient funds to secondary schools to enable CRS teachers to use ICT tools for teaching. A 0.88-point increase is small, consistent with the other studies showing very small increases.
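The 0.88-point figure is the difference between the two groups' pre-to-post gains, the same gain-comparison logic that underpins the difference-in-differences approach used elsewhere in this thesis:

```latex
\begin{align*}
\Delta_{\text{experimental}} &= 16.20 - 13.14 = 3.06\\
\Delta_{\text{control}}      &= 14.79 - 12.61 = 2.18\\
\Delta_{\text{experimental}} - \Delta_{\text{control}} &= 3.06 - 2.18 = 0.88
\end{align*}
```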
2.4.5 ICT in Australia

Chandra and Lloyd (2008) conducted and reported on a study at a co-educational state secondary school in Queensland, Australia to determine, among other questions, students' performance in a blended e-learning environment. The study took place during the same year the e-LHSPP was implemented in Jamaica and so serves as a good source of comparison between the two; the section of the study dealing with students' performance is directly related to this research. The study was conducted over two years using successive cohorts of Year 10 science students aged 15 to 16. The research used both quantitative and qualitative approaches, but the quantitative results address the research question here. Cohort 1 (n = 210) was the control group and Cohort 2 (n = 232) the treatment group, which studied in a blended e-learning environment. Both cohorts studied chemistry in Term 1 using traditional pedagogy and were assessed the same way. In Term 2, both cohorts studied physics, but Cohort 1 continued with the traditional pedagogy while Cohort 2 adopted the blended e-learning environment. Performance data at the end of each term were used to compare the cohorts and the pedagogical approaches used.

The means from the tests were compared using a paired sample t-test, after which the researchers rank-ordered the boys' and girls' results from the Term 1 unit and divided them into quartiles. The Term 1 results were then compared with the Term 2 results using a paired sample t-test. The overall results for boys in the treatment group showed an improvement in mean score from 70.1 to 75.2, a mean difference of 5.1, with the standard deviation narrowing from 21.3 to 17.5, while the control group declined from 66.4 to 64.1, a mean difference of -2.3, with the standard deviation moving from 23.1 to 21.8. The results for the girls were similar in that, although both groups declined, the treatment group's results on the physics test were higher. Boys and girls in each quartile of the treatment group performed better overall as a result of the ICT intervention, but the improvement was not equal for each student and differed across the quartile groups. The discussion surrounding this research shows how problematic it is to isolate the e-learning environment from other potential variables that could influence the results; there could be a "halo" effect when students and teachers who are part of a programme of study become enthusiastic about learning. Studies like these are difficult to generalise because the learning environments are complex.

2.4.6 ICT in the United Kingdom

Several studies conducted in the UK sought to determine the impact of ICT on students' learning and attainment. Few of these studies sought to determine the causal relationship between ICT use and students' attainment in terms of test results or standardised tests: some reported improved educational outcomes such as motivation, interest, attitude, effectiveness, and meaningfulness, while others based their results on teachers' or students' perceptions. Since the interest here is in measurable outcomes, the focus is on those studies that provided quantitative results. A review of the literature on ICT and attainment by Cox et al. (2003) for the British Educational Communications and Technology Agency (Becta), on behalf of the Department for Education and Skills (DfES), revealed substantial evidence of positive effects of ICT on attainment in almost all curriculum subjects, especially mathematics, English, and science, at all key stages. The UK also carried out a very comprehensive investigation into the impact of ICT on educational attainment between 1999 and 2002, involving 2,179 students in 60 schools in England, of which 30 were primary, 25 secondary, and 5 special schools, spread across urban, suburban, and rural areas. That study, known as ImpaCT2, was reviewed by Harrison et al. (2002), who focused on Strand 1, "Analysis and interpretation of national test data in relation to school rating for ICT" (p. 6). According to Harrison et al. (2002), one aim of the ImpaCT2 study was to determine whether or not the ICT intervention impacted the educational attainment of students aged 8 to 16 years (at Key Stages 2, 3, and 4). The method used by ImpaCT2 to identify the impact of ICT on attainment in National Tests and GCSEs was prediction from baseline data followed by analysis of the actual results: each student's actual achievement was compared with the achievement predicted from their baseline scores.
The comparison produced a "relative gain score" for each student, interpreted as negative if the student did worse than predicted, zero if the student did as predicted, and positive if the student did better than expected. Several benefits have been cited for this method, notably that relative gain scores can be set against different indicators of ICT use, capturing the relationship between the use of ICT and performance in National Tests. The results of the ImpaCT2 study revealed improvement at Key Stage 2 (a statistically significant positive association between ICT and National Tests for English, with positive associations also found for mathematics), Key Stage 3 (a statistically significant positive association between ICT and National Tests for science), and Key Stage 4 (statistically significant positive associations between ICT and GCSE science and GCSE design and technology, while the positive associations in GCSE modern foreign languages and geography did not reach statistical significance) (Harrison et al., 2002). These results suggest that ICT use can have a positive impact on subject-based learning. Key Stage 4 was the level of particular interest because it covers school Years 10 and 11, ages 15 to 16, when students sit GCSEs, similar to Jamaica's CXC CSEC examinations. A closer look at the Key Stage 4 subjects showed improvement, but how much of an improvement is in question. The relative gain scores were converted to the equivalent GCSE grade for each subject on the scale A* = 8, A = 7, B = 6, C = 5, D = 4, and so on. Table 2.7 shows the relative gain scores for high and low ICT users. When the difference was set against the GCSE scale, it did not translate into a grade-level movement for the schools with high ICT use: the GCSE grade for both high and low ICT use would remain the same. Though some may consider this an improvement, it would not move a student's GCSE grade up by one level.

Table 2.7
Mean Relative Gain Score for High and Low ICT Users by Subject

ICT use      English   Maths   Science   Geog.   History   MFL    D&T
High ICT     5.19      4.84    5.19      5.42    5.30      5.21   5.07
Low ICT      5.06      4.82    4.63      5.05    5.27      4.39   4.66
Difference   0.13      0.02    0.56      0.37    0.03      0.82   0.41
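The relative-gain logic is simple to state in code. The sketch below is purely illustrative: the baseline scores and the linear prediction model are hypothetical, not the ImpaCT2 model, and it only shows the actual-minus-predicted calculation and how a difference such as the 0.56 for science in Table 2.7 reads against the one-point-per-grade GCSE scale:

```python
# A minimal sketch of the "relative gain score": actual GCSE points minus
# the points predicted from baseline scores. The data and the prediction
# model here are hypothetical, not the ImpaCT2 model.
import numpy as np

baseline = np.array([45.0, 52.0, 60.0, 38.0])  # hypothetical baseline scores
actual = np.array([5.4, 5.0, 6.1, 4.2])        # actual GCSE points (A* = 8 ... D = 4)

predicted = 0.08 * baseline + 1.0              # hypothetical linear prediction

relative_gain = actual - predicted             # >0: better than predicted
print(relative_gain)

# One whole GCSE grade equals one point on this scale, so the 0.56
# high-versus-low difference in science (Table 2.7) is about half a grade.
```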
Researchers Machin et al. (2007) also conducted a study in the UK to identify the causal impact of ICT expenditure on student outcomes. They reported that the UK government believed ICT in schools was crucial to raising standards and that ICT should be widely used across the whole curriculum and in all public schools. The considerable investment in ICT by the UK government in secondary and primary schools, rising from 108 million pounds in 1998/9 to 349 million pounds in 2002/3, led the researchers to ask whether it was a good use of public money. The survey group consisted of 25% of secondary schools and 6% of primary schools. The major focus of the research was "how a change in the rules governing ICT funding led to changes in ICT investment and subsequently change the educational outcome" (p. 1146); the focus here is on the section dealing with ICT investment and educational outcomes. To conduct the study, Machin and colleagues (2007) relied on administrative data at the level of the Local Education Authority from 1999 to 2003. To protect against what they termed "endogeneity problems" relating ICT to students' achievement, the researchers used a policy change that took place in 2001 and created an instrumental variable (IV) strategy to identify the causal impact of ICT expenditure. They used a quasi-experimental setting to estimate the effect of a given treatment status, and students at Key Stage 2 (age 11) in English, Maths, and Science were included in the study. The results revealed that ICT expenditure led to significant improvement for students aged 11 in English and Science tests but not in Mathematics. The absence of an effect in mathematics is concerning, given that the same expenditure was made for the other subjects, and no explanation was offered as to why this occurred.

2.4.7 ICT in India

A study by Banerjee et al. (2007) showed that the use of ICT can positively impact students' attainment. The study evaluated ways to improve the quality of education in urban slums, presenting supporting evidence from a randomised experiment in the slums of Vadodara, India, in which ICT was used to remedy numeracy skills among children who were lagging in their studies; the intervention proved quite effective. Computer Assisted Learning (CAL) was introduced in 2002 to 2003 (Year 2) and continued in 2003 to 2004 (Year 3). The students in Grade 4 were offered two hours per week of computer time, during which they solved maths problems by playing a variety of educational computer games matched to their ability levels. At the time of the study, Banerjee and colleagues opined that very little rigorous evidence on educational outcomes existed, nor was there any reliable evidence for India and other developing countries, and the studies from other developing countries that did exist were not always encouraging. To determine whether the intervention resulted in any improvement in students' attainment, learning was measured annually using pre-tests at the beginning of the school year and post-tests at the end of the term, with pretest-posttest scores for students in the treatment group normalised. The difference-in-difference (DiD), a regression of test-score gain on a dummy for treatment school, controlling for the initial pre-test score, was used to arrive at the results. The CAL programme had a strong effect on maths scores for both years combined, 0.378 points: 0.366 points above the control group in 2002 to 2003 (Year 2) and 0.443 points above the control group in 2003 to 2004 (Year 3). The DiD results were obtained from Table 8 of the researchers' study. The researchers concluded that the CAL programme had a strong effect on maths scores in the short term.
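The gain-on-treatment regression described above, which is also the core of the DiD technique used in this thesis, can be sketched on simulated data as follows; the 0.4 effect built into the simulation is an arbitrary illustration, not the study's estimate:

```python
# A minimal sketch of the DiD-style regression described above: test-score
# gain regressed on a treatment-school dummy, controlling for the pre-test.
# All data are simulated; the built-in 0.4 effect is arbitrary.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 500
treat = rng.binomial(1, 0.5, n)                             # treatment-school dummy
pretest = rng.normal(0.0, 1.0, n)                           # normalised pre-test score
posttest = pretest + 0.4 * treat + rng.normal(0.0, 0.5, n)  # simulated post-test

data = pd.DataFrame({
    "gain": posttest - pretest,  # test-score gain
    "treat": treat,
    "pretest": pretest,
})

# gain_i = a + b*treat_i + c*pretest_i + e_i ; b estimates the treatment effect.
fit = smf.ols("gain ~ treat + pretest", data=data).fit()
print(fit.params["treat"], fit.pvalues["treat"])
```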
2.4.8 ICT/E-learning in Jamaica

Morrison's (2016) study, entitled "e-Learning High School Project Evaluation: Final Report," was downloaded from the MoEYI website. Morrison conducted a summative evaluation to assess the efficiency and effectiveness of the implementation of the project. One of the consultant's specific tasks was to review the effects of the e-LHSP on the performance of students' aggregate scores in the subjects selected for the project, in the school-based examinations at each grade and in the CXC CSEC school-leaving examinations. A multi-method approach combining quantitative and qualitative methods was used with a sample of 45 project schools, including 5 schools from the pilot, selected based on size, type, and location. Altogether, 23 traditional high schools and 21 non-traditional high schools were selected, in addition to one community college.

To ascertain the performance of the sample schools in external examinations over the project period, data on CXC passes per sample school, course test results, sector reports, and internal policy documents on e-learning were used. The quantitative data were analysed using simple comparative analyses across schools, trend analysis, and measures of central tendency and dispersion. The qualitative analysis included cause-effect analysis, chronologically assessing the activities to ascertain the root causes of component performance and their impacts on beneficiaries. The report revealed that between 2006 and 2014 the sample schools showed a significant increase in passes in Mathematics, by 32 percent, and in English language, by 20 percent. When the schools were divided into traditional and non-traditional high schools, all 11 subjects showed annual increases of 2.6 percent for non-traditional and 1.2 percent for traditional high schools, indicating that non-traditional high schools had the higher increases. The overall annual increase in the sample schools was 8.73 percent over the project period from 2006 to 2014. The report gave specific percentage passes only for Mathematics and English language, for the period 2007 to 2014; the figures between 2008 and 2010 were useful for the period of this study. Based on calculations from Chart 7, Mathematics passes were approximately 36 percent in 2008, 39 percent in 2009, and 48 percent in 2010, while English language results were approximately 59 percent in 2008, 68 percent in 2009, and 70 percent in 2010. Morrison's results were problematic, as they failed to separate and account for the two major periods of the project, first the e-LHSPP and second the implementation phase of the e-LHSP; instead, the results were merged and presented as one outcome. There was also no comparison of the pilot schools' performance with the non-pilot schools' performance to determine the impact of the e-LHSPP, and only a small sample of 5 pilot schools was included. An interrogation of the results for 2008 to 2010 also revealed that other factors that may have affected the results were not taken into account, so it would be difficult to claim that the improvements were solely the result of the e-LHSP.

2.5 E-learning in Developing Countries

E-learning has been introduced in developing countries to meet the demands on governments to provide better education, but there has always been a shortage of resources to meet schools' needs adequately. The major challenges faced by developing countries are usually a lack of expert teachers in ICT, weaknesses in content delivery and learning materials, and inadequate network infrastructure (Aung & Khaing, 2016). Africa was explored because, outside of the Caribbean, the continent is closest to Jamaica in demographics, economy, politics, and culture. Africa is considered an emerging market for e-learning, but the continent continues to lag behind other developing countries (Nhando, 2015). Nhando identified three key challenges in implementing e-learning in Africa, the first being internet access and connectivity.
According to Nhando (2015), the UN Broadband Commission reported that 8 of the 10 countries with the lowest levels of internet availability in the world are in sub-Saharan Africa (Ethiopia, Niger, Sierra Leone, Guinea, Somalia, Burundi, Eritrea, and South Sudan), with internet penetration at less than two percent of the population. As in Jamaica, it is very expensive to provide students with internet access, especially in rural areas. Secondly, there is the issue of the availability of locally developed content and curriculum online. In Africa, the majority of textbooks used in tertiary institutions are from the USA and the UK, there has been no consistent drive to develop local content, and the lack of content developed by Africans creates language barriers, especially for the younger population. The situation in Jamaica is the same: most textbooks, particularly on e-learning/ICT, come from the US and the UK, and hardware, software, and digital media are all sourced from overseas. Finally, with regard to training and professional development, teachers in Africa are reported to have limited exposure to technology, making it difficult for them to utilise technology to engage and support learning. Efforts are being made, according to the study, to train teachers through private partnerships, but this has proven difficult. Jamaica had the same issue during the e-LHSPP, when a massive effort was undertaken to train thousands of teachers in ICT skills, yet it has proven difficult to get teachers to use technologies other than PowerPoint. My IFS study in 2016 revealed that student teachers from three categories of teacher training institutions were mainly using PowerPoint to deliver their lessons, and that in their teaching practicum the trained teachers were doing the same thing. The COVID-19 pandemic has since forced teachers in Jamaica to utilise technology in the classroom in new ways. In the next section, I discuss e-learning/ICT and educational outcomes.

2.6 E-learning/ICT and Educational Outcomes

Despite its widespread use, the debate is ongoing as to whether the adoption of e-learning for educational purposes in schools can improve students' educational outcomes, particularly attainment in school subjects. There is a dearth of literature that either proves or disproves that the adoption of e-learning has changed educational outcomes in Jamaica and, by extension, the Caribbean. Balanskat et al. (2006) provided insight into how an intervention can be viewed, contending that "impact is the overall achievement of an intervention on the educational system and can be described by a variety of quantitative indicators such as 'improvements in national test' results or 'improved learning in schools' depending on the policy target" (p. 24). According to them, impact is also "the end-point of an intervention involving input, process, output, and outcome, and to isolate the variable that caused the impact is problematic in education" (p. 24). A search for studies ascertaining the effects of e-learning in education revealed that most impact studies did not explicitly address impact. A review of studies on the impact of ICT on schools in Europe was carried out by Balanskat et al. (2006) during the time the e-LHSPP was in its infancy; one aim of their study was to provide an overview of impact studies and the areas where impact has been shown.
The methodology used to conduct the study was to first identify recent studies carried out at the national and European levels relating to measuring and demonstrating the impact of ICT. Secondly, studies were selected based on a selection criterion. Thirdly, procedures for reviewing the research were established to ensure a systematic and relevant approach. Finally, specific thematic issues were agreed on for examination. The aspect of interest in this study is "Impact on learners and learning outcomes." The study found that of the seventeen impact studies and surveys carried out at national, European, and international levels, only three considered the concept of "impact" on student attainment explicitly. This shows that around the time the e-LHSPP was in its initial stages in 2006, only three projects in Europe had studied "impact". The first study in the Balanskat et al. (2006) report was the ImpaCT2 project discussed in the section "Using ICT in United Kingdom." The second study in their report was the University of Newcastle's 2002 study to evaluate the "Embedding ICT in the Literacy and Numeracy Strategies" pilot project. The University of Newcastle's study, reported by Higgins et al. (2005), examined the pilot project in Year 5 and Year 6 classes in 12-15 schools in each of six Local Education Authorities (LEAs), which ran from the autumn of 2002 to the summer of 2004. The focus of this study was the impact on students' attainment. The researchers wanted to find out how interactive whiteboard (IWB) project classes performed compared with a sample of similar schools, using attainment data from Y6 national tests in 2003 and 2004. The method of analysis at the school level was the use of descriptive statistics: To compare the mean progress of IWB and non-IWB schools, the 2002 scores for all schools in the six LEAs were used to predict scores for 2003 and 2004. Standardised residuals for the two groups of schools (measuring how far each school's results differ from the prediction) were then compared by t-test. (p. 10) The aggregate results in 2003 showed a small gain in English, mathematics, and science, with a small effect size (Cohen, 1988) of 0.09. The small gain made by the IWB pilot schools was not sustained in the second year of implementation, 2004. The IWB aggregate results showed marginally less progress than the non-IWB schools, with an effect size of -0.10. The conclusion drawn was that the non-IWB schools made marginally more progress in English, Maths, and Science than the IWB schools. This led to questions about the contribution of ICT to learning, because some teachers claimed that it was good teaching rather than technology alone. The third project mentioned was another UK study, the "Test Bed Project," conducted between 2002 and 2006, which sought to investigate how the sustained and embedded use of ICT in learning spaces can improve learner outcomes. The University of Newcastle only supplied results for 2002 and 2004. Another study, by Somekh et al. (2007), gave a complete evaluation between 2002 and 2006. This was used to present the results of the Test Bed Project at the secondary level, since that is the education level of interest in my study. The findings revealed that the impact of ICT on attainment levels was smaller in secondary schools and greater in primary schools.
A major reason given is the timetabling and room-changing structure imposed by the curriculum in secondary schools, which is mainly determined by external authorities, examination boards, and central strategy creators. The study by Balanskat et al. (2006) showed that in Europe, up to the time of the e-LHSPP, only three studies matched the definition of "impact" relating to students' attainment, and these studies did not show any large increase in attainment as expected, even though different methods of analysis were used in each study. In some cases, the margin of progress declined during the intervention. A common trend was that primary school interventions were more favourable than those at the secondary level. Other international studies highlighted mixed results for ICT intervention. For example, Michael Trucano, the World Bank's Senior Education & Technology Policy Specialist and Global Lead for Innovation in Education, analysed the impact of large-scale tablet and laptop initiatives in the USA, Uruguay, Thailand, Peru, Kenya, Rwanda, Turkey, India, Argentina and Portugal (Trucano, 2013). The results revealed a small or undetermined impact. Up to 2014, studies in sub-Saharan Africa showed little evidence of positive effects of ICT intervention, and what little evidence existed suggested that ICT programs were not often effective (Piper, 2014). Nevertheless, other researchers have identified a link between the use of ICT and improved academic performance, as ICT intervention did improve students' performance in terms of test scores (Chandra & Lloyd, 2008; Taylor et al., 2007). The next section will look at a few studies carried out in Kenya.

2.6.1 ICT Context in Kenya

Like Jamaica, Kenya is classified as a developing nation based on the World Bank classification. Similarity can also be found in the education system from primary to tertiary and in the push to improve the ICT infrastructure of the country. At the time Kenya was implementing its ICT policy in 2006 (Farrell, 2007), Jamaica was rolling out its e-LHSPP in secondary schools. The Government of Kenya saw the need for ICT and developed a National ICT Policy in January 2006 (Farrell, 2007). The section of the policy on the objectives of ICT in education sought to encourage "… the use of ICT in schools, colleges, universities, and other educational institutions in the country to improve the quality of teaching and learning" (Farrell, 2007, p. 3). By June 2006 the National ICT Strategy for Education and Training was introduced in Kenya. A few studies carried out in Kenya between 2007 and 2017 were examined to show how Kenya had evolved in ICT in education and to gain insights into possible issues that could also affect Jamaica. While Jamaica had decided to reform education with the use of ICT (Paulwell, 2005), the country never had the opportunity to evaluate existing projects and learn from them. Instead, a feasibility study was done and, on that basis, Jamaica launched its e-LHSP. At around the same time, in 2005, Wims and Lawler (2007) were conducting their study in Kenya with the overall aim to "evaluate the implementation of ICT projects in selected educational institutions in Kenya with a view to making recommendations on how such projects should best be deployed and supported" (p. 10). They argued that up to this time, despite the growth of ICT in Kenyan schools, there had been little evaluation of their effectiveness.
The sample included three educational institutions: two secondary schools and one agricultural college. Wims and Lawler used a multi-method research design combining quantitative and qualitative methods through a survey, interviews, questionnaires, and observation. They scrutinised the programmes of teaching and learning with ICT using data from personal observations, documentary analysis, and key informant interviews. The key finding of interest for this study was the impact of exposure to ICT in schools. Wims and Lawler did not provide any quantitative results regarding impact but focused on tangible benefits, such as students acquiring short-term work in the area of IT and career choices. What the researchers found surprising was that some students chose careers in computing that were not part of the syllabus content. The authors attributed the students' choices to the lack of Internet access in schools. The issues facing the schools were the need for staff training, access to ICT across the curriculum, more computer equipment for staff and students, and the development of relevant software and Internet access. Similar issues were facing Jamaican schools at the same time, but neither country included a comprehensive plan for students' engagement in ICT, which is imperative for success. Mingaine (2013) found that the high acquisition and maintenance cost of ICT equipment impeded the adoption and integration of ICT in schools in Kenya. Access to internet services was expensive for computer users in schools. Some school management teams did not prioritise ICT, and there was an inadequate supply of trained teachers to implement ICT in schools. Another study, by Kamau (2014), substantiated that despite the many years since the massive investment in ICT in Kenya, the level of competence of teachers in the use of ICT as a pedagogical tool was still low. Kamau indicated the "need for technology training for Kenyan teachers to refine their technical skills, technology pedagogical knowledge, perceptions, attitudes, and confidence to adopt technology" (p. 28). Francis et al. (2017) investigated the factors that influenced the implementation of ICT in a sub-county in Kenya and found that factors related to schools' vision of ICT policy had the highest negative impact on implementation. The second factor was the cost of the ICT infrastructure, in particular the cost of computers and of installing the internet in schools. While these studies suggest that Kenya might not have experienced the intended impact of ICT in education, they provided valuable information for investigating the effects of the e-LHSPP in Jamaica. Moreover, Kenya's new draft National ICT Policy 2016 was published with positive vision and mission statements that envisioned the country as "a prosperous and competitive ICT-driven Kenyan society" geared towards improving "the livelihoods of Kenyans by ensuring the availability of accessible, efficient, reliable, affordable and secure ICT services" (Republic of Kenya, 2016). The challenges that Kenya encountered were similar to those encountered in the e-LHSP up to the time of the Morrison (2016) report on Jamaica. It appears that introducing ICT/e-learning, particularly in secondary schools, with the expectation of significantly improving students' attainment is difficult.
2.7 Summary

Studies around the world have yielded mixed results regarding the benefits of ICT/e-learning, but when the impact on students' attainment is studied, particularly in developing countries, the results show little or no improvement. Developing countries find it difficult to implement e-learning primarily because of a lack of funding, leadership in schools, and resources, including teacher training and infrastructure, and because of opposition from educators who prefer teacher-centred learning to student-centred learning. Despite the difficulties of implementing e-learning, there are tangible benefits to be gained from it. The more prominent benefits are associated with students' learning, such as individualised instruction, improved student interest, easy access to learning materials, heightened interaction with teachers, and instantaneous feedback and evaluation. The success of any e-learning implementation programme depends on an effective way of measuring its outcome, and studies have shown that the most common definition of effectiveness is learning outcome. Studies have shown differences in the applicability of e-learning for educational purposes, and several studies that were said to be measuring impact really measured other factors, such as students' motivation and interest. Schools and learning environments are different and complex, which makes it difficult to draw global conclusions from any single setting. The methodological problems that beset research into e-learning in education, especially as it pertains to students' attainment, are shown in the literature review. Several methodologies were described to show the various ways researchers are seeking to prove or disprove that ICT/e-learning can positively impact learning outcomes. Most of the methodologies used the quantitative approach. Examples of the types of analysis used were (a) passing secondary data through a 2x2 factorial Analysis of Variance (ANOVA), (b) assigning symbols to secondary-data results and analysing them with the non-parametric Mann-Whitney U test, (c) paired samples t-tests, (d) the Propensity Score Matching (PSM) technique of Rosenbaum and Rubin (1983), (e) using students' "baseline" scores rather than their actual achievement to produce "relative gain scores" for each student, (f) difference-in-differences, (g) the Instrumental Variable Strategy, and (h) cause-effect analysis. The various approaches used to establish a causal relationship between the use of e-learning and educational outcomes show the challenges faced by researchers. Compared with the others, the difference-in-differences model provided a more robust approach to investigating e-learning effects on learning outcomes at the macro level.

Chapter 3 Research Methodology

This study investigated the effects of the government's e-Learning High School Project Pilot (e-LHSPP) on students' attainment in their school leaving examination. This chapter addresses the research questions, hypotheses, research design, population, sample size, data collection, key variables, data analysis, e-learning technologies used, and ethical issues. The Jamaican government implemented the e-LHSPP in a set of public and bursar-paid schools.1 Pilot schools for the e-LHSPP were selected based on the maximum variation purposive sampling technique.
The maximum variation purposive sampling technique is used to select organisations, people, and events with a wide range of attributes and situations so as to gain greater insights about a project from all angles (Rai & Thapa, 2015). The sample high schools were chosen for the e-LHSPP partly because three of the parishes (St. Andrew, Kingston, and St. Thomas) are close together, which facilitated the logistics of implementing the activities. There were 6 eligible schools in St. Catherine, 13 in St. Andrew, 5 in Kingston, and 2 in St. Thomas, making a total of 26 eligible schools for the pilot programme. Further selection criteria included schools from rural and urban areas, inner-city and uptown, boys only, girls only, and coeducational, traditional, upgraded, and technical, and well performing and average performing (e-LJam, 2006). This selection criterion would include all the possible school types in the country. The schools' information was taken from the Jamaica Directory of Educational Institutions 2006-2007 for the parishes identified (MoEYI 2008). The selection process for the pilot schools sought to reduce bias by using a criterion consisting of a combination of school types, gender, location, social stratification, and academic performance that would represent all schools in the region of the e-LHSPP.

1 The bursar-paid schools are schools that have greater autonomy over their administration, but the government pays the administrative staff, students' tuition fees, and teachers' salaries and benefits. Students' admission is regulated by the government to some extent. Catholic Church-run schools, where the church has control over the school's affairs, are examples of bursar-paid schools.

To determine the representativeness of the sampled schools, I used the Jamaica Directory of Educational Institutions 2006-2007, the schools' websites, and the CXC/CSEC exam results for both pilot and non-pilot schools four years before the e-LHSPP. The Jamaica Directory of Educational Institutions 2006-2007 confirmed the sample schools, parish, region, and address/location, while the schools' websites provided information about the schools' gender and social stratification. The CXC/CSEC exam results provided information on the academic performance of the sampled schools, and I was able to match them against the categorisation of performing well or average performing. The sampled schools were indeed representative of the population of schools in the regions. There were 26 pilot high schools: 6 from the parish of St. Catherine, 13 from St. Andrew, 5 from Kingston, and 2 from St. Thomas, as shown in Table 3.1.

Table 3.1 Population and Sample Schools (Public Schools)

Parish           Population (N)   Sample (n)
St. Catherine         22               6
St. Andrew            27              13
Kingston              13               5
St. Thomas             6               2
Total                 68              26

3.1 Overview

Administrative archival data spanning five years, 2006 to 2010, for the five subjects English language, Mathematics, Chemistry, Biology, and Information Technology that were piloted in secondary schools during the period 2008 to 2009 under the government's e-LHSPP in Regions 1, 2, and 6 were examined. Details of the government's e-LHSPP can be found in Chapter 1. This study sought to answer two research questions and a sub-question of Research Question 2, as shown in Table 3.2.

Table 3.2 Research Questions and Hypotheses

RQ1. What are the effects on students' attainment when the e-LHSPP was implemented in schools?
     Hypothesis: H0: there is no statistically significant difference between the treatment (pilot) group and the control group.
     Analysis: Difference in Differences (DiD) with linear regression.
RQ2. Were the components and design appropriate for the successful piloting of the e-LHSPP?
     Analysis: Document analysis.
RQ2A. What were the reported issues affecting students' attainments during the e-LHSPP?
     Analysis: Document analysis.

3.2 Research Design

Given the nature of the research questions, I used a quantitative approach and document analysis. First, I deployed a quantitative approach with an evaluation design to answer RQ1. The quantitative approach allows educational practitioners to objectively inquire, assess, analyse, predict, and draw conclusions about the effectiveness and outcomes of educational projects. The approach enables testing how well findings generalise to a similar setting beyond the confines of the data. The data collected were administrative archival quantitative indirect data. Administrative data is a form of secondary data that is collected mainly for administrative purposes but can be accessed by researchers for scholarly activities (Figlio et al., 2015), while quantitative indirect data is a form of secondary data because it was not produced with a research project in mind (Clark, 2014). The design was chosen to estimate the effect of the e-LHSPP on students' performance. Creswell (2008) noted that quantitative research allows the researcher to decide "what to study; asks specific, narrow questions, collects quantifiable data; analyses these numbers using statistics, and conducts the inquiry in an unbiased, objective manner" (p. 46). A quantitative study is based on testing hypotheses about population parameters from samples. The evaluation aspect of the design is best suited to this study because, according to Summer (1977, as cited in Waddell, 1991, p. 255), "evaluation research is characterised by the formulation of hypotheses, the manipulation of variables, and the study of relationships, and has as its purpose the generation of new knowledge." I was able to compare the before and after means of two groups of schools, a treatment group and a control group, and the comparison was done without bias to answer the research questions and meet the research objectives. I used the "before"-"after" evaluation design with a control group, which allowed me to measure both the treatment and control groups at the beginning and end of the e-LHSPP to produce the effects/outcome (Sedgwick, 2014). Both control and treatment groups had similar characteristics and experiences in the same environment except for the intervention received by the treatment group (Sedgwick, 2014). Secondly, to answer RQ2, I used pre-existing archival documents as data, also called indirect data, which is a form of secondary data (Clark, 2014). The data was collected from a variety of document sources, and I used document analysis to analyse the various documents, which highlighted the context in which the e-LHSPP took place. The information provided me with a descriptive account of some of the contributing factors that affected the e-LHSPP.

3.2.1 Methods

Researchers such as Hakam (as cited in Smith, 2008) defined secondary data analysis to mean: Secondary data analysis is any further analysis of an existing dataset which presents interpretations, conclusions or knowledge additional to, or different from, those produced in the first report on the inquiry as a whole and its main results. (p.
4) Therefore, this thesis in "its broadest sense is the analysis of data collected by someone else" (Boslaugh, 2007, p. ix) to answer new research questions with better statistical techniques by using old data (Glass, 1976). The use of secondary data analysis was advantageous: the data already existed, was quicker to collect, involved less travel time, and came at minimal cost (Gorard, 2003). I was able to make considerable progress given the time remaining to complete this study. Given my work commitments and the cost of telephone calls, printing, Internet, and travel that accessing and collecting primary data would have required, it was worth using secondary data analysis when most of the datasets were available at minimal administrative cost. However, secondary data analysis has its critics. According to Smith (2008), one of the major criticisms is that it often involves the analysis of data that was collected with a very different purpose in mind. Another criticism identified by Smith (2008) is "that it is full of errors; that it cannot be used to make useful comparisons; and that secondary data, and official data, in particular, are not value-neutral but are controlled by those in power" (p. 22). Although the data I collected had served a different purpose, I was able to reorganise it to suit my study. The results I used were official examination results, accepted by the governments of the Caribbean as authentic and without errors. Other researchers, such as Yorke (2011), argue that data collected by previous researchers may not capture what the present researcher would have preferred to capture. In such cases, the researcher has to make do with what is available and decide quickly whether the dataset will produce good findings. The data was well preserved and contained all the variables I needed. Yorke (2011) also critiqued the poor data quality that may arise from data entry errors and other problems with the categorisation of incompatible datasets. How I handled potential data entry errors is explained in the Procedure section. Lack of familiarity with the data is a concern of Bryman (2001). Unfamiliarity with the data was a potential obstacle because I did not collect it, but I spent time familiarising myself with the datasets. I did not underestimate any dataset and quickly got to grips with the range of variables, the variables' coding patterns, and the various ways in which the data was organised. Regardless of the pitfalls identified, secondary data analysis was the most appropriate way to answer the research questions, complete this study on time, and overcome many of the obstacles in Jamaica associated with an empirical study of this nature, providing the study with high-quality data and reanalysis that led to new interpretations (Dale et al., 1988).

3.2.2 Ethical Consideration

I received ethical approval from University College London (UCL). I did not collect any individual student data. Instead, the data collected consisted of school averages for the different subjects. As a result, there was minimal to no risk to students. Government organisations in Jamaica do not provide datasets on their websites, but they do provide research and summary reports that the public can access. Jamaica's Access to Information Act (2002) gives citizens the legal right to obtain access to a government document that is not on the exemption list.
The provisions of the Access to Information Act allowed me to access written documents and administrative data for research purposes. Many ethical principles are available to guide researchers. This research was guided by the ethical guidelines of the British Educational Research Association (BERA, 2018) and the University College London ethical guidelines. I was aware that although the students being studied were not directly involved, there were still ethical concerns, such as privacy and confidentiality, that I had to address. The administrative data that I used was collected by others, and I therefore had an ethical responsibility not to distort or misuse the information and bring the original researchers and agencies into disrepute. Where sensitive information was collected, I maintained the confidentiality of the authors. Some of the information I collected comprises social, economic, and political products that were collected with a particular focus. I maintained objectivity and prevented bias by seeking advice from my supervisors when in doubt. Using secondary administrative data in this study eliminated privacy concerns; according to Bulmer (1987), privacy is "the claim of individuals, groups or institutions to determine for themselves when, how and to what extent information is communicated to others" (p. 113). Nevertheless, I ensured that potentially vulnerable participants and institutions were protected and that any information received was not used to put any participant and/or researcher at risk. BERA guided privacy and specifically stated that researchers should be aware of the possible consequences when participants, in this case the schools that are indirectly involved, are identified by association or inference. BERA further recommended that researchers try to avoid the identification of participants and fictionalise or change identifying features, as this protects them from being identified by readers. Confidentiality was another concern. Bulmer (1987) states that confidentiality is concerned with the "condition, safeguard and security under which the person collecting the research data keeps that data" (p. 113). BERA supports the view that researchers should accord institutions and participants their rights to confidentiality. I did not name any school, principal, teacher, or student, nor did I disclose any information from the institutions that could be traced back to them. I was given written permission from the OEC to copy and use the administrative data in my study; see Appendix B for the authorisation letter. This study did not require personal data, but the General Data Protection Regulation (2018), cited in BERA, cautions researchers about the careless use of personal data. I applied the principles of data protection to the schools in this study, and the administrative data I captured about each school in Excel will not be made public or shared with anyone and will be destroyed or safely stored after consultation with my supervisor. I received ethical clearance when I passed my thesis upgrade examination. A major advantage of secondary data analysis, especially during a pandemic such as COVID-19 with all its variants, is the safer collection of data, in that there was no personal contact with participants or their schools. None of the schools were named.
My thesis reports only aggregate statistics, such as means, regression results, and themes that emerged from documents. This meant that there was no consent condition from the original administrative dataset that I had to abide by (Heaton, 2008). The data was not shared with anyone except my supervisors.

3.3 Estimating the Impact of e-LHSPP on Attainment

3.3.1 Dataset

To determine the appropriateness and quality of the potential dataset for my thesis, I followed the steps detailed in Smith (2008) and Stewart and Kamins (1993), summarised in Table 3.3.

Table 3.3 Evaluation Steps

1. What was the purpose of the data?
2. Who collected the data?
3. How was the data collected?
4. How relevant are the data to my research question?
5. Do the variables match?
6. Do I have the resources to cover the cost of retrieving the data?
7. Are the data of good quality?
8. Who was the information collected from?
9. How precise are the data?
10. Which schools are missing from the data?

After reviewing the options I received, I drew on administrative archival quantitative indirect data (administrative data) in the form of school attainment records to evaluate the attainment impact of the e-LHSPP. Administrative data are grey literature collected mainly for administrative purposes but usable for research (Figlio et al., 2015). Administrative data can be used to study population-level data or to assess the heterogeneous effects of educational policies and practices. The administrative data collected was of high quality, whereas retrospective data collection, e.g., through a teacher or head-teacher survey, would have yielded less precise information given the time since the e-LHSPP rollout (Figlio et al., 2015). A major advantage of administrative data during the COVID-19 pandemic was the lack of contact with participants and their schools. I collected the CSEC school leaving examination results to measure attainment. Data was collected from the Jamaica Overseas Examination Commission (OEC). I obtained permission from the OEC to digitise the school records (see Appendix B). The OEC is mandated by the government of Jamaica to act as a broker between schools in Jamaica and the CXC. The CXC examining body has been in existence since 1972 and provides educational certification in 16 English-speaking Commonwealth Caribbean countries and territories. The OEC receives the CSEC results for all schools in Jamaica, where they are held in the same printed format as in the other Caribbean countries. Each school reports the total number of students registered for the exam, the total number of students who sat the exam, the total number of boys and girls, and the total number and percentage of boys and girls obtaining Grades I to VI, where Grades I, II and III are passing grades similar to GCSE grades A, B, and C. Table 3.4 summarises the grading scheme.
Table 3.4 CXC/CSEC Grading Scheme

Grade I     shows a comprehensive grasp of concepts, knowledge, skills, and competencies
Grade II    shows a good grasp of concepts, knowledge, skills, and competencies
Grade III   shows a fairly good grasp of concepts, knowledge, skills, and competencies
Grade IV    shows a moderate grasp of concepts, knowledge, skills, and competencies
Grade V     shows a limited grasp of concepts, knowledge, skills, and competencies
Grade VI    shows a very limited grasp of concepts, knowledge, skills, and competencies

From "About the Council," by Caribbean Examination Council, 2016, December, http://www.cxc.org/examinations/csec/

The CXC uses a criterion-referenced system in which performance is measured against specific objectives, set criteria, or standards, not against the cohort (Figlio et al., 2015, p. 8). The grading system ranges from Grade I to Grade VI, where Grade I is the highest. Each grade level has an explanation of the level of competence achieved, as shown in Table 3.4. Each grade level carries profile grades (knowledge, skills, and attitude), and letters are assigned to them based on the competence reached. The letters are A (Outstanding), B (Good), C (Fairly Good), D (Moderate), E (Weak), and F (Poor). See Appendix C for a sample of anonymised student results and subject profiles, which includes the subjects piloted. I rely on the assumption that the system measured each student according to their ability, or at least that the error in assessing students' 'true' ability was similar across schools and did not change differentially. The other section of the examination is marked by selected subject teachers who are employed by CXC for the marking period. This is done through table marking. The markers around the table are "standardised," meaning that all participants mark against the criteria set for the paper; there is a second round of marking for each paper, followed by moderation of the papers by the table leader. Table marking also ensures reliability, in that the same grade means the same thing across all markers regardless of the marking centre or country. The practice of table marking is sometimes called intermarker reliability. Intermarker reliability supports content validity because the rubric used to mark the exams incorporates knowledge of the content covered in the various syllabi and adequately samples that content (Goldhaber et al., 2017). There were no changes in the syllabi for the subjects included in this study between the academic years 2008 and 2009, according to the Caribbean Examination Council (CXC) Reports for 2008 (CXC, 2008) and 2009 (CXC, 2009). I perused published reports of the CXC examining body, which showed that there were no changes in the syllabus or format of the examination up to that time (CXC Annual Report 2009). The only change was to the Information Technology syllabus in 2010, which affected all schools in the same way, as outlined in the Caribbean Examination Council (CXC) Report 2010 (CXC, 2010).
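To make the grading scheme concrete, the short sketch below shows one plausible way of turning a school's distribution across the six grade levels into the reversed 0-5 grade points used for the GPA construction described in Section 3.3.3. The mapping (Grade I = 5 down to Grade VI = 0), the function name, and the percentages are my own illustration, not code or a formula taken from the study.

# A minimal sketch, assuming a linear reversal of the six CSEC grade
# levels: Grade I (highest) = 5 points down to Grade VI (lowest) = 0.
GRADE_POINTS = {"I": 5, "II": 4, "III": 3, "IV": 2, "V": 1, "VI": 0}

def school_subject_gpa(grade_percentages):
    """Percentage-weighted mean of grade points for one school-subject
    cell; grade_percentages maps each grade level to the percentage of
    candidates awarded that grade."""
    total = sum(grade_percentages.values())
    return sum(GRADE_POINTS[g] * p for g, p in grade_percentages.items()) / total

# Example: 10% Grade I, 25% II, 30% III, 20% IV, 10% V, 5% VI -> 2.90
print(school_subject_gpa({"I": 10, "II": 25, "III": 30, "IV": 20, "V": 10, "VI": 5}))

Any monotone reversal would serve the same purpose; the linear 5-to-0 assignment is simply the most direct reading of the reversal described in Section 3.3.3.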
My study of several reports, such as the National Employment Report 2008-2012 (Ministry of Labour & Social Security, 2015) and the Jamaica Youth Activity Report 2016 (International Labour Organization & Statistical Institute of Jamaica, 2018), which included indicators of standard of living, and the Jamaica Survey of Living Conditions 2008, 2009 and 2010 (Planning Institute of Jamaica & Statistical Institute of Jamaica, 2008, 2009, 2010), did not discover any significant changes that might have affected specific schools and thus potentially invalidated the analytical strategy described below. The administrative data had to be cleaned and then organised. To achieve this, I manually inspected all the results that were published in paper format and digitised them for analysis. The data for the years analysed were on paper records. I scanned the records and entered them into a Microsoft Excel spreadsheet. The transfer from digital pictures to database-type fields in Excel cells had to be done manually; an optical character recognition device was not available in the library or locally for me to use. The manual entry was tedious, prone to errors, and time-consuming. To ensure that the spreadsheet was free from errors, I adopted the code inspection methodology recommended by Panko (2015), in which another person reviews the data entered into the spreadsheet. One member of my supervisory team assisted in reviewing the data. The final step was to import the Excel files into SPSS for analysis. Other data came from the Statistical Institute of Jamaica and the Planning Institute of Jamaica, which provided data about Jamaica's demographic, social, and economic conditions during the period under investigation, to determine whether any intervening circumstances could have affected the examination results. The MoEYI provided data regarding public schools' profiles, the Grade 11 curriculum, government policy regarding e-learning in schools, and government-commissioned evaluation reports.

3.3.2 Variables

The independent variable in this thesis is the e-LHSPP pilot assignment through the e-LJam agency. The dependent variable is the CXC CSEC examination results at the school-level average for each subject studied, not individual students' outcomes. Each pilot and non-pilot school's average pass score per subject was averaged to a subject Grade Point Average (GPA). The GPA was converted to z-scores. Z-scores were also used to rank a score by its standard deviation points above or below the mean (Pettitt, 2010). The effect size measures the magnitude of the difference the e-LHSPP made in the subjects piloted. Table 3.5 summarises the constructed dependent, explanatory, and control variables.

Table 3.5 Codebook: Variables and What They Measure

Variable              Measurement summary
Male pass percent     Total mean percentage pass for males per year, school, and subject.
Female pass percent   Total mean percentage pass for females per year, school, and subject.
Total pass percent    Total mean percentage pass for both males and females per year, school, and subject.
Constructed variables:
GPAFM                 Grade point mean for females per year, school, and subject.
GPAML                 Grade point mean for males per year, school, and subject.
GPA                   Grade point mean for both males and females per year, school, and subject.
ZGPA                  The GPA Z-score, standardised by subject.
Comparative variables:
Year 08, Year 09, Year 10            0/1 variables marking the year of observation (2008, 2009, 2010).
IT year 08, IT year 09, IT year 10   Interaction variables representing the pilot schools before, during, and after the intervention (ITpilot x 2008, ITpilot x 2009, ITpilot x 2010).
ITpilot               Indicator variable: 1 = pilot, 0 = other.
Location              The area where the school is located: rural = 1, urban = 2, urban uptown = 3, urban inner-city = 4.
Gender-sch            The gender of the school: boys = 1, girls = 2, co-ed = 3.
School-type           The type of school: traditional = 1, newly upgraded = 2, technical = 3.

3.3.3 Outcome Variable

The major outcome variable used in the analysis was the GPA, a variable constructed from the schools' percentage mean scores per subject. The grade point average is conventionally a continuous scale from 0 to 4, where 0 means failure to perform and 4 means exemplary performance (Marsh, 2018). In this study, the grades for the CSEC examination range from 1 to 6, where 1 is the highest score. For the analysis, the grades were converted to a 0-5 scale, where 0 means failure to perform and 5 means exemplary performance. This reverses the CSEC format of grading, in which 1 is the highest score and 6 the lowest, and thereby keeps the conventional meaning of GPA scores. The GPA is an international scale used to measure students' academic performance and is used by most high schools and universities around the world. The cumulative high school GPA is often used by colleges and universities to determine students' readiness for higher education. The GPA is used not only in education but also among job recruiters searching for top students. It is the practice at the University of Technology, Jamaica, where I work, for lecturers to enter percentage scores into the grading system, and the final score is converted into a GPA on a 0-5 scale. Each module has several pieces of assessment, which are entered as percentages, and the application program converts the total percentages into a final GPA score for each module. The student receives a final overall GPA score when all the modules are combined. The GPA scores enhance our graduates' employability around the world. The GPA was used in this study because it is an easily interpreted score for academics, and the results in this thesis can readily be interpreted on the GPA scale. A valid concern is that converting percentage scores to GPA can lose information; however, the GPA has long endured despite criticism from some academics, and the practice of converting percentages to GPA is widespread in Jamaica. To provide an overview of the performance of all schools in the study during the e-LHSPP 2009 and 2010 examination periods, a combined summary of average examination results per subject per school for pilot and non-pilot schools was generated.
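As a small illustration of the ZGPA construction in Table 3.5, school-level subject GPAs can be standardised within each subject so that schools are compared on a common scale. The column names and the toy numbers below are hypothetical; this is a sketch of the standardisation step, not the study's actual SPSS procedure.

import pandas as pd

# Hypothetical school-level records: one row per school and subject.
df = pd.DataFrame({
    "school":  ["A", "B", "C", "A", "B", "C"],
    "subject": ["Mathematics"] * 3 + ["Chemistry"] * 3,
    "gpa":     [1.2, 0.9, 1.5, 1.8, 1.6, 2.1],
})

# Standardise GPA within each subject:
# z = (gpa - subject mean) / subject standard deviation.
df["zgpa"] = df.groupby("subject")["gpa"].transform(
    lambda s: (s - s.mean()) / s.std()
)
print(df)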
3.3.4 Control Variables

Control variables were included that would remain the same during the collection and analysis of the results. The control variables School Location, School Type, and School Gender were chosen because they could influence the outcome of the results if not accounted for and controlled.

3.3.5 The Analytical Strategy

To answer RQ1, I used a quasi-experimental Difference-in-Differences (DiD) approach. A quasi-experiment, according to Campbell and Stanley (cited in Robson, 2011), is "an experimental approach but where random assignment to the treatment and comparison groups has not been used" (p. 109). A DiD design can answer the question "What difference does a policy or program make?" (White & Raitzer, 2017). The DiD can deal with selection bias more effectively than the alternatives (Lechner, 2011). It is one of the most popular and frequently used methods in economics, management, public policy, and health for analysing the effects of an intervention. The DiD is suited to my thesis because it works with non-randomised administrative data that can be used to study causal relationships (Meyer, 1995). It compares the means of treatment and control groups before and after an intervention. My thesis uses repeated cross-sectional student attainment data to construct a panel of attainment by subject from the participating schools over time, while the DiD uses the data to estimate the difference in the changes in the outcome between a group that receives treatment and a comparison group over time (White & Raitzer, 2017). The panel of pooled cross-sectional data in this thesis refers to each academic subject's data captured over two years at a time, 2008-2009 and 2008-2010, for different individuals sitting the same subjects in their schools. Although panel data is a subset of longitudinal data, a longitudinal dataset would follow the same individuals doing the same subjects over time. Hence, the panel of pooled cross-sectional data is the preferred choice of data for calculating students' performance in their high-stakes school leaving examinations. The model involves comparing the outcome variable (examination results converted to GPA) in pilot schools with that in the non-pilot schools before and after the e-LHSPP. The DiD with regression controls for confounding variables, other than the e-LHSPP itself, that could explain the outcome difference. The DiD is well established: as early as the middle of the nineteenth century it was used by John Snow in his study showing that cholera was transmitted through the water supply and not the air (Snow, 1855). Snow showed that the change from one source of water to another that was contaminated significantly increased the death rate. Lester (1946) is another early study, concerned with the effects of wages on employment in firms in northern and southern US states; it compared employment levels before and after an increase in the minimum wage. Other researchers, such as Card and Krueger (1994), used DiD to analyse the effect of a minimum wage increase in New Jersey on employment in fast-food restaurants, with Pennsylvania, which did not implement the increase, as the control group. Card (1990) used the DiD to analyse the effects of Cuban immigrants on employment and wages in Miami. Though most of these studies are from economics, DiD is used in education as well. Here I present a few examples of studies in education that use the DiD.
The Excellence in Cities (EiC) programme, which dealt with school attainment after the UK government's EiC intervention, is one such educational study (Machin et al., 2004). Deschacht and Goeman (2015) used the DiD in education to investigate the effects of blended learning on course persistence and the performance of adult learners; they discovered that blended learning improved exam results. Another study in education used DiD to find out whether students taking economics courses would improve their analytical skills (Dendir et al., 2019). The results revealed that students in the economics classes improved their analytical skills more than students in non-economics classes. Many of the very early uses of the DiD, for example Snow (1855), did not include a regression model because the use of linear regression models to make inferences was still being developed by Gauss and others (Seal, 1967). I used the DiD with regression to compare outcomes (means) in the pilot schools with those in the non-pilot schools before and after the e-LHSPP was introduced. This analysis calculated the change in GPA over time in the pilot schools that benefitted from the e-LHSPP compared with the non-pilot schools that did not. The major advantage of this approach is that it "differences out" the effect of time-constant factors, other than the e-LHSPP, that could explain changes in the GPA performance of pilot schools. The form of DiD used here is the case with two groups in two time periods (Wing et al., 2018). The DiD estimate can be written as:

DiD = (YTr,Post - YTr,Pre) - (YC,Post - YC,Pre)   (1)

In Equation 1, YTr,Post is the mean outcome for the treatment group after the intervention and YTr,Pre is the mean outcome for the treatment group before the intervention. YC,Post is the mean outcome for the control group after the intervention, and YC,Pre is the mean outcome for the control group before the intervention. This basic form can be calculated manually, but it does not account for covariates (confounding variables) or standard errors (sampling variability). The DiD with a linear regression model accounts for confounding variables and standard errors (Lechner, 2011). It is more robust and is usually implemented as an interaction term between time and treatment group dummy variables, accounting for covariates, significance levels, and standard errors. The DiD with linear regression model is shown below, where

i = school (group), in treatment or not
t = time period, treatment or not
Yit = grade outcome for group i at time t
treati = a dummy equal to 1 if the observation is in the treatment group
Postt = a post-treatment dummy
Zit = a vector of covariates, with ß the corresponding coefficients
εit = the error term, capturing all other influences on Y
α = the coefficients

Yit = α0 + α1(treati) + α2(Postt) + α3(treati × Postt) + ß'Zit + εit   (2)

The estimated coefficients in Equation 2 have the following interpretation:

α0: the mean outcome of the control group (untreated) at the baseline.
α1: the single difference between the treated and control groups at the baseline.
α0 + α1: the mean (untreated) outcome of the treated group in period 1.
α2: the change in the control group's mean outcome between period 1 and period 2.
α0 + α2: the mean outcome of the control group in period 2.
α0 + α1 + α2 + α3: the mean outcome of the treated group in period 2.
α3: the DiD estimate (the coefficient of interest).
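As a minimal sketch of this two-group, two-period setup (the variable names and toy numbers are mine, not the study's), the dummies in Equation 2 can be generated and the simple Equation 1 estimate computed directly from the four group means:

import pandas as pd

# Hypothetical long-format data: one row per school-subject observation,
# with a pilot indicator and the observation year.
df = pd.DataFrame({
    "gpa":   [1.7, 1.2, 1.4, 1.9, 1.9, 1.6, 1.5, 1.8],
    "pilot": [1, 1, 0, 0, 1, 1, 0, 0],
    "year":  [2008, 2008, 2008, 2008, 2009, 2009, 2009, 2009],
})

# Equation 2 dummies: treat = pilot group, post = treatment year,
# and their product, which is the DiD interaction term.
df["treat"] = df["pilot"]
df["post"] = (df["year"] == 2009).astype(int)
df["treat_post"] = df["treat"] * df["post"]

# Equation 1 computed manually from the four group means.
m = df.groupby(["treat", "post"])["gpa"].mean()
did = (m[(1, 1)] - m[(1, 0)]) - (m[(0, 1)] - m[(0, 0)])
print(f"DiD estimate: {did:.3f}")  # 0.300 with these toy numbers

With a balanced toy panel like this, the manual estimate equals the interaction coefficient a regression would return; the regression sketch at the end of this section adds covariates and standard errors.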
In Equation 2 (as in the sketch above), the DiD with linear regression framework is obtained by generating a dummy variable treati, which is equal to 1 if a school is in the pilot group and 0 otherwise. The dummy variable Postt marks observations in period 2 (the treatment year) and is equal to 1 in that period and 0 otherwise. The product treati × Postt produces the treatment variable. In the first year, the treatment variable is 0 because the period variable is 0, and multiplying it by either treatment status (1 or 0) gives 0. In the second year, the treatment variable is 1 for the pilot schools because both the group variable and the period variable are 1. The coefficient α3 estimates the treatment effect when the common trend assumption holds. The error term εit captures all other influences on the outcome. The covariates location, school type, and school gender were added to the DiD with linear regression because these factors are likely to affect schools' performance, and controlling for them produces better results. Critically, DiD assumes that the outcomes in the treatment and control groups would follow the same time trend in the absence of the treatment ("common trends" or "parallel trends") (Lechner, 2011). The parallel trends assumption posits that, absent the treatment, the pilot and non-pilot schools would follow the same trends. Wing et al. (2018) state that an empirical check can be done for two groups observed in two time periods by comparing the outcome trends of the two groups, which should be parallel before the policy was implemented. Observing the outcome trend over many time points before the intervention can show whether the lines are parallel. Producing the parallel trends for the period 2005 to 2008 supports the internal validity of the DiD estimate, in that all subjects under investigation followed the same trend line for both pilot and non-pilot schools before the e-LHSP pilot. See Appendix D for charts showing the parallel trends for the subjects studied over the years 2005 to 2008. Regarding the parallel lines in the common trend assumption, Sommers et al. (2015), who studied low-income adults in states that expanded Medicaid and in states that did not, showed that parallel trends do not have to be linear; the two groups' outcomes need only differ by a fixed amount in every period. In other words, following the same trend line does not mean the groups must have the same mean outcome: the lines can move up, down, or across from period to period. Common trend variables that are likely to affect both the treatment and control groups in this study are teachers' attitudes and expertise, students, school administration, and government policies (Zhao & Frank, 2003). There is no evidence that these likely common trend variables differentially affected the variables used to determine the effects of the e-LHSPP, and the unobserved counterfactual outcome trend is addressed by using the two-group, two-period DiD model. The control variables that were included also helped prevent the common trend assumption from being violated. Besides parallel trends, changes unrelated to the treatment that affect one group and not the other would confound the DiD coefficient. Ashenfelter's dip, or the "pre-programme dip," occurs when there is a fall in participants' scores just before the programme that does not occur in the comparison group; the dip can cause bias if it is transitory. No such change occurred here.
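To make the estimation step concrete, a minimal sketch of Equation 2 with a covariate follows, using the statsmodels formula interface. The data are illustrative, and although the variable names mirror the Table 3.5 codebook, the specification here is my sketch of the approach, not the study's actual SPSS procedure.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical school-by-year records mirroring the Table 3.5 codebook
# (zgpa, ITpilot, post, school_type); the numbers are illustrative only.
df = pd.DataFrame({
    "zgpa":        [-0.3, 0.5, -0.1, 0.2, 0.4, -0.6,
                     0.1, 0.9, -0.2, 0.3, 0.6, -0.4],
    "ITpilot":     [1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0],
    "post":        [0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1],
    "school_type": [1, 2, 3, 1, 2, 3, 1, 2, 3, 1, 2, 3],
})

# Equation 2: ITpilot*post expands to ITpilot + post + ITpilot:post; the
# interaction coefficient is the DiD estimate (alpha-3). C() enters the
# categorical covariate as dummies against a reference category; location
# and school gender would be added to the formula in the same way.
model = smf.ols("zgpa ~ ITpilot * post + C(school_type)", data=df).fit()

print(model.summary())
print("DiD estimate:", model.params["ITpilot:post"])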
3.4 Document Analysis Approach: Technologies and Educational Resources Used in e-LHSPP

Document analysis was carried out to provide a better understanding of the quantitative results. This was done by reviewing different types of documents, such as published reports, government documents, academic presentations, newspaper articles, studies, and journal articles, which gave me new insights into the research question (Merriam, 1988). To gain a rich understanding of the documents written about the e-LHSPP, document analysis proved to be the most appropriate approach because it is a systematic procedure for reviewing or evaluating documents in printed and electronic format (Bowen, 2009). I followed the steps recommended by Bowen, which entailed checking the documents for (1) authenticity, (2) credibility, and (3) accuracy and representativeness with respect to the study. Authenticity was achieved because I could establish the origin of each document and that the source was genuine. To further establish authenticity, I chose documents that existed around the time of the e-LHSPP, to maintain reliability and to avoid the perception that older documents are less reliable (Caulley, 1983). The documents were official government documents, and a fair assumption was made that they were free from errors and distortion, thus establishing high credibility. To determine the accuracy of the documents, I relied on the importance of the documents, whom they were prepared for, and the authors and agencies that produced them. The evidence produced in the documents was clear and understandable (Bryman, 2012). Only documents that provided information about the e-LHSPP were included.

3.4.1 Procedure to Analyse the Documents

Analysing documents, Bowen (2009) posits, involves skimming (a superficial examination) and reading (a thorough examination and interpretation). Thematic analysis is appropriate for this iterative process. For this research, the documents were analysed using Braun and Clarke's (2006) six-step thematic analysis. In step one I familiarised myself with the documents. In step two I gathered the initial codes; the codes I selected were 'data-driven', and I worked through the documents with the research question in mind. The third step was to search for the broad themes that emerged from the list of codes I had identified; a thematic map assisted me in narrowing down the themes. During the fourth step I further refined the themes, and in step five I defined and named them. These themes provided a better understanding of the issues, and the data extracts supported the themes. In step six, as I wrote the report, the themes created a better understanding of the data and the analysis conducted. The analysis of the documents yielded data that were organised into the major themes and, when combined with the quantitative data, expanded my understanding of the research questions. The e-learning technologies dimension in the holistic e-learning systems theoretical framework highlights content, communication, and collaboration as areas of focus when evaluating e-learning. Technologies and learning resources were supplied to the pilot schools. Each pilot school received two computer labs fully equipped with computers and network connections. Schools had to provide their own Internet connection.
Teachers' Instructional Materials (TIMs) and Students' Instructional Materials (SIMs) were delivered to pilot schools on CDs. The list of these resources supplied to the pilot schools can be found in Appendix E. Two teachers I spoke to about the e-LHSPP said that, for the most part, students doing the pilot subjects were assigned classes in the computer labs to complement their traditional lessons, where they used the supplied resources facilitated by the subject teacher. The technologies and learning resources, when matched against the holistic e-learning framework (e-learning technologies), were adequate at the time.

Chapter 4A Quantitative Results

The purpose of this chapter is to analyse the administrative archival quantitative indirect data collected during this thesis. The chapter focuses on Research Question 1, the hypothesis, the findings of the analysis, and the summary. Table 4.1 summarises Research Question 1, the hypothesis, and the analysis used to arrive at the results. The context for the study can be found in Chapter 1. The findings for Research Questions 2 and 2A can be found in Chapter 4B.

4.1 Research Question 1

Table 4.1 Research Question One, Hypothesis, and Analysis

Research Question: 1. What are the effects on students' attainment when the e-LHSPP was implemented in schools?
Hypothesis: H0: there is no statistically significant difference between the treatment (pilot) group and the control group.
Analysis: Difference in Differences (DiD) with linear regression.

4.1.1 Descriptive Statistics

Table 4.2 shows the number of schools and the mean and standard deviation (SD) of the dependent variable. The statistics show an improvement in total weighted average GPA over the period 2009 to 2010 for pilot and non-pilot schools combined.

Table 4.2 Summary Statistics for all Assessments (GPA)

                               2008                2009                2010
Subject                   (N)  Mean   SD      (N)  Mean   SD      (N)  Mean   SD
English language           68  1.51   0.83     68  1.78   0.82     68  2.16   0.87
Mathematics                68  1.10   0.81     68  1.06   0.90     68  1.28   0.98
Chemistry                  54  1.62   0.67     54  1.90   0.69     54  1.74   0.73
Biology                    52  1.99   0.88     52  1.96   0.74     52  2.03   0.75
Information Technology     68  2.46   0.65     68  2.61   0.63     68  2.36   0.78
Weighted Average GPA           1.72                1.855               1.915

Table 4.3 provides the GPA for the base year 2008, when the weighted aggregate GPA scores for both pilot and non-pilot schools were generated. Table 4.3 shows the sample (n), mean GPA, and standard deviation. This information provides the starting point for further analysis and is included in determining the impact of the e-LHSPP on students' aggregate performance outcomes per subject. Despite the selection criteria to avoid bias, pilot schools outperformed non-pilot schools in all subjects except Information Technology in the base year. The total weighted average GPA was highest in the pilot schools. An independent samples t-test was carried out to determine the statistical difference between the pilot and non-pilot schools. The results reveal that there is no statistically significant difference, t(8) = .650, p = .534, despite pilot schools attaining higher scores than non-pilot schools; a sketch of this comparison follows the table.

Table 4.3 Summary Statistics for Base Year 2008 GPA

                               Pilot                  Non-Pilot
Subject                   (n)  Mean   SD         (n)  Mean   SD
English language           26  1.75   0.89        42  1.37   0.77
Mathematics                26  1.26   1.02        42  1.02   0.65
Chemistry                  19  1.74   0.85        35  1.56   0.55
Biology                    20  2.23   1.00        32  1.83   0.76
Information Technology     26  2.36   0.77        42  2.52   0.57
Weighted Average GPA           1.857                  1.654
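As an illustration, the reported comparison can be reproduced from the five subject-level means in Table 4.3; the eight degrees of freedom suggest the test compares the five subject means across the two groups, which the sketch below assumes.

from scipy import stats

# Subject-level mean GPAs for the 2008 base year, from Table 4.3
# (English, Mathematics, Chemistry, Biology, Information Technology).
pilot     = [1.75, 1.26, 1.74, 2.23, 2.36]
non_pilot = [1.37, 1.02, 1.56, 1.83, 2.52]

# Pooled-variance independent-samples t-test, df = 5 + 5 - 2 = 8.
t, p = stats.ttest_ind(pilot, non_pilot, equal_var=True)
print(f"t(8) = {t:.3f}, p = {p:.3f}")  # reproduces t(8) = 0.650, p = 0.534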
The pilot and non-pilot schools are similar in many respects. Both have teachers with similar qualifications, as outlined in the employment policy of the Ministry of Education, and all schools sat the same examination under the same examination conditions. Though there are differences between the pilot and non-pilot schools' baseline scores, the common-trends ("parallel trends") analysis shown in Appendix D accounts for these differences. The differences between pilot and non-pilot schools' average GPA scores before the e-LHSPP were below 1 GPA point per subject, and the 2008 baseline GPA difference did not exceed 0.40. This meant that, for each subject, the pilot and non-pilot schools started with the same letter grade in the GPA calculation. An independent-samples t-test was carried out to determine the statistical difference between the pilot and non-pilot schools; the results reveal no statistically significant difference in the base-year scores between pilot and non-pilot schools.

4.1.2 Mean Grade Point Average (GPA) 2009-2010

The weighted average GPAs for the pilot year 2009 and one year after (2010) are shown in Table 4.4, which reports the average GPA per subject in pilot and non-pilot schools. Both pilot and non-pilot schools improved in most subjects over the base year. To show the changes per subject for both groups, the base-year 2008 data in Table 4.3 were subtracted from the 2009 data in Table 4.4, with the following results. English moved more for the non-pilot schools, 0.31 average GPA points, than for the pilot schools, 0.19 average GPA points. Mathematics improved by 0.03 average GPA points for pilot schools, against a reduction of -0.11 average GPA points for non-pilot schools. Chemistry improved by 0.30 average GPA points for pilot schools, a slightly larger increase than the 0.26 average GPA points for non-pilot schools. Biology declined by -0.17 average GPA points in the pilot schools, whereas the non-pilot schools recorded an increase of 0.06 average GPA points. The pilot schools recorded a bigger improvement in IT, 0.28 average GPA points, than the non-pilot schools, 0.08 GPA points. This preliminary analysis revealed that pilot schools performed better than non-pilot schools in Mathematics, Chemistry, and IT in 2009.

Table 4.4 GPA by Subject in Pilot and Non-Pilot Schools, 2009 and 2010

  Subject                     Pilot: N  Mean 2009  Mean 2010    Non-Pilot: N  Mean 2009  Mean 2010
  English language            26        1.94       2.30         42            1.68       2.07
  Mathematics                 26        1.29       1.40         42            0.91       1.20
  Chemistry                   19        2.04       1.86         35            1.82       1.67
  Biology                     20        2.06       2.01         32            1.89       2.03
  Information Technology      26        2.64       2.41         42            2.60       2.32
  Total Weighted Average GPA            1.987      2.00                       1.77       1.855
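The subtraction described above can be reproduced directly from Tables 4.3 and 4.4. A minimal sketch using the published subject means; the column names are mine, not the study's:

```python
# Sketch: per-subject change scores (2009 minus the 2008 baseline).
# Means are taken from Tables 4.3 and 4.4; column names are illustrative.
import pandas as pd

means = pd.DataFrame(
    {
        "pilot_2008": [1.75, 1.26, 1.74, 2.23, 2.36],
        "pilot_2009": [1.94, 1.29, 2.04, 2.06, 2.64],
        "non_pilot_2008": [1.37, 1.02, 1.56, 1.83, 2.52],
        "non_pilot_2009": [1.68, 0.91, 1.82, 1.89, 2.60],
    },
    index=["English", "Mathematics", "Chemistry", "Biology", "IT"],
)

means["pilot_change"] = means["pilot_2009"] - means["pilot_2008"]
means["non_pilot_change"] = means["non_pilot_2009"] - means["non_pilot_2008"]
# Reproduces the figures above: 0.19/0.31, 0.03/-0.11, 0.30/0.26, -0.17/0.06, 0.28/0.08
print(means[["pilot_change", "non_pilot_change"]].round(2))
```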
Table 4.4 also shows the summary statistics for 2010, one year after the e-LHSPP. In 2010, pilot schools recorded higher mean GPAs than non-pilot schools in all subjects except Biology. However, when the average changes per subject are calculated from the base year 2008 (Table 4.3) to one year after the pilot, 2010 (Table 4.4), the results show larger movements in English for the non-pilot schools, 0.70 average GPA points, relative to 0.55 average GPA points for the pilot schools. Mathematics scores for pilot schools improved by 0.14 average GPA points relative to a higher increase of 0.18 average GPA points for non-pilot schools. Chemistry improved by 0.12 average GPA points for pilot schools and 0.11 average GPA points for non-pilot schools. Biology declined by -0.22 average GPA points in the pilot schools, while the non-pilot schools recorded an increase of 0.20 average GPA points. Finally, the pilot schools recorded an improvement in IT of 0.05 average GPA points, while non-pilot schools recorded a reduction of -0.20 average GPA points. This preliminary analysis reveals that, relative to non-pilot schools, the pilot schools performed better in Chemistry and IT in 2010.

Table 4.4 shows that the pilot schools' total weighted average GPAs were greater than the non-pilot schools' in both 2009 and 2010. However, it is conceivable that the e-LHSPP was rolled out to better-performing schools. Further statistical tests were done to determine whether the results in Table 4.4 for both years were statistically different. The results for 2009 reveal no significant effect of the e-LHSPP, t(8) = .621, p = .552; similarly, the results for 2010 show no statistically significant difference, t(8) = .523, p = .615. The following section provides the results from a difference-in-differences (DiD) analysis. DiD controls for the non-random selection of schools into the e-LHSPP. Besides selection, the DiD models include covariates to control for school location, school type, and school gender mix; this conditions out potentially systematic differences in student performance.

4.1.3 Grade Point Average (GPA) in DiD with Linear Regression

DiD with linear regression was run on the mean GPA per subject, with the covariates (school location, school type, school gender) added, to produce the results shown in Tables 4.5 to 4.9. The reference categories for the covariates are School Location = urban inner-city, School Gender = co-ed (mixed), and School Type = technical. Each of the other categories in a group is compared against its reference category.
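A sketch of this specification in Python follows. The data frame, file name, and column names are hypothetical placeholders rather than the study's actual dataset, and the study's own software is not assumed here; the sketch simply shows how a DiD with covariates and these reference categories can be specified:

```python
# Sketch: DiD with linear regression and categorical covariates, assuming a
# school-level long-format table. File and column names are illustrative.
import pandas as pd
import statsmodels.formula.api as smf

# Assumed columns: gpa, pilot (1 = pilot school), post (1 = post-intervention
# year), location, gender_mix, school_type.
df = pd.read_csv("english_gpa_2008_2009.csv")  # hypothetical file

model = smf.ols(
    "gpa ~ pilot * post"
    " + C(location, Treatment(reference='urban inner-city'))"
    " + C(gender_mix, Treatment(reference='co-ed'))"
    " + C(school_type, Treatment(reference='technical'))",
    data=df,
).fit()

# 'pilot:post' is the DiD estimate; 'pilot' is the treatment-group effect.
print(model.summary())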
The results in Table 4.5 reveal that English language attainment in pilot schools decreased by 0.129 and 0.150 average GPA points in 2009 and 2010 respectively, relative to non-pilot schools, after the introduction of the programme. The differences are not statistically significant.

Table 4.5 (DiD) GPA Attainment Outcome: English Language, 2009 and 2010

  Variable                Year 2009                  Year 2010
  DiD                     -0.129 (.176), p = .466    -0.150 (.197), p = .447
  Treatment effect        .077 (.130), p = .552      .101 (.145), p = .490
  School location (ref. urban inner-city):
    Rural                 -.311 (.150), p = .040     -.235 (.168), p = .164
    Urban                 .156 (.133), p = .169      .267 (.136), p = .037
    Urban Uptown          .610 (.166), p < .001      .625 (.186), p = .001
  School gender (ref. co-ed):
    Boys                  .148 (.189), p = .435      .090 (.212), p = .672
    Girls                 .443 (.156), p = .005      .450 (.174), p = .011
  School type (ref. technical):
    Traditional           .763 (.185), p < .001      .755 (.201), p < .001
    Newly Upgraded        -.081 (.157), p = .607     -.076 (.176), p = .667
  R-square                .674                       .653

  Note: cells show coefficient (standard error), p-value; the same layout is used in Tables 4.6 to 4.9.

The results in Table 4.6 reveal that Mathematics attainment in pilot schools increased by 0.123 average GPA points in 2009 and decreased by 0.050 in 2010, relative to non-pilot schools, after the introduction of the programme. The differences are not statistically significant.

Table 4.6 (DiD) GPA Attainment Outcome: Mathematics, 2009 and 2010

  Variable                Year 2009                  Year 2010
  DiD                     0.123 (.203), p = .544     -0.050 (.238), p = .833
  Treatment effect        -.059 (.149), p = .691     -.037 (.175), p = .834
  School location (ref. urban inner-city):
    Rural                 -.323 (.172), p = .063     -.177 (.203), p = .384
    Urban                 .157 (.130), p = .230      .253 (.153), p not reported
    Urban Uptown          .825 (.191), p < .001      .877 (.224), p < .001
  School gender (ref. co-ed):
    Boys                  .190 (.218), p = .384      .232 (.255), p = .366
    Girls                 .265 (.179), p = .141      .307 (.210), p = .147
  School type (ref. technical):
    Traditional           .594 (.213), p = .006      .548 (.250), p = .031
    Newly Upgraded        -.147 (.181), p = .420     -.124 (.213), p = .560
  R-square                .584                       .488

The results in Table 4.7 reveal that Chemistry attainment in pilot schools increased by 0.073 and 0.059 average GPA points in 2009 and 2010 respectively, relative to non-pilot schools, after the introduction of the programme. The differences are not statistically significant.

Table 4.7 (DiD) GPA Attainment Outcome: Chemistry, 2009 and 2010

  Variable                Year 2009                  Year 2010
  DiD                     0.073 (.224), p = .744     0.059 (.236), p = .803
  Treatment effect        -.094 (.170), p = .583     -.081 (.182), p = .659
  School location (ref. urban inner-city):
    Rural                 .199 (.218), p = .363      .239 (.224), p = .288
    Urban                 .255 (.152), p = .142      .234 (.157), p = .138
    Urban Uptown          .348 (.204), p = .090      .319 (.215), p = .141
  School gender (ref. co-ed):
    Boys                  -.051 (.215), p = .815     .001 (.231), p = .996
    Girls                 .293 (.177), p = .101      .278 (.189), p = .146
  School type (ref. technical):
    Traditional           .617 (.227), p = .008      .571 (.237), p = .017
    Newly Upgraded        -.029 (.197), p = .882     -.073 (.203), p = .720
  R-square                .416                       .346

The results in Table 4.8 reveal that Biology attainment in pilot schools decreased by 0.200 and 0.428 average GPA points in 2009 and 2010 respectively, relative to non-pilot schools, after the introduction of the programme. The differences are not statistically significant.

Table 4.8 (DiD) GPA Attainment Outcome: Biology, 2009 and 2010

  Variable                Year 2009                  Year 2010
  DiD                     -0.200 (.309), p = .518    -0.428 (.307), p = .166
  Treatment effect        .270 (.231), p = .247      .272 (.229), p = .237
  School location (ref. urban inner-city):
    Rural                 -.427 (.307), p = .168     -.178 (.303), p = .558
    Urban                 -.047 (.214), p = .827     .165 (.210), p = .433
    Urban Uptown          .023 (.283), p = .936      .118 (.280), p = .676
  School gender (ref. co-ed):
    Boys                  .147 (.299), p = .624      .109 (.298), p = .714
    Girls                 .308 (.245), p = .212      .377 (.244), p = .126
  School type (ref. technical):
    Traditional           .244 (.334), p = .466      .269 (.333), p = .421
    Newly Upgraded        -.100 (.298), p = .738     -.146 (.296), p = .621
  R-square                .169                       .192

The results in Table 4.9 reveal that Information Technology attainment in pilot schools increased by 0.19 and 0.237 average GPA points in 2009 and 2010 respectively, relative to non-pilot schools, after the introduction of the programme. The differences are not statistically significant.

Table 4.9 (DiD) GPA Attainment Outcome: Information Technology, 2009 and 2010

  Variable                Year 2009                  Year 2010
  DiD                     0.19 (.177), p = .266      0.237 (.216), p = .275
  Treatment effect        -.348 (.133), p = .009     -.296 (.159), p = .064
  School location (ref. urban inner-city):
    Rural                 -.239 (.152), p = .118     -.346 (.184), p = .063
    Urban                 .007 (.114), p = .955      .110 (.139), p = .428
    Urban Uptown          .339 (.167), p = .018      .391 (.203), p = .056
  School gender (ref. co-ed):
    Boys                  -.139 (.186), p = .454     .147 (.225), p = .514
    Girls                 .220 (.154), p = .154      .210 (.188), p = .267
  School type (ref. technical):
    Traditional           .772 (.188), p < .001      .600 (.228), p = .009
    Newly Upgraded        .254 (.160), p = .114      .214 (.194), p = .271
  R-square                .443                       .323

4.1.4 Effect Size

An effect size is a standardised measure of the size of an effect. To measure the strength of the difference between the pilot and non-pilot schools per subject during the pilot year 2009 and one year after (2010), I used Cohen's d.
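The study computed effect sizes in SPSS (as noted below); for illustration only, Cohen's d with a pooled standard deviation, matching the definition given below, can be sketched as:

```python
# Sketch: Cohen's d for two independent samples, defined as the mean
# difference divided by the pooled within-group standard deviation.
# Illustrative reconstruction, not the study's SPSS computation.
import math

def cohens_d(group_a: list[float], group_b: list[float]) -> float:
    """Return Cohen's d for two independent samples."""
    n_a, n_b = len(group_a), len(group_b)
    mean_a = sum(group_a) / n_a
    mean_b = sum(group_b) / n_b
    var_a = sum((x - mean_a) ** 2 for x in group_a) / (n_a - 1)
    var_b = sum((x - mean_b) ** 2 for x in group_b) / (n_b - 1)
    pooled_sd = math.sqrt(((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2))
    # The sign depends only on which group is entered first.
    return (mean_a - mean_b) / pooled_sd
```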
According to Cohen (1988), d = .20 is a small effect, d = .50 a moderate effect, and d = .80 or greater a large effect. Cohen's d expresses the difference between the means of the two groups in units of their standard deviation. I used SPSS to calculate the effect sizes; the effect size is the mean difference divided by the pooled standard deviation of the data within the groups. Table 4.10 summarises the results for all subjects combined: the effect sizes for all subjects in 2009 and 2010 can be interpreted as the e-LHSPP having a small effect. The DiD results for all subjects were therefore not statistically significant, and the effect sizes were small; the small observed differences are consistent with chance. The sign of the effect size makes no difference to the interpretation; it is determined by the order in which the groups were entered and which group had the larger mean.

Table 4.10 Effect Size for all Subjects, 2009 and 2010

  DiD Outcome    Year 2009    Effect    Year 2010    Effect
  Effect size    0.248        Small     0.157        Small

Table 4.11 reveals that for 2009 and 2010, each subject's effect size was small. In other words, the effect of the e-LHSPP on the piloted subjects was small, with all results below 0.50 on Cohen's scale. The null effects of the e-LHSPP are important because they show that the research hypothesis cannot be supported. The final results reveal that there is not a statistically significant difference between pilot and non-pilot schools. Though there were small improvements in Mathematics, Chemistry, and Information Technology, Cohen's method of analysing effect size produced small effect sizes, meaning that the improved performances of students in these subjects may well have occurred by chance. To explain the null effects, document analysis was used to gain a better understanding of the intervening issues. The null effects and the document analysis of the e-LHSPP should not be seen as a failure of the intervention, but as lessons learned that can determine how future pilot projects of this type are effectively implemented.

Table 4.11 Effect Size per Subject for 2009 and 2010

  Subject                   2009 Effect          2010 Effect
  English language          d = 0.312 (small)    d = 0.269 (small)
  Mathematics               d = 0.420 (small)    d = 0.202 (small)
  Chemistry                 d = 0.312 (small)    d = 0.246 (small)
  Biology                   d = 0.231 (small)    d = -0.031 (small)
  Information Technology    d = 0.121 (small)    d = 0.115 (small)

  Interpretation based on Cohen's d (1988).

4.1.5 Covariates

Covariates Results in the Base Year 2008. Table 4.12 provides the GPA scores for schools in the various localities. When all subjects are combined, the aggregate GPA in 2008 was highest in urban uptown schools and lowest in rural schools, for both pilot and non-pilot schools. Pilot schools outperformed non-pilot schools in the rural, urban uptown, and urban inner-city localities, and the total weighted average GPA was highest in the pilot schools.

Table 4.12 Summary for Locality GPA 2008

  Locality            Pilot: Freq.  Mean (SE)     SD      Non-Pilot: Freq.  Mean (SE)     SD
  Rural               16            1.30 (.197)   0.79    21                1.20 (.197)   0.90
  Urban               42            1.67 (.110)   0.71    114               1.75 (.077)   0.83
  Urban Uptown        35            2.61 (.139)   0.82    20                2.20 (.109)   0.48
  Urban Inner-city    24            1.47 (.231)   1.13    38                1.38 (.138)   0.85
  Total Weighted Average GPA        1.859                                   1.664
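The "Total Weighted Average GPA" rows are consistent with weighting each category's mean by its frequency; this weighting scheme is inferred from the figures rather than stated in the reports. A minimal sketch using the Table 4.12 values:

```python
# Sketch: frequency-weighted average GPA, using the Table 4.12 figures.
# The weighting scheme (mean weighted by frequency) is an inference that
# reproduces the published totals to within rounding.
freqs_pilot = [16, 42, 35, 24]
means_pilot = [1.30, 1.67, 2.61, 1.47]

freqs_non_pilot = [21, 114, 20, 38]
means_non_pilot = [1.20, 1.75, 2.20, 1.38]

def weighted_gpa(freqs: list[int], means: list[float]) -> float:
    """Return the frequency-weighted mean GPA."""
    return sum(f * m for f, m in zip(freqs, means)) / sum(freqs)

print(f"{weighted_gpa(freqs_pilot, means_pilot):.3f}")      # ~1.860 (reported 1.859)
print(f"{weighted_gpa(freqs_non_pilot, means_non_pilot):.3f}")  # ~1.664
```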
Table 4.13 provides the GPA scores for the school types. When all subjects are combined, the aggregate GPA in 2008 was highest for traditional schools and lowest for technical schools. Non-pilot schools outperformed pilot schools among the newly upgraded and technical school types. The total weighted average GPA was highest for the pilot schools.

Table 4.13 Summary of School Type GPA 2008

  School Type         Pilot: Freq.  Mean (SE)     SD      Non-Pilot: Freq.  Mean (SE)     SD
  Traditional         68            2.27 (.104)   0.86    57                2.27 (.086)   0.65
  Newly Upgraded      41            1.30 (.140)   0.89    117               1.49 (.075)   0.82
  Technical           8             1.08 (.200)   0.56    19                1.45 (.151)   0.66
  Total Weighted Average GPA        1.848                                   1.716

Table 4.14 provides the GPA scores by school gender. The results reveal that the pilot girls' schools and co-ed schools outperformed their non-pilot counterparts. The total weighted average GPA was highest for the pilot schools.

Table 4.14 Summary of School Gender GPA 2008

  School Gender       Pilot: Freq.  Mean (SE)     SD      Non-Pilot: Freq.  Mean (SE)     SD
  Boys                10            1.90 (.211)   0.66    16                2.56 (.159)   0.63
  Girls               20            2.53 (.172)   0.77    26                2.42 (.094)   0.48
  Co-ed               87            1.69 (.107)   1.00    151               1.44 (.063)   0.78
  Total Weighted Average GPA        1.851                                   1.664

Covariates Results in the Pilot Years 2009 and 2010.

English Language. The covariate for school location in Table 4.5 reveals that when rural schools are compared with urban inner-city schools, the average GPA decreased by -.311 in 2009 and by -.235 in 2010. Both urban and uptown schools performed better than urban inner-city schools over the two years, but uptown schools showed statistically significant improvements in average GPA relative to urban inner-city schools in both 2009 and 2010. The covariate for school gender reveals that when boys' and girls' schools are compared with co-ed schools, the average GPA increased in 2009 and 2010; girls' schools showed statistically significant increases relative to co-ed schools in both years. The covariate for school type reveals that newly upgraded schools showed a decrease in average GPA relative to technical schools, whereas traditional schools recorded statistically significant increases in average GPA of .763 in 2009 and .755 in 2010.

Mathematics. The covariate for school location in Table 4.6 reveals that when rural schools are compared with urban inner-city schools, the average GPA decreased by -.323 in 2009 and by -.177 in 2010. Both urban and uptown schools performed better than urban inner-city schools over the two years, but uptown schools showed statistically significant improvements in average GPA of 0.825 in 2009 and 0.877 in 2010 relative to urban inner-city schools. The covariate for school gender reveals that when boys' and girls' schools are compared with co-ed schools, the average GPA increased in 2009 and 2010; the results for both boys' and girls' schools are not statistically significant. The covariate for school type reveals that newly upgraded schools showed a decrease in average GPA relative to technical schools, whereas traditional schools recorded statistically significant increases in average GPA of 0.594 in 2009 and 0.548 in 2010.
Chemistry. The covariate for school location in Table 4.7 reveals that all school locations showed improvements over urban inner-city schools, but the results were not statistically significant. The covariate for school gender reveals that when boys' schools were compared with co-ed schools, the average GPA decreased in 2009 and barely increased in 2010; neither result was statistically significant. Girls' schools showed improvements over co-ed schools in both years, but the results are not statistically significant. The covariate for school type reveals that newly upgraded schools showed a decrease in average GPA relative to technical schools, whereas traditional schools showed statistically significant increases in average GPA of 0.617 in 2009 and 0.571 in 2010.

Biology. The covariate for school location in Table 4.8 reveals that when rural schools are compared with urban inner-city schools, the average GPA decreased by -.427 in 2009 and by -.178 in 2010. Urban schools declined in 2009 but made gains in 2010 relative to urban inner-city schools, while uptown schools performed better in both years, recording 0.023 and 0.118 in 2009 and 2010 respectively; these improvements were not statistically significant. The covariate for school gender reveals that when boys' and girls' schools are compared with co-ed schools, the average GPA increased in 2009 and 2010, but neither group's increases were statistically significant. The covariate for school type reveals that newly upgraded schools showed a decrease in average GPA relative to technical schools, whereas traditional schools showed increases that were not statistically significant.

Information Technology. The covariate for school location in Table 4.9 reveals that only rural schools failed to improve on urban inner-city schools; urban uptown schools showed statistically significant improvements over urban inner-city schools of 0.339 and 0.391. The covariate for school gender reveals that girls' schools increased their average GPA relative to co-ed schools in both years, while boys' schools decreased in 2009 and increased in 2010; none of these results was statistically significant. The covariate for school type reveals that both traditional and newly upgraded schools showed increases in average GPA relative to technical schools; the traditional schools' increases were statistically significant, at 0.772 in 2009 and 0.600 in 2010.

4.2 Summary

The e-LHSPP is a response from the government to improve attainment in the school leaving examinations for Jamaican students. This study provides the preliminary results of the policy in the short run for English language, Mathematics, Chemistry, Biology, and Information Technology during the pilot phase from August 2008 to July 2009 and the spillover year 2010. The findings are aggregate school results for the subjects piloted. In 2009, pilot schools showed positive gains over non-pilot schools in Mathematics, Chemistry, and IT, but smaller gains than non-pilot schools in English language and Biology. The spillover year 2010 showed small positive gains for pilot schools relative to non-pilot schools in Chemistry and IT.
The pilot schools recorded smaller increases in 2010 than non-pilot schools in English language, Mathematics, and Biology. None of the DiD results for pilot versus non-pilot schools was statistically significant, and the effect sizes based on Cohen's d for all subjects over 2009 and 2010 were small: the e-LHSPP had little effect on the piloted subjects. The results imply that in 2009 the e-LHSPP had an effect on Mathematics, Chemistry, and IT for pilot schools relative to non-pilot schools, but the results were not statistically significant. There were systematic inequalities in school performance by location, type, and gender mix. Schools located in urban uptown, that is, in the affluent areas, performed better than schools in other locations when compared with urban inner-city schools. Girls' schools outperformed boys' and co-ed schools in all subjects over the 2009 and 2010 period. Traditional high schools outperformed newly upgraded high schools and technical high schools in all subjects. A profile can be drawn from the results: girls' schools, schools located in urban uptown, and traditional high schools performed best in both 2009 and 2010.

Chapter 4B Document Analysis

This chapter answers Research Question 2 and sub-question 2A, which are as follows:

RQ2. Were the components and design of the e-LHSPP appropriate for successful piloting?
RQ2A. What were the reported issues affecting students' attainments during the e-LHSPP?

The results in this chapter were obtained from the analysis explained in Chapter 3 and serve to give possible explanations for the quantitative data. The secondary archival documents used as data, in the form of documentary reports, were sourced from official government websites, government departments, and published reports. Chapter 4B reports the findings of the document analysis. In Chapter 3, I justified using document analysis for analysing the various secondary archival documents as data. The records of the primary research activities were non-existent, and according to Heaton (2008), secondary analysis of qualitative data requires revisiting and reworking the actual data; for this thesis, document analysis is therefore the most appropriate approach. This chapter provides a greater understanding of, and develops further insight into, the period under investigation.

Data Analysis to Answer RQ2 and RQ2A

Why and how the secondary archival documents were analysed is outlined in Chapter 3. As discussed previously, RQ2 and RQ2A were answered through document analysis utilising thematic analysis, with coding into themes similar to the way interview transcripts are analysed.

4.3 Research Question 2: Are the components and design appropriate for the successful piloting of the e-LHSPP?

To answer the second research question, I utilised the concept of critical success factors (CSFs) to provide insight into the factors that had an impact on the e-LHSPP and to supply the initial codes for the thematic analysis. Still following Bowen's (2009) approach, I used predefined codes drawn from the CSFs because the document analysis was supplementary to the quantitative analysis; in other words, the CSF codes were applied to the documents to be analysed. According to Frimpon (2012), CSFs are "variables that are fundamental to the success of the implementation, and an organization must handle these CSFs well to have a successful implementation" (p.118).
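This style of deductive coding can be illustrated with a minimal sketch. The code labels and keyword lists below are illustrative stand-ins, not the study's actual codebook, and real thematic analysis involves interpretive judgement that simple keyword matching cannot replace:

```python
# Sketch: applying predefined CSF codes to document text via keyword hits.
# Code labels and keywords are illustrative, not the study's codebook.
from collections import Counter

csf_codes = {
    "technology": ["equipment", "network", "computer lab", "infrastructure"],
    "faculty": ["teacher training", "ict skills", "instructional material"],
    "institution": ["ministry", "governance", "procurement"],
    "student": ["student preparation", "student training"],
}

def code_document(text: str) -> Counter:
    """Count keyword hits per predefined CSF code in one document."""
    text = text.lower()
    return Counter(
        code
        for code, keywords in csf_codes.items()
        for kw in keywords
        if kw in text
    )

print(code_document("Teacher training in ICT skills was completed; "
                    "equipment was installed in each computer lab."))
```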
Researchers have proposed a large number of CSF variables, which demonstrates the complexity of identifying the impact of CSFs on e-learning projects. Frimpon (2012), however, simplified the CSFs and placed each in a respective role, a role being a container that holds specific CSFs (see Figure 4.1). The roles highlight the success factors that influence e-learning deployments.

Figure 4.1 The Critical Success Factors (CSFs) of E-learning. From "A re-structuring of the critical success factors for e-learning deployment," by M.F. Frimpon, 2012, American International Journal of Contemporary Research, 2(3), p.125.

Several documents were analysed and matched against the CSFs presented in Figure 4.1, which generated the codes and, later, the themes. The document analysis findings of the e-LHSPP are presented using the themes that emerged. Three major interrelated themes emerged from the analysed data:

1. Technological support for success
2. Key stakeholders' involvement and outcome
3. Institutions' contribution to the e-LHSPP

Each of these themes is presented below.

4.3.1 Technological Support for Success

e-Learning Jamaica (e-LJam, 2012) provided a listing of the technology resources supplied to each school (see Appendix E). The resources were, for the most part, adequate for the scope of the project at that time, as discussed in the literature review, which highlighted the resources necessary for e-learning implementation. Interrogation of the e-LJam (2007-2008) report, for the period April 2007 to March 2008, revealed that equipment and networks were installed in the 26 pilot schools. A later report from e-LJam (2008-2009), for the period April 2008 to March 2009, still did not mention the establishment of a standardised e-learning platform at the beginning of the e-LHSPP, though it did record the delivery of furniture and networks to the pilot schools. It can be assumed that the quality of the technology was high and of a good standard, since the equipment was all new and the purchasers followed the government's procurement guidelines. In addition, suppliers of the technology had entered into a framework contract with the government to supply the latest technology over two and a half years.

The e-course maintenance was well established through the development of a Central Repository for Educational Material (CREM) designed by instructional technology experts, who made recommendations on the structure required for managing the programmatic and technical aspects and developed the specifications for an appropriate Learning Content Management System (LCMS/LMS). However, full implementation came late: the e-LJam (2009-2010) report reveals that the "Instructional Technology Experts were still advising on the standards and specifications for the content on various media, and the structure and operations of the CREM" (p.9). In a further e-LJam (2010-2011) report, published for the period April 2010 to March 2011, it was stated that they were still waiting for the CREM infrastructure to be completed; this delay led to a temporary Moodle site being set up near the May/June 2009 examinations. The late temporary Moodle site would have been problematic for teachers to adjust to and use so close to the high-stakes examinations.

4.3.2 Key Stakeholders' Involvement and Outcome

The analysis of the e-LJam (2008-2009) and (2009-2010) reports provided substantial information about the pilot period, and Table 4.15 summarises the analysis.
Table 4.15 Faculty CSF for e-LHSPP

e-Mindset
  Outcome: Principals, teachers, school administrators, education officers, and tertiary institutions were sensitised about the e-LHSPP. No evidence of students' involvement was found in any of the documents.
  Comment: This could produce adverse examination results if students are not mentally prepared for e-learning. Principals and HoDs needed to improve leadership; no evidence of project management training was found.

Technical Competency
  Outcome: 5,139 teachers were trained in ICT skills and 180 system administrators in network management.
  Comment: Technology integration training for teachers came late, in May 2009, leaving teachers inadequate time to apply it before the May/June 2009 examinations.

Course Development
  Outcome: Jamaica acquired Teachers' Instructional Materials (TIMs) and Students' Instructional Materials (SIMs) for the 5 pilot subjects, delivered to pilot schools with teachers oriented by April 2008 (see Appendix E). 9,000 items were written, placed on CDs, and delivered to schools in March 2009. A temporary Moodle-based database, with log-ins for schools, was also set up at e-LJam to assist in the May/June examinations. A contract was signed in October 2008 to build Jamaican capacity to develop content for the 5 pilot subjects, to be owned by the government over the long term.
  Comment: The capacity building to develop content appears not to have been completed, so material for the piloted subjects had to be sourced. The source of the material is unclear, but it is conceivable that it was not locally developed. The acquired material was still being evaluated up to March 2009, one month before the May/June 2009 examinations, suggesting teachers were evaluating the materials while using them, which could be problematic.

Evaluation and Assessment
  Outcome: Ministry of Education (MoE) subject experts and Subject Advisory Groups (SAGs) were established to ensure standards and quality assurance in the materials acquired and developed. Instructional technology experts advised on standards and specifications for content on various media, including the structure and operation of the CREM. School coordinators and implementation officers were hired by e-LJam to monitor the e-LHSP implementation in schools.
  Comment: There was no evidence of a committee reporting on examination results in pilot schools to determine the success or failure of the pilot, and no report from teachers on students' performance during the pilot. The official reports did not reference other reports where such information could be found. In the absence of such reports, this research was needed to highlight students' performance and make recommendations to improve it.

e-Learning Environment
  Outcome: A temporary Moodle-based database to assist in the May/June examinations, with log-ins for schools, was set up at e-LJam in March 2009.
  Comment: The virtual e-learning environment, which should have provided teachers and students with digital solutions to enhance their learning experience, came late in the pilot. Students would have had only about one month of online experience with Moodle, which is not an easy environment to manoeuvre.
4.3.3 Institutions' Contribution to the Project

On the matter of subject experts, the e-LJam (2008-2009) and (2009-2010) reports documented and made public that two of the top tertiary institutions in Jamaica, the University of the West Indies/Joint Board of Teacher Education (UWI/JBTE) and the University of Technology, Jamaica (UTech, Ja), had been contracted in October 2008 to develop TIMs and SIMs for the pilot subjects. The reports also documented that The University of Plymouth in the United Kingdom was contracted to assist in the development of TIMs and SIMs for Mathematics and Information Technology. The Human Employment and Resource Training/National Service Training Agency (HEART/NSTA) Trust provided ICT skills training to teachers and MOE education officers, and network management training to systems administrators, up to March 2009. The Mico University College received cabinet approval in March 2009 to train teachers in technology integration, and this critical part of their training commenced in May 2009, the same month the official external school leaving examinations began. It can be inferred that the students did not benefit from the teachers' technology integration training, or from the ICT skills training, given the short time before the examinations.

On the matter of intellectual property, all instructional materials purchased and developed were Government property. The pilot schools received support from the government through e-LJam, and the MOE supervised schools' implementation, infrastructure development, instructional materials for both teachers and students, and the training of teachers and subject experts. Strategies put in place to maintain institutional sustainability included: the establishment of a Schools e-Learning Implementation Committee (SEIMC) in all schools to oversee the implementation and ensure buy-in and ownership; an agreement signed by school principals and MOE officials to ensure that the equipment was kept safe and that teachers attended the training and used the technology in their teaching, inter alia; and distinctive markings on the equipment so that it could be identified in the event of theft. No issues were raised about the financial standing of the e-LHSPP, but the financial performance reported in the e-LJam (2008-2009) report showed a budget of J$2.6 billion against actual spending of J$1.2 billion. Concerns about sustainability were raised by 10.6 percent of teachers in a report, regarding alternative energy (solar energy) and equipment replacement costs, which created challenges for schools (Morrison, 2016). Teachers were also concerned about the inadequate investment in broadband to make the project viable in the long run. Finally, codes were developed for the Student CSF, but no corresponding evidence appeared in the official documents and reports reviewed for the period. This was an unfortunate and problematic omission, as students are essential to the success of the e-LHSPP.

4.4 Research Question 2A: What Were the Reported Issues Affecting Students' Attainments During the e-LHSPP?

An analysis of the e-LJam reports for 2008-2009 and 2009-2010 revealed two major themes:

1. Supervision of the project
2. The resources available to the e-LHSPP

4.4.1 Supervision of the Project

Regarding RQ2A, Morrison (2016) reported that the majority of school representatives felt that the required equipment was properly installed during the pilot phase. A smaller group, however, had two major concerns: the apparent lack of discussions with teachers and the lack of wireless options at the pilot stage. In the documents, supervision of the project, and in particular its governance, featured significantly, beginning with the initial board appointment in 2005; this board was short-lived, as the board changed in 2007 and again in 2009 due to political changes. Morrison (2016) asked the sample schools, "Do you believe the e-Learning project office set up to implement the project was a properly organised entity?" He found that 78.9 percent of respondents agreed that it was properly set up, while 21.1 percent did not, citing initial delays and the lack of discussions with teachers as their major objections. There was high praise for the e-LJam representatives, because respondents felt they responded quickly to concerns. However, the Auditor General's Report (2014), under section HEAD 5600, noted that although a feasibility study had outlined in detail the components of the project and how the company would govern it, e-LJam lacked a project management plan, which the report held could have prevented the delays in completing the project on time.

4.4.2 Resources Available to the e-LHSPP

Resources in this context mean the technology, instructional materials, and teacher training provided to the schools during the e-LHSPP. Though my analysis of the documents revealed that resources were for the most part adequate for the e-LHSPP, I had concerns about the combined teacher training in ICT skills and technology integration, online access to databases for both students and teachers, and the delivery time of the instructional materials. These important resources arrived late, in March and May 2009, in the year of the May/June 2009 examinations. The conclusion that can be drawn is that the technology and infrastructure were in place, and the other resources were adequate in themselves, but their late delivery could have significantly impacted the e-LHSPP.

4.5 Summary

This chapter sought to answer RQ2 and RQ2A. The analysis for RQ2 produced the themes: 1. Technological support for success; 2. Key stakeholders' involvement and outcome; and 3. Institutions' contribution to the e-LHSPP. The analysis for RQ2A produced the themes: 1. Supervision of the e-LHSPP; and 2. The resources available to the e-LHSPP.

Regarding RQ2, the analysis of government documents and commissioned reports revealed gaps in the e-LHSPP rather than a pilot leading smoothly to full implementation of the e-LHSP. The analysis showed that, for the most part, the design and components were appropriate for the e-learning needs of the schools during the e-LHSPP. Technology and infrastructure were available to all schools, and key stakeholders were sensitised, but there is no record that training in project management, a skill required to manage a project successfully, was provided. The UWI/JBTE, UTech, Ja, HEART/NSTA, and Mico University College were contracted to provide training and develop the instructional materials. In most cases, contracts were signed too late, and instructional materials had to be sourced externally because locally developed materials were not ready in time for the e-LHSPP.
Regarding RQ2A, a tightening of governance was needed during the e-LHSPP to avoid delays, which arose primarily from political changes. The committee set up to implement the e-LHSPP needed to spend more time discussing teachers' concerns with them. Critical components such as teacher ICT training and instructional materials did not adequately support the e-LHSPP. One major CSF component, Student, was missing from the officially published documents; as such, students' input into the e-LHSPP could not be assessed. Codes were developed from the CSFs for students, but no themes emerged because of the lack of information.

Chapter 5 Discussion

In this chapter, I discuss the findings from Chapters 4A and 4B, stemming from the two research questions and the sub-question reported in Chapter 3. The discussion links and interrogates the literature, integrates the theoretical framework and analysis structure, provides clarification and explanations from the data collection, interprets and makes sense of the data, and adds to the extant body of knowledge. Chapter 4A presented the quantitative analysis and Chapter 4B the document analysis; both are combined here to answer the research questions. The discussion is presented as an introduction followed by the research questions.

5.1 Introduction

The purpose of this study was to determine the effects of the e-LHSPP on students' performance in their high-stakes CXC/CSEC school leaving examinations in English language, Mathematics, Chemistry, Biology, and Information Technology. Jamaica is no different from other countries worldwide that have sought to use e-learning/ICT as one way to improve educational outcomes at all levels of the education system. These countries include developed and developing countries, but all seek the same benefits: interactivity between teacher and student during lesson delivery (Wagner et al., 2008) and the ability to measure learning outcomes to determine students' attainment (Noesgaard & Orngreen, 2015). Jamaica is considered a developing country, and according to Aung and Khaing (2016), the major challenges faced by developing countries' e-learning programmes are a lack of resources, specifically learning materials; weaknesses in content delivery; inadequate ICT training for teachers; and inadequate infrastructure. Chapter 3 provided information about the computers, other technologies, learning materials, and infrastructure resources received by the pilot schools; while these were found to be adequate in some cases for the scope of the e-LHSPP, there were serious concerns about their implementation and use, especially the ICT integration training for teachers. The training for teachers is discussed in the analysis of the themes identified during the document analysis utilising thematic analysis. The analysis of the thesis findings focused on the school leaving examination results, supplemented with information from published reports outlining the issues that affected the e-LHSPP. To assist in the planning and analysis, I used both the holistic e-learning systems theoretical framework and the LFA as guides for evaluating the e-LHSPP and other similar international projects. The holistic e-learning systems theoretical framework provided the lens through which to approach this study and the areas that needed to be covered in the document analysis.
The starting point was to identify the e-learning system stakeholders and the roles they played, based on the e-learning systems theoretical framework guidelines. The major stakeholders were the Ministry of Education, e-LJam, and the schools participating in the e-LHSPP. The e-LHSPP was managed by e-LJam and monitored by the Ministry of Education in 26 schools representing all the school types and geographical locations in Jamaica. The other 42 schools in the study were drawn from the same geographical areas and also represented the different school types in Jamaica. The school leaving examination grades for the subjects were converted to GPA and re-ordered to match standard GPA scoring, with 4 the highest and 0 a failing performance. Trends in the school leaving examinations were analysed over the period 2005 to 2008 to establish that the quantitative analysis met the "common trend" (or "parallel trend") assumption; the explanation can be found in Chapter 3, as can the preliminary statistics I had to work with for the pilot and non-pilot schools. Because higher GPA points mean better results, pilot-school results higher than non-pilot results are interpreted as pilot schools doing better. The analysis in Chapter 4A provided the results for discussion in RQ1.

5.2 Research Question 1: What are the effects on students' attainment when the e-LHSPP was implemented in schools?

To evaluate the effects of the e-LHSPP, a decision had to be taken on the framework needed to gain a better understanding of the e-learning components and the areas on which to focus. The holistic e-learning framework by Aparicio et al. (2016) provided the road map for evaluating the e-LHSPP by investigating E-learning Systems Stakeholders, E-learning Activities, and E-learning Technologies. To answer this research question, I focused on E-learning Systems Stakeholders, which included institutions, in this case high schools and e-LJam. The national CXC/CSEC examination results for the e-LHSPP schools and the other schools provided the administrative archival quantitative indirect data, through the OEC, for the quantitative analysis. The LFA provided the model for analysing the e-LHSPP and the research results in other countries. Researchers such as Noesgaard and Orngreen (2015) found that most measures of e-learning effectiveness are defined as "learning outcome", which is most likely to be quantitative. The results for the subjects piloted in the e-LHSPP were first captured in their original format, in percentages, and then converted to GPA for ease of analysis. Each school received computer-generated printouts from the OEC showing the subjects taken by students and the total percentage of passes or failures for each CXC/CSEC subject on a grading scale of 1 to 6, where 1 is the highest and 6 is the lowest. The printouts also showed the total number of males and females receiving passing or failing grades on the same 1-to-6 scale. The GPA is a standard measure of students' performance used worldwide, especially for college entry. In this case, higher GPA scores meant better results (for example, 4 is better than 1); this was done to keep the numerical order consistent with standard practice around the world. Two periods were evaluated to gain a better understanding of the e-LHSPP: the examination pilot year 2009 and the spillover year 2010.
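The re-ordering of the CXC/CSEC 1-to-6 scale onto a 0-to-4 GPA scale can be sketched as follows. The exact grade-to-GPA mapping used in the study is not reproduced in the text, so the dictionary below is an illustrative assumption that simply preserves the reversed ordering described above:

```python
# Sketch: re-ordering CXC/CSEC grades (1 best .. 6 worst) onto a GPA scale
# (4 best .. 0 failing). The exact mapping is an illustrative assumption.
grade_to_gpa = {1: 4.0, 2: 3.0, 3: 2.0, 4: 1.0, 5: 0.0, 6: 0.0}

def convert(grades: list[int]) -> list[float]:
    """Map a list of CXC/CSEC grades to GPA points."""
    return [grade_to_gpa[g] for g in grades]

print(convert([1, 3, 6]))  # [4.0, 2.0, 0.0]
```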
The LFA assisted me in investigating the goal, purpose, output, and input of the e-LHSPP, the verifiable indicators, the sources of information, and the assumptions made about what is needed for success. This research question focused on the output section of the LFA, which included the magnitudes of outputs (planned completion date), the sources of information and methods used, and the assumptions affecting input-output linkages. Chapter 3 covered the information I had to work with, including the holistic e-learning framework (E-learning Activities), to carry out the analysis at the output stage of the LFA. The output of the e-LHSPP is the school leaving examination results; the verifiable indicator used was the CXC/CSEC examination results, and the source of information was the administrative archival quantitative indirect data collected for each pilot subject for each school in the study.

The CXC/CSEC examination results were converted to GPA scores and used as input for the quasi-experiment. When the combined GPA averages for all piloted subjects were computed before the DiD with regression was applied, the results revealed that the pilot schools outperformed non-pilot schools in both years. The results in Table 4.4 show that in 2009, pilot schools obtained a higher weighted average GPA across all subjects, 1.987, compared with 1.77 for non-pilot schools; in 2010, pilot schools scored 2.00 compared with 1.855 for non-pilot schools. Although there are improvements, further statistical testing reveals no statistically significant differences between the scores of pilot and non-pilot schools. It is interesting to note that although the pilot schools' weighted average GPA scores were higher in both years, non-pilot schools made bigger gains in some subjects.

The results of the quasi-experiment using DiD with linear regression, adding the covariates of school location, school gender, and school type, reveal the following for each subject in 2009 and 2010. English language recorded decreasing average GPA points of -0.129 for 2009 and -0.150 for 2010, while Mathematics recorded an increase of 0.123 for 2009 but a decrease of -0.050 for 2010. Chemistry recorded increases of 0.073 for 2009 and 0.059 for 2010, while Biology recorded decreases of -0.200 for 2009 and -0.428 for 2010. Information Technology recorded increases of 0.19 for 2009 and 0.237 for 2010. The results reveal that pilot schools made gains over the non-pilot schools in Mathematics, Chemistry, and IT in 2009, and in Chemistry and IT in 2010. The increases for pilot schools are less than 1 average GPA point over non-pilot schools, and the results are not statistically significant. The effect size for all subjects was small for both years on Cohen's d scale, which can be interpreted to mean that the outcome is not decisive.

An analysis of the covariates of school location, school gender, and school type provided further clarification of the DiD results. When the covariate for school location was examined, urban uptown schools outperformed urban inner-city schools in all subjects, with the English language, Mathematics, and IT results statistically significant. Urban schools outperformed urban inner-city schools in all subjects except Biology, and the results were not statistically significant.
Rural schools performed below urban inner-city schools in all subjects except Chemistry, and the results were not statistically significant. The covariate for school gender revealed that boys' and girls' schools outperformed co-ed schools in all subjects except Chemistry, where the boys' schools performed below the co-ed schools. The only statistically significant result was English language for girls' schools compared with co-ed schools; all other results were not statistically significant. The covariate for school type reveals that traditional schools outperformed technical schools in all subjects, with statistically significant results in every subject except Biology. The newly upgraded high schools performed below the technical schools in all subjects except IT in both 2009 and 2010, and none of these results was statistically significant. The covariate results can be interpreted as girls' schools, schools located in urban uptown, and traditional high schools performing best in both 2009 and 2010, while rural, co-ed, newly upgraded schools performed below the others in both years.

Jamaica is not unique in these findings. A look at other countries that sought to improve attainment using e-learning/ICT produced results similar to Jamaica's: improvements were small in some cases and not statistically significant. Table 5.1 presents an analysis of these countries using the LFA theoretical framework described in Chapter 2.

Table 5.1 LFA Analysis of Countries

Lebanon
  Goal: Evaluate the effectiveness of ICT on students' school performance.
  Method: Survey with factorial design.
  Analysis: 2x2 factorial analysis of variance (ANOVA).
  Input: Predefined sets of ICT artifacts.
  Output: No ICT effect on students passing the baccalaureate in secondary schools.

South Africa
  Goal: Study the impact of computer immersion on performance.
  Method: Secondary data analysis; quasi-experiment (treatment and control groups).
  Analysis: Paired-samples t-test.
  Input: School leavers' Senior Certificate mathematics scores.
  Output: No significant improvement in the overall Matric Mathematics results for the EMDC East high schools.

Argentina
  Goal: Identify improvements in academic achievement for students who benefited from a programme compared with their non-beneficiary peers.
  Method: Secondary administrative data analysis; quasi-experimental design.
  Analysis: Propensity score matching (PSM) technique of Rosenbaum and Rubin (1983).
  Input: Programme for International Student Assessment (PISA) test scores.
  Output: The result is significant between groups, but its average size is small.

Nigeria
  Goal: Study the effects of ICT on secondary school students' academic performance.
  Method: Quasi-experimental pre-test, post-test, and control group design on intact classes.
  Analysis: t-test of the hypotheses at a 0.05 level of significance.
  Input: Christian Religious Studies Achievement Test scores.
  Output: The mean difference between the two groups is 0.88 in favour of the treatment group; a small difference.

Australia
  Goal: Determine students' performance in a blended e-learning environment.
  Method: Quantitative analysis.
  Analysis: Paired-sample t-test.
  Input: Year 10 Science scores.
  Output: Overall improvement, but not equal for each student; improvement differed for each quartile group.
United Kingdom (UK)
  Goal: Investigate the impact of ICT on educational attainment.
  Method: Secondary administrative data analysis.
  Analysis: Prediction using 'baseline' data, with actual results then analysed.
  Input: National Tests and GCSEs for Key Stage 4.
  Output: A very small difference between high and low ICT users in subject mean relative gain scores.

India
  Goal: Evaluate ways to improve the quality of education in urban slums.
  Method: Computer-assisted learning (CAL).
  Analysis: Regression of test score gains on a dummy for treatment school, controlling for initial pre-test score.
  Input: Computer maths games scores.
  Output: Strong effect on maths scores in the short term, but the programme ended after one year; by the third year the gains were lost.

Studies in Sub-Saharan Africa up to 2014 suggested there is little evidence of positive effects of ICT interventions, and what little evidence does exist suggests that ICT programmes are often not effective (Piper, 2014). Other researchers have provided evidence of a link between the use of ICT and improved academic performance (Chandra & Lloyd, 2008; Taylor et al., 2007). Wims and Lawler (2007), after evaluating the impact of ICT in Kenyan schools, found tangible benefits to students from exposure to ICT, though the benefits they recorded concerned career and job opportunities rather than improved academic performance. Overall, the literature reviewed shows some improvement in test scores, but most results were not statistically significant, as is the case in Jamaica. In research question two, I discuss whether the components and design of the e-LHSPP were appropriate enough to be successful.

5.3 Research Question 2: Are the components and design appropriate for the successful piloting of the e-LHSPP?

To answer RQ2, I used the E-learning Technologies and E-learning Activities components of the holistic e-learning theoretical framework as a guide and compared what the literature recommends as necessary for a successful pilot against the e-LHSPP outcome. The document analysis produced the themes: 1. Technological support for success; 2. Key stakeholders' involvement and outcome; and 3. Institutions' contribution to the e-LHSPP. These themes came out of coding the critical success factors (CSFs). The literature on the successful implementation of e-learning proposes that e-learning systems should address the CSFs of Student, Faculty, Technology, and Institution (Frimpon, 2012).

5.3.1 Technology Support for Success

There were no issues recorded about the technology supplied to the pilot schools, as these were the technologies available at the time and they were new. The pilot schools' infrastructure was renovated, and other resources, including CDs, furniture, and networks, were provided for the computer labs. The schools had to provide their own Internet service, which they were already doing. At the beginning of the e-LHSPP there was no mention of the establishment of a standardised e-learning platform. This meant that schools were left on their own to navigate the Internet and choose an e-learning platform, if any at all, that best suited their needs. The lack of a standardised e-learning platform connected to a central monitoring system would make it difficult to assess the successes and challenges encountered by the pilot schools; information of this nature would be vital for proper evaluation. Technology experts designed a Learning Content Management System (LCMS/LMS), but it was not ready, so a temporary Moodle site was set up near the time of the May/June examinations. Setting up the Moodle site so close to the examinations would not have been very effective, especially for teachers, since Moodle is not easy to navigate.
5.3.2 Key Stakeholders' Involvement and Outcome

Researchers such as Naveed et al. (2020) list the CSF dimensions as Students, Instructors, Design and Contents, System and Technology, and Institutional Management. Though their studies concern adult students studying at a distance on online courses, the Student dimension can also be adopted at the high school level, even where students are in school and using their computer labs, as is the case in Jamaica. I could not find any evidence that the Student CSF was adequately engaged or prepared for the e-LHSPP. I combined the Frimpon (2012) and Naveed et al. (2020) Student CSFs to produce Table 5.2. Students' training was not documented in any of the reports or studies on the e-LHSPP. To adequately prepare students for e-learning, the CSFs have to be taken into consideration, adequate time should be given for preparation, and a rubric should be designed to capture the CSFs.

Table 5.2 CSF Student Dimension

  CSFs (Student):
  - Discipline
  - Computer competency
  - eAttitude
  - Participation and involvement
  - Students' motivation
  - General Internet self-efficacy
  - Interaction with other students
  - Commitment toward e-learning

It is difficult to imagine e-learning implementation in schools without adequate preparation of students. Boulton (2008) conducted a study of e-learning in UK secondary schools around the same time the e-LHSPP was being implemented in Jamaica. It took place over two years with students aged 14-16 (Key Stage 4), similar to Jamaica, who were preparing for their GCSEs. The study highlighted many challenges faced by the students and concluded that there needed to be an awareness that students in full-time compulsory education require training in using e-learning materials and in developing independent learning skills before e-learning is implemented.

The ICT skills training of the teachers was well documented: a total of 5,139 teachers and 180 system administrators were trained. Teachers' ICT competency was not in question, but another Faculty CSF revealed that training in integrating technology came late in the pilot phase: technology integration training for teachers started in May 2009, less than a month before the start of the examinations. Integrating technology into the schools' curriculum is extremely important for the successful implementation of e-learning projects. How teachers used the technology and learning resources is described in Chapter 3, where I reported my discussion with two teachers. Technology integration in the curriculum constitutes the foundation for teaching and learning in 21st-century classrooms. According to Schoepp (2005), both technology standards for teachers and curriculum integration are essential components of a technology integration plan. The technology standards for teachers, established as early as 2000, cover six broad categories (International Society for Technology in Education [ISTE], 2000). The categories that would have been missing from the teachers' training, and that would have assisted e-learning integration in the classroom, are: (a) planning and designing learning environments and experiences; (b) teaching, learning, and the curriculum; and (c) assessment and evaluation. The successful integration of technology in schools hinges on the ability of the teachers to infuse technology into the curriculum (Rakes & Casey, 2002).
It is the responsibility of teachers to support the CSF Student, and if teachers are unable to successfully integrate technology into the curriculum, it can be assumed that the students will lack the CSFs shown in Table 5.2. As a result of the teachers' lack of technology integration training, it would appear, according to the two teachers I spoke with, that both teachers and students were using the technologies supplied by the e-LHSPP but without the pedagogy related to technology integration recommended by the holistic e-learning theoretical framework (e-learning activities). Though the school leaving examinations were primarily paper and pencil, with the science subjects having laboratory components, the inability of the teachers to properly integrate e-learning pedagogy into lessons could account for the insufficient improvement in passes for the e-LHSPP pilot schools over the other schools. Another important component missing from the e-LHSPP was a focus on school administration regarding technology integration. Yu and Durrington (2006) agree that school administrators play an important role in technology integration. While e-LHSPP project component 3, relating to teacher training, included a Principals' Awareness Orientation session designed to sensitise principals about the project, I could not find any other document showing training or evaluation of the principals' leadership regarding the e-LHSPP in their schools. The Technology Standards for School Administrators Collaborative (2001) provided a guideline for administrators to facilitate the integration of technology in their schools. The guideline covers six areas: (a) leadership and vision; (b) teaching and learning; (c) productivity and professional practice; (d) support, management, and operations; (e) assessment and evaluation; and (f) social, legal, and ethical issues. Morrison (2016), in his recommendations for the e-LHSP, emphasised that leadership from the principals and heads of departments needed to be demonstrated at the schools for the ICT investment in e-learning to be viable and to reap the rewards of e-learning. Kusluvan (2003) agrees that it is the responsibility of the leadership to direct the group. The Auditor General's Report (2014) cited the lack of a proper project management plan, which would account for the gaps in the direction of the leadership of the e-LHSPP.

5.3.3 Institutions' Contribution to the Project

The contributions made by the contracted institutions, such as UWI/JBTE, UTech, Ja., HEART/NSTA, the University of Plymouth, and Mico University College, never really affected the outcome of the e-LHSPP. The UWI and UTech, Jamaica, were still testing locally written materials up to the time of the May/June 2009 examinations, which led to the acquisition of materials for the five piloted subjects. HEART/NSTA completed the ICT training, but the Mico University College technology integration training of teachers could not create an impact because it was completed too close to the May/June 2009 examinations. In the final research question, 2A, I discuss the issues affecting the e-LHSPP.

5.4 Research Question 2A: What are the reported issues affecting students' attainments during the e-LHSPP?

The document analysis, using thematic analysis of the reported issues from the various government documents and published reports, revealed two themes affecting the e-LHSPP, classified as the supervision of the project and the resources available to the e-LHSPP.
The supervision of the project rested primarily with government officials, so a focus on governance is important. Governance often refers to the relationship between a country's government and its citizens; when it refers to a company's management structure and its relationship with its shareholders, it is called corporate governance (Benn & Dunphy, 2006). When governance is applied to education, according to the Network of Experts in Social Sciences of Education and Training (2018): "Education governance is concerned with how the funding, provision, ownership, and regulation of education and training systems is coordinated, and at what level; local, regional, national and supranational" (p. 1). In the case of the e-LHSPP, it was the government that appointed the e-learning governance boards to govern the pilot. The first public board was appointed in 2005 but was short-lived for political reasons. Another two boards were appointed in 2007 and 2009. The details of the board changes can be found in the e-Learning annual report for 2008-2009. Such instability in governance boards can have adverse effects on projects. The Funnel Model for e-learning implementation, shown in Figure 5.1, shows the important relationship between governance and finance, technology and delivery, and materials development and instructional design. It shows that e-learning is much more than technology and the materials developed; a great deal depends on governance and finance for funding and the general administration of the e-learning project. The sustainability of any e-learning system depends on its management, which is the central axis because of the high cost and resources needed to develop a successful system (Suryawanshi & Suryawanshi, 2015). Applying the Funnel Model to the e-LHSPP shows that governance and finance should have been well integrated with materials development and instructional design, and technology and delivery, to ensure the full realisation of the e-learning outcome. Although some of the pitfalls in governance were unavoidable, such as the change of government causing delays in contracts and deliverables, consideration should have been given in the planning to these possibilities, and mitigation strategies put in place.

Figure 5.1 Funnel Model for implementing e-learning. From "Fundamentals of e-learning models: A review," by V. Suryawanshi and D. Suryawanshi, 2015, IOSR Journal of Computer Engineering, p. 110.

Representatives of the sample schools were asked by Morrison (2016), "Do you believe the e-Learning project office set up to implement the project was a properly organised entity?" The responses from his study revealed that 78.9 percent of the respondents agreed that it was properly set up and the required equipment properly available, but 21.1 percent did not agree. Those who disagreed cited initial delays and a lack of discussions with teachers as their main concerns. The Auditor General's Report (2014) cited the lack of a project management plan from e-LJam, which they believed could have prevented the delays in implementing some key components and in completing the project on time overall. Finally, the Auditor General's Report (2014) documented no evaluation of the Board of Directors for the period February 13, 2012, to March 31, 2014, suggesting that no previous evaluation had been done either.
Good governance in education promotes the existence of standards, provides performance information, creates incentives for good performance, and ensures accountability (Lewis & Pettersson, 2009). The theme of the resources available to the e-LHSPP refers to the technology, instructional materials, and teacher training provided to the e-LHSPP schools. The major concern related to technology integration training for teachers, as discussed in question two and linked to the CSF Student. Another critical resource that was affected was the temporary Moodle-adopted database, which was meant to help in the May/June 2009 examinations and was set up at e-LJam with log-ins issued to schools in March 2009. Over 9,000 items were written for the e-LHSPP, placed on CDs, and delivered to the schools, but access to the Moodle-adopted database came late, three months before the scheduled examinations in June 2009. I believe that this late access to online resources by the schools could have affected the students' performance in their June 2009 examinations. Finch and Jacobs (2012) stated that online access can increase opportunities for users to collaborate with expert professionals, provide students with the flexibility of access to courses, and allow adjustments to subjects and their contents as needed. Earlier access to the database could have provided subject teachers with the opportunity to collaborate across schools and given the students access to the material at any time convenient to them. The possibility also existed that adjustments to the contents could have been made in real time, with all teachers having access to the material. My explanation, based on the data, for the lack of short-term gains in some subjects is insufficient teacher training with the new technologies, learning materials that were not well adapted for computer use, and students' lack of ICT skills, because it takes time for students to learn how to use ICT efficiently in an educational setting, especially when they are used to the traditional way of teaching adopted by all schools.

Chapter 6 Conclusions, Implications, and Recommendations

This chapter presents the conclusions, implications, and recommendations for this thesis. In addition, I have included the contribution of this study to knowledge and its professional contribution to education.

6.1 Conclusions

Over the past several decades, governments and institutions around the world have sought to improve the quality of education at all levels of the education system through electronic means. These electronic means have taken different forms and evolved into what we see in ICT today, in 2022. Though they are called by different names under the umbrella term ICT, the underlying purpose when they are used in secondary schools is, in most instances, to improve the performance of students. A study of the literature revealed greater success at the primary level than at the secondary level of education. It has proven even more difficult at Key Stage 4 (years 10 and 11, ages 14 to 16) because of the high-stakes external examinations, which require more infrastructure, especially for science subjects. BECTA (2001), reporting on the ImpaCT2 study, highlighted issues with the curriculum that affected the study.
Some of these issues are: (1) the extent to which ICT is embedded in subjects varies across schools; (2) most secondary schools teach ICT as a separate subject; (3) the examination pressures on the curriculum in Key Stage 4 create difficulties for ICT use; and (4) there is pressure on school timetables to accommodate ICT and other subjects separately. Jamaica, like many other developing countries, adopted the e-learning approach to solving its educational challenges, an approach which had its genesis in 2006. This was in particular to facilitate improvement in the high-stakes school leaving examinations at the secondary level. While this study is based on an e-learning project from over a decade ago, it was Jamaica's only national programme to date at the secondary level of the education system that sought to use e-learning as a means to improve student performance on a national level. Subsequent short-term programmes in 2019-21 emerged out of it and have sought to encourage teachers to make use of technology. There is also the call from the present Minister of Education, Youth, and Information, Hon. Fayval Williams, to school leaders to make greater use of technology to ensure that students are not left behind in rapidly changing technological learning environments. The purpose of this thesis was to answer the research questions on how the pilot worked. The findings show that the e-LHSPP pilot did not produce the expected increase in the five subjects piloted in 2009 and in the spillover year 2010. What occurred was a small but not statistically significant increase of less than one point for pilot schools over non-pilot schools in mathematics, chemistry, and IT in 2009, and in chemistry and IT in 2010, using average GPA points as the measure of performance. These results were similar to those of the other countries reviewed in this study. For some, even a small increase would be welcome news given the large investment in the e-LHSPP schools; an increase is an increase, no matter that others might consider it a failure of the e-LHSPP. How the results are interpreted by stakeholders or different interest groups will depend on their motives. Researchers such as Trucano (2013) and Piper (2014) have found little evidence of a significant impact on students' performance when ICT is used, especially in developing countries; where positive effects were found, they remained small. The covariates location, school gender, and school type produced results that show the importance of including them in determining schools' performance. The results reveal that urban uptown schools, traditional girls' schools, and other traditional schools are likely to do better than the others, and when these factors are combined, urban uptown girls' schools which are traditional emerge on top. The documents that were analysed revealed that the components and the design were appropriate for the most part, but serious concerns emerged about the implementation of the e-LHSPP components. There were issues concerning the readiness of school administrators to manage such a project and the technical competency of teachers to integrate technology into teaching at such short notice; the locally developed material was still being evaluated while being used; and the digital environment (Moodle) came late and was asynchronous rather than the typical synchronous use in schools. The reported issues are summarised into two themes that are imperative to e-learning: the supervision of the project and the resources available to the e-LHSPP.
Accountability of leadership, from board level to the schools' administration, was lacking in key areas, and this affected the teachers. Teacher training is an essential part of the resources needed to succeed, and a critical piece of the training, technology integration, came too late and may have affected the delivery of the content provided to the pilot schools. The discussion surrounding the Funnel Model for implementing e-learning by Suryawanshi and Suryawanshi (2015) highlights the lack of integration between the key components of the e-LHSPP, which caused the setting of an unrealistic timeframe to achieve the deliverables. The critical CSF Student did not feature in the plan. It appears that the focus and the measurement of success of the e-LHSPP depended essentially on the teachers receiving the ICT training and passing on the information to students in the usual teacher-centered approach. The technologies supplied to the schools favoured a teacher-centered approach. My own experience as a teacher at the secondary level and my interactions with trainee teachers during the time of the e-LHSPP provide evidence that the success of the e-LHSPP rested with the teachers. Most if not all interventions in Jamaican schools to improve students' academic performance are teacher-centered. The success of the e-LHSPP and any other project of this kind should pay particular attention to students' involvement as set out in the CSF Student. The literature shows that teachers of e-learning in secondary school must be aware that adequate preparation of students is paramount. As a teacher educator for twenty years, I am aware that IT is taught as a separate subject from Grades 7 to 9 and that students specialise in IT in Grades 10 and 11. In Grades 10 and 11, ICT was not integrated into the curriculum during the e-LHSPP, and there is no evidence that the students received any training in ICT and the pedagogy associated with learning. My own IFS study looked at several teacher training institutions in Jamaica and discovered that up until 2016 the 'teacher-centered' approach was the dominant approach. The teacher-centered approach positions the teacher as responsible for imparting knowledge, the subject expert, while the students are the recipients. The teaching approach had not changed, and trainee teachers mainly used PowerPoint presentations as their preferred choice in the classroom during their practicum. The PowerPoint presentations were not interactive and were used mainly to disseminate notes to students. When the e-LHSPP was implemented in 2008, one year after its scheduled implementation, there were no national ICT policy guidelines directing the use of ICT in schools, but by 2013 Jamaica had charted a new course. A national ICT plan was developed to provide a framework to guide the transformation of teaching and learning by equipping educators with skills and tools to harness the power of ICTs for the design, delivery, and assessment of relevant curricula (MoE, 2013). Figure 6.1 illustrates the strategic perspective for ICT in education policy.

Figure 6.1 Strategic perspective. From "Information and communication technology in education policy: Leveraging ICT, transforming citizens," by Ministry of Education, 2013, p. 15.

Coming out of the ICT in Education Policy (MoE, 2013) was the Jamaica National ICT in Education Competency Framework for Teachers (MoE, 2015), which forms part of the strategic objectives defined in the ICT in Education Policy.
The 2015 Framework, developed from the UNESCO ICT in Education Framework, adopted the three competency approaches of technology literacy, knowledge deepening, and knowledge creation. Six modules were adopted: understanding ICT in education; curriculum and assessment; pedagogy; ICT; organisation and administration; and teacher professional development. A more recent document, the Information and Communication Technology – Competency Framework for Teachers (ICT-CFT) (MoEYI, 2017b), further refined and renamed the modules to Philosophy of ICT in Education, Fundamentals of ICT, ICT in Curriculum and Assessment, ICT in the Learning Process, The Digital Approach to Classroom Organization and Administration, and The Teacher: A Digital Practitioner. These changes further highlighted the importance of introducing ICT into the curriculum for teachers to improve educational outcomes. Teacher training institutions, in collaboration with the MoEYI, commenced discussions on implementing the ICT-CFT in the 2018 academic year. Emphasis was placed on teacher training institutions that prepare teachers for the secondary level because of the urgency to improve educational outcomes (ICT-CFT, 2017). As mentioned in Chapter 1, my involvement in the review of the document resulted in the implementation of the ICT-CFT at my university, and it has now become an official part of teacher training at that institution. The ICT-CFT is being piloted by my colleagues, and this study will serve as a valuable additional source of information to strengthen the analysis and evaluation of that pilot. Recently, a new ICT in Education Policy (2022) was launched. One of its main goals is "Enabling effective teaching and learning to improve outcomes of education, leading to a knowledge-driven citizenry" (p. 12). The Government of Jamaica has committed to implementing strategies such as promoting and encouraging the use of digital materials, increasing access to ICT resources (for example, ICT tools, content, and connectivity), and "Provide ICT appropriate opportunities and resources to align educational practices to the emerging expectations of the outcomes of education, as well as the teaching and learning" (p. 12). What is missing from the strategies is space for independent researchers to have access to information collected by the Education Management Information System (EMIS) and the Management Information System (MIS) to evaluate and assess learning outcomes, especially when ICT is used. This thesis has provided an opportunity to begin the discussion surrounding measuring the impact of ICT interventions at both the school and macro levels of the education system. Jamaica is now in a better position to conduct such projects, as the lessons learned from this thesis and the other studies mentioned here can prevent some of the pitfalls associated with implementing e-learning/ICT systems in secondary schools. Secondary schools preparing students for high-stakes examinations tend to find it more difficult than primary schools to meet the high expectations of e-learning/ICT systems, because the e-learning material produced has to cover the content set by the examination body, and structured programmes must be put in place for students not only to learn to use the tools of ICT but to learn the content. E-learning systems tend to focus on the teachers, ICT resources, and materials, but not enough on the students.
The results of the e-LHSPP will serve as a baseline for future studies, in which comparisons can be made between then and now to map the progress of e-learning in Jamaica. Some of the present problems in e-learning can only be understood by examining the e-LHSPP, which provides a greater understanding of the foundational e-learning structure. For example, many of the technologies used in the past are now obsolete, and new ones have been introduced in schools, especially during the COVID-19 pandemic. Now that the pandemic is over, schools are returning to face-to-face teaching, but whether the school leadership culture will change to make greater use of ICTs in the classroom remains to be seen. It is anticipated that the lessons learned will inform future e-learning projects. Jamaica still struggles to achieve acceptable passes in the high-stakes CXC/CSEC school leaving examinations after implementing the e-LHSP over a decade ago. This continued poor performance has raised serious concerns in the education community. These concerns caused me to take a closer look at the piloting of the e-LHSP to determine whether it was properly done. I am partly responsible for the teaching practicum at the university where I work, and every year for the past ten years students have complained about the lack of ICT resources and access to online learning resources in the schools across Jamaica where they are placed. Immediately after the e-LHSPP (2008-2009), the full project was rolled out in 2011 in 164 schools, including the pilot schools, and ended in 2013. During that time, 11 subjects were covered in total. In 2013 the project was handed over to the MoE by e-LJam. Several studies conducted after the e-LHSPP cited some of the same pitfalls mentioned in my thesis, which have contributed to the continued poor performance. Charlton-Laing and Grant (2012) found: (1) there was no preparation of students for e-learning; although the students had some technical skill, other skills such as communication and comprehension were deficient, which created a challenge for them in using the Internet; (2) some teachers needed customized training, and some older academic staff were not interested in e-learning; some teachers were frustrated with the training because of the uneven pairing of teachers' abilities, and a significant weakness of the e-learning project was insufficient awareness of the transformation that should occur in the way a teacher operated; (3) there was evidence that principals did not fully understand their role, and most of them did not participate in the entire training; (4) there did not appear to be a clear vision for technology in all schools; (5) there were limited IT human resources in most schools, as the government did not provide for such a position; a dedicated system administrator was paid by the school and not by the government, which further depleted the school's budget; and (6) high-quality written learning content was not present in all schools. Morrison's (2016) study sampled 45 high schools, including 5 pilot schools. According to him, passes improved from 2007 to 2014. Close analysis of his results, however, shows that mathematics passes did not exceed 50% from 2008 to 2013. His statistical measures included simple comparative analyses (e.g. comparisons across time and across schools) and trend analysis.
The statistics he used were descriptive, which is not an effective way to measure cause and effect; inferential statistics would have provided more realistic results. The official mathematics results reveal a downward trend from 56% passes in 2014 to 37.2% passes in 2022, showing a reversal in gains. He cited many pitfalls, some of which were the pilot period being too short, delays in resources, Internet access, teacher mastery and attitude, leadership, and management. Pitter (2017) found that trainee teachers in the schools favoured using multimedia presentations, in particular PowerPoint with text and pictures. Many student teachers during their Teaching Practicum complained that these were the only resources available and that teaching resources were in short supply. As Programme Director for Education, it is my responsibility, through research, to inform my students about the happenings and expectations when they pursue their Teaching Practicum. My final thoughts are that success is about trained teachers using technology with appropriate pedagogies, not merely using it. The e-LHSPP did not work because of administrative bundling resulting in delays of key components, teachers not properly prepared to integrate technology into learning, students not prepared to use technology, not enough time for teachers and students to practise, and a pilot period that was too short. The mistakes were that too many schools were included in the pilot and that too much was attempted in a short time, especially by the contractors writing the learning materials. The project was very ambitious.

6.2 Thesis Contribution

6.2.1 Thesis Contribution to Knowledge

It is envisioned that this thesis will contribute to the body of knowledge regarding the analysis of e-learning outcomes for students' attainment in their school leaving examinations in secondary schools in developing countries like Jamaica. In Chapters 3 and 4, the case was made for using administrative archival quantitative indirect data and pre-existing archival documents as indirect data to analyse and evaluate the e-LHSPP. The analysis was done by using the school leaving examination results at the macro level in Jamaica and documents related to the e-LHSPP. A literature search on Jamaica found no evidence of research involving a pilot project about e-learning using administrative archival quantitative indirect data and pre-existing archival documents as indirect data for analysis. A second contribution this study makes to knowledge is the usefulness of a model using the quasi-experimental DiD with linear regression to examine e-learning effects on national school leaving examinations in Jamaica and possibly the Caribbean (a brief sketch of such a model follows below). The literature shows that the DiD is primarily used in economics and not widely used in education at the national level. When educators analyse the impact of e-learning on students' performance in schools, the analysis is usually confined to the classroom or the school level. Chapter 3 justifies the use of the DiD in this thesis, which can be duplicated in other jurisdictions. Thirdly, this study highlights the difficulties countries face, especially developing countries, when trying to implement e-learning to improve students' academic performance in high-stakes school leaving examinations at the secondary level. Finally, the thesis improves on the evaluation done by Morrison (2016) at the pilot stage by shedding new light on the efficacy of e-learning in secondary education worldwide.
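To make the second contribution concrete, the following is a minimal sketch of a DiD estimate with linear regression of the kind described above. It is an illustration under stated assumptions, not the exact specification used in Chapter 3: the column names (gpa, pilot, post, school_type) and the synthetic rows are hypothetical stand-ins for the CXC/CSEC administrative dataset.

```python
# A minimal sketch of a DiD estimate with linear regression, assuming
# aggregated school-level GPA data. All column names and values below
# are hypothetical stand-ins, not the thesis's actual dataset.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical school-year observations: pilot = 1 for e-LHSPP schools,
# post = 1 for examination sittings after the intervention began.
df = pd.DataFrame({
    "gpa":         [2.1, 2.3, 2.0, 2.2, 2.4, 2.9, 2.3, 2.5],
    "pilot":       [1,   1,   0,   0,   1,   1,   0,   0],
    "post":        [0,   0,   0,   0,   1,   1,   1,   1],
    "school_type": ["traditional", "upgraded"] * 4,
})

# pilot * post expands to pilot + post + pilot:post; the coefficient on
# the interaction pilot:post is the DiD estimate:
#   (pilot_post - pilot_pre) - (non_pilot_post - non_pilot_pre),
# here adjusted for a school-type covariate.
model = smf.ols("gpa ~ pilot * post + C(school_type)", data=df).fit()
print(model.params["pilot:post"])   # the DiD effect in GPA points
print(model.pvalues["pilot:post"])  # its statistical significance
```

In the full analysis, the regression would also carry the location and school gender covariates discussed earlier, but the interaction term keeps the same interpretation: the change in average GPA for pilot schools over and above the change for non-pilot schools.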
This study sought to contextualise the quantitative analysis with document analysis, which has not been done so far, by analysing all the available resources for the e-LHSPP to get a better understanding of its impact; the document analysis contributed to and supplemented our understanding.

6.2.2 Professional Contribution

This thesis seeks to influence government research policy concerning making primary datasets available for use in administrative secondary data analysis. When primary administrative data are available, researchers gain access to complex data without having to do primary data collection, and I hope this will encourage researchers to produce more research. Under the Access to Information Act (2002), I have the legal right to obtain access to government documents that are not on the exemption list. The challenge is to find the primary data and the data collection instruments after the government documents are published. Sometimes these data collection instruments and the primary data are kept by the contractors employed by the Ministry of Education, who are required only to produce the research results. In my academic position, I have had the opportunity to work on government projects and was required to produce reports. In Jamaica, the process to access information is not centralised, so a researcher has to determine which government ministry possesses the information needed for the research and then follow that ministry's protocol to obtain it. This process can take up to six weeks, as it did in my case. I am suggesting and calling for a culture shift in which all documents produced in any government-funded research, including datasets and collection instruments, should be handed over to the government, stored in a database, and made available to researchers. The use of data in this way is called open data, where anyone with an Internet connection can access the dataset. There are many government examples, but Safarov (2019), whose results concur with the Open Data Barometer, ranked the UK as the best-performing country in the world in terms of open data readiness, implementation, and impact. The study reveals that the UK utilises a central approach to open government data (OGD) and provides institutional support through the Cabinet Office. Other institutions such as the "Open Data Institute and Open Knowledge Foundation systematically study and provide support to increase public awareness" (p. 324). The study concluded that the centralisation approach in OGD governance yields better results and accelerates the level of OGD implementation. To conclude, this thesis contributes to the ongoing research on e-learning systems in secondary schools by providing an understanding of effective ways to measure learning outcomes using quantitative methods and document analysis. To connect the document analysis to the quantitative results, the researcher has to identify and isolate all variables in the documents relating specifically to the variable(s) in the quantitative analysis that could impact the quantitative results. For example, students' results can be affected by the teacher or by the students themselves. The next step is to find out what the documents say about teachers and students in the project and how these might affect the results; a simple sketch of this linking step follows.
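As one possible illustration of that linking step, the sketch below tallies how often teacher- and student-related code words appear in each project document, producing counts that can be set beside the quantitative variables. The folder, file names, and code words are invented for the example; a real analysis would use the coded themes from the thematic analysis itself.

```python
# A minimal sketch of linking document analysis to quantitative variables:
# count occurrences of theme-related code words in each document. The folder
# and code words are hypothetical examples, not the thesis's coding frame.
from pathlib import Path

CODES = {
    "teacher": ["teacher training", "technology integration", "pedagogy"],
    "student": ["student preparation", "student training", "ict skills"],
}

def tally_codes(text: str) -> dict:
    """Count occurrences of each theme's code words in one document."""
    text = text.lower()
    return {theme: sum(text.count(word) for word in words)
            for theme, words in CODES.items()}

# Hypothetical folder holding the collected e-LHSPP reports as plain text.
for path in sorted(Path("e_lhspp_documents").glob("*.txt")):
    print(path.name, tally_codes(path.read_text(encoding="utf-8")))
```

Comparing such counts across documents would show, for instance, whether student preparation was discussed as often as teacher training, one of the gaps this thesis identified.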
Administrators in secondary schools who are implementing e-learning can use the model to help them in their research to ascertain students' performance and efficacy. Stakeholders in the education system who are responsible for e-learning will have the relevant information necessary to pilot and implement successful e-learning systems, particularly in developing countries. Finally, the aim is to promote the use of administrative archival indirect data and pre-existing archival documents as indirect data, also called administrative secondary data analysis, in educational research in Jamaica and, by extension, the Caribbean region, and particularly at my university here in Jamaica, which focuses on empirical studies.

6.3 Implications for Practice

Jamaica, like most countries around the world, has experienced unprecedented disruptions in education because of the COVID-19 pandemic. New ways are being sought to deliver educational content through the use of e-learning. The technology has improved compared with that of the e-LHSPP, but the issues are not so much about the acquisition of technology, as shown in this study and others; it is how the technology is used with the accompanying pedagogy, and the appropriate piloting and testing of new e-learning systems, that will make the difference. Inadequate piloting is carried out when technologies are introduced in schools, and some believe that the training of teachers alone is adequate preparation for technology integration. Greater focus should be placed on students, as outlined in the CSF Student. The lessons learned from this study and the recommendations should be used as a guide going forward to mitigate the pitfalls experienced during the e-LHSPP.

6.4 Recommendations

The recommendations provide implementers of e-learning systems in secondary schools with the lessons learned and practical solutions to avoid pitfalls when undertaking projects of this nature. It must be noted that the pilot phase of a project like this is the most important phase, and full implementation should not commence until there is evidence that the goals of the pilot have been achieved.

1. Two years should have been allotted for the pilot. A smaller number of schools should have been included in the first year to test and refine the e-learning methodology. In the first year, one school representing each school type should have begun the pilot, with the teachers and students trained. Sufficient time should have been given in the second year to develop and test the locally produced TIMs and SIMs.

2. Critical to the success of e-learning systems in schools are the students, who cannot be forgotten. The official reports published about the e-LHSPP did not mention students, though they might have been mentioned in other documents, showing the focus of the training on teachers. The students who are to be involved in an e-learning pilot should receive extensive training in e-learning at their school from e-learning experts, not necessarily teachers. All the schools in the e-LHSPP received at least two well-furnished computer labs with Internet connections where classes were held. The CSF Student of Frimpon (2012) and Naveed et al. (2020) focused on adult students studying at a distance, but it can be adopted and serve as a guideline for the training of high school students in their schools.

3. A pilot project management plan is imperative for an e-learning pilot project, and no project should begin without it.
The plan should include the major stakeholders, such as teachers and school administrators, particularly those from the schools.

4. A framework such as the Logical Framework Analysis, an internationally recognised framework used to plan and manage development projects, must also be developed for the pilot phase.

5. ICT skills training and technology integration training should not be separated, as was the case in the pilot, but should be incorporated into one training programme to maximise time management. The separate training resulted in administrative bundling and delayed the start of the technology integration training, which was crucial for the pilot.

6. The schools' administration should receive training in project management, because the ultimate success of the pilot in schools will primarily rest with the school's administration. Too often this responsibility is left to the teachers, and in particular the IT teacher. The Technology Standards for School Administrators Collaborative (2001) provided a guideline for administrators to facilitate the integration of technology in their schools which, if followed, should lead to success.

7. A formal policy/guideline/plan for implementing e-learning systems in schools should be established.

8. A national database should be established to store secondary data for all research conducted on behalf of the Government of Jamaica. The database should consist not only of reports but also of the data collection instruments and primary data. These should be digitised for ease of retrieval.

9. A national ICT Skills and Integration Certification should be designed, in the first instance, for all Key Stage 4 (years 10 and 11) teachers to ensure that all teachers at that level are competent to integrate ICT into their lessons. The training should be standardised across all universities and colleges that offer teacher training.

10. There should be a bi-partisan effort between the political parties to establish a board of directors to manage a national project of this nature that will remain in place regardless of election outcomes. This would prevent delays in board decisions while a new board is being formed.

11. Develop strategies and systems that will maintain the integration of governance and finance, materials development and instructional design, and technology and delivery (Suryawanshi & Suryawanshi, 2015).

6.5 Recommendations for Further Study

The following recommendations for further research are based on the findings of this study:

1. These research findings should be compared with the current state of e-learning in Jamaican secondary high schools.

2. Additional studies are needed on how Jamaican students and teachers use e-learning.

6.6 Assumptions, Limitations, and Delimitations of the Study

The research is based on the assumption that the CSEC administrative results provided by the OEC are a true reflection of the original dataset supplied by the CXC and are authentic, reliable, and valid, and that no tampering took place. It is also assumed that the reports from the various government agencies reflect a true account of the situations investigated. The Child Care and Protection Act (2004) of Jamaica created an additional limitation during data collection. One main objective of the Act is to promote the best interests, safety, and well-being of children in Jamaica. Collecting individual-level data from children who sat their school leaving examinations created ethical challenges.
The schools are reluctant to provide individual-level data for past students because they have a duty of care and protection for each student. According to the OEC, individual-level data are not available to the public, and only the children who sat the examinations can collect their examination results or request copies of them. Given the period of the e-LHSPP, the number of schools, the number of past students during the e-LHSPP, and the research objectives, collecting aggregate school data was the preferred choice. The data available for immediate use by the public are aggregated school data per subject. The age at which students sit their examinations at the end of their formal five years is a limitation outside the researcher's control. As mentioned in Chapter 1, the Education Regulation Act (1980) specifies that students entering high school should have attained the age of 11 or 12 and spend a mandatory five years. Students are expected to sit their high-stakes CXC/CSEC school leaving examinations in the fifth year of high school, when they will have attained the age of 15 or 16. Quantile regression might have helped to investigate whether the e-LHSPP affected schools in the lower quartile differently from schools in the upper quartile. To conduct the test successfully, the schools would have to be divided into lower-performing and higher-performing schools, separated by the median-performing schools. It was not selected because of the limited information regarding the degrees of freedom, which prevented the application of the quantile DiD. The degrees of freedom are N-1, so the sample size matters: the interpretation of a test statistic differs depending on how many degrees of freedom it is based on and is meaningless without that context.

Definition of Terms

Information and Communication Technologies (ICT): "includes all digital and electronic resources, information technology tools and applications, education management systems which are deployed as enablers for effective teaching and learning which ultimately promotes creativity and innovation towards achieving a knowledge-based economy" (MoE, 2013, ICT in Education Policy).

E-Learning: The use of the internet, intranets/extranets, audio and videotapes, satellite broadcast, interactive TV, and CD-ROM, not only for content delivery but also for interaction among participants (Wagner et al., 2008).

E-Learning Jamaica Company Limited (e-LJam): An agency of the Ministry of Science, Energy, and Technology responsible for the implementation of e-learning projects in collaboration with government ministries and agencies.

E-Learning High School Project (e-LHSP): An initiative by the Government of Jamaica to implement e-learning in all high schools in Jamaica.

High/Secondary School: Schools with students in Grades 7–11 and Grades 7–13.

Ministry of Education, Youth, and Information: The government ministry with responsibility for the education sector in Jamaica.

Parish: The 14 administrative regions in Jamaica.

Region: The six zones in which Jamaican schools are grouped.

References

Arafat, S., Aljohani, N., Abbasi, R., Hussain, A., & Lytras, M. (2019). Connections between e-learning, web science, cognitive computation and social sensing, and their relevance to learning analytics: A preliminary study. Computers in Human Behavior, 92, 478-486.

Adunola, O. (2011). The impact of teachers' teaching methods on the academic performance of primary school pupils in Ijebu-Ode local government area of Ogun State. Ego Booster Books.
Al-Adwan, A., & Smedley, J. (2012). Implementing e-learning in the Jordanian higher education system: Factors affecting impact. International Journal of Education and Development using Information and Communication Technology, 8(1), 121-135. Alderete, V., & Formichella, M. (2016). The effect of ICTS on academic achievement: The Conectar Igualdad programme in Argentina. Cepal Review 119. http://hdl.handle.net/11362/40784 Algahtani, A.F. (2011). Evaluating the effectiveness of the e-learning experience in some universities in Saudi Arabia from male students' perceptions (Unpublished doctoral dissertation). Durham University. Al-Homod, S., & Shafi, M. (2013). Success factors of E-learning projects: A technical perspective. TOJET: The Turkish Online Journal of Educational Technology, 12(2), 247-253. http://www.tojet.net/articles/v12i2/12223.pdf Aparicio, M., Bacao, F., & Oliveira, T. (2016). An e-learning theoretical framework. Educational Technology & Society, 19(1), 292–307. Arkorful, V., & Abaidoo, N. (2015). The role of e-learning, advantages and disadvantages of its adoption in higher education. International Journal of Instructional Technology & Distance Learning, 12(1), 29–42. Auditor General’s Department. (2014). Report of the Auditor General on the financial transactions and the financial statements of the Government of Jamaica for the financial year ended 31 March 2014. https://auditorgeneral.gov.jm/wp-content/uploads/2016/09/AuGD_ ANNUAL_REPORT_FINAL_2014.pdf Aung, T.N., & Khaing, S.S. (2016) Challenges of implementing e-Learning in developing countries: A review. In T. Zin, J.W. Lin, J.S. Pan, P. Tin, & M. http://www.tojet.net/articles/v12i2/12223.pdf https://auditorgeneral.gov.jm/wp-content/uploads/2016/09/AuGD_ 133 Yokota (Eds.), Genetic and evolutionary computing (Vol. 388, pp. 405- 411). Springer, Cham. https://doi.org/10.1007/978-3-319-23207-2_41 Balanskat, A., Blamire, R., & Kefala, S. (2006). The ICT impact report: A review of studies of ICT impact on schools in Europe, European Schoolnet. http://insight. eun. org/shared/data/pdf/impact_study. pdf Banerjee, A. V., Cole, S., Duflo, E., & Linden, L. (2007). Remedying education: Evidence from two randomized experiments in India. The Quarterly Journal of Economics, 122(3), 1235-1264. Behar, P. A. (2011). Constructing pedagogical models for E-learning. International Journal of Advanced Corporate Learning, 4(3), 16-22. Behera, S. K. (2013). E- and M-Learning: A comparative study. International Journal on New Trends in Education and their Implications, 4(3), 65-77. Benn, S., & Dunphy, D. (2006). Corporate governance and sustainability: Challenges for theory and practice. Routledge. Boslaugh, S. (2007). Secondary data sources for public health: A practical guide. Cambridge. Boulton, H. (2008). Managing e-Learning: What are the real implications for schools? Electronic Journal of E-learning, 6(1), 11-18. Bowen, G. A. (2009). Document analysis as a qualitative research method. Qualitative Research Journal, 9(2), 27–40. Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3, 77–101. British Educational Communications and Technology Agency. (2001). ImpaCT2 - Emerging Findings from the Evaluation of the Impact of Information and Communications Technologies on Pupil Attainment (Research and Evaluation Series). www.becta.org.uk/research/reports/impact2 British Educational Research Association. (2018). Ethical guidelines for educational research (4th ed.). London. 
https://www.bera.ac.uk/ researchers-resources/publications/ethical-guidelines-for-educational- research-2018 Brown, D., Cromby, J., & Standen, P. (2001). The effective use of virtual environments in the education and rehabilitation of students with intellectual disabilities. British Journal of Educational Technology, 32(3), 289-299. Bryman, A. (2001). Social research methods. Oxford University Press. https://www.bera.ac.uk/%20researchers-resources/publications/ethical-guidelines-for-educational-research-2018 https://www.bera.ac.uk/%20researchers-resources/publications/ethical-guidelines-for-educational-research-2018 https://www.bera.ac.uk/%20researchers-resources/publications/ethical-guidelines-for-educational-research-2018 134 Bryman, A. (2012). Social research methods (4th ed.). Oxford University Press. Bulmer, M. (1987). Privacy and confidentiality as obstacles to interweaving formal and informal social care: The boundaries of the private realm. Journal of Voluntary Action Research, 16(1-2), 112-125. Card, D. (1990). Strikes and wages: A test of an asymmetric information model. The Quarterly Journal of Economics, 105(3), 625-659. Card, D., & Krueger, A. (1994). Minimum wages and employment: A case study of the fast-food industry in New Jersey and Pennsylvania. The American Economic Review 84(4), 772-793. Caribbean Examination Council. (2008). Annual report. https://www.cxc.org/SiteAssets/CXC_Annual_Report_2008.pdf Caribbean Examination Council. (2009). Annual report. https://www.cxc.org/SiteAssets/CXC_Annual_Report_2009.pdf Caribbean Examination Council. (2010). Annual Report. https://www.cxc.org/SiteAssets/CXC%20AR%202010%20final.pdf Caribbean Examination Council. (2016, December). About the Council. http://www.cxc.org/examinations/csec/ Caribbean Examination Council. 2009. Annual report. https://www.cxc.org/cxc- annual-report-2009/ Cassidy, E. D., Colmenares, A., Jones, G., Manolovitz, T., Shen, L., & Vieira, S. (2014). Higher education and emerging technologies: Shifting trends in student usage. The Journal of Academic Librarianship, 40(2), 124-133. Caulley, D. N. (1983). Document analysis in program evaluation. Evaluation and Program Planning, 6(1), 19-29. Chandra, V., & Lloyd, M. (2008). The methodological nettle: ICT and student achievement. British Journal of Educational Technology, 39, 1087–1098. Charlton-Laing, C., & Grant, G. (2012). What are the dynamic capabilities needed for a national e-learning implementation? A Jamaican e-learning case study [Workshop presentation]. Proceedings from the Annual Workshop of the AIS Special Interest Group for ICT in Global Development. AISeL. Charter of Fundamental Rights and Freedoms (Constitutional Amendment) Act of 2011. https://www.japarliament.gov.jm/ Child Care and Protection Act of 2004. Ministry of Justice. Jamaica Clark, T. (2014). Using archival documents as data: Working with Myra Hindley's ‘Prison Files’. SAGE. http://www.cxc.org/examinations/csec/ https://www.cxc.org/cxc-annual-report-2009/ https://www.cxc.org/cxc-annual-report-2009/ https://www.japarliament.gov.jm/ 135 Cohen, J. (1988). Statistical power analysis for the behavioural sciences (2nd ed.). Lawrence Erlbaum Associates. Coleman, G. (1987). Logical framework approach to the monitoring and evaluation of agricultural and rural development projects. Project Appraisal, 2(4), 251-259. Cox, M., Abbott, C., Webb, M.E., Blakely, B., Beauchamp, T., & Rhodes, V. (2003). ICT and attainment: A review of the research literature. ICT in Schools Research and Evaluation Series No. 
17. Becta/DfES. http://www. becta.org.uk/page_documents/research/ict_attainment_summary.pdf Crawford, A. (2006). The e-learning Jamaica project. http://pcf4.dec.uwi.edu/viewpaper.php?id=386&print=1 Crawford, A. (2011). Presentation at the International Association of School Librarianship preconference meeting, 2011. https://www.slideshare.net/IASLonline/elearning-project-9665683 Creswell, J. W. (2008). Educational research: Planning, conducting, and evaluating quantitative and qualitative research (3rd ed.). Pearson/Prentice Hall. Dale, A., Arber, S., & Proctor, M. (1988). Doing secondary analysis. Unwin Hyman. Dendir, S., Orlov, A. G., & Roufagalas, J. (2019). Do economics courses improve students’ analytical skills? A Difference-in-Difference estimation. Journal of Economic Behavior & Organization, 165, 1-20. Deschacht, N., & Goeman, K. (2015). The effect of blended learning on course persistence and performance of adult learners: A difference-in-differences analysis. Computers and Education, 87, 83–89. e-LJam. (2006). The e-learning Jamaica project: Towards an educated and knowledge based nation. https://www.elearningja.gov.jm/wp- content/uploads/2019/03/the_e-learning_project_information_sheet1.pdf. e-LJam. (2007-2008). Annual report: Chairman’s report. https://www.elearningja.gov.jm/wp-content/uploads/2020/12/Annual- Report-April-2007-March-2008.pdf e-LJam. (2008-2009). Annual report: Chairman’s report. https://www.elearningja.gov.jm/wp-content/uploads/2020/12/Annual- Report-April-2007-March-2008.pdf http://pcf4.dec.uwi.edu/viewpaper.php?id=386&print=1 https://www.slideshare.net/IASLonline/elearning-project-9665683 https://www.elearningja.gov.jm/wp-content/uploads/2019/03/the_e-learning_project_information_sheet1.pdf https://www.elearningja.gov.jm/wp-content/uploads/2019/03/the_e-learning_project_information_sheet1.pdf https://www.elearningja.gov.jm/wp-content/uploads/2020/12/Annual-Report-April-2007-March-2008.pdf https://www.elearningja.gov.jm/wp-content/uploads/2020/12/Annual-Report-April-2007-March-2008.pdf 136 e-LJam. (2009-2010). Annual report: Chairman’s report. https://www.elearningja.gov.jm/wp-content/uploads/2020/12/Annual- Report-April-2008-March-2009.pdf e-LJam. (2010-2011). Annual report: Chairman’s report. .https://www.elearningja.gov.jm/wp-content/uploads/2020/12/Annual- Report-April-2009-March-2010.pdf e-LJam. (2012). The e-learning project report: Overview and brief status report. Ellis, V., & Long, S. (2004). Negotiating contrad(ICT)ion: Teachers and students making multimedia in the secondary school. Technology, Pedagogy and Education, 13(1), 11-27. Enriquez, A. (2010). Enhancing student performance using tablet computers. Journal College Teaching, 58(3), 77-84. Farrell, G. (2007). ICT in education in Kenya. http://documents.worldbank.org/curated/en/616481468047703710/pdf/463 850BRI0Box31ya010ICTed0Survey111.pdf Figlio, D., Karbownik, K., & Salvanes, K. (2015). Education research and administrative data (Working Paper 21592). National Bureau of Economic Research. https://www.nber.org/papers/w21592 Finch, D., & Jacobs, K. (2012). Online education: Best practices to promote learning. In Proceedings of the Human Factors and Ergonomics Society 56th Annual Meeting (Vol. 56, No. 1, pp. 546-550). SAGE. Flewitt, R., Kucirkova, N., & Messer, D. (2014). Touching the virtual, touching the real: iPads and enabling literacy for students experiencing disability. Australian Journal of Language & Literacy, 37(2), 107-116. Francis, N., Ngugi, M., & Kinzi, J. (2017). 
Influence of selected factors on the implementation of information and communication technology policy in public secondary schools in Naivasha Sub-county, Kenya. International Journal of Education and Development using Information and Communication Technology (IJEDICT), 13(2), 70-86. Frimpon, M.F. (2012). A re-structuring of the critical success factors for e-learning deployment. American International Journal of Contemporary Research, 2(3), 115-123. Gayle, W. (2017). 2017 Ranking of Jamaica's best high schools. http://www.my- island-jamaica.com/2017-ranking-of-jamaicas-best-high-schools.html https://www.elearningja.gov.jm/wp-content/uploads/2020/12/Annual-Report-April-2008-March-2009.pdf https://www.elearningja.gov.jm/wp-content/uploads/2020/12/Annual-Report-April-2008-March-2009.pdf https://www.elearningja.gov.jm/wp-content/uploads/2020/12/Annual-Report-April-2009-March-2010.pdf https://www.elearningja.gov.jm/wp-content/uploads/2020/12/Annual-Report-April-2009-March-2010.pdf http://documents.worldbank.org/curated/en/616481468047703710/pdf/463850BRI0Box31ya010ICTed0Survey111.pdf http://documents.worldbank.org/curated/en/616481468047703710/pdf/463850BRI0Box31ya010ICTed0Survey111.pdf https://www.nber.org/papers/w21592 http://www.my-island-jamaica.com/2017-ranking-of-jamaicas-best-high-schools.html http://www.my-island-jamaica.com/2017-ranking-of-jamaicas-best-high-schools.html 137 Gholami, R., Higon, D. A., Hanafizadeh, P., & Emrouznejad, A. (2010). Is ICT the key to development? Journal of Global Information Management, 18(1), 66–83. Giddens. A. (1984). The Constitution of Society: Outline of the theory of Structuration. Cambridge: Polity. Glass, G.V. (1976). Primary, secondary, and meta-analysis of research. Educational Researcher, 5, 3-8. Goldhaber, D., Cowan, J., & Theobald, R. (2017). Evaluating prospective teachers: Testing the predictive validity of the edTPA. Journal of Teacher Education, 68(4), 377–393. https://doi.org/10.1177/0022487117702582 Gorard, S. (2003). Quantitative methods in social science research. A&C Black. Harrison, C., Comber, C., Fisher, T., Haw, K., Lewin, C., Lunzer, E., McFarlane, A., Mavers, D., Scrimshaw, P., Somekh, B., & Watling, R. (2002). ImpaCT2: The impact of information and communication technologies on pupil learning and attainment. British Educational Communications and Technology Agency (BECTA). Hayes, D., Huey, E. L., Hull, D. M., & Saxon, T. F. (2012). The influence of youth assets on the career decision self-efficacy in unattached Jamaican youth. Journal of Career Development, 39(5), 407-422. Heaton, J. (2008). Secondary analysis of qualitative data: An overview. Historical Social Research / Historische Sozialforschung, 33(125), 33-45. www.jstor.org/stable/20762299 Henkel, M. (2000). Academic Identities and Policy Change in Higher Education: Higher Education Policy Series 46. London: Jessica Kingsley Publishers. Higgins, S., Falzon, C., Hall, I., Moseley, D., Smith, F., Smith, H., & Wall, K. (2005). Embedding ICT in the literacy and numeracy strategies (Report). http://www.excellencegateway.org.uk/page.aspx?o=145218 Hinostroza J.E., Isaacs S., & Bougroum, M. (2014). Information and communications technologies for improving learning opportunities and outcomes in developing countries. In D.A. Wagner (Ed.), Learning and education in developing countries: Research and policy for the post-2015 UN development goals (pp. 42-57). Springer. Hrastinski, S. (2019). What do we mean by blended learning? TechTrends, 63(5), 564-569. 
http://www.jstor.org/stable/20762299 http://www.excellencegateway.org.uk/page.aspx?o=145218 138 Hsieh, J., & Cho, V. (2011). Comparing e-Learning tools success: The case of instructor-student interactive vs. self-paced tools. Computers & Education, 57(3), 2025–2038. Ikwuka, O.I., & Henry, A.J. (2017). Effect of ICT on secondary school students’ academic performance in Christian Religious Studies in Oshimili North Local Government Area. International Journal of Innovative Science, Engineering & Technology, 4(5), 373-384. International Labour Organization & Statistical Institute of Jamaica. (2018). Jamaica Youth Activity Survey 2006. STATIN. International Society for Technology in Education. (2000). Educational technology standards and performance indicators for all teachers. https://www.hbgdiocese.org/wp-content/uploads/2012/04/NETS-for- Teachers.pdf Jang, S., & Tsai, M. (2012). Exploring the TPACK of Taiwanese elementary mathematics and science teachers with respect to use of interactive whiteboards. Computers & Education, 59(2), 327-338. Jenkins, M., & Hanson, J. (2003). E-learning series no. 1: A guide for senior managers. Learning and Teaching Support Network (LSTN) Generic Centre. Jones, P., Jones, A., & Packham, G. (2009). E-learning induction design for an undergraduate entrepreneurship degree. International Journal of Management Education, 8(1), 37-51. Kamau, L. (2014). Applying Rogers’ diffusion of innovations theory to investigate technology training for secondary mathematics teachers in Kenya. Journal of Education and Practice, 5(17), 19-31. Keramati, A., Afshari-Mofrad, M., & Kamrani, A. (2011). The role of readiness factors in e-learning outcomes: An empirical study. Computers & Education, 57(3), 1919-1929. Khan, B. H. (2005). Managing e-learning: Design, delivery, implementation and evaluation. Information Science Publishing. Kozma, R., McGhee, R., Quellmalz, E., & Zalles, D. (2004). Closing the digital divide: Evaluation of the World Links program. International Journal of Educational Development, 24, 361–381. Kusluvan, S. (2003). Managing employee attitudes and behaviors in the tourism and hospitality industry. Nova Science Publishers. https://www.hbgdiocese.org/wp-content/uploads/2012/04/NETS-for-Teachers.pdf https://www.hbgdiocese.org/wp-content/uploads/2012/04/NETS-for-Teachers.pdf 139 Lechner, M. (2011). The estimation of causal effects by difference-in-difference methods. Foundations and Trends® in Econometrics, 4(3), 165–224. Lester, R. A. (1946). Shortcomings of marginal analysis for the wage-employment problems. American Economic Review, 36(1), 63–82. Lewis, M., & Pettersson Gelander, G. (2009). Governance in education: Raising performance. World Bank Human Development Network Working Paper. Machin, S., McNally, S., & Meghir, C. (2004). Improving pupil performance in English secondary schools: Excellence in cities. Journal of the European Economic Association, 2(2/3), 396-405. Machin, S., McNally, S., & Silva, O. (2007). New technology in schools: Is there a payoff? The Economic Journal (London), 117(522), 1145-1167. Madar, M. J., & Willis, O. (2014). Strategic model of implementing e-learning. International Journal of Scientific & Technology Research, 3(5), 235-238. Marsh, C. C. (2018). Predicting first-year mathematics success at the community college: Modifications on High School Grade Point Average and the impact of high school quality (UMI No.10787642) [Doctoral dissertation, Old Dominion University]. ProQuest Dissertations & Theses Global database. 
Marston, D., Deno, S. L., & Tindal, G. (1983). A comparison of standardized achievement tests and direct measurement techniques in measuring pupil progress (Report). Institute for Research on Learning Disabilities, University of Minnesota.
McConnell, D. (2006). E-learning groups and communities. Open University Press.
Merriam, S. B. (1988). Case study research in education: A qualitative approach. Jossey-Bass.
Meyer, B. (1995). Natural and quasi-experiments in economics. Journal of Business & Economic Statistics, 13(2), 151-161.
Miller, E. (2004). The introduction of computers in secondary schools in Jamaica: A case of bottom-up reform. In D. W. Chapman & L. O. Mahlck (Eds.), Adapting technology for school improvement: A global perspective (pp. 101-121). UNESCO, International Institute for Educational Planning. http://unesdoc.unesco.org/images/0013/001361/136149e.pdf
Mingaine, L. (2013). Challenges in the implementation of ICT in public secondary schools in Kenya. International J. Soc. Sci. & Education, 4(1), 224-238.
Ministry of Commerce, Science and Technology & Ministry of Education, Youth and Culture. (2005, April 20). Feasibility study for e-learning project: A project of the Ministry of Commerce, Science and Technology (MCST) in collaboration with the Ministry of Education, Youth and Culture (MoEYC) and development partners. https://www.elearningja.gov.jm/wp-content/uploads/2019/03/e-learning_project_april_20051.pdf
Ministry of Education, Youth and Information. (2008). Jamaica directory of educational institutions (2006/2007). MoEYI.
Ministry of Education, Youth and Information. (2016a). Ministry Paper 41/16 - Education System Transformation Programme. MoEYI.
Ministry of Education, Youth and Information. (2016b). School profiles 2015-2016. MoEYI.
Ministry of Education, Youth and Information. (2017a). About: The education system. http://www.moe.gov.jm/about
Ministry of Education, Youth and Information. (2017b). ICT competency framework for teacher education framework (Draft). MoEYI.
Ministry of Education, Youth and Information. (2018). $2.7-billion hike in education budget. https://moey.gov.jm/27-billion-hike-education-budget
Ministry of Education, Youth and Information. (2020). Adult literacy rate now at 87%. https://moey.gov.jm/adult-literacy-now-87
Ministry of Education. (2012). National strategic plan: 2011-2020. MoE.
Ministry of Education. (2013). Information and Communication Technology in Education Policy: Leveraging ICT, transforming citizens. MoE.
Ministry of Education. (2015). Jamaica national ICT in education competency framework for teachers. MoE.
Ministry of Education. (2022). Information and Communication Technology in Education Policy: Transforming lives, empowering citizens and engaging citizens. MoE.
Ministry of Labour & Social Security. (2015). National employment report: Issues and trends in the labour market (2008-2012). MLSS.
Morrison, K. (2016, May 16). E-learning high school project evaluation: Final report (Unpublished report). Ministry of Education, Jamaica.
Moubayed, A., Injadat, M., Nassif, A. B., Lutfiyya, H., & Shami, A. (2018). E-learning: Challenges and research opportunities using machine learning and data analytics. IEEE Access, 6, 39117-39138.
Naveed, Q. N., Qureshi, M. R. N., Tairan, N., Mohammad, A., Shaikh, A., Alsayed, A. O., Shah, A., & Alotaibi, F. M. (2020). Evaluating critical success factors in implementing e-learning system using multi-criteria decision-making. PLoS ONE, 15(5), e0231465.
Network of Experts in Social Sciences of Education and Training. (2018). Education governance. http://www.nesse.fr/nesse/activities/research-mapping/educational-governance
Nhando, D. (2015). 3 key challenges of implementing eLearning in Africa. eLearning Industry. https://elearningindustry.com/3-key-challenges-implementing-elearning-in-africa (accessed 29 September 2020).
Noesgaard, S. S., & Orngreen, R. (2015). The effectiveness of e-learning: An explorative and integrative review of the definitions, methodologies and factors that promote e-learning effectiveness. The Electronic Journal of e-Learning, 13(4), 278-290.
Oblinger, D. G., & Hawkins, B. L. (2006). The myth about online course development: "A faculty member can individually develop and deliver an effective online course". Educause Review, 41(1), 14-15.
Onyefulu, C., Hughes, G., & Hamil, S. (2015). A situational analysis report of the e-learning tablets in schools pilot projects in Jamaica. https://www.elearningja.gov.jm/wp-content/uploads/2019/10/Situational-Analysis-Final-Report.-March-2016-elj.pdf
Panko, R. (2015). What we don't know about spreadsheet errors today: The facts, why we don't believe them, and what we need to do. http://arxiv.org/abs/1602.02601
Paulwell, P. (2005). Government of Jamaica - E-learning project (Ministry Paper 56). https://www.elearningja.gov.jm/wp-content/uploads/2019/03/ministry_paper_for_e-learning_project_may_2005_rb1.pdf
Peart, M. (2011). E-learning Jamaica project: Final evaluation report phase III (Unpublished report). MoE.
Pettitt, R. W. (2010). Evaluating strength and conditioning tests with Z scores: Avoiding common pitfalls. Strength & Conditioning Journal, 32(5), 100-103.
Piper, B. (2014). ICT, literacy and teacher change: The effectiveness of ICT options in Kenya. Society for Research on Educational Effectiveness.
Pitter, G. (2017). Business & computer studies student teachers and ICT: Are they ready to change pedagogies and raise standards in schools? Journal of Arts Science and Technology, 10, 99-119.
Planning Institute of Jamaica & Statistical Institute of Jamaica. (2008). Jamaica survey of living conditions: Parish report 2008. PIOJ & STATIN.
Planning Institute of Jamaica & Statistical Institute of Jamaica. (2009). Jamaica survey of living conditions: Parish report 2009. PIOJ & STATIN.
Planning Institute of Jamaica & Statistical Institute of Jamaica. (2010). Jamaica survey of living conditions: Parish report 2010. PIOJ & STATIN.
Rai, N., & Thapa, B. (2015). A study on purposive sampling method in research. Kathmandu School of Law.
Rakes, G. C., & Casey, H. B. (2002). An analysis of teacher concern toward instructional technology. International Journal of Instructional Technology, 3(1), 124-132. https://ascilite.org/archived-journals/ijet/v3n1/rakes/
Robson, C. (2011). Real world research: A resource for users of social research methods in applied settings (3rd ed.). John Wiley and Sons.
Rogers, E. M. (2003). Diffusion of innovations. Free Press.
Rosenbaum, P. R., & Rubin, D. B. (1983). The central role of the propensity score in observational studies for causal effects. Biometrika, 70(1), 41-55.
Rossi, P. (2009). Learning environment with artificial intelligence elements. Journal of E-learning and Knowledge Society, 5(1), 67-75.
Republic of Kenya. (2016). Ministry of Information Communications and Technology - National Information & Communications Technology (ICT) Policy. https://ictpolicyafrica.org/en/document/zoyckc56xa?page=1
Safarov, I. (2019). Institutional dimensions of open government data implementation: Evidence from the Netherlands, Sweden, and the UK. Public Performance & Management Review, 42(2), 305-328.
Sangra, A., Vlachopoulos, D., & Cabrera, N. (2012). Building an inclusive definition of e-learning: An approach to the conceptual framework. International Review of Research in Open and Distributed Learning, 13(2), 146-158.
Schacter, J., & Fagnano, C. (1999). Does computer technology improve student learning and achievement? How, when, and under what conditions? Journal of Educational Computing Research, 20(4), 329-343.
Seal, H. L. (1967). Studies in the history of probability and statistics. XV: The historical development of the Gauss linear model. Biometrika, 54(1-2), 1-24.
Sedgwick, P. (2014). Before and after study designs. BMJ: British Medical Journal (Online), 349. https://doi.org/10.1136/bmj.g5074
Singh, H. (2001). Building effective blended learning programs. Educational Technology, 43(6), 51-54.
Smedley, J. (2010). Modelling the impact of knowledge management using technology. OR Insight, 23, 233-250.
Smith, E. (2008). Using secondary data in educational and social research. Open University Press.
Smith, G., & Hardman, J. (2014). The impact of computer and mathematics software usage on performance of school leavers in the Western Cape Province of South Africa: A comparative analysis. International Journal of Education and Development using Information and Communication Technology (IJEDICT), 10(1), 22-40.
Snow, J. (1855). On the mode of communication of cholera. John Churchill.
Somekh, B., Underwood, J., Convery, A., Dillon, G., Jarvis, J., Lewin, C., Mavers, D., Saxon, D., Sing, S., Steadman, S., Twining, P., & Woodrow, D. (2007, June). Evaluation of the ICT test bed project: Final report. Centre of ICT, Pedagogy and Learning Education and Social Research Institute, Manchester Metropolitan University; Division of Psychology, Nottingham Trent University.
Sommers, B. D., Gunja, M. Z., Finegold, K., & Musco, T. (2015). Changes in self-reported insurance coverage, access to care, and health under the Affordable Care Act. JAMA, 314(4), 366-374.
Stewart, D., & Kamins, M. (1993). Secondary research: Information sources and methods. Sage.
Suryawanshi, V., & Suryawanshi, D. (2015). Fundamentals of e-learning models: A review. IOSR Journal of Computer Engineering, 107-120.
Taylor, L. M., Casto, D. J., & Wall, R. T. (2007). Learning with versus without technology in elementary and secondary school. Computers in Human Behavior, 23(1), 798-811.
Technology Standard for School Administrator Collaborative. (2001). Technology standard for school administrator collaborative. https://www.russell.k12.ky.us/userfiles/indexblue/admin_techstandards.pdf
Trucano, M. (2013). Big educational laptop and tablet projects: Ten countries to learn from. http://blogs.worldbank.org/edutech/big-educational-laptop-and-tablet-projects-ten-countries
Underwood, J. (2009). The impact of digital technology: A review of the evidence of the impact of digital technologies on formal education. Becta.
UNESCO. (2005). Information and communication technologies in schools: A handbook for teachers or how ICT can create new, open learning environments. UNESCO.
University Council of Jamaica. (2017). Accreditation. http://www.ucj.org.jm/accreditation/what-is-accreditation/
Waddell, D. L. (1991). Differentiating impact evaluation from evaluation research: One perspective of implications for continuing nursing education. The Journal of Continuing Education in Nursing, 22(6), 254-258.
Wagner, D. (2005). Monitoring and evaluation of ICT for education: An introduction. In Monitoring and evaluation of ICT in education projects: A handbook for developing countries. The International Bank for Reconstruction and Development; The World Bank; InfoDev.
Wagner, N., Hassanein, K., & Head, M. (2008). Who is responsible for e-learning success in higher education? A stakeholders' analysis. Educational Technology and Society, 11(3), 26-36.
White, H., & Raitzer, D. A. (2017). Impact evaluation of development interventions: A practical guide. Asian Development Bank.
Wims, P., & Lawler, M. (2007). Investing in ICTs in educational institutions in developing countries: An evaluation of their impact in Kenya. International Journal of Education and Development using ICT, 3(1), 5-22.
Wing, C., Simon, K., & Bello-Gomez, R. A. (2018). Designing difference in difference studies: Best practices for public health policy research. Annual Review of Public Health, 39(1), 453-469.
World Bank. (2005a). The World Bank annual report 2005: Year in review. Author.
World Bank. (2005b). The logframe handbook: A logical framework approach to project cycle management (English). World Bank Group. http://documents.worldbank.org/curated/en/783001468134383368/The-logframe-handbook-a-logical-framework-approach-to-project-cycle-management
Yorke, M. (2011). Analysing existing datasets: Some considerations arising from practical experience. International Journal of Research & Method in Education, 34(3), 255-267.
Young, J. R. (1997, October 3). Rethinking the role of the professor in an age of high-tech tools. The Chronicle of Higher Education, 44(6), A26-A28.
Yu, C., & Durrington, V. A. (2006). Technology standards for school administrators: An analysis of practicing and aspiring administrators' perceived ability to perform the standards. NASSP Bulletin, 90(4), 301-317.
Zhao, Y., & Frank, K. A. (2003). Factors affecting technology uses in schools: An ecological perspective. American Educational Research Journal, 40(4), 807-840.
Zuppo, C. (2012). Defining ICT in a boundaryless world: The development of a working hierarchy. International Journal of Managing Information Technology (IJMIT), 4(3), 13.
Appendices

Appendix A
Teaching Curriculum

The curriculum should incorporate the following areas:

A. General Education (24-27 credits)
The General Education component should cover the following areas:
- Foreign Language (including Spanish)
- Language and Communication
- Community Service
- Personal Development/Healthy Lifestyle
- Mathematics
- Information Technology/Computer Literacy
- Logic, Critical Thinking, and Problem-solving
- Caribbean Culture, Religion, and History
- Environment and sustainability issues
- Values, Ethics, and Citizenship
- Literature and the Arts
- Science and Technology
- Entrepreneurship
- Reading Literacy
- Introduction to Research Methods

B. Professional Studies (27-30 credits)

Foundation Courses
Foundation courses should cover the following areas:
- Child Development or Adolescent Psychology (including brain research and sociobiological explanations of gender differences)
- Issues in Jamaican Education: philosophical, socioeconomic, historical, cultural and political perspectives
- Behaviour Management (including violence and aggression, conflict management and resolution, safety and security)
- Education and National Development
- Public Policies and Laws
- Infusion of Career Education
- Professionalism
- Guidance and Counselling
- Philosophy of Education
- Multigrade Teaching

Pedagogical Courses
Pedagogical courses should cover the following areas:
- Psychology of Learning and Teaching
- Teaching Children with Exceptionalities
- Teaching Diverse Groups (including methods, styles, strategies)
- Introduction to Curriculum Theory, Planning, and Practice
- Assessment and Learning
- Instructional Technology
- Action Research (including use of data to inform all teaching-related decisions)
- Reflective Teaching
- Leading and Managing Change in the Classroom

C. Programme Specialization (60-66 credits)
- Early Childhood
- Primary
- Secondary
- Special Education
- Guidance and Counselling
Details on each area of specialization are outlined in Section 5.6.

D. Adjunct (6-9 credits)
- Specialization-related subjects

E. Electives (3-6 credits)
This component could cover the following areas:
- Music/Dance/Art/Drama
- Religion
- Foreign Language
- Family Life
- Physical Education

F. Teaching Practicum/Internship (15 credits)
The teaching practicum/internship involves placement in a learning environment for periods of time, during which the student teacher is involved in the life of the school and exposed to real classroom and school situations in which they are assessed. The teaching practicum/internship should be organized to provide opportunities for:
i. practical application of knowledge, skills and affective behaviours acquired in the professional studies and specialization components, including:
- Lesson planning
- Design and development of teaching and learning aids
- Use of evaluative feedback
- Reflection on action, in action, and for action
- Action Research
- Effective communication
- Special learning needs
- Micro teaching
- Use of contextual data to inform planning
- Building a community of learners

ii. observation and explanation of successful practices in education through visits to schools, including:
- Critique of lesson plans
- Review of teaching and learning aids
- Observation of use of technology in teaching and learning
- Assessment of how the teacher uses data from tests
- Teacher-student interaction and its impact on student learning
- Language use and communication in the classroom
- Classroom management
- School organization and leadership
- Student behaviour, safety, and security
- Service orientation
- Classroom interactions

iii. demonstration of skills in teaching with emphasis on catering to multiple intelligences and children with learning difficulties/integration/multigrade teaching (depending on context and grade level), including:
- Teaching high and low achievers
- Teaching underachievers
- Identifying children with learning difficulties and taking appropriate action
- Team teaching
- Multigrade teaching
- Using journals and portfolios

iv. demonstration of (i) skills in the use of technology in aiding teaching, (ii) use of assessment data, and (iii) evaluative feedback from lessons, including:
- Developing different types of assessment
- Analyzing assessment data
- Providing feedback from assessments and using these to improve teaching
- Engaging in self-evaluation
- Integrating technology in teaching

v. demonstration of overall competence in the use of a wide range of teaching skills and competencies so that overall teaching ability can be assessed.

vi. demonstration of understanding of social, economic, and health issues impacting teaching and learning.

Appendix B
Permission Letter from OEC

Appendix C
Example of Student Results

Appendix D
Parallel Trend Lines

Figure B1 shows the parallel trend lines for English language, with both pilot and non-pilot schools moving in the same direction. In 2006 the mean GPA was 3.13 for the pilot schools and 3.59 for the non-pilot schools. The 2008 examination, the year before the e-LHSP pilot, showed decreases for both groups.

Figure B1. Results of parallel trend for English language 2005 to 2008

Figure B2 shows the parallel trend lines for Mathematics. In 2006 the mean GPA was 3.89 for the pilot schools and 4.24 for the non-pilot schools. The examination period 2007 to 2008, before the pilot, showed slight increases in the same direction, with the pilot schools at a mean GPA of 3.73 and the non-pilot schools at 3.98.

Figure B2. Results of parallel trend for Mathematics 2006 to 2008

Figure B3 shows the parallel trend lines for Chemistry moving in the same direction and improving slightly over the years 2006 to 2008. The mean GPA in 2006 was 3.57 for the pilot schools and 3.76 for the non-pilot schools. In the examination period 2007 to 2008 the pilot schools scored 3.25 and the non-pilot schools 3.43.

Figure B3. Results of parallel trend for Chemistry 2006 to 2008
Figure B4 shows the parallel trend lines for Biology moving in the same direction except where they cross during the period 2006 to 2007. The Biology results for the years 2006 to 2008 also showed improved performance. The mean GPA in 2006 was 3.1 for the pilot schools and 3.29 for the non-pilot schools. The examination period 2007 to 2008 revealed a mean GPA of 2.76 for the pilot schools and 3.16 for the non-pilot schools.

Figure B4. Results of parallel trend for Biology 2006 to 2008

Figure B5 shows the parallel trend lines for Information Technology. The results for 2006 to 2008 showed an increase in performance, but the non-pilot schools outperformed the pilot schools in 2008. The mean GPA in 2006 was 3.18 for the pilot schools and 3.47 for the non-pilot schools; by the examination period 2007 to 2008 the pilot schools scored a mean of 2.63 and the non-pilot schools 2.47.

Figure B5. Results of parallel trend for Information Technology 2006 to 2008

A minimal code sketch for reproducing these parallel-trend checks, and the subsequent Difference in Difference estimate, appears after Appendix E.

Appendix E
Technology Resources Distributed to Pilot Schools

Technologies                                  Pilot Schools
Laptop                                        16
Desktops                                      56
Printers                                      3
Servers                                       1
Document Camera                               4
DVD/CD Player                                 3
Multi-Media Projector                         5
Digital Camera                                2
Scanners                                      2
Televisions                                   2
VCR Players                                   3
Source: (e-lJam, 2012)

Resources                                     Pilot Schools
Biology Teaching Materials (CDs & Books)      1
Math Teaching Materials (CDs & Books)         5
English Teaching Materials (CDs & Books)      10
IT Teaching Materials                         5
Chemistry                                     Not available
Source: (e-lJam, 2012)
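Supplementary Note to Appendix D

For readers who wish to reproduce the parallel-trend inspection in Appendix D and the Difference in Difference (DiD) estimation used in this thesis, the Python sketch below illustrates the general approach only. It assumes a hypothetical tidy data file (csec_results.csv) with columns school, year, subject, gpa, and pilot; the actual administrative archival data, and the exact model specification used in the thesis, are not reproduced here.

import pandas as pd
import matplotlib.pyplot as plt
import statsmodels.formula.api as smf

# Hypothetical tidy file of school-level CSEC results; assumed columns:
# school, year, subject, gpa, pilot (1 = pilot school, 0 = non-pilot).
df = pd.read_csv("csec_results.csv")
subj = df[df["subject"] == "Chemistry"]   # one piloted subject at a time

# Parallel-trend inspection (cf. Figures B1-B5): mean GPA per group per
# pre-pilot year, plotted as two trend lines.
pre = subj[subj["year"].between(2005, 2008)]
trends = pre.groupby(["year", "pilot"])["gpa"].mean().unstack("pilot")
trends.columns = ["Non-pilot", "Pilot"]
trends.plot(marker="o")
plt.ylabel("Mean GPA")
plt.title("Parallel trend check: Chemistry, 2005-2008")
plt.show()

# DiD with linear regression: compare the last pre-pilot year (2008) with
# the pilot examination year (2009); the pilot:post interaction term is
# the DiD estimate of the pilot effect on mean GPA.
did = subj[subj["year"].isin([2008, 2009])].copy()
did["post"] = (did["year"] == 2009).astype(int)
model = smf.ols("gpa ~ pilot * post", data=did).fit()
print(model.summary())

The coefficient on pilot:post corresponds to the DiD estimate; in practice standard errors would be clustered at the school level, and the sign of the estimate must be read against the GPA convention in use, since in the CSEC grading scheme a lower grade number indicates a better result.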