Research Design:
To create comparable learning environments for the control (traditional) and experimental (online) groups, the same requirements were instituted for both. All the materials for the financial management course were placed on two different web sites. The course material for the online group was placed on the WebCT (commercially available courseware) platform. Additionally, a separate web site, containing the same course material as the WebCT site, was created from scratch on the university server. The course requires two proctored tests; therefore, the online students were required to come to campus on specified dates to take those tests with the traditional students. The only difference between the two sites was that the WebCT site allowed asynchronous communication among the students and between the students and the instructor, while this feature (online interaction) was not available on the site accessed by the traditional students.
For the traditional group, most of the interaction took place face-to-face, while all instructional interaction for the online students occurred through the asynchronous bulletin board and email. Both groups were required to interact with the course material over the Internet (reading or downloading lecture material, assigned readings, solutions to homework problems, etc.). As a result, the novelty effect (students working with something new: the Internet) was eliminated (Merisotis & Phipps, 1999). Thus, except for the method of delivery of the subject matter, the two groups were exposed to identical course materials, instruction, and examinations.
Data and Research Methodology:
Beginning in Spring 1999, a separate section of the financial management course was offered online for the first time. A series of questions (Appendix A) was developed to measure the following four major criteria:
1. Learning environment (Web utility; three questions, numbered 1-3);
2. Interactivity (three questions, numbered 4-6);
3. Contribution of course materials and requirements to learning (nine questions, numbered 7-15); and
4. Students' overall satisfaction with the course (four questions, numbered 16-19).
Appendix A was distributed to both groups (online and traditional) over the Spring, Summer, and Fall semesters of 1999. The class sizes for the online sections were comparatively smaller, as indicated in Exhibit I.
Exhibit I: Class Size

                  Traditional Students    Online Students
    Spring 1999            18                    8
    Summer 1999            22                    7
    Fall 1999              28                    6
The results of the students' surveys are shown in Tables 1-6. These data were then reorganized and compiled (Exhibit II) based on the four major criteria explained above. Exhibit II shows the aggregate responses for the variables (questions) under each criterion. For example, in Spring 1999, eight students took the financial management course online. The first and second criteria, "Web utility" and "interactivity," are each explained by three variables (questions); therefore, there are 3 x 8 = 24 potential responses for each of those two criteria.
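To make this aggregation step concrete, the sketch below (in Python, using hypothetical data structures, since the raw survey records are not reproduced here) groups question-level Likert responses into criterion-level counts, in the spirit of Exhibit II. The question-to-criterion mapping follows the four criteria listed above.

    from collections import Counter

    # Map survey question numbers (Appendix A) to the four major criteria.
    CRITERIA = {
        "web utility": range(1, 4),                          # questions 1-3
        "interactivity": range(4, 7),                        # questions 4-6
        "course materials and requirements": range(7, 16),   # questions 7-15
        "overall satisfaction": range(16, 20),               # questions 16-19
    }

    def aggregate_by_criterion(responses):
        """responses: one dict per student, mapping question number to a
        Likert label such as "strongly agree". Returns, for each criterion,
        a Counter of how many responses fell on each scale point."""
        counts = {name: Counter() for name in CRITERIA}
        for student in responses:
            for name, questions in CRITERIA.items():
                for q in questions:
                    if q in student:                 # skip unanswered items
                        counts[name][student[q]] += 1
        return counts

    # Example: 8 online students in Spring 1999 answering questions 1-3 yield
    # 3 x 8 = 24 potential responses under the "web utility" criterion.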
Using a Likert-scale-based survey, each criterion will be quantitatively measured. In other words, using all the variables under each criterion, an index representing students' opinions regarding that criterion will be computed. If both groups generate similar indices for a given criterion, it can be inferred that the mode of course content delivery has no significant impact on that criterion. The Likert technique presents a set of attitude statements, and each student is asked to express agreement or disagreement with each question using the scales shown in Appendix A. For example, if all online students respond to the three questions under the "Web utility" criterion by choosing the "strongly agree" scale, then 100% of the respondents strongly agree that the Web as a learning environment is a useful component of the course. Exhibit III shows the compilation of such data for the three semesters and the four major criteria. The score for each criterion, as an index, represents the students' overall opinion of how effectively each mode of course delivery met the four major criteria. By analyzing and comparing the data in Exhibit III, the study will draw conclusions regarding the three hypotheses stated on pages 2 and 3.
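The text does not spell out the exact formula behind the indices in Exhibit III. One plausible reading, sketched below under that assumption with hypothetical counts, is that each index is the percentage of a criterion's potential responses falling on each scale point; the all-"strongly agree" example above is the limiting case.

    def likert_index(counts, total_responses):
        """counts: scale label -> number of responses for one criterion;
        total_responses: questions under the criterion x number of respondents.
        Returns the percentage of responses at each scale point."""
        return {scale: 100.0 * n / total_responses for scale, n in counts.items()}

    # Hypothetical Spring 1999 online "Web utility" counts: 3 questions x 8
    # students = 24 potential responses, all marked "strongly agree".
    index = likert_index({"strongly agree": 24}, total_responses=24)
    print(index)   # {'strongly agree': 100.0} -> 100% strongly agree

    # The study's pair-wise comparison would compute the same index for the
    # traditional section and set the two side by side for each criterion
    # and semester (Exhibit III).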
Results:
Exhibit III reports the computed indices representing students' opinions regarding each of the four major criteria. The exhibit shows that the majority of students (80%-90%) in both groups, over the three semesters, positively evaluated the course and its various attributes.
Specifically, with reference to the first hypothesis, the results in Exhibit III clearly demonstrate that both groups, over the three semesters, believed that the Web-based information (the Web as a learning environment) was a valuable component of the course. Based on these results, a case could be made in favor of providing Web-based materials for courses taught in a traditional format.
The students' learning experience (the subject of the second hypothesis), captured by the "course materials and requirements" criterion (index) reported in Exhibit III, also indicates that a majority in both groups, over the three semesters, thought that both media provided sufficient materials and requirements to enhance and contribute to their learning.
With regard to the third hypothesis, Exhibit III again shows that a majority of students in both groups, over the three semesters, agreed that both modes of course delivery allowed for effective interactivity among students and between students and the instructor.
Finally, the overall satisfaction criterion reflects the students' experience with and impression of the course. This criterion, like the others, was computed and tabulated in Exhibit III. Over the three semesters, a majority of students in both groups indicated that they were equally satisfied with the rigor and usefulness of the course.
Furthermore, this study compared the final grades of both groups (online and traditional) for each semester. The average final grade for both groups in the Spring and Summer semesters was B. In the Fall semester, however, the online students' average final grade was A, while that of the traditional students was B. Based on this information, this study confirms the findings of other studies, such as Schulman & Sims (1999) and Smeaton & Keogh (1999), which used grades as measures of performance and found no significant differences in performance between groups taking their courses through different modes of delivery. Therefore, online courses have the potential to provide comparable learning experiences for students regardless of the mode of course delivery.
Conclusions:
This study collected data on four major criteria (Web utility, interactivity, course materials and requirements, and overall satisfaction) representing nineteen attributes (variables) that addressed different concerns of two groups of students (online and traditional) taking a graduate financial management course over three semesters. Using the proposed research methodology, the study calculated four indices as measures of the four major criteria. These indices were compared pair-wise between the two groups for each semester. From this comparison, it was concluded that there were no significant differences between the two groups' opinions regarding Web utility, interactivity (student/student and student/instructor), learning experience, and overall satisfaction for the financial management course, whether delivered on-site or online.
References
Barr, D. (1990). A solution in search of a problem: The role of technology in educational reform. Journal for the Education of the Gifted, 14(1), 79-95.
Clarke, D. (1999). Getting results with distance education. The American Journal of Distance Education, 12(1).
Dobrin, J. (1999, June 22). Who's teaching online. ITPE News, 2(12).
Dutton, J., Dutton, M., & Perry, J. (1999). Do online students perform as well as traditional students? Submitted for publication, North Carolina State University.
Hoffman, K. M. (1999, May). What are faculty saying? eCollege.com.
Keegan, D. (1990). Foundations of Distance Education. New York: Routledge.
Linke, R., et al. (1984). Report of a Study Group on the Measurement of Quality and Efficiency in Australian Higher Education. Canberra: CTEC, p. 19.
Merisotis, J. P., & Phipps, R. A. (1999, April). What's the Difference? Outcomes of Distance vs. Traditional Classroom-Based Learning. The Institute for Higher Education Policy.
Navarro, P., & Shoemaker, J. (1999). The power of cyber learning: An empirical test. Journal of Computing in Higher Education.
Russell, T. L. (1999). The No Significant Difference Phenomenon. Chapel Hill, NC: Office of Instructional Telecommunications, North Carolina State University.
Schulman, A. H., & Sims, R. L. (1999, June). T.H.E. Journal, 26(11).
Sherry, L. (1996). Issues in distance learning. International Journal of Distance Education.
Smeaton, A., & Keogh, G. (1999). An analysis of the use of virtual delivery of undergraduate lectures. Computers & Education, 32.
U.S. Department of Education, National Center for Education Statistics. (1999, December). Distance Education at Postsecondary Education Institutions: 1997-1998.
Wade, W. (1999, October). Assessment in distance learning: What do students know and how do we know that they know it? T.H.E. Journal, 27(3).
Wagner, E. D. (1997). Interactivity: From agents to outcomes. New Directions for Teaching and Learning, 71, 19-26.