could be easily replicated for extended statistical analysis using our methodology or by using other designs, such as a matched-pair design. Another approach to increasing the sample size would be to expand the study to multiple institutions and instructors with characteristics similar to those of the institution in the first study. We would advise future researchers to control for the inputs of quality course design and experienced, engaging online instructors.
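As a rough illustration of how a replication might plan its sample size, the sketch below solves for the number of students needed per group to detect a medium effect; the effect size, alpha, and power targets are assumptions rather than values drawn from this study.

```python
# Hypothetical sample-size sketch for a replication; the effect size, alpha,
# and power targets below are assumptions, not values reported in this study.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80)
print(f"students needed per group: {n_per_group:.0f}")
```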
This study was quantitative. Qualitative information could be gathered and analyzed (1) to discover other indicators of student success and (2) to test alternative analyses. For example, students who complete the SmarterMeasure instrument, perhaps as part of an online learning orientation (Koehnke, 2013), may be more likely to complete class work leading to student success than students who elect not to complete the required instrument. Focus groups of student participants in a replicated study would add depth to any findings, as would using educational analytics to determine whether any correlations exist between students' previous online course success and readiness factors.
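If such analytics data were available, one simple starting point would be a point-biserial correlation between a binary indicator of prior online course success and a continuous readiness score. The sketch below is a hypothetical illustration; the file and column names are assumptions, not artifacts of this study.

```python
# Hypothetical sketch: correlate prior online course success (binary) with a
# SmarterMeasure readiness score (continuous). File and column names are assumed.
import pandas as pd
from scipy.stats import pointbiserialr

records = pd.read_csv("student_analytics.csv")      # assumed analytics export
prior_success = records["prior_online_success"]     # 1 = passed a prior online course, 0 = did not
readiness = records["life_factors_score"]           # any readiness score on a 0-100 scale

r, p_value = pointbiserialr(prior_success, readiness)
print(f"point-biserial r = {r:.3f}, p = {p_value:.4f}")
```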
Another avenue of study would be to explore the actions of the experienced, engaging online instructors teaching the courses. It could be enlightening to learn whether
the highly skilled online instructors in this
study mitigated the impact of the four other
readiness factors measured that were not
found statistically significant (life factors,
individual attributes, technical knowledge,
and reading comprehension). The findings
could reveal a snapshot of pedagogical habits
that promote student success in the online
classroom.
The data for Life Factors and Individual Attributes indicate that a large number of students ranked at the 0% to 84% level. Of the 200 students in this study, 147 ranked within 0% to 84% for Life Factors and 53 ranked at the upper level; 169 ranked within 0% to 84% for Individual Attributes and 39 ranked at the upper level. A future study could compare these online students' rankings with those of students taking comparable courses delivered through other methods (e.g., face-to-face, web-hybrid).
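One way such a comparison could be analyzed is with a chi-square test of independence on the ranking-band counts by delivery method. The sketch below uses this study's Life Factors counts for the online group; the face-to-face counts are placeholders for an as-yet-uncollected comparison group.

```python
# Hypothetical sketch of the proposed comparison: chi-square test of independence
# on ranking-band counts (0-84% vs. upper level) by delivery method. The online
# row uses this study's Life Factors counts; the face-to-face row is a placeholder.
from scipy.stats import chi2_contingency

counts = [
    [147, 53],   # online: 0-84% band, upper band (from this study)
    [120, 80],   # face-to-face: placeholder values for a comparison group
]
chi2, p_value, dof, expected = chi2_contingency(counts)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p_value:.4f}")
```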
The results should also be compared to success factors in different disciplines using a matched-pair experiment. For example, how does an English course, where reading comprehension is critical, compare with courses in other disciplines?
In addition, future studies could
compare results from QM-certified courses
to courses that have not been designed using
QM standards. Likewise, a study could
compare the results of less experienced online instructors with those of highly skilled, experienced online instructors.
References
Adkins. (2013, April 25). Concerning online learning: Experience matters [Web log post]. Retrieved from http://wcetblog.wordpress.com/2013/04/25/experience_matters/

Allen, M., Omori, K., Burrell, N., Mabry, E., & Timmerman, E. (2013). Satisfaction with distance education. In M. G. Moore (Ed.), Handbook of distance education (3rd ed., pp. 143–154). New York, NY: Routledge.

Aman, P. R. (2009). Improving student satisfaction and retention with online instruction through systematic faculty peer review of courses (Unpublished doctoral dissertation). Oregon State University, Corvallis, OR. Retrieved from http://ir.library.oregonstate.edu/xmlui/bitstream/handle/1957/11945/Aman_Dissertation.pdf