
Teaching Online in Texts and Technology

About This Guide

Based on major topics surrounding the potential of software in teaching, tutoring, and grading writing, the sources included in this guide are divided into three sections:

- Potential Tools for Instructors

- Potential Tutors for Students

- Perspectives on CAI Potential

Getting Involved

Scholastic Guides - A website containing a variety of writing instruction videos, guides, and other teaching resources.

National Writing Project - A website and network of teachers, researchers, and scholars dedicated to sharing knowledge and expertise in order to improve writing instruction.

Professionals Against Machine Scoring Of Student Essays In High-Stakes Assessment - A petition to stop the use of computer scoring of student essays on high-stakes tests.

SAGrader - Detailed information on one positively-reviewed writing tutor/grading software that provides interactive activities and immediate, personalized feedback.

About the Author

Joshua Roney is a doctoral student in UCF's Texts and Technology program.

His primary scholarly interests include Basic Writing instruction, rhetoric, and technical writing.  

He also works full-time at the UCF Office of Research & Commercialization as a Proposal Development Coordinator.

Software to Teach, Tutor, and Grade Writing


This resource addresses an ongoing discussion of theories about, and experiments with, the roles of computer programs in writing instruction. As technology continues to advance, the discussion of Computer Aided Instruction (CAI), which began in the 1960s, remains relevant to writing classrooms today. Perspectives on the potential of this technology intersect optimism, fear, confusion, and skepticism. As the broad role of ‘instructor’ is subdivided by continuing discussion and assessment, the roles of computers are being selectively assessed and valued for their specific potential uses. The realized potential of these tools thus far lies in automated scoring, instant feedback, tracking of student progress, and tutoring that reinforces instructor activities. Experiments continue to explore ways to resolve the limitations seen in their use so far, including their ability to provide contextualized assistance, argument analysis, and rich feedback and guidance that mimics human grading, interaction, and instruction. Researchers and educators are also testing the validity of the many existing and emerging tools: their ability to instruct, to guide students toward learning goals, and to provide holistic assistance across the entire learning process. Sources are hyperlinked to UCF Library records or their locations on the internet.

Recommended Sources


Potential Tools for Instructors


Ware, Paige. "Computer-Generated Feedback On Student Writing." TESOL Quarterly: A Journal For Teachers Of English To Speakers Of Other Languages And Of Standard English As A Second Dialect 45.4 (2011): 769-774. ERIC. Web.

In this article, Ware presents and clarifies the distinction between "computer-generated scoring" and "computer-generated feedback", which emphasize evaluation of syntax and assistance to writers, respectively. She asserts the timeliness of potential advances while remaining conscious of limitations, writing that “the resounding consensus about computer-generated feedback, among developers and writing specialists alike, is that the time is ripe for critically examining its potential use as a supplement to writing instruction, not as a replacement” (770). The author then examines why the latter is of such interest to teachers and finally discusses how it can be employed alongside instruction to improve student learning and mastery of writing. The issues discussed of multimedia, models, asynchronous communication, and feedback sources in learning are most applicable to Teaching Online in T&T.


Shermis, Mark, Jill Burstein, Derrick Higgins, and Klaus Zechner. “Automated Essay Scoring: Writing Assessment And Instruction.”  International Encyclopedia Of Education (2010): 20-26. ScienceDirect. Web.

This article describes the possible uses and benefits of automated essay scoring (AES) technology used as a grading and teaching tool supporting writing instructors who teach students ranging from the elementary level to English Language Learners. The areas of assessment, feedback, diagnosis, and integration into curriculum are included in the later description, along with examples and supporting evaluative research. Several programs are explored that reflect differing approaches to designing scoring systems, ranging from basic rubric-weighted human scoring criteria to semantic, mathematically calculated judges (8). These programs function based on pre-selected observable components. Clear limitations presently exist, particularly with regard to style and inference. However, Shermis expresses optimism about future potential and value, asserting that “though it has been demonstrated to replicate human judgement in the grading of essays, over time it will be enhanced to do so with even more proficiency and accuracy. […] Finally it has engendered a discussion about what constitutes good writing and how is it best achieved” (19). The list of programs developed, the observable components involved in software design, the suggested audiences, and the ways of incorporating these tools into curricula are the areas of most relevance to Teaching Online in T&T.
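
To illustrate the "rubric-weighted" end of the design spectrum described above, the following minimal Python sketch scores an essay by combining a few pre-selected observable components. The feature names, weights, and scale are purely illustrative assumptions, not drawn from any program Shermis et al. describe:

```python
# Toy rubric-weighted essay scorer: combines a few pre-selected,
# observable text features into a holistic 1-6 score.
# All feature names and weights here are hypothetical.

def extract_features(essay: str) -> dict:
    words = essay.split()
    sentences = [s for s in essay.replace("!", ".").replace("?", ".").split(".")
                 if s.strip()]
    return {
        "length": len(words),  # total word count
        "avg_sentence_len": len(words) / max(len(sentences), 1),
        # ratio of unique words to total words, a crude vocabulary measure
        "vocab_diversity": len(set(w.lower() for w in words)) / max(len(words), 1),
    }

def score(essay: str) -> int:
    f = extract_features(essay)
    # Hypothetical weights: longer, more varied essays score higher.
    raw = (0.01 * f["length"]
           + 0.05 * f["avg_sentence_len"]
           + 2.0 * f["vocab_diversity"])
    return max(1, min(6, round(raw)))  # clamp to the 1-6 rubric scale

print(score("The quick brown fox jumps over the lazy dog. It runs far."))
```

Real systems of this kind weight many more features (often tuned against human-scored training essays), but the shape of the computation is the same: observable components in, weighted combination out.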

Chandrasegaran, Antonia, Mary Ellis, and Gloria Poedjosoedarmo. "Essay Assist--Developing Software For Writing Skills Improvement In Partnership With Students." RELC Journal: A Journal Of Language Teaching And Research 36.2 (2005): 137-155. ERIC. Web.

This article presents the program “Essay Assist” as a potential guide to students in their decision making. Chandrasegaran et al. suggest that a greater potential exists for computer-mediated instruction (as opposed to traditional instructional methods) because it can meet a greater variety of individual student skills and needs, adapting organization and material accordingly. A survey of student perspectives on the tool’s strengths and weaknesses is also presented, along with the summary that “the primary thrust of Essay Assist, to direct thinking to macro rhetorical goals and socio-cultural context during writing, drew a favourable response from students. The shortcomings reported were largely concerned with technical problems” (147). The emphasized issues of socio-cultural contextualization, rhetoric, and software usability in learning are most applicable to Teaching Online in T&T.


Shermis, Mark. "State-Of-The-Art Automated Essay Scoring: Competition, Results, And Future Directions From A United States Demonstration." Assessing Writing (2013): 1-24. ScienceDirect. Web.

This article by Shermis describes the results of two competitions related to the current state of automated essay scoring: 1) a competition of commercial vendors utilizing AES to grade high-stakes essays as compared with human reviewers; and 2) a secondary competition to match the vendors’ AES performance in seven primary areas. He writes that “while there are numerous studies evaluating the performance of a single machine scoring system, there are few studies that simultaneously evaluate multiple systems” and that as a result of this study, “a common data set represents a new opportunity for understanding the current state-of-the-art” (3). Two notable limitations suggested regarding the applicability of software used in this way are: “agreement with human ratings is not necessarily the best or only measure of students’ writing proficiency (or the evidence of proficiency in an essay)” and that “a predictive model may do a good job of matching human scoring behavior, but for reasons [unrelated to] the construct of interest” (22). The link to the Assessing Writing peer-reviewed journal, the list of vendors developing essay grading programs, the employed method for assessing such tools, and the criticisms presented regarding the purpose, design, and validity of these tools/methods are most applicable to Teaching Online in T&T.
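
Comparisons like the one Shermis describes are commonly reported as exact or adjacent agreement rates between machine and human scores (more rigorous studies also use quadratic weighted kappa). A minimal sketch of the exact/adjacent calculation follows; the score lists are made-up illustrative data, not figures from the study:

```python
# Exact and adjacent agreement between machine and human essay scores,
# the kind of headline comparison vendor demonstrations report.
# The score lists below are invented for illustration.

def agreement(machine, human, tolerance=0):
    """Fraction of essays whose two scores differ by at most `tolerance` points."""
    matches = sum(1 for m, h in zip(machine, human) if abs(m - h) <= tolerance)
    return matches / len(machine)

machine_scores = [3, 4, 2, 5, 4, 3, 1, 4]
human_scores   = [3, 4, 3, 5, 3, 3, 2, 4]

print(f"exact agreement:    {agreement(machine_scores, human_scores):.2f}")
print(f"adjacent agreement: {agreement(machine_scores, human_scores, tolerance=1):.2f}")
```

Note how this sketch makes Shermis's caveat concrete: a model can post high agreement numbers while matching human scores "for the wrong reasons," since the metric checks only the final score, not the basis for it.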



Potential Tutors for Students


Rowley, Kurt, and Nick Meyer. "The Effect Of A Computer Tutor For Writers On Student Writing Achievement." Journal Of Educational Computing Research 29.2 (2003): 169-187. ERIC. Web.

This article presents the background and evaluation results of a year-long implementation of Computer Tutor for Writers (CTW), a program designed for high school writers. CTW is a learning support system that presents past and ongoing content and also functions as a tool for writing assignments. The study suggests positive results for the method and application of this program. Rowley and Meyer acknowledge the limitations of this experiment, writing that “exactly how the use of the CTW reinforced the proper writing methods, and led to improved writing scores for students [...] is not completely clear”; nevertheless, this experiment “confirms and elaborates earlier studies from the research program showing that within the domain of writing instruction, the use of well-designed technologies that provide cognitive support can produce reliable gains in student writing achievement over traditional methods” (183). Topics related to Instructional Design, cognitive models of writing, and systems theory most directly apply to Teaching Online in T&T.

Lai, Yi-hsiu. "Which Do Students Prefer To Evaluate Their Essays: Peers Or Computer Program." British Journal Of Educational Technology 41.3 (2010): 432-454. ERIC. Web.

In this article, Lai reports on a study that tested the use of automated writing evaluation (AWE) and observed its impact on English as a foreign language (EFL) learners in Taiwan. The project used the program to provide automated grading of writing assignments alongside peer-review activities with fellow students. Lai discusses the differences in students' perceptions of each evaluation source, the preferences students expressed (typically favoring peer review), and how these relate to larger topics such as social learning. Lai concludes that “the computerised feedback, though sometimes being imperfect, could also motivate learners, especially those who were low in computer-anxiety trait, and save time for writing teachers” (444). Issues of feedback sources, social constructivist theory, student perceptions, and computer anxiety have the most direct relevance to Teaching Online in T&T.

Pounds, Diana. “NEW ISU Software Aims to Boost Students’ Research Writing Skills.” Inside Iowa State (2010). Iowa State University. Web.

This publication from Iowa State Public Relations describes new software for student use that is being prototyped and iteratively improved. The program, called the Research Writing Tutor (RWT), provides analysis of undergraduate and graduate student writing based on their level and field of study. The article also provides sample feedback comments, identifies the college units involved in improving the content, and lists the courses currently utilizing this resource. A quote from the leader of the RWT project describes its intent: “Our goal is to implement the RWT as a core component of a campus-wide, technology-enhanced research writing support system” (1). Topics related to rhetoric, discipline-specific writing standards, data/trend-based feedback, and pilot implementation at the college level are most relevant to Teaching Online in T&T.


Perspectives on CAI Potential


Markoff, John. “Essay-Grading Software Offers Professors a Break.”  The New York Times (2013). Science. Web.

This article examines perspectives and promises relating to an essay grading and feedback software being created by EdX. Markoff first describes the intentions of using this artificial intelligence technology to provide instant, detailed feedback to students and relief for overburdened teachers, as well as its potential use in massive open online courses (MOOCs). He provides a balance of perspectives through quotes from researchers and educators supporting or objecting to this resource, addressing both the potential and validity of this grading software and the practice of automated grading in general. Supporters claim that “there is a huge value in learning with instant feedback” and that it “would be a useful pedagogical tool”, while opposing views counter that “they did not have any valid statistical test comparing the software directly to human graders” (1). The issues of human grading subtleties, feedback sources and content, and the colleges identified as piloting free courses in EdX are most useful to Teaching Online in T&T.

  • EdX and this ongoing discussion were also featured in a 2013 blog post by education reporter Valerie Strauss of The Washington Post.


Lovell, Meridith, and Linda Phillips. "Commercial Software Programs Approved For Teaching Reading And Writing In The Primary Grades: Another Sobering Reality." Journal Of Research On Technology In Education 42.2 (2009): 197-216. ERIC. Web.

This report presents a comprehensive evaluation of existing commercial software for teaching reading and writing to K-12 students. Lovell and Phillips assessed the programs based on criteria that include Instructional Design, appropriateness, and advertised claims, among others. They report that, of the 13 programs assessed, “the majority of the [tested] programs are non-instructional; they do not track student progress, provide feedback, or adapt to suit student needs, thereby limiting their usefulness as educational tools” (211). They also suggest that teachers’ perceptions of various technologies and their uses as pedagogical tools are additional areas to research and validate. Issues related to interface design, Instructional Design, concept-mapping, the drill-and-practice method, and continuity/gaps in assistance throughout the writing process most directly apply to Teaching Online in T&T.


Bloom, Molly. “The Pros and Cons of Using Computers to Teach Students How to Write.” Eye on Education (2012). StateImpact. Web.

This journalistic report weighs the potential value of computer essay graders against the criticisms given by some experts in the discipline of Composition. The article highlights issues raised by faculty at MIT and Carnegie Mellon, the perspective and continuing work of Mark Shermis of the University of Akron, and the views of other teachers and administrators on this issue. Bloom provides a breakdown of the tools’ present success rate, writing that “on shorter writing assignments the computer programs matched grades from real live humans up to 85 percent of the time. But on longer, more complicated responses, the technology didn’t do quite as well” (1). Bloom effectively frames the varied perspectives on this topic by providing hyperlinks throughout, which connect readers to related reports, articles, and contact information for the interviewees included in the report. The links to supporting resources, as well as the varied views of researchers, instructors, consumers, and businesses on this issue, usefully illuminate the social context of these tools as they can be employed via Teaching Online in T&T.