Determining the Most Efficient Post-Writing Activity for Error Correction: Self-Editing, Peer Review, or Teacher Feedback?

Writing in English has always been a formidable obstacle for learners; accordingly, many studies have aimed to find lasting rather than band-aid solutions to improve learners' writing proficiency. One of these solutions, largely thought to reduce language errors, is error correction. However, instructors seem to alternate between different corrective feedback methods in an attempt to determine the most efficient one for their students. Previous research has largely compared peer feedback with teacher correction and ignored self-editing. Accordingly, this study investigated three error correction methods: self-editing, peer review, and teacher correction. To this end, three student groups were created, and each group, composed of 10 students, was tested with one method. Wilcoxon, Kruskal-Wallis, and Mann-Whitney U tests were employed for the analyses, and the results yielded significant differences for all methods in the comparisons of pre- and post-tests. On the other hand, the test of inter-group differences found significant results for teacher correction. Furthermore, the most frequent linguistic errors in students' writing were revealed. This research contributes to teaching pedagogy by reassuring instructors of the efficiency of teacher correction and suggests that instructors focus particularly on spelling, punctuation, and article errors to promote writing development.


Introduction
Corrective feedback, also known as error correction (EC), is a pivotal part of writing, particularly second language writing, because the workload on the writer proliferates together with all the caveats that must be known for effective writing. Writing is a unitary process that requires the acquisition of language skills together with an aptitude for proficiency. The sophisticated nature of writing has hence compelled researchers to conduct a substantial number of studies to help those aiming to improve their flair for writing. Many of these studies focussed on decreasing language errors in writing by investigating the efficiency of EC. What these studies have in common is that they classified EC into two strands: those investigating the impact of teacher editing on errors and those focussing on peer interaction in language development (Diab, 2010); however, one more strand, namely self-editing, deserves close attention alongside the other EC methods because it has advantages such as saving time and reducing the need for external support. In addition to other significant EC methods studied in the literature, such as teacher feedback (Lee, 2014; Zhao, 2010) and peer correction (Franco, 2008; Sultana, 2009), self-editing should also be regarded as an effective EC method by researchers aiming to study or compare the effectiveness of EC methods. Concisely, this research aims to lead the way for further studies comparing the effectiveness not only of peer review and teacher feedback but also of self-editing as a post-writing activity. Accordingly, this study empirically compared the three methods and found significant results at different levels, which is missing in other studies that made only bilateral comparisons.

Literature
Many students hanker to write free of errors, which necessitates comprehensive language skills. It can be unequivocally said that all efforts to calibrate a piece of text would be pointless without the writer's attentive struggle to emancipate the writing from error. In line with this, various studies have looked at the issue of EC in ESL from different standpoints. Some studies emphasized the importance of peer correction whereas others stated the advantages of teacher correction; meanwhile, a few studies indicated that self-editing may be an alternative to peer and teacher correction. However, there seems to be a paucity of research comparing the efficiency of the three methods at once through statistical analyses. Here, it may be useful to mention some prominent studies on the issue.

Research into the effectiveness of self-editing, peer review, and teacher correction
The positive contribution of peer feedback was indicated by Franco (2008), who aimed to develop writing skills through wiki-based peer correction. One addition was that peer review may increase students' sense of responsibility towards their peers; the study was carried out with a limited number of students, though. To measure the efficiency of peer correction, Rachmawati, Juniardi and Fawziah (2018) investigated students' writing and emphasized the importance of the technique for students' mutual improvement. Surely, peer review is essential for collaborative learning; however, there are some reservations about its efficiency when it comes to the reliability of the correction. Writing is a multidimensional skill; it requires not only grammatical and lexical knowledge but also extra-linguistic pragmatic skills. That is why many writers do not want to be edited by their peers, who may not have sufficient competence in error correction. Reliability is so critical an issue that some writers fear they may not receive a dependable assessment because of their peers' lack of evaluation skills or diligence (Sunahase, Baba, & Kashima, 2019). Concisely, peer feedback seems useful in reducing student writers' anxiety levels; on the other hand, there are some reliability concerns regarding this type of editing. Furthermore, for adult writers such as academics, it is not easy to find a peer to review their work or to make such a request of a colleague.
Another significant technique for EC is teacher correction, which is widely used by student writers. Despite that, some researchers are rather sceptical about the efficiency of this type of correction in decreasing the number of errors in students' writing because of the disproportionate interference of teachers. One purpose of error correction is to reduce errors by increasing writers' awareness and steering their attention to linguistic problems (Truscott, 2007). Teacher correction may prevent learners from learning from their errors because students remain in a very passive position, which leads student writers to run into a stone wall in the course of acquiring a language (Truscott, 1996). Although it is a reliable method of correction, student writers may suffer its drawbacks: some students may feel anxious about the teacher's review or even be afraid of the teacher. To overcome the setback of teacher interference as an EC method, Gurbanova (2020) stressed the importance of creating a friendly atmosphere in which students do not fear making errors or being corrected by their instructors.
Some researchers underline the role of teachers as mediators (Elcin, 2017), putting the traditional EC role of the teacher aside. What researchers who oppose the teacher's EC role have in common is curiosity about whether self-editing might be a viable EC option that makes up for what peer review and teacher correction fail to achieve. In line with this, Sangeetha (2020) proposed some pedagogical implications on how to take best advantage of these methods, while Tsuroyya (2020) investigated not only the effect of peer review but also students' sympathy for it. Similarly, Hojeij and Hurley (2017) indicated that self-editing might be an efficient means of EC not only in traditional learning environments such as classes but also on technological platforms (see Aydın, 2017 for tech-based problems of ELT students) such as virtual and online learning environments. On the other hand, Diab (2010) found that self-editing may not be as successful as peer review in reducing rule-based language errors in revised drafts; however, in a later study the same researcher (2011) did not find a statistically significant difference between the two groups, and neither did Balderas and Cuamatzi (2018). Sure of the efficiency of self-editing, Al-Wasy and Mahdi (2016) investigated how to improve self-editing skills through mobile phone applications and found that an application may be of use in reducing errors of grammar and punctuation, but not of spelling and capitalization (see Li & Hegelheimer, 2013 for a similar study).
Peer review can be unreliable for student writers because students may not be able to spot language weaknesses when they are weak themselves (Allaei & Connor, 1990) and they do not trust one another's revisions (Carson & Nelson, 1996); however, it is efficient in lowering the student's affective filter. Teacher correction, for its part, may be too interventionist for student writers, which may lower language awareness, while also being costly for students and disruptive in terms of research content. Self-editing, on the other hand, seems to be partly successful. In short, how to respond to students' errors in SLA remains a controversial issue even today. Amid controversial results, there seems to be no consensus about the efficiency of EC methods because little research has compared EC techniques with one another to investigate their impact on the quality of students' writing. Therefore, a study that holistically compares the three techniques may be of importance in reaching a concrete result.

Justification for the study and research aim
Unlike other linguistic skills, the improvement of writing needs to be seen as a unitary process, and this "complex integrated activity" (Leggette, Rutherford, & Dunsford, 2015, p. 250) serves as a basic skill particularly for undergraduate students studying in English-medium departments, which compels students to search for ways to acquire effective writing skills. Accordingly, some scholars (e.g., Liaw, 2007) argue that content must be developed for proficient writing, whereas others maintain that writing is about grammar (e.g., Andrews et al., 2006; Elola, 2010) and mechanics (e.g., Crossley, Kyle, Varner, & McNamara, 2014). Leaving all opinions aside, it is a well-established fact that writing cannot be pushed into confined zones because its scope of learning spreads over a larger area than was once considered. Considering the importance of writing, it becomes more understandable why writers studiously avoid writing errors, because writing concerns not only knowledge of content but also grammar, mechanics, and even extra-linguistic factors such as cognitive and visual-perceptual skills (Vinter & Chartrel, 2010). "All writing is rewriting," says Donald Murray, and one of the common weaknesses in English writing is the lack of a second eye checking the preciseness of the language. Keeping this in mind, this study dealt with student translators in a department of translation and investigated the efficiency of three post-writing EC methods: self-editing, peer review, and teacher correction. The study aims to determine which EC method might be the most influential in reducing the number of errors in student translators' English writing. Increasing awareness of EC is another primary expectation of this study, which might serve as a guide for instructors in deciding which EC method to use to decrease language errors in student essays. Concisely, this study aims to eliminate ESL writing errors through determining the most effective EC method.
In addition, the study aims to reveal which language error patterns, out of the 26 error types featured in this study (App. B), are the most prevalent in student essays.

Participants
The participants, students at the Department of Translation and Interpreting at the University of Siirt, Turkey, were selected through the criterion sampling method. The sample was composed of three groups: a self-editing group (Group A), a peer review group (Group B), and a teacher correction group (Group C). No group was categorized as a comparison (control) group because the study considered all three groups experimental. Out of a total of 78 students, 30 were included in the study following a screening questionnaire; 17 of them were registered in the third (junior) year while the rest were in the final (senior) year. The number of students in each group was equalized to 10. Although the university is Turkish-medium, the department is English-medium (except for a few Turkish-based courses). All the students placed in the study had to have scored between 320 and 350 on the University Entrance Exam of Foreign (English) Language, and those outside this range were excluded so as not to create a reliability problem. It was ensured that the participants, aged between 21 and 25, had not been taking supplementary English courses outside school hours. Only students whose mother tongue was Turkish were sampled, although the department had students from various national backgrounds, namely Syrian, Egyptian, Iranian, Iraqi, and Azerbaijani. Citizenship was not a criterion in the selection procedure because there were several Turkish citizens of different national descent at the department. The fact that some students also spoke Kurdish and/or a local dialect of Arabic (a common occurrence in the southeast of Turkey, where the study was conducted) was disregarded because no relation that could adversely affect the study results was foreseen. Students who did not want to edit a friend's essay were also excluded because peer review was required in the study.
Last, the researcher sought and gained the participants' consent to carry out the research on the data that would be obtained from their essays.

Materials
Four instruments were employed for data collection: a semi-structured questionnaire, a diagnostic essay, essays on prompts, and an editing template. To start with, the questionnaire (Appendix A, adapted from Diab, 2010; Liebman, 1992) was constructed by the researcher for this study and included leading (open-ended) questions, Likert questions, dichotomous questions, and rating scale questions to collect detailed data about the participants. At the very beginning of the research, the questionnaire was delivered to gather information about the students' personal and cultural backgrounds and to determine their linguistic levels (see Table 1). The questionnaire was not anonymous; the students were requested to write their names on it in case clarification of vague responses was needed. The questionnaire enabled the researcher to select the most suitable student samples for the study. Once the samples were built, students were asked to write a diagnostic essay (see Hamp-Lyons, 1991 for detail), which helped significantly to gauge their writing skill so that the homogeneity of the groups could be ensured. Ensuring the homogeneity of the three groups was important for striking a linguistic balance between them; therefore, a second rater holding a Ph.D. in ELT evaluated 30% of all diagnostic essays to determine the degree of agreement between the raters, and an inter-rater concordance of 0.90 (Cohen's kappa coefficient) was found. The core instrument in this study was the set of essays collected in response to writing prompts. Each student was asked to write ten essays in total: three persuasive, three descriptive, two literary, and two expository. Each essay had to contain no fewer than 500 and no more than 550 words. Finally, an editing template (Appendix B), designed for this research, was delivered to all groups so that they could categorize and edit errors according to a set rubric.
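Inter-rater agreement of the kind reported above (Cohen's kappa) can be computed directly from two raters' categorical labels. The sketch below is a minimal illustration; the band scores for nine essays are invented, not the study's ratings:

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa for two raters' categorical labels on the same items."""
    assert len(rater1) == len(rater2)
    n = len(rater1)
    # Observed agreement: proportion of items both raters labelled identically.
    p_o = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Expected chance agreement, from each rater's marginal label frequencies.
    c1, c2 = Counter(rater1), Counter(rater2)
    p_e = sum(c1[k] * c2[k] for k in c1) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical band scores from two raters on nine diagnostic essays.
r1 = ["B1", "B2", "B1", "A2", "B2", "B1", "A2", "B2", "B1"]
r2 = ["B1", "B2", "B1", "A2", "B1", "B1", "A2", "B2", "B1"]
print(round(cohens_kappa(r1, r2), 2))  # → 0.82
```

A kappa around 0.90, as reported in the study, indicates very strong agreement beyond chance; values are customarily interpreted against benchmarks such as Landis and Koch's scale.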
Having explained the purpose of the questionnaire and the research, the researcher delivered the questionnaire, which helped to create the student samples. Questions 1 to 10 aimed to determine the 30 participants for the study while questions 11 to 16 were intended to place the participants into the most appropriate groups. The results of the questionnaire, delivered to 78 students, revealed that 42 students were junior and 36 were senior (Q5). Eleven students were not Turkish citizens (Q4) and 18 had a mother tongue other than Turkish (Q3). Seven students had a score under 320 while five had a score over 350 (Q6). Five students reported taking supplementary English courses outside school hours (Q8). Twelve of the sixty students whose mother tongue was Turkish indicated that they also spoke Kurdish, while five spoke a local dialect of Arabic (Q7). Three students were reluctant to take part in peer review (Q9). Finally, one student made excuses not to take part in the study. After a process of elimination in line with the responses to questions 1 to 10, the thirty most suitable students were identified. The eleventh question was of great importance for categorizing the student samples: seven of the 30 indicated that they edited their own writing when needed, while 3 marked option D (I have it edited through a PC-based programme). These students (10 in total) were placed into the self-editing group (Group A, henceforth SEG). Twelve students marked option C (I ask my teacher to edit it) and were placed into the teacher correction group (henceforth TCG), while 6 marked option B (I ask my friend to edit it) and were allocated to the peer review group (Group B, henceforth PRG). Only 2 students marked I do nothing.
Questions 12, 13, and 14 stepped in for the creation of the peer review and teacher correction groups: the 2 students who had marked I do nothing in the 11th question stated that they never trusted peer review (Q12) and were therefore placed into TCG (Group C). Hence, PRG had 6 students while TCG had 14. Two students in TCG indicated that they trusted their friends' comments on their writing (Q12), and another 2 responded positively to the 13th question. These students were considered more suitable for PRG and were accordingly moved into that group. In this way, student elimination and group placement were completed. The last instrument, the diagnostic essay, was used to calibrate the groups; comparing the results showed that two students in SEG and one student in PRG outperformed the other students. Because their language ability might tip the scale in favour of their groups and damage the calibration, the researcher moved one of the two outperforming students in SEG to TCG; thus, each group had one outperforming student, and the grouping was complete.

Procedure
This research had two phases: the first lasted 4 weeks and the second 10 weeks. Students were required to participate actively in all sessions throughout the 14-week period (two students missed the sessions in the 8th week but made them up in the following weeks). All the courses at the department ended before 3 p.m.; therefore, the meeting time with the students was set at 3 p.m. throughout the research. In phase one, we convened three times a week (on fixed days: Mon, Tue, and Wed) and each meeting lasted 60 minutes. The four-week period was reserved for instruction in academic writing conventions and editing skills. To ensure the effectiveness of the training period, the teacher and students collectively and successfully edited three writing samples. Form-focused instruction is a significant way of teaching learners in SLA (Ellis, 2002); therefore, it was concluded that, having received form-focused instruction, the students had learnt the components of the editing process and what to pay attention to regarding academic writing conventions. The students were not instructed on content analysis or organisation because the study focused on reporting only the editing of language errors.
After phase one, students were introduced to the editing template because they may not have been well informed on how to use error codes and apply them properly. Each student was provided with 10 copies of the editing template because they would be asked to edit 10 writing samples. After making certain that the students had learnt how to apply the codes, the researcher demonstrated how to use the editing template in front of them. Then phase two, which lasted 10 weeks, was initiated. Unlike phase one, phase two required students to convene twice a week (Mon and Tue); writing prompts were delivered on Mondays, and editing templates were filled in on Tuesdays. Incidentally, not all students were invited to both the Monday and Tuesday activities; students in TCG were exempted from Tuesday activities because they had no editing duty. To clarify: students in Group A filled in their own editing templates (hence the name 'self-editing group', SEG); Group B, whose members were willing to be peer-reviewed (hence 'peer-review group', PRG), was edited by Group C; and Group C, whose members did not trust peer review (hence 'teacher correction group', TCG), was edited by the researcher (see Figure 1 for detail).

Figure 1. Group names and the distribution of tasks.
There was no time constraint for either activity, so as not to cause any anxiety; specifically, the Monday activities, which required students to write an essay of 500-550 words, took an average of 57 minutes, while the Tuesday activities averaged 32 minutes. Meanwhile, five students missed Monday activities once and three students missed them twice; similarly, four students missed Tuesday activities. Those who were absent were asked to complete the activities later in the same week, and all handed them in as requested. One caveat regarding the correction process is that although SEG and PRG filled in the editing templates themselves, the researcher checked whether they had recorded the errors and completed the templates accurately. Accordingly, the final data for the analyses passed through the researcher's examination.

Self-Editing Group: filled in their own editing templates.

Peer-review Group: editing templates were filled in by Group C.

Teacher-correction Group: editing templates were filled in by the researcher.

Analysis
The first analysis aimed to determine whether the data had a normal distribution. To this end, skewness and kurtosis tests in SPSS were used to check the normality of the data distribution. The results showed that the data for all groups were not normally distributed; therefore, non-parametric tests were used for the analyses: Wilcoxon, Kruskal-Wallis, and Mann-Whitney U tests. At the beginning of the main statistical calculations, a Kruskal-Wallis test was employed to determine whether the distribution of the groups was homogeneous or heterogeneous. Then, Wilcoxon tests were used to determine whether each group showed a statistically significant difference between the start and the end of the study. For all calculations, the average of weeks 1 and 2 was taken as the pre-test (variable 1) while the average of weeks 9 and 10 was taken as the post-test (variable 2). A Kruskal-Wallis test was then employed again to reveal whether there was a statistically significant difference among the groups at the end of the study. Following the Kruskal-Wallis test, Mann-Whitney U tests were employed to compare the independent groups pairwise, which allowed us to detect between-group differences.
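The analysis pipeline described above can be sketched as follows. This is a minimal illustration with scipy on invented per-student error counts (all numbers are placeholders, not the study's data; Shapiro-Wilk stands in for the skewness/kurtosis screen run in SPSS):

```python
import numpy as np
from scipy import stats

# Hypothetical per-student error counts: pre-test (average of weeks 1-2)
# and post-test (average of weeks 9-10) for three groups of 10 students.
rng = np.random.default_rng(0)
pre = {g: rng.integers(5, 12, size=10).astype(float)
       for g in ("SEG", "PRG", "TCG")}
drop = {"SEG": 3.0, "PRG": 2.5, "TCG": 4.5}
post = {g: np.maximum(pre[g] - drop[g] + rng.normal(0, 0.5, 10), 0)
        for g in pre}

# 1) Normality screen: a low p-value suggests non-normal data,
#    motivating the non-parametric tests below.
for g in pre:
    _, p_norm = stats.shapiro(pre[g])

# 2) Kruskal-Wallis on the pre-tests: were the groups homogeneous at the start?
h_pre, p_pre = stats.kruskal(pre["SEG"], pre["PRG"], pre["TCG"])

# 3) Wilcoxon signed-rank per group: did errors drop from pre- to post-test?
for g in pre:
    w, p_within = stats.wilcoxon(pre[g], post[g])

# 4) Kruskal-Wallis on the post-tests, then pairwise Mann-Whitney U follow-ups.
h_post, p_post = stats.kruskal(post["SEG"], post["PRG"], post["TCG"])
u_st, p_st = stats.mannwhitneyu(post["SEG"], post["TCG"])
```

The dictionary keys mirror the study's group names only for readability; with real data, each array would hold one student's averaged error count per entry.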

Results
To be more informative and suggestive, the test results of each group were averaged and are presented in Figure 2.


Figure 2. Average of each test
The average number of errors in each group appears to diminish across the tests. Within this linear decrease, the highest average belongs to SEG in test 1 while the lowest belongs to TCG in the last test. To confirm the homogeneity of the groups, a Kruskal-Wallis test was used, and the results showed no statistically significant difference among SEG, PRG, and TCG, with mean ranks of 16.95, 13.85, and 15.70, respectively. This means that the distribution of the groups was homogeneous (χ²(2) = .641, p = .726).

To determine whether each group showed a statistically significant improvement in language errors between the start and the end of the study, the average of weeks 1 and 2 constituted the pre-test figures while the average of weeks 9 and 10 constituted the post-test figures. A Wilcoxon signed-rank test showed that the 10-week self-editing treatment elicited a statistically significant reduction in the language errors students made in their essays (Z = -2.812, p = 0.005); the median error score was 7.8 for the pre-test and 4.8 for the post-test. The same test was applied to PRG and similar results were obtained: the Wilcoxon signed-rank test revealed that the 10-week peer-review treatment yielded a statistically significant reduction in language errors in students' essays (Z = -2.677, p = 0.007). Last, a Wilcoxon signed-rank test showed that the 10-week teacher correction treatment produced a statistically significant improvement in students' essay writing by decreasing total language errors (Z = -2.816, p = 0.005).
To reveal whether the groups differed significantly at the end of the study, a Kruskal-Wallis H test was used, and a statistically significant difference was found between the groups (χ²(2) = 11.321, p = .003), with mean rank language error scores of 16.90 for SEG, 21.05 for PRG, and 8.55 for TCG. Afterwards, Mann-Whitney U tests compared the groups pairwise to locate the between-group differences. SEG and PRG had similar results, so no statistically significant difference was detected between them (U = 33.500, p = .185). On the other hand, the results showed a statistically significant difference between SEG and TCG (U = 19.500, p = .016). Similarly, a statistically significant difference was revealed between the last pair, PRG and TCG (U = 11.000, p = .002).
The content was also analysed to determine which language errors were the most common in student writing. As in the statistical analyses, the data from weeks 1 and 2 were analysed as pre-tests, and weeks 9 and 10 as post-tests; in addition, the total number of language errors was calculated, and the results showed that students made 1741 language errors in total. SEG had the highest number of errors in the pre-test while PRG had the lowest. Regarding the post-test, TCG had the lowest number of errors whereas PRG had the highest (see Table 1 for more detail).

To provide detailed information on the content of the errors: the most common error pattern in the pre-test data was spelling, followed by punctuation and article errors. These error patterns were checked in the post-test data and a significant drop was observed (see Table 2 for more detail). The analyses revealed that students were also prone to making word-based errors such as subject/verb agreement (S/V), awkward word combinations (Coll), preposition (Prep), pronoun/noun agreement, tense (T), and wrong word form (WF) errors. In addition, awkward sentences caused by needless sentence prolonging were widespread in the data. Given the findings, it appears that students noticed their errors and corrected the most common error patterns in the post-tests.
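Tallying error frequencies from the filled editing templates, as described above, amounts to a simple frequency count over the error codes. A minimal sketch, using an invented stream of codes (the labels follow the study's template style, but the data are made up):

```python
from collections import Counter

# Hypothetical stream of error codes harvested from filled editing templates.
# Sp = spelling, P = punctuation, Art = article, S/V = subject/verb agreement,
# Prep = preposition, T = tense, WF = word form.
codes = ["Sp", "P", "Sp", "Art", "S/V", "Sp", "Prep",
         "P", "Art", "T", "Sp", "WF", "P"]

freq = Counter(codes)
for code, count in freq.most_common(3):
    print(code, count)
# → Sp 4, P 3, Art 2 for this invented sample
```

With one such list per essay and per week, summing the Counters for weeks 1-2 and weeks 9-10 yields the pre- and post-test frequency tables directly.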

Discussion and Conclusion
This study had a two-fold purpose: to investigate three EC methods and reveal which was more effective in reducing language errors in students' essays, and to determine the most frequent language errors found in those essays. For the first purpose, the experimental results showed that all groups lowered the number of language errors in their essay writing; in other words, compared to their performance before the study, SEG, PRG, and TCG all managed to reduce their errors. Similar to the present study, Diab (2010; 2011) found that self-editing was more effective. Although the statistical results did not reveal a significant difference between SEG and PRG, self-editing appears more successful when the percentage of error elimination is calculated: SEG reduced their errors by 39% whereas PRG dropped by 30%, a result similar to Khaki and Biria's (2016). The advantage of self-editing over peer-reviewing may be that SEG engaged more attentively in the EC procedure, and hence edited more carefully, simply because the edited text was their own; in other words, this subtle difference can partly be attributed to the amount of attention invested. However, some studies (Abadikhah & Yasami, 2014; Winarto, 2018) concluded that peer-review and self-editing strategies improved students' writing equally and reported no superiority of one over the other. Unlike studies that tested merely the linguistic effect of these techniques, Warsono (2017), while confirming the efficiency of self-editing, underlined the importance of motivation during the EC course and concluded that students in the PRG performed better when they were highly motivated, which shows that student readiness and willingness should not be overlooked while forming a PRG (accordingly, this study sought consent and willingness from students before assigning peer-review).
Eksi (2012) did not find a statistically significant difference between TCG and PRG and endorsed the performance of both groups. Similarly, this study concluded that all three techniques have a significant impact on increasing accuracy in students' writing; however, the post-test results revealed statistically significant differences between the groups. In other words, TCG statistically outperformed SEG and PRG, which may be because students valued the teacher's comments; a feeling of trust is essential, particularly when the comments concern one's own errors. In this direction, Jodlowski (2000) investigated the treatment effect of teacher correction and peer-review on students' self-editing skills and found that teacher correction improved students' self-editing skills better than peer-review, depending on the maintenance duration. On the other hand, not all studies confirmed the effectiveness of teacher correction; for example, Ganji (2009) found that teacher correction was no better than peer-review. However, Ganji's sample consisted of IELTS candidates with high self-motivation, which plays a significant role in EC (Warsono, 2017).
Concerning the second research purpose, it was found that students were most prone to errors of spelling, punctuation, and article use. The majority of the spelling errors stemmed from the incorrect use of vowels or the wrong order of letters. Punctuation was also a widespread error source in students' writing, and many studies have investigated it with the aim of reducing punctuation errors and improving writing quality. For example, Alamin and Ahmed (2012) analysed the technical writing of college students and attributed punctuation errors to learners' failure to understand fundamental English grammar.
Although this study found all three methods efficient, self-editing deserves particular attention because it fosters learner independence (Sangeetha, 2020) and offers the writer time that would otherwise be lost searching for a peer. Also, new technologies aiming to reduce inter-dependence in EC in second language writing (see Li & Hegelheimer, 2013; Al-Wasy & Mahdi, 2016; Hojeij & Hurley, 2017) have been concentrating on self-editing because its advantages outweigh those of other methods. However, it should be taken into consideration that self-editing necessitates competence in the grammar of the target language; therefore, if this method is intended for university students, it should be supported with computer-based programmes or by the lecturer (Kasule & Lunga, 2010). University students may benefit from research (e.g. Li & Zhous, 2005; Hendrix, 2013) that aims to develop writing skills through self-editing, yet they may still need the simultaneous contribution of the instructor because the process requires a good deal of expertise in the language.
On the other hand, this study still recommends peer-review and teacher correction as effective EC methods. Depending on their constraints and resources, lecturers may decide which EC method to use, since all three methods significantly reduced errors, although teacher correction proved statistically superior. Furthermore, lecturers are advised to pay special attention to the three most problematic language errors this study found (see Table 2) because they make up a large part of the total. This study examined only language errors in writing; however, writing involves other dimensions such as content and organisation, and further studies may investigate these to improve students' writing skills. Also, the error patterns found in this study were not subjected to statistical analyses; researchers are encouraged to conduct such analyses to reveal pattern-based results, for instance, whether the number of S/V agreement errors changed statistically between pre- and post-tests. Finally, this research was conducted in the first cycle, and learners at other levels of education may also provide valuable results on the same issue.