Abstract
The use of new technologies in research into interpreting quality has produced new tools that are expected to increase the number of subjects taking part in survey studies. The growth in the number of Internet users has led to a rise in online questionnaires, mainly because of the time savings they offer. This paper compares the response rates obtained using three different ways of presenting a questionnaire about quality expectations in interpreting to subjects: in person, via an invitation to take part in an online questionnaire, and by including the questionnaire within the text of an email to the subjects. The results of this study show that subjects tend to participate more when the questionnaire is administered in person. In general, male participation was higher than female participation, but no significant difference was observed with respect to the method of administration. Regarding the particular field of knowledge, the group of subjects working in a scientific and technological area was the only one in which the response rate for the paper questionnaire administered in person was not notably higher than for the other methods.
Keywords: survey administration, level of participation, expectations, comparison of methods
1. Introduction
Research into the quality of spoken-language interpreting has been approached from two main perspectives: quality as process and quality as product. In research into quality as product (for example, Bühler 1986; Kurz 1989; 1993; Gile 1990), one of the fundamental tools is the questionnaire, to which other techniques such as interviews (Vuorikoski 1993; 1998; Mack and Cattaruzza 1995) and discussion groups (Collados Aís 2009) have been added. The advent of new technologies such as the Internet provides researchers with a new, highly practical means of distributing questionnaires (Chiaro and Nocella 2004; Zwischenberger and Pöchhacker 2010). Such methods not only produce savings in terms of time and money, but also offer researchers the chance to enlarge both the sample groups and the scope of their study, because in principle the Web is not limited by frontiers.
The recruitment of people to take part in survey studies is an ongoing problem for researchers on interpreting quality. Finding subjects willing to answer a questionnaire is a major challenge when they receive no compensation. The Internet allows researchers to increase the number of subjects in their samples considerably because they can reach a much larger target audience, but will the response rate be higher than for a questionnaire conducted in the traditional way? Chiaro and Nocella (2004: 284) suggested that the normal response rate for traditional surveys conducted in person, which they put at 10-15%, was doubled when the online distribution method was used, since in their opinion less effort was required to complete the questionnaire on the computer: “A few clicks of the mouse while the recipient’s mail is open does not involve the effort of filling in and above all posting traditional hard copy questionnaire.” (Chiaro and Nocella 2004: 285). I am unaware of any comparative research in interpreting studies that could confirm or refute the hypothesis put forward by these researchers, and it would be difficult to compare the response rates obtained in previously published research, as many papers do not provide this information. I therefore decided to conduct this study in order to compare the response rate obtained from subjects who received a questionnaire via the Internet with that obtained from subjects who received it in person. I also decided to test two forms of online distribution of the questionnaire: first, an invitation to take part in a questionnaire via a link and, second, a questionnaire presented within the invitation email itself. My aim was to discover whether or not response rates increase when less effort is required.
1.1. Methods of administration in survey research
The enormous increase in Internet use and computer-mediated communication over the last twenty years has been reflected in survey research. The adoption of online surveys has exposed scholars to new challenges in terms of survey methodology and techniques (Andrews, Nonnecke et al. 2003). However, the ability to conduct large-scale studies, to reach individuals in distant locations and to automate data collection has increased the popularity of online surveys among researchers from different fields.
Two key advantages are taken into account when opting for electronic or web-based surveys: time and cost savings. Previous work has confirmed that online questionnaires may obtain results similar to those of face-to-face or postal surveys (for example, Yun and Trumbo 2000), but Wright (2005) suggested using both online and traditional means in order to analyze whether the method of administration affects respondents’ answers. This effect has been analyzed as part of a wider research project (García Becerra 2012); in this paper, I will focus my attention on an aspect that has also attracted the interest of other scholars: the effect of the method of administration on the response rate.
Some studies have shown that online surveys obtain equal or better levels of participation than traditional ones (for example, Mehta and Sivadas 1995; Bachmann, Elfrink et al. 2000), whereas others have found that they achieve lower response rates (for example, Schuldt and Totten 1994; Tse 1995; McDonald and Adam 2003). According to Fricker and Schonlau (2002: 354), there is little evidence that Internet-based surveys increase response rates, and the few cases that have attained higher response rates have been carried out either in university-based populations or in small, specialized ones. They suggest that university staff and university students tend to be more disposed to respond to an online survey than a random sample of the general population (Fricker and Schonlau 2002: 350). This is one of the reasons why I decided to compare, among university staff members, the level of participation in a survey depending on its method of administration. There were other reasons for this decision as well: the staff directories were available on the web, which eased the task of contacting the members of the sample, and the results could be compared with those of previous studies in the field.
In addition, the design of the questionnaire might have subtle or dramatic effects on the response rate as participants are required to make cognitive contributions to the process of data collection (Bowling 2005). For example, written surveys admit longer and more complex questions than interviews, and questionnaires administered in person allow greater flexibility than self-administered ones. The choice of a method of administration also has a direct impact on the format of the questions. Electronic surveys have distinctive technological, demographic and response rate characteristics that determine their design (Andrews, Nonnecke et al. 2003). Thus, the specific features of the target sample and the method selected for distributing the questionnaire should be taken into account when designing the survey.
1.2. Methods of administration in expectations surveys and response rate
After the seminal work of Bühler (1986) analyzing expectations, many subsequent research studies have used questionnaires. In the late 1980s and the 1990s, survey research into interpreting quality was undertaken by means of face-to-face and telephone interviews. Only in the mid-2000s were online questionnaires explored as a survey mode.
As regards the level of participation, Bühler (1986) surveyed the importance that 41 members of the International Association of Conference Interpreters (AIIC) and 6 members of the Committee on Admissions and Language Classification (CACL) attributed to 16 evaluation parameters that she had proposed. Kurz (1989, 1993) employed a questionnaire based on Bühler’s list of parameters to explore the expectations of real users regarding interpreting quality (47 doctors, 29 engineers and 48 members of the Council of Europe) and in 1995, together with Franz Pöchhacker, Kurz compared these results with those of a sample group made up of 19 representatives from Austrian and German TV companies. Lidia Meak (1990) received responses from 10 doctors from different specialist fields, and Stefano Marrone (1993), in a study that combined both expectations and evaluation, managed to obtain 87 questionnaire responses from about 150 people attending a Law lecture (a response rate of about 58%).
Anna-Riitta Vuorikoski (1993; 1998) conducted her research on 480 Finnish delegates attending five different seminars and received a response from 173 (36.04%). In addition to this questionnaire, in which she included questions relating to both expectations and evaluation, she added another technique, the telephone interview, which she conducted a few weeks after distributing the questionnaire. Using a similar methodology, Mack and Cattaruzza (1995) undertook a study of expectations via questionnaires and interviews in person and on the telephone. To do this they distributed a total of 161 questionnaires at different types of multilingual meetings that took place in Italy and obtained a response from 75 subjects (46.58%).
Kopczyński (1994) conducted a survey of 57 Polish users from different fields (20 subjects from Humanities, 23 from Science and Technology, and 14 diplomats) about their expectations regarding simultaneous interpretation. At an international level, Moser (1995) presented the results of a study commissioned by AIIC to find out more about the expectations of different user groups. For this purpose, 94 AIIC interpreters took part in 201 interviews at 84 communicative events, during which a specially designed questionnaire was distributed.
Ángela Collados Aís (1998) carried out research on interpretation expectations in which she obtained a response from 42 of the 59 subjects she contacted (71.19%), all of whom, at some time in their lives, had been users of an interpretation service. This questionnaire was also answered by 15 interpreters. Using the same work methodology, Pradas Macías (2003) interviewed 15 interpreters and 43 of the 90 members of the Faculty of Law and the Faculty of Political Sciences of the University of Granada who met the criteria for participation in the study (having previously used simultaneous interpreting services and not having taken part in the study by Collados Aís), a response rate of 47.78%; and Collados Aís, Pradas Macías et al. (2007) obtained a response from 197 teachers from four Spanish universities. In addition, Garzone (2003) included a brief expectations questionnaire in a study in which subjects were asked about four of Bühler’s quality parameters. This was answered by 16 subjects, eight doctors and eight from other professional fields (mainly engineers), all of whom had experience as users of simultaneous interpretation.
Chiaro and Nocella (2004) were pioneers in using the Internet to distribute questionnaires about the quality of interpretation. In order to test the efficacy of this method, they sent out around 1,000 invitations to professional interpreters from all over the world and received replies from 286 (28.60%). A more recent study by Zwischenberger and Pöchhacker (2010) of a sample group made up of members of the AIIC focused on the analysis of expectations and opinions of the role of the interpreter. Of the 2,523 invitations to take part in the survey that were sent out, replies were received from 704 (27.90%).
Table 1 presents a summary of the studies referred to above in which the number of subjects surveyed was mentioned and the response rate could be obtained.
In these earlier studies, it would seem that the response rate, when it was possible to calculate, was higher than the 10-15% cited as normal by Chiaro and Nocella (2004: 284). Although it is true that in most cases the sample sizes are quite small, it is worth pointing out that Vuorikoski (1993; 1998), Moser (1995) and Collados Aís, Pradas Macías et al. (2007) achieved quite large response rates without using the Internet. However, in order to make a comparison, we must take into account not only the method by which the survey was distributed (via the Internet or a printed questionnaire distributed in person), but also aspects such as: the time taken to distribute the questionnaires, which ranged from the duration of a conference (Marrone 1993) to the eight-month period used by Mack and Cattaruzza (1995) for their research; the type of subjects to whom the questionnaire was addressed (interpreters, employers, users and potential users); the number of researchers taking part in the project, which ranged from a lone researcher (Bühler 1986, for example) to the 94 interpreters who took part in the project presented by Moser (1995); and whether the research was supported or promoted by an institution.
2. The study
2.1. Objectives and hypotheses
At the end of section 1.1 of this paper I set out two objectives: 1) to compare the response rates obtained when different methods are used to administer questionnaires and 2) to discover whether in the university-based samples the response rate is higher for online surveys.
I start from the hypothesis that the use of the Internet does not necessarily increase the level of participation in a survey.
2.2. Methodology
In order to compare the response rates obtained from different ways of distributing questionnaires, three methods of administration were used for an expectations survey: 1) in person, 2) an online questionnaire to which subjects were invited by email using the Lime Survey programme, and 3) a questionnaire embedded within the text of an email sent to the subjects using the Google Docs service. I divided the online distribution of the questionnaire into two different methods to see whether including the questionnaire directly inside the email would encourage more subjects to participate, as they would not have to visit any other website or reply to the email in order to answer the questionnaire.
2.2.1. Questionnaire
The questionnaire (see Appendix) was aimed at exploring subjects’ expectations regarding interpreting quality in three respects: aspects relating to form, aspects relating to content and aspects relating to fidelity. The subjects had to assess the importance of each of these aspects on a scale of 1 to 7, in which 1 meant “not at all important” and 7 “very important.” They also had to provide sociodemographic information: gender, age, field of knowledge and experience as users of interpreting services.
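For illustration only, the structure just described might be encoded as follows for analysis. This sketch is not part of the original instrument; the field names are hypothetical placeholders, while the category labels are taken from the questionnaire in the Appendix.

```python
# Hypothetical encoding of the questionnaire structure described above;
# field names are illustrative, category labels follow the Appendix.
LIKERT_MIN, LIKERT_MAX = 1, 7  # 1 = "not at all important", 7 = "very important"

QUALITY_ASPECTS = ["form", "content", "fidelity"]  # the three rated areas

SOCIODEMOGRAPHIC_FIELDS = {
    "gender": ["male", "female"],
    "age_group": ["under 30", "30-45", "46-60", "over 60"],
    "field_of_knowledge": None,  # free-text answer
    "interpreting_experience": ["conferences and congresses", "media", "other"],
}

def is_valid_rating(value: int) -> bool:
    """Return True if a rating falls on the seven-point scale."""
    return LIKERT_MIN <= value <= LIKERT_MAX
```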
2.2.2. Subjects
The sample group was selected at random from the teaching and research staff of three faculties at the University of Granada: Pharmacy, Philosophy and Letters, and Psychology. The study was conducted in these centres because it was considered interesting to see whether the particular field of knowledge to which the subjects belonged (Science and Technology, Humanities and Social Sciences) affected their expectations regarding interpreting quality or their level of participation in the study. I also wanted to explore whether subjects from the same field of knowledge tend to be more disposed to take part in the survey when one specific method of administration is used.
From an alphabetical list of the personnel of these faculties, 90 subjects from each faculty were selected by drawing lots. These subjects were then divided, also at random, into three groups, one for each of the methods of administration. This meant a total sample of 90 subjects for each group: in person, Lime Survey and Google Docs.
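As a minimal sketch of this two-stage procedure (the study describes drawing lots, not necessarily a computerized draw, so this is illustrative only), the selection and assignment could be reproduced as follows, with a fabricated roster standing in for a real faculty directory:

```python
import random

METHODS = ["in person", "Lime Survey", "Google Docs"]

def sample_and_assign(staff_list, n=90, seed=None):
    """Draw n subjects from a faculty staff list, then split them at random
    into equal groups, one per administration method."""
    rng = random.Random(seed)
    drawn = rng.sample(staff_list, n)      # first stage: drawing lots
    rng.shuffle(drawn)                     # second stage: random assignment
    size = n // len(METHODS)               # 30 subjects per method per faculty
    return {m: drawn[i * size:(i + 1) * size] for i, m in enumerate(METHODS)}

# Fabricated roster standing in for a real faculty directory:
pharmacy_staff = [f"staff_{i:03d}" for i in range(300)]
groups = sample_and_assign(pharmacy_staff, seed=42)
assert all(len(g) == 30 for g in groups.values())
```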
In total, there were 270 subjects in the sample group: 147 men and 123 women. By discipline, the groups from the Faculty of Pharmacy and the Faculty of Philosophy and Letters had more men than women (46/44 and 58/32, respectively); Psychology was the only group in which women were in the majority (43/47). The distribution of the sexes according to the method used to administer the questionnaire and the subjects’ academic field was as follows:
2.2.3. Procedure
Once the sample group had been selected, I contacted the subjects to begin the research. The Lime Survey programme has a tool to manage the sending of invitations (which include the link to the survey) and of reminders. Both the invitation and the reminder messages can be personalized. It also includes an opt-out link that any subject who does not want to take part in the study can use to be removed from the list and not be contacted again. In the case of Google Docs, the management of participants is very limited. This application only allows one to embed the questionnaire within the body of a message and to include a link in case the subjects experience problems with viewing or operation; all other tasks must be done manually. In both cases, an initial contact email, containing the link or the embedded questionnaire, was sent to the subjects inviting them to take part in the study. This was followed later by two reminders.
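To make this workflow concrete, here is a minimal sketch of such a contact routine: one invitation, followed by reminder rounds for non-respondents. This is not the Lime Survey or Google Docs tooling itself; the server, addresses and message text are invented placeholders.

```python
import smtplib
from email.message import EmailMessage

def send_invitation(smtp_host, sender, recipient, survey_link):
    """Send one personalized invitation containing the survey link."""
    msg = EmailMessage()
    msg["Subject"] = "Invitation to take part in a survey on interpreting quality"
    msg["From"] = sender
    msg["To"] = recipient
    msg.set_content(
        "You have been selected to take part in a research study.\n"
        f"Questionnaire: {survey_link}\n"
        "If you prefer not to be contacted again, please reply to this message."
    )
    with smtplib.SMTP(smtp_host) as server:
        server.send_message(msg)

def contact_round(smtp_host, sender, pending, survey_link):
    """Send one round of messages to every subject who has not yet responded;
    call again (up to twice) for the reminder rounds described above."""
    for recipient in pending:
        send_invitation(smtp_host, sender, recipient, survey_link)
```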
For administration in person, a similar system was used in order to guarantee the comparability of the results. Rather than visiting this subject group directly at their offices, I sent an introductory contact email explaining that they had been selected to take part in a research study and that, if they agreed to participate, the researcher would visit them so that they could answer a questionnaire. The potential participants were asked to indicate a suitable time for the study visit to take place. If the subjects did not answer this first email, they were sent a reminder, which was repeated one week later if an answer was still not received. Appointments were arranged with those subjects who replied to the email and I visited the faculties to enable the subjects to complete the questionnaire.
The study lasted for a total of five weeks. It began on November 17, 2011 and came to an end at the beginning of the Christmas holidays for the university staff on December 22, 2011, at which point the online versions were closed and no further questionnaires were distributed in person.
3. Results
Of the total sample (270 subjects), only 44 subjects across the three different methods (a response rate of 16.30%) completed the questionnaire. Eleven questionnaires were lost: three subjects did not return the questionnaire after asking to keep a copy and answer it later when they were less busy, and eight subjects began the Lime Survey questionnaire but failed to complete it. After various attempts, I did not manage to organize an appointment with six subjects who answered the contact email, and a total of 17 subjects declined the invitation to take part in the research. A total of 192 subjects (71.11%) did not answer any of the emails I sent them.
It is important to make clear that the Google Docs option did not work as I had expected: although this survey instrument had been piloted without incident, a technical problem occurred that prevented subjects from answering the questionnaire directly from the email. Instead, they had to use a link attached to the message that was meant to help people having problems viewing the survey. Despite the fact that this option was simpler than the Lime Survey option and the subjects could see the questionnaire inside the message (although they could not actually answer it), this method obtained the lowest response rate. I am therefore unable to confirm or refute the hypothesis proposed above that the ability to answer the questionnaire from within an email, without having to do anything else, would encourage subjects to take part.
The highest response rate by method of administration was that for distribution in person, with 26 subjects (28.89%), followed by Lime Survey, with 13 subjects (15.48%), and Google Docs, with 5 subjects (5.56%). When the two online methods are added together, the average response was lower than that for the paper questionnaire: 18 subjects (10%).
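For readers who wish to work with these figures, the sketch below reproduces the reported counts and shows how an association between method and response might be tested. The chi-square test is illustrative only, as the paper does not report it; each group is given its nominal size of 90 subjects (note that the reported 15.48% for Lime Survey corresponds to 13/84 rather than 13/90), and scipy is assumed to be available.

```python
from scipy.stats import chi2_contingency

# Completed questionnaires per administration method, as reported above.
responded = {"in person": 26, "Lime Survey": 13, "Google Docs": 5}
GROUP_SIZE = 90  # nominal subjects assigned to each method (section 2.2.2)

total = sum(responded.values())
print(f"overall response rate: {total}/270 = {total / 270:.2%}")  # 16.30%

# Illustrative chi-square test of independence between administration
# method and response (responded vs. did not respond); not reported in
# the original study.
table = [[n, GROUP_SIZE - n] for n in responded.values()]
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4g}")
```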
Figure 1 shows the response rates according to the administration method used and the general level of participation of the sample group.
As far as participation by gender is concerned, in the case of distribution in person, 15 men and 11 women took part in the study, 30% and 27.5% respectively of the men and women I tried to interview using this method. Nine men (18.37%) and four women (9.76%) responded to the Lime Survey option, and three men (6.38%) and two women (4.65%) responded to the Google Docs option. In total, 27 men and 17 women took part in the study, 18.37% and 13.82%, respectively, of those contacted. These results show that in all three methods more men took part than women.
By age, the most common group was between 30 and 45 (19 subjects) followed by those between 46 and 60 (17 subjects). A total of five subjects in the over-60 age-group responded while only three of the under-30s did. These results according to age and the method of administration are set out in Table 4.
As regards the different specialist fields of the subjects, the most participative were those from the Faculty of Philosophy and Letters, with 17 subjects, and the Faculty of Psychology, with 13 subjects. However, when these results are broken down according to the method used to administer the questionnaire, there are some variations: Philosophy and Letters heads the list for participation using distribution in person (11) and Google Docs (3), but Pharmacy had the most participants in the Lime Survey option (6).
4. Discussion
The overall response rate obtained (16.30%) is closer to the rate of 10-15% suggested by Chiaro and Nocella (2004: 284) for traditional questionnaires than to any of the rates I managed to calculate for previous studies in this field (see Table 1). This may be because many of these studies were conducted with subjects who were users of interpreting services (Marrone 1993; Vuorikoski 1993; 1998; Kopczyński 1994; Mack and Cattaruzza 1995) within the context of a communicative event in which they had just been in contact with interpretation, which may have been an incentive to take part in the survey. In the case of studies of expectations in which interpreters took part (Chiaro and Nocella 2004; Zwischenberger and Pöchhacker 2010), the fact that the object being studied was directly related to the subjects’ profession probably influenced their decision to participate. In my survey the subjects were members of the teaching and research staff of three faculties of the University of Granada, and perhaps the topic they were asked to consider (the quality of interpreting) was somewhat abstract or out of context, given that they had not been listening to an interpretation prior to the survey. It is also possible that many of them were not and had never been users of this kind of service (some stated this as the reason why they did not wish to take part in the study), either because they only use Spanish in their work or because they are comfortable using English as their language of scientific communication and do not require an interpreter. Although the studies by Collados Aís (1998), Pradas Macías (2003) and Collados Aís, Pradas Macías et al. (2007) were performed with subjects with a similar profile and achieved acceptable levels of participation, they administered the questionnaire in a more direct way than in the present study.
It is important to point out that I am not aware of any other research in which an online survey was used to find out users’ interpretation expectations. One of the reasons why this kind of study has not been conducted before could be the difficulty of finding out who these users are and obtaining their email addresses. This is another reason why, in this study, I decided to use teaching and research staff from the University of Granada as a sample group, as I imagined that they would have some experience of interpreting, given that their work often has an international dimension involving, for example, attending scientific meetings. In order to find out whether this was true and whether there are any differences between disciplines, members of staff from three faculties in clearly different branches of knowledge (Science and Technology, Humanities and Social Sciences) were invited, and a question was included in the survey about the particular circumstances in which they had used interpreting services. Half of the sample group said that their only experience of interpreting was at congresses or conferences; 18.8% said that their only contact with interpreting was through the media (TV and radio); and, notably, over a quarter of those in the sample group had experience of interpreting in both contexts. Although a high percentage of the sample group stated that they had used interpreting services at scientific meetings, they acknowledged that this kind of service is not normally provided and that they use English when communicating with other scientists. This appears to be commonplace in all the specialist fields surveyed.
As for the level of participation according to the method used for administering the survey, at least in this sample, participation was higher when the questionnaire was distributed in person (28.89%). This contradicts the hypothesis put forward by Chiaro and Nocella (2004) that participation doubles when the questionnaire is distributed via the Internet. One reason why this rate is more than 13 percentage points higher than that achieved by the Lime Survey method (15.48%) and 23 points higher than the Google Docs method (5.56%) could be that when the questionnaire is distributed in person, a friendlier contact is established, which could encourage potential subjects to take part. Another possible factor is that questionnaires on paper may be more flexible than those designed for online distribution, so subjects are less reluctant to fill them in. Lastly, the fact that the Google Docs option did not work correctly due to a technical problem could be another reason why this option obtained such a low response rate. Nor should we rule out the possibility that some members of certain age-groups are still reluctant to use new technologies and that perhaps the explanations set out in the contact email were not sufficient to convince them to take part in the survey.
As a whole, more men (27 of the 44 respondents) took part in the survey than women. This predominance of male participation occurred in all three methods of administering the questionnaire and in the three faculties surveyed. It could be a result of there being more men in the randomly selected sample group (147/123). By age-group, the most participative subjects were those between 30 and 45 years old and those between 46 and 60, although these are the most common age-groups within the survey population, because most of the university teaching and research staff are between 30 and 60 years old.
In general, there was little difference between the participation rates of the different faculties. However, if we break down the figures according to the method of administration, there are striking differences between the various methods within the same faculty. In the case of the Faculty of Pharmacy, the number of participants using the distribution-in-person method and the Lime Survey method was almost the same, which would imply that the members of this Faculty had no clear preference for either method. In the Faculty of Philosophy and Letters, there was a marked difference between the response obtained with the distribution-in-person and the online methods, while there was no difference between those obtained using the Lime Survey and Google Docs. This suggests that the staff from this faculty prefer distribution in person. Finally, in the group from the Faculty of Psychology there were differences between the three methods and there seems to have been a certain degree of preference for distribution in person.
It therefore seems, in principle, that distribution in person obtained the highest response rate regardless of age or sex. As regards the particular field of knowledge, the Faculty of Pharmacy was the only one in which the response rate for the paper questionnaire was not notably higher than for the other methods.
5. Conclusions
Given the size of the sample group that took part in the survey, any conclusions reached must be taken with extreme caution. Nevertheless, this project has produced some useful conclusions about the methodology to be followed in this kind of research.
Firstly, although the results seem to suggest that users prefer questionnaires that are distributed in person, the fact that the initial contact was made in a similar way in all three methods means that we should not rule out the possibility that this apparent preference is a characteristic of the context within which this research has been conducted, or even of the participants. I therefore believe that it would be interesting to repeat the study, not only in this context with different departments or faculties, but also in different contexts and countries, which would perhaps enable us to construct a profile of the subjects who prefer one method or the other.
Using the Internet as the means of distributing surveys has various clear advantages: large savings in terms of time and money, the possibility of reaching subjects far and wide without having to leave one’s office, and the fact that the subjects can choose the most convenient moment in which to complete the questionnaire. However, I believe that there are also certain disadvantages compared to the traditional method, which must be analyzed and taken into account when embarking on this kind of research: the possible reluctance of certain subjects to use new technologies, which prevents them from taking part in the survey owing to their lack of familiarity with such tools, and the colder, more distant way of making contact. In this study, another problem arose when various subjects did not realize that there was a link by which they could withdraw from the project, and this caused a degree of irritation. On the other hand, we must bear in mind the time cost involved in distributing the questionnaire in person, given that there is more interaction between the researcher and the subject. This interaction can also be valuable, however: the researcher may be privy to comments, opinions or suggestions made by subjects while responding to the questionnaire, which may on occasion be very interesting when it comes to understanding the way users behave and which should be taken into account in future projects.
In view of the various advantages and limitations identified in this study, it seemed worth considering another method of administering questionnaires: social networks. With this in mind, I also carried out a study of expectations using an internationally accessible social network (García Becerra 2012), the results of which are in preparation. I think that more research should be done into the possible usefulness of social networks in this kind of study. Perhaps the use of these new technological tools could help to overcome some of the disadvantages mentioned above for the online distribution methods.
Regarding the design of the questionnaire, I believe that the online version should be more flexible than the one used in this study, so as to avoid possible frustration on the part of the subjects that could cause them to give up on the questionnaire before completing it. Often, when designing and drafting each question, the researcher tries to make the subject answer all the questions, sometimes making them obligatory without considering what this may mean for the subject. When drafting questionnaires, it is necessary to consider all possible options in the answers to closed questions, and to offer subjects the possibility of not answering those questions for which they cannot identify a suitable answer among the available options.
In this study, I was unable to test whether the rate of participation is higher when the questionnaire is embedded directly in the email and can be answered easily without the subject having to do anything else, as technical problems prevented me from doing so. It would be interesting to be able to make this comparison, because it could provide information about variations in the response rate depending on the relative ease with which subjects can access and complete the questionnaire.
Lastly, it is worth recalling the observation of Marrone (1993) that, in order to guarantee the comparability of the results of the different studies made in this field, it would be a good idea to arrange closer collaboration among the different research groups and to design a common questionnaire, which would allow the definition of, for example, prototype users by professional field and by country. It would also be useful to try to discover whether there are differences arising from the particular origin of the subjects, at least in the field of interpretation expectations. This would perhaps enable us to reach a higher percentage of real users of interpreting services and to define their needs better.
Appendix. Questionnaire (our English translation)
SIMULTANEOUS INTERPRETING
I am carrying out a study on quality in simultaneous interpreting. I would be very grateful if you would answer the following questions.
PART 1
Gender: _ Male _ Female
Age:
_ Under 30
_ Between 30 and 45
_ Between 46 and 60
_ Over 60
What is your field of knowledge?
In which context did you use interpreting services?
_ Conferences and congresses
_ Media (TV, radio…)
_ Other
How satisfied are you with these services so far?
PART 2
When listening to a simultaneous interpretation, how important are the following aspects?
If you think that there are other aspects that might influence the quality of an interpreting service, please write them down and rate their importance on a seven-point scale.
Does the importance given to the previous aspects vary depending on the gender, the age or any other characteristic of the interpreter? Could you please give an example?
PART 3
Were all the questions clear to you or did you find any particular question difficult to understand?
Do you think there are additional aspects that might affect the quality of an interpretation which are not included in this questionnaire?
Acknowledgements
This study has been carried out as part of the research project P07.HUM.02730 (Junta de Andalucía, Spain).
Bibliography
- Andrews, Dorine, Nonnecke, Blair and Preece, Jennifer (2003): Electronic survey methodology: A case study in reaching hard-to-involve Internet users. International Journal of Human-Computer Interaction. 16(2):185-210.
- Bachmann, Duane P., Elfrink, John and Vazzana, Gary (2000): E-mail and snail mail face off in rematch. Marketing Research. 11(4):10-15.
- Bowling, Ann (2005): Mode of questionnaire administration can have serious effects on data quality. Journal of Public Health. 27:281-291.
- Bühler, Hildegrund (1986): Linguistic (semantic) and extra-linguistic (pragmatic) criteria for the evaluation of conference interpretation and interpreters. Multilingua. 5(4):231-235.
- Chiaro, Delia and Nocella, Giuseppe (2004): Interpreters’ Perception of Linguistic and Non-Linguistic Factors Affecting Quality: A Survey through the World Wide Web. Meta. 49(2):278-293.
- Collados Aís, Ángela (1998): La evaluación de la calidad en interpretación simultánea. La importancia de la comunicación no verbal. Granada: Comares Interlingua.
- Collados Aís, Ángela (2009): Evaluación de la calidad en interpretación simultánea: contrastes de exposición e inferencias emocionales. Evaluación de la evaluación. In: Gyde Hansen, Andrew Chesterman and Heidrun Gerzymisch-Arbogast, eds. Efforts and Models in Interpreting and Translation Research: A Tribute to Daniel Gile. Amsterdam/Philadelphia: John Benjamins, 193-214.
- Collados Aís, Ángela, Pradas Macías, E. Macarena, Stévaux, Elisabeth et al., eds. (2007): Evaluación de la calidad en interpretación simultánea: parámetros de incidencia. Granada: Comares Interlingua.
- Fricker, Ronald D. and Schonlau, Matthias (2002): Advantages and disadvantages of Internet research surveys: evidence from the literature. Field Methods. 14:347-367.
- García Becerra, Olalla (2012): La incidencia de las primeras impresiones en la evaluación de la calidad de la interpretación: un estudio empírico. Doctoral thesis, unpublished. Granada: Universidad de Granada.
- Garzone, Giuliana (2003): Reliability of quality criteria evaluation in survey research. In: Ángela Collados Aís, María Manuela Fernández Sánchez and Daniel Gile, eds. La evaluación de la calidad en interpretación: investigación. Granada: Comares Interlingua, 23-30.
- Gile, Daniel (1990): L’évaluation de la qualité de l’interprétation par les délégués: une étude de cas. The Interpreters’ Newsletter. 3:66-71.
- Kopczyński, Andrzej (1994): Quality in conference interpreting: some pragmatic problems. In: Mary Snell-Hornby, Franz Pöchhacker and Klaus Kaindl, eds. Translation Studies: An Interdiscipline. Amsterdam/Philadelphia: John Benjamins, 189-198.
- Kurz, Ingrid (1989): Conference Interpreting - User Expectations. In: Deanna L. Hammond, ed. Coming of age. Proceedings of the 30th Annual Conference of the American Translators Association. Medford, NJ: Learned Information Inc, 143-148.
- Kurz, Ingrid (1993): Conference interpretation: expectations of different user groups. The Interpreters’ Newsletter. 5:13-21.
- Kurz, Ingrid and Pöchhacker, Franz (1995): Quality in TV Interpreting. Translatio – Nouvelles de la FIT – FIT Newsletter. 14(3-4):350-358.
- Mack, Gabriele and Cattaruzza, Lorella (1995): User surveys in SI: a means of learning about quality and/or raising some reasonable doubts. In: Jorma Tommola, ed. Topics in Interpreting Research. Turku: Centre for Translation and Interpreting, University of Turku, 37-49.
- Marrone, Stefano (1993): Quality: a shared objective. The Interpreters’ Newsletter. 5:35-41.
- McDonald, Heath and Adam, Stewart (2003): A comparison of online and postal data collection methods in marketing research. Marketing Intelligence & Planning. 21(2):85-95.
- Meak, Lidia (1990): Interprétation simultanée et congrès médical: attentes et commentaires. The Interpreters’ Newsletter. 3:8-13.
- Mehta, Raj and Sivadas, Eugene (1995): Comparing response rates and response content in mail versus electronic mail surveys. Journal of the Market Research Society. 37(4):429-439.
- Moser, Peter (1995): Survey on Expectations of Users of Conference Interpretation. (Translated by Jennifer Mackintosh and Catherine Stenzi). Vienna: SRZ Stadt + Regionalforschung GmbH.
- Pradas Macías, E. Macarena (2003): Repercusión del intraparámetro pausas silenciosas en la fluidez: influencia en las expectativas y en la evaluación de la calidad en interpretación simultánea. Doctoral thesis, unpublished. Granada: Universidad de Granada.
- Schuldt, Barbara A. and Totten, Jeff W. (1994): Electronic mail vs mail survey response rates. Marketing Research. 6:36-39.
- Tse, Alan C. B. (1995): Comparing two methods of sending out questionnaires: E-mail versus mail. Journal of the Market Research Society. 37(4):441-446.
- Vuorikoski, Anna-Riitta (1993): Simultaneous interpretation – user experience and expectations. In: Catriona Picken, ed. Translation – The Vital Link. XIII FIT World Congress, Proceedings. Brighton: Institute of Translation and Interpreting, 317-327.
- Vuorikoski, Anna-Riitta (1998): User Responses to Simultaneous Interpreting. In: Lynne Bowker, Michael Cronin, Dorothy Kenny et al., eds. Unity in Diversity? Current Trends in Translation Studies. Manchester: St Jerome, 184-197.
- Wright, Kevin B. (2005): Researching Internet-Based Populations: Advantages and Disadvantages of Online Survey Research, Online Questionnaire Authoring Software Packages, and Web Survey Services. Journal of Computer-Mediated Communication. 10(3).
- Yun, Gi Woong and Trumbo, Craig W. (2000): Comparative response to a survey executed by post, e-mail, and web form. Journal of Computer-Mediated Communication. 6(1). Visited on 3 March 2012, http://onlinelibrary.wiley.com/doi/10.1111/j.1083-6101.2000.tb00112.x/full.
- Zwischenberger, Cornelia and Pöchhacker, Franz (2010): Survey on quality and role: conference interpreters’ expectations and self-perceptions. Communicate! 53. Visited on 15 December 2011, http://www.aiic.net/ViewPage.cfm/article2510.htm.