Development and implementation of quality management systems in translation agencies


Introduction

At Technolex, we break the quality assurance process into two levels:

  • All the steps necessary to ensure that the work requested by the customer is of high quality. Everybody in the industry is well aware of these processes.
  • An internal process of evaluating the quality of the work performed by translators. It is used to train translators, improve their skills, and provide incentives that encourage higher quality.

The two levels are shown in the following diagram:

The left area shows the stages that must be completed before the translation is sent to the customer. The additional stages that we consider necessary to ensure the long-term quality of our linguists' work are shown on the right. These additional stages and their implementation are discussed in this article.

Monitoring the editor's changes

This is how the QA process at a translation agency usually looks:

  • After the translation is received from the translator, it is passed to the editor and the proofreader, who correct the mistakes, thus ensuring the quality required by the customer.
  • The translation is delivered to the customer; the production chain successfully forgets about it and starts a new one.

In this process, a translator, especially an inexperienced one, often serves as a kind of supplier of raw materials for the editor. The editor rarely has the time or the desire to give the translator feedback on the translation. As a result, the translator may not know what changes were made to the translation and may not have the slightest idea which issues he needs to work on. In the end, everybody loses:

  • the translator does not grow as a professional;
  • the editor has to correct the same mistakes made by the translator;
  • the translation agency has to spend more money on correcting mistakes.

If such a situation continues for years, the translator settles into the role of a supplier of raw materials and does not even try to improve his skills, hoping that the editor behind his back will correct any of his mistakes.

For these reasons, at Technolex we decided to implement an additional procedure: every translator must receive a list of the corrections made in his translations and review it, so that he does not repeat the same mistakes in subsequent translations.
However, there was no convenient tool on the market that would allow implementing this kind of process, so we decided to develop our own. We completed the development of ChangeTracker (www.change-tracker.com) at the end of 2011. It is a fairly simple application that compares the bilingual file delivered by the translator with the file edited by the editor. The comparison produces a table in which all the changes are shown clearly:

This report is saved as an Excel file.
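To illustrate the idea, here is a minimal sketch of such a comparison, assuming the bilingual files have already been parsed into lists of (segment_id, source, target) tuples. The parsing step, the field names, and the compare_versions helper are hypothetical and are not part of ChangeTracker itself.

from difflib import SequenceMatcher

def compare_versions(translated, edited):
    """Return one report row per segment whose target text was changed by the editor."""
    edited_by_id = {seg_id: target for seg_id, _, target in edited}
    rows = []
    for seg_id, source, original_target in translated:
        edited_target = edited_by_id.get(seg_id, original_target)
        if edited_target != original_target:
            similarity = SequenceMatcher(None, original_target, edited_target).ratio()
            rows.append({
                "segment": seg_id,
                "source": source,
                "translation": original_target,
                "edited": edited_target,
                "changed_share": round(1 - similarity, 2),
            })
    return rows  # each row becomes one line of the Excel report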

According to our processes, after finishing his work the editor must submit not only the edited files but also a detailed report on the corrections made. We then send this report to the translator for review. In addition, the editor can review his own corrections to make sure he did not introduce mistakes while editing the translation. The translation agency benefits as well, since the reports can be used to evaluate the editor's performance.

The application can compare files in the following formats:

  • Trados (TTX, SDLXLIFF);
  • MemoQ (XLIFF);
  • Idiom, Translation Workspace (XLZ);
  • Oscar (TMX);
  • Wordfast (TXML);
  • Microsoft Helium (HE);
  • Microsoft Word (DOC, DOCX, RTF).

We decided to release this application as freeware for anyone who needs it. Feel free to use it in your own processes if you find it useful.

Quality evaluation

There are many discussions in our industry about what constitutes a high-quality translation and how to evaluate quality in general. Quantitative evaluation is the approach most widely used in business processes. Its essence is as follows:

  • A portion of the translation is taken, for example 1,000 words.
  • It is searched for mistakes.
  • Every mistake is classified by type and severity.
  • Demerit points are assigned to every mistake depending on its type and severity.
  • The total demerit points are then subtracted from 100, which gives the evaluation score. For example, if a translator receives 12 demerit points, his score is 88 (see the scoring sketch after this list).
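Below is a minimal sketch of this arithmetic. The penalty table is purely illustrative: the article does not specify the actual weights, only that they depend on the type and severity of each mistake.

# Hypothetical demerit weights; the real values depend on the agency's quality model.
PENALTIES = {
    ("accuracy", "minor"): 1,
    ("accuracy", "major"): 5,
    ("terminology", "minor"): 1,
    ("terminology", "major"): 3,
    ("grammar", "minor"): 1,
    ("grammar", "major"): 3,
    ("style", "minor"): 0.5,
}

def quality_score(mistakes):
    """mistakes: list of (type, severity) pairs found in the ~1,000-word sample."""
    demerit_points = sum(PENALTIES[m] for m in mistakes)
    return max(0, 100 - demerit_points)

# Two major accuracy errors, one minor terminology error and one minor grammar
# error give 12 demerit points, i.e. a score of 88, as in the example above.
print(quality_score([("accuracy", "major"), ("accuracy", "major"),
                     ("terminology", "minor"), ("grammar", "minor")]))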

A similar practice is used in many Western translation agencies, which is why we decided to adopt this approach at Technolex. However, for this purpose agencies typically used Excel spreadsheets, which have several drawbacks. In particular, the editor has to paste text from the translated file into the spreadsheet, a time-consuming and rather dull mechanical process. We therefore decided to move away from Excel spreadsheets and build this functionality into ChangeTracker by releasing version 2.0. (It has not been released officially yet, but we actively use it in our internal workflow.)

Description of the quality evaluation process

After editing the files, the editor performs the following actions in ChangeTracker 2.0:

  1. Compares the unedited file with the edited one and creates a report table of the changes (as in the first version of ChangeTracker).
  2. Selects the part of the text to be evaluated using preset criteria.
  3. Classifies every change by mistake type and severity.

As the editor works through the evaluation file, the application automatically calculates demerit points based on the mistake classification and produces the translation quality score:

The editor then sends the evaluation file to the manager, who forwards it to the translator.

The translator opens the evaluation file in ChangeTracker 2.0 and reviews the changes. If he disagrees with anything or wants to respond, he adds his own comments:

The application supports a discussion between the translator and the editor. The translator then sends the file with his comments to the manager, who either returns it to the editor for re-evaluation or passes it on to a third party.

As a result, every translation job performed at Technolex receives an evaluation.

Hence the question: Why do we do this? Why don't we just send the edited translation to the customer and close the project? Why do we spend additional time and money on this?

The answer is very simple: we are extremely interested in the professional growth of our translators, since it is the shortage of highly skilled personnel that holds back our company's growth. Therefore, if we start working with a translator whose quality is at least acceptable, we aim at long-term cooperation and want him or her to make fewer mistakes in subsequent translations. In the long term, we want such a translator to grow into someone who can assure the quality of other translators' work, like our experienced and trained linguists. The mere fact of receiving these evaluations and lists of changes motivates translators to learn, and this speeds up their training significantly. It also helps us identify those who cannot learn. When a translator knows that his work is monitored, he tries to do it better.

Systematization and creation of evaluation statistics

Having implemented the quality evaluation process, we ran into the following issues:

  • We still had no overall picture of translators' work quality. Every manager worked separately, so it was difficult to see the big picture.
  • All communication was done via e-mail, so managers had to spend additional time on it.
  • We could not trace how evaluations depended on the editor or the subject area; we only saw the translators' scores. As a result, translators who worked with the same editor could receive higher scores than those who worked with different editors.
  • We had no efficient tools to track the quality trend of each translator.

In other words, we lacked the overall picture and the statistics needed to make organizational and personnel decisions efficiently. That is why we decided to develop new processes and tools, and so Quality Tracker Server (QTS) was created.

Quality Tracker Server is an online system for recording the quality of translations made at Technolex Translation Studio. It is intended for:

  • storing the translation quality evaluation files created in ChangeTracker desktop application;
  • anonymous interaction between the participants in the quality evaluation process, and file exchange within that process;
  • maintaining statistics and generating quality reports.

Let us take a closer look at these aspects…

Interaction among the participants in the quality evaluation process

To evaluate quality more objectively and to make translator training more efficient, we needed a process in which translators receive evaluations of their translations, analyze their mistakes, and can dispute a score with reasoned arguments.

Below is the simplest interaction scheme; a small model of it is sketched after the list:

  1. The evaluator sends the translation evaluation to the manager.
  2. The manager forwards the evaluation to the translator.
  3. The translator either agrees with the evaluation, which is then considered final, or sends his reasoned comments to the evaluator.
  4. The evaluator reviews the translator's objections, adjusts the score, and returns it to the translator.
  5. The translator either agrees with the updated evaluation, which is then considered final, or sends his reasoned comments to the arbiter.
  6. The arbiter reviews the dispute and makes a decision. The score becomes final.
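Viewed abstractly, this scheme is a small state machine. The sketch below is our own hypothetical model of it; the state names and transition rules are illustrative and do not come from QTS.

# Who holds the evaluation at each step and where it may go next.
TRANSITIONS = {
    "with_manager":    ["with_translator"],                          # steps 1-2
    "with_translator": ["final", "with_evaluator", "with_arbiter"],  # steps 3 and 5
    "with_evaluator":  ["with_translator"],                          # step 4
    "with_arbiter":    ["final"],                                    # step 6
}

def move(state, next_state):
    """Advance the evaluation, rejecting transitions the scheme does not allow."""
    if next_state not in TRANSITIONS.get(state, []):
        raise ValueError(f"cannot go from {state} to {next_state}")
    return next_state

# A typical path: manager -> translator -> evaluator -> translator -> final.
state = "with_manager"
for step in ["with_translator", "with_evaluator", "with_translator", "final"]:
    state = move(state, step)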

As you can see, this process may involve quite a few steps. To implement it, two issues have to be addressed:

  1. The administrative load on the manager has to be reduced, and direct communication between the process participants has to be organized, so that the process does not turn into a complicated bureaucratic procedure.
  2. The anonymity of all participants has to be ensured. Discussions about quality and disputes between the translator and the editor often get emotional, and if the translator and the editor know each other, their personal relationship may influence the grading.

That is why we decided to reduce the impact of personal factors as much as possible. We created a communication system in which the evaluation process participants (translator, evaluator, and arbiter) do not know each other, which ensures maximum objectivity. Only the manager knows all the participants.

Every evaluated translation is uploaded to the evaluation server, and the discussion then proceeds as follows (a hypothetical sketch of such a project record is given after the list).

  1. The manager receives the evaluation file, uploads it, and indicates the translator, the editor and the arbiter.
  2. The system sends the translator an e-mail informing him that his translation has been evaluated with a certain score and asking him to review the changes.
  3. The translator logs in to the system, downloads the file, and reviews it. If he agrees with everything, he completes the evaluation project by pressing a button.
  4. If he disagrees with something, he adds his comments and uploads the file to the system, which sends it to the evaluator for review. Alternatively, he may send the file directly to the arbiter instead of the evaluator.
  5. The evaluator receives an e-mail from the system saying that the translator has objections, reviews the translator's comments, changes the score if necessary, and uploads the new file to the system.
  6. The translator logs in to the system again, downloads the file, and reviews it. If he agrees with everything, he completes the evaluation project by pressing a button.
  7. If he still disagrees with something, he adds his comments and uploads the file to the system, which sends it to the arbiter for review.
  8. The arbiter reviews the translator’s comments and, if necessary, changes the score and completes the project.
  9. The manager, the translator, and the editor receive an e-mail about the completion of the project, with the final score.
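The sketch below shows how such an evaluation project could be represented on the server. The article does not describe QTS internals, so the field names, statuses, and the notify helper are assumptions made purely for illustration.

from dataclasses import dataclass, field

@dataclass
class EvaluationProject:
    file_id: str
    score: float
    # Only the manager's record maps roles to real people; the translator,
    # evaluator and arbiter see nothing but anonymous role identifiers.
    participants: dict = field(default_factory=dict)
    status: str = "awaiting_translator"

    def notify(self, role, message):
        """Stand-in for the e-mails the system sends; real addresses stay hidden."""
        print(f"[mail to {self.participants[role]}] {message}")

# The manager creates the project; after that the notifications are automatic.
project = EvaluationProject(
    file_id="job-1042.xlsx",
    score=88,
    participants={"translator": "T-17", "evaluator": "E-03", "arbiter": "A-01"},
)
project.notify("translator",
               "Your translation was evaluated at 88. Please review the changes.")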

The system provides for the following important details:

  • Only the manager knows all the process participants; the translator, the editor, and the arbiter do not know each other. This ensures greater objectivity.
  • The manager only creates the project; after that, the system does everything itself, so the manager spends less than three minutes on it.

Practical application of statistics on translators’ work quality

With a unified database of all translation evaluations, we can generate reports and statistics and put them to practical use.

Such reports allow us to:

  1. Identify the translators who deliver the highest quality. One may object that managers already know who does the job better and who does it worse. The drawback of such knowledge, however, is that only that manager has it; his peers and the leadership may not know which translators can be relied upon. And when a new manager is hired, statistics on translators' work quality will be very useful to him.
  2. Determine whether a translator's rates match the quality of his translations. It sometimes happens that a translator with higher rates does a worse job than a translator with lower rates. With this data, it is possible to review the ranking of freelancers and the order in which translation jobs are offered to them.
  3. Monitor the growth of a particular translator. Some new hires are quick learners, while others may stay at the same level for years and make no effort to develop their skills. By reviewing a translator's evaluation trend over several months, one can draw fairly objective conclusions about how promising that translator is.
  4. Monitor how evaluations depend on the editor or the subject area. One editor may grade translations more strictly than another, which can give the manager a distorted perception of a translator's work quality.
  5. Compare the scores a translator receives from different editors (a small aggregation sketch follows this list). If there are significant differences, one can check whether other translators have the same issue. If they do, it means this editor grades translations more strictly and should adjust his evaluation standards, or the manager should at least keep in mind that with other editors the score would be higher and the translator's work is not as bad as that editor's evaluations suggest.
  6. Use the evaluations to develop incentives. For many translators, the score itself is motivating, but one can go further and add a financial component: for example, bonuses for projects with higher scores and penalties for projects with lower scores.
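A minimal sketch of such reporting is shown below, assuming the evaluations are stored as simple records of translator, editor, month, and score; the record layout and the sample data are hypothetical.

from statistics import mean
from collections import defaultdict

evaluations = [
    {"translator": "T-17", "editor": "E-03", "month": "2016-05", "score": 88},
    {"translator": "T-17", "editor": "E-07", "month": "2016-06", "score": 95},
    {"translator": "T-22", "editor": "E-03", "month": "2016-06", "score": 84},
]

def average_by(key):
    """Average score grouped by translator, editor, month, etc."""
    groups = defaultdict(list)
    for e in evaluations:
        groups[e[key]].append(e["score"])
    return {k: round(mean(v), 1) for k, v in groups.items()}

print(average_by("translator"))  # quality per translator
print(average_by("editor"))      # a consistently lower mean hints at a stricter editor
print(average_by("month"))       # quality trend over time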

Conclusion

The management methods reviewed in this article are not obligatory for completing translation projects and ensuring the quality of delivered translations, and they involve additional costs for a company. However, they are strategic in nature, and the effect of their use becomes visible in the long term. Such a system stimulates translators to grow as professionals, providing the company with new highly skilled experts. This strengthens its competitiveness and increases its production capacity, enabling it to cope with a growing workload as demand rises (demand that is itself driven by the high quality of the translations). In short, such a process allows a company to grow.
