Using Big Data to Save Money on Translation
Posted by Hélène Pielmeier on July 26, 2017 in the following blogs: Best Practices, Translation and Localization, Technology


Buyers of language services crave the ability to measure translation quality objectively, get easy-to-digest reports on how it trends over time, and drill down as needed for process improvement. However, quality control remains a mostly human-driven process – even when supported by QA technology – because humans have to sift through the reports these tools produce. What if there were another way to approach quality?

U.S.-based technology and service provider Smartling has come up with an innovative approach to translation based on a quality confidence score (QCS) that forecasts the chances that a human reviewer would consider a translation to be a quality one. This enables the company to take a radically different approach to decision-making on projects.

Kunal Sarda, Vice President of Language Services at Smartling, reported that “We started the QCS 3.5 years ago as an exercise to automate the process of flagging content that wasn’t high enough quality to deliver to clients. Eventually, customers asked to make it available as an API. A year ago, we released it to clients to help guide process decisions on their content.” This data-driven approach to project decisions represents a major step forward for the language services industry, one that extends beyond other efforts to aggregate data to manage quality.

  • How did Smartling achieve this? The company considered more than 100 elements it could track in its translation platform, such as the number of steps in a process, the presence of visual context, or the time a translator spends in a segment. It determined whether each factor had an impact on quality outcomes, using the TAUS DQF as the measuring stick for quality. Smartling incorporated roughly 75 factors into its algorithm. It set the default for an acceptable QCS at 95%, but because quality is defined by the customer, users can adjust the QCS to emphasize certain factors or lower the threshold for certain low-risk content types (a sketch of this kind of scoring model appears below).

[Figure not shown. Source: Smartling]
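
Smartling has not published the internals of its algorithm, but the general technique described above – weighting process signals and mapping them to an acceptance probability – can be sketched as a simple logistic model. Everything below is an illustrative assumption: the feature names echo examples from the article, while the weights, bias, and function are invented for demonstration.

import math

# Hypothetical weights -- illustrative stand-ins, not Smartling's actual
# ~75-factor model, whose features and coefficients are not public.
WEIGHTS = {
    "has_visual_context": 1.2,   # translator saw the string in context
    "workflow_steps": 0.4,       # e.g., translate -> edit -> review = 3
    "tm_match_rate": 2.0,        # fraction of the segment covered by TM
    "seconds_in_segment": 0.002, # time the translator spent editing
}
BIAS = -1.5

def quality_confidence_score(features: dict) -> float:
    """Modeled probability (0-1) that a human reviewer would
    consider the translation acceptable, via a logistic function."""
    z = BIAS + sum(WEIGHTS[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

# Example: a segment translated with visual context in a three-step
# workflow, with an 85% TM match and 90 seconds of editing time.
segment = {
    "has_visual_context": 1,
    "workflow_steps": 3,
    "tm_match_rate": 0.85,
    "seconds_in_segment": 90,
}
print(f"QCS: {quality_confidence_score(segment):.1%}")  # ~94%, just under the 95% default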

  • Is the QCS reliable? Smartling had the advantage of sitting on a goldmine of data: It based its algorithm on an examination of seven billion words of content processed through the company’s cloud-based system, which yielded many more data points per text segment than most LSPs have. This corpus consisted of work done by both Smartling and its LSP partners.

  • How can the QCS help clients? Smartling uses the QCS to track the root cause of high and low scores, which serves as the basis for discussions with LSP partners, vendors, and clients. It leverages low scores to back up requests that clients provide visual context, invest in translation memory maintenance, or develop good style guides. On the other hand, high scores may indicate that the client can afford to skip a step in the process for lower-risk content in particular locales or content types, such as information buried deep within a website. However, Smartling doesn’t recommend skipping steps – regardless of QCS score – on mission-critical content like creative text that needs transcreation (this decision logic is sketched below).
 
[Figure not shown. Source: Smartling]
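
The routing decision described in the last bullet lends itself to a short sketch. The 95% threshold, the deep-website example, and the no-skipping rule for transcreation come from the article; the function, category names, and workflow labels are hypothetical:

# Hypothetical content-type categories; the examples echo the article,
# but Smartling's real taxonomy is not public.
LOW_RISK_TYPES = {"deep_website_page"}
MISSION_CRITICAL_TYPES = {"creative_transcreation"}

def pick_workflow(qcs: float, content_type: str, threshold: float = 0.95) -> list:
    """Choose workflow steps for a segment based on its QCS and risk level."""
    if content_type in MISSION_CRITICAL_TYPES:
        return ["translate", "edit", "review"]  # never skip steps here
    if qcs >= threshold and content_type in LOW_RISK_TYPES:
        return ["translate", "edit"]            # high confidence: skip review
    return ["translate", "edit", "review"]      # default: full workflow

print(pick_workflow(0.97, "deep_website_page"))       # ['translate', 'edit']
print(pick_workflow(0.99, "creative_transcreation"))  # full workflow regardless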

Over time, Smartling intends to publish data-driven best practices on quality, contrasting variables such as two-step vs. three-step translation processes, the presence vs. absence of visual context during translation, or the process for legal vs. medical texts. Clients expect LSPs to be data-savvy, so all LSPs will eventually need to deliver such data.
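
The kind of contrast Smartling plans to publish amounts to grouping quality outcomes by a process variable and comparing the averages. The sketch below shows the shape of that analysis; the records and scores are invented purely for illustration:

from statistics import mean

# Invented project records -- only the variables to contrast (process
# type, visual context) come from the article; all values are made up.
records = [
    {"process": "two_step", "visual_context": True, "qcs": 0.96},
    {"process": "two_step", "visual_context": False, "qcs": 0.91},
    {"process": "three_step", "visual_context": True, "qcs": 0.98},
    {"process": "three_step", "visual_context": False, "qcs": 0.94},
]

def contrast(records, variable):
    """Average QCS for each value of one process variable."""
    groups = {}
    for record in records:
        groups.setdefault(record[variable], []).append(record["qcs"])
    return {value: round(mean(scores), 3) for value, scores in groups.items()}

print(contrast(records, "process"))         # two-step vs. three-step
print(contrast(records, "visual_context"))  # with vs. without context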

CSA Research expects other tech-driven service providers to leverage project data in their efforts to educate clients on improving translation outcomes and safely reducing costs. Even companies with less rich information in their systems should build similar models and improve them over time. Between translation management systems, translation memory solutions, and translation quality and in-context review tools, many buy-side and supply-side organizations sit on a wealth of data that smart analytics could turn into actionable advice.

CSA Research contends that this development paves the way for how LSPs and enterprises will leverage big data to support decisions, enable stakeholders to validate or debunk theories on quality, and make the necessary improvements accordingly. Just as with the use of artificial intelligence in project management and the shift to augmented translators, the smart use of data is becoming a crucial differentiator for tech- and business-minded providers. Eventually, companies that don’t exploit big data will be left behind.

 

Keywords: Localization, LSP Production Models, Quality, Translation technologies

  