Prosecution Insights
Last updated: April 19, 2026
Application No. 18/773,129

AUTOMATIC POST-EDITING MODEL FOR GENERATED NATURAL LANGUAGE TEXT

Status: Non-Final OA (§103)
Filed: Jul 15, 2024
Examiner: Pullias, Jesse Scott
Art Unit: 2655
Tech Center: 2600 — Communications
Assignee: Google LLC
OA Round: 1 (Non-Final)

Grant Probability: 83% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 8m
With Interview: 96%

Examiner Intelligence

Career Allow Rate: 83% (873 granted / 1052 resolved; +21.0% vs TC avg, above average)
Interview Lift: +13.0% (moderate lift, measured over resolved cases with an interview)
Typical Timeline: 2y 8m average prosecution; 47 applications currently pending
Career History: 1099 total applications across all art units
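The headline figures above can be reproduced from the raw counts on the page. A minimal sketch in Python, assuming the dashboard simply divides career grants by resolved cases and adds the reported interview lift to get the with-interview figure (the derivation is inferred, not stated by the tool):

```python
# Counts copied from the Examiner Intelligence panel above.
granted = 873      # career grants
resolved = 1052    # career resolved cases
interview_lift = 0.13  # reported lift when an interview is held

allow_rate = granted / resolved  # career allow rate

print(f"Career allow rate: {allow_rate:.0%}")            # 83%
print(f"With interview:    {allow_rate + interview_lift:.0%}")  # 96%
```

This matches the 83% grant probability and 96% with-interview figure shown in the projections, suggesting both are simple arithmetic on the career counts rather than a fitted model.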

Statute-Specific Performance

§101: 15.0% (-25.0% vs TC avg)
§103: 50.4% (+10.4% vs TC avg)
§102: 19.7% (-20.3% vs TC avg)
§112: 4.9% (-35.1% vs TC avg)

Comparison baseline is a Tech Center average estimate • Based on career data from 1052 resolved cases
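The table's deltas are internally consistent: subtracting each reported delta from the examiner's rate recovers the same implied Tech Center baseline for every statute. A quick check in Python (figures copied from the table; the ~40% baseline is derived here, not stated on the page):

```python
# Examiner's statute-specific rates and reported deltas vs the
# Tech Center average, both in percent, copied from the table above.
examiner = {"§101": 15.0, "§103": 50.4, "§102": 19.7, "§112": 4.9}
delta    = {"§101": -25.0, "§103": 10.4, "§102": -20.3, "§112": -35.1}

# Implied TC average = examiner rate minus the reported delta.
tc_avg = {s: round(examiner[s] - delta[s], 1) for s in examiner}
print(tc_avg)  # every statute implies the same 40.0% baseline
```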

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

This office action is in response to application 18/733,129, which was filed 07/15/24 and is a continuation of application 17/700,123, now US Patent 12,039,286, which is a continuation of application 16/511,806, now US Patent 11,295,092. Claims 1-20 are pending in the application and have been considered.

Claim Objections

Applicant is advised that should claim 5 be found allowable, claim 6 will be objected to under 37 CFR 1.75 as being a substantial duplicate thereof. When two claims in an application are duplicates or else are so close in content that they both cover the same thing, despite a slight difference in wording, it is proper after allowing one claim to object to the other as being a substantial duplicate of the allowed claim. See MPEP § 608.01(m).

Applicant is advised that should claim 12 be found allowable, claim 13 will be objected to under 37 CFR 1.75 as being a substantial duplicate thereof. When two claims in an application are duplicates or else are so close in content that they both cover the same thing, despite a slight difference in wording, it is proper after allowing one claim to object to the other as being a substantial duplicate of the allowed claim. See MPEP § 608.01(m).

Applicant is advised that should claim 19 be found allowable, claim 20 will be objected to under 37 CFR 1.75 as being a substantial duplicate thereof. When two claims in an application are duplicates or else are so close in content that they both cover the same thing, despite a slight difference in wording, it is proper after allowing one claim to object to the other as being a substantial duplicate of the allowed claim. See MPEP § 608.01(m).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 3-8, 10-15, and 17-20 are rejected under 35 U.S.C. 103 as being unpatentable over Waibel et al. (US 20110307241) in view of Pal et al. (“A Transformer-Based Multi-Source Automatic Post-Editing System,” Proceedings of the Third Conference on Machine Translation (WMT), Volume 2: Shared Task Papers, pages 827–835, Brussels, Belgium, October 31 - November 1, 2018).

Consider claim 1, Waibel discloses a method implemented by one or more processors (computer method, implemented on e.g.
a laptop computer, a processor being inherent in a laptop computer, [0011]-[0012]), the method comprising: processing a first instance of text in a target language using multilingual post-editing to generate first edited text, wherein the first instance of text in the target language is generated using a machine translation model translating a first source language to the target language, wherein the multilingual post-editing is used in correcting one or more translation errors introduced by the machine translation model, and wherein the multilingual post-editing is for use in correcting translation errors in the target language translated from any one of a plurality of source languages (multi-directional speech-to-speech translation system receives and recognizes speech in language L1 to produce text in language L1, [0044], Fig 1, and machine translation module translates from language L1 to Ln, [0045], which is post-edited via correction and repair module, [0047], by the user to correct a word, i.e. post-edit the translation output, [0062-0063], [0065], correction and repair module correcting translation errors from any of languages L1-Ln-1 to language Ln, for example, [0044]); causing a client device to perform one or more actions based on the first edited text (the edited text is converted to speech via TTS module and output from speakers on the device, [0054]); processing a second instance of text in the target language using the multilingual post-editing to generate second edited text, wherein the second instance of text in the target language is generated using a machine translation model translating a second source text in a second source language to the target language (multi-directional speech-to-speech translation system receives and recognizes speech in language L2 to produce text in language L2, [0044], Fig 1, and machine translation module translates from language L2 to Ln, [0045], which is post-edited via correction and repair module, [0047], by the user to correct a word, i.e. post-edit the translation output, [0062-0063], [0065]); and causing the client device to perform one or more actions based on the second edited text (the second edited text is converted to speech via TTS module and output from speakers on the device, [0054]).

Waibel does not specifically mention: an automatic post-editing model; a neural machine translation model; and the post-editing model is trained for use in correcting translation errors. Pal discloses an automatic post-editing model (transformer-based multi-source APE, page 828, Section 2); a neural machine translation model (neural machine translation, page 827, Section 1, NMT Task, page 831, Section 3.3.2); and the post-editing model is trained for use in correcting translation errors (post editing model PBSMT is trained using real PE training data, Section 3.3.1, pages 830-831, the APE models correcting translation errors in MT output, page 827, Section 1).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Waibel by replacing the manual post-editing of Waibel with the automatic post-editing model trained for use in correcting translation errors of Pal and replacing the statistical MT model of Waibel with the neural machine translation model of Pal in order to improve quality of fully automatic translations, as suggested by Pal (page 827, Section 1). Doing so would have led to predictable results of reducing costs associated with translation performed by humans, as suggested by Pal (page 827, Section 1). The references cited are analogous art in the same field of machine translation.

Consider claim 8, Waibel discloses a client device (laptop, [0005]) comprising: memory storing instructions (inherent in laptop, [0005]); one or more processors that execute the instructions, stored in the memory (inherent in laptop, [0005]) to: process a first instance of text in a target language using multilingual post-editing to generate first edited text, wherein the first instance of text in the target language is generated using a machine translation model translating a first source language to the target language, wherein the multilingual post-editing is used in correcting one or more translation errors introduced by the machine translation model, and wherein the multilingual post-editing is for use in correcting translation errors in the target language translated from any one of a plurality of source languages (multi-directional speech-to-speech translation system receives and recognizes speech in language L1 to produce text in language L1, [0044], Fig 1, and machine translation module translates from language L1 to Ln, [0045], which is post-edited via correction and repair module, [0047], by the user to correct a word, i.e.
post-edit the translation output, [0062-0063], [0065], correction and repair module correcting translation errors from any of languages L1-Ln-1 to language Ln, for example, [0044]); cause a client device to perform one or more actions based on the first edited text (the edited text is converted to speech via TTS module and output from speakers on the device, [0054]); process a second instance of text in the target language using the multilingual post-editing to generate second edited text, wherein the second instance of text in the target language is generated using a machine translation model translating a second source text in a second source language to the target language (multi-directional speech-to-speech translation system receives and recognizes speech in language L2 to produce text in language L2, [0044], Fig 1, and machine translation module translates from language L2 to Ln, [0045], which is post-edited via correction and repair module, [0047], by the user to correct a word, i.e. post-edit the translation output, [0062-0063], [0065]); and cause the client device to perform one or more actions based on the second edited text (the second edited text is converted to speech via TTS module and output from speakers on the device, [0054]).

Waibel does not specifically mention: an automatic post-editing model; a neural machine translation model; and the post-editing model is trained for use in correcting translation errors. Pal discloses an automatic post-editing model (transformer-based multi-source APE, page 828, Section 2); a neural machine translation model (neural machine translation, page 827, Section 1, NMT Task, page 831, Section 3.3.2); and the post-editing model is trained for use in correcting translation errors (post editing model PBSMT is trained using real PE training data, Section 3.3.1, pages 830-831, the APE models correcting translation errors in MT output, page 827, Section 1).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Waibel by replacing the manual post-editing of Waibel with the automatic post-editing model trained for use in correcting translation errors of Pal and replacing the statistical MT model of Waibel with the neural machine translation model of Pal for reasons similar to those for claim 1.

Consider claim 15, Waibel discloses a non-transitory computer-readable storage medium storing instructions executable by one or more processors of a computing system (inherent in laptop, [0005]) to perform a method of: processing a first instance of text in a target language using multilingual post-editing to generate first edited text, wherein the first instance of text in the target language is generated using a machine translation model translating a first source language to the target language, wherein the multilingual post-editing is used in correcting one or more translation errors introduced by the machine translation model, and wherein the multilingual post-editing is for use in correcting translation errors in the target language translated from any one of a plurality of source languages (multi-directional speech-to-speech translation system receives and recognizes speech in language L1 to produce text in language L1, [0044], Fig 1, and machine translation module translates from language L1 to Ln, [0045], which is post-edited via correction and repair module, [0047], by the user to correct a word, i.e. post-edit the translation output, [0062-0063], [0065], correction and repair module correcting translation errors from any of languages L1-Ln-1 to language Ln, for example, [0044]); causing a client device to perform one or more actions based on the first edited text (the edited text is converted to speech via TTS module and output from speakers on the device, [0054]); processing a second instance of text in the target language using the multilingual post-editing to generate second edited text, wherein the second instance of text in the target language is generated using a machine translation model translating a second source text in a second source language to the target language (multi-directional speech-to-speech translation system receives and recognizes speech in language L2 to produce text in language L2, [0044], Fig 1, and machine translation module translates from language L2 to Ln, [0045], which is post-edited via correction and repair module, [0047], by the user to correct a word, i.e. post-edit the translation output, [0062-0063], [0065]); and causing the client device to perform one or more actions based on the second edited text (the second edited text is converted to speech via TTS module and output from speakers on the device, [0054]).

Waibel does not specifically mention: an automatic post-editing model; a neural machine translation model; and the post-editing model is trained for use in correcting translation errors. Pal discloses an automatic post-editing model (transformer-based multi-source APE, page 828, Section 2); a neural machine translation model (neural machine translation, page 827, Section 1, NMT Task, page 831, Section 3.3.2); and the post-editing model is trained for use in correcting translation errors (post editing model PBSMT is trained using real PE training data, Section 3.3.1, pages 830-831, the APE models correcting translation errors in MT output, page 827, Section 1).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Waibel by replacing the manual post-editing of Waibel with the automatic post-editing model trained for use in correcting translation errors of Pal and replacing the statistical MT model of Waibel with the neural machine translation model of Pal for reasons similar to those for claim 1.

Consider claim 3, Waibel discloses the first instance of text in the target language is generated using a first machine translation model and wherein the second instance of text in the target language is generated using a machine translation model (MT module, [0044]-[0045]). Waibel does not specifically mention: neural machine translation and a distinct second neural machine translation model. Pal discloses neural machine translation and a distinct second neural machine translation model (SS and MS models, Section 3.3.2, page 831). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Waibel by utilizing neural machine translation and a distinct second neural machine translation model for reasons similar to those for claim 1.

Consider claim 4, Waibel discloses the first instance of text in the target language is generated using a multilingual machine translation model and wherein the second instance of text in the target language is generated using the multilingual machine translation model (multilingual MT module, [0044]-[0045]). Waibel does not specifically mention: neural machine translation. Pal discloses neural machine translation (NMT Task, Section 3.3.2, page 831). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Waibel by utilizing neural machine translation for reasons similar to those for claim 1.
Consider claim 5, Waibel does not, but Pal discloses the one or more translation errors introduced by the neural machine translation model are one or more words incorrectly translated from the source language to the target language (word order errors, page 832, Section 4). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Waibel such that the one or more translation errors introduced by the neural machine translation model are one or more words incorrectly translated from the source language to the target language for reasons similar to those for claim 1.

Consider claim 6, Waibel does not, but Pal discloses the one or more translation errors introduced by the neural machine translation model are one or more words incorrectly translated from the source language to the target language (word order errors, page 832, Section 4). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Waibel such that the one or more translation errors introduced by the neural machine translation model are one or more words incorrectly translated from the source language to the target language for reasons similar to those for claim 1.

Consider claim 7, Waibel does not, but Pal discloses the multilingual automatic post-editing model is a transformer model that includes a transformer encoder and a transformer decoder (multi-source transformer-based APE includes encoders and decoder, Figure 1, page 829). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Waibel such that the multilingual automatic post-editing model is a transformer model that includes a transformer encoder and a transformer decoder for reasons similar to those for claim 1.

Consider claim 10, Waibel discloses the first instance of text in the target language is generated using a first machine translation model and wherein the second instance of text in the target language is generated using a machine translation model (MT module, [0044]-[0045]). Waibel does not specifically mention: neural machine translation and a distinct second neural machine translation model. Pal discloses neural machine translation and a distinct second neural machine translation model (SS and MS models, Section 3.3.2, page 831). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Waibel by utilizing neural machine translation and a distinct second neural machine translation model for reasons similar to those for claim 1.

Consider claim 11, Waibel discloses the first instance of text in the target language is generated using a multilingual machine translation model and wherein the second instance of text in the target language is generated using the multilingual machine translation model (multilingual MT module, [0044]-[0045]). Waibel does not specifically mention: neural machine translation. Pal discloses neural machine translation (NMT Task, Section 3.3.2, page 831). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Waibel by utilizing neural machine translation for reasons similar to those for claim 1.

Consider claim 12, Waibel does not, but Pal discloses the one or more translation errors introduced by the neural machine translation model are one or more words incorrectly translated from the source language to the target language (word order errors, page 832, Section 4). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Waibel such that the one or more translation errors introduced by the neural machine translation model are one or more words incorrectly translated from the source language to the target language for reasons similar to those for claim 1.

Consider claim 13, Waibel does not, but Pal discloses the one or more translation errors introduced by the neural machine translation model are one or more words incorrectly translated from the source language to the target language (word order errors, page 832, Section 4). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Waibel such that the one or more translation errors introduced by the neural machine translation model are one or more words incorrectly translated from the source language to the target language for reasons similar to those for claim 1.

Consider claim 14, Waibel does not, but Pal discloses the multilingual automatic post-editing model is a transformer model that includes a transformer encoder and a transformer decoder (multi-source transformer-based APE includes encoders and decoder, Figure 1, page 829). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Waibel such that the multilingual automatic post-editing model is a transformer model that includes a transformer encoder and a transformer decoder for reasons similar to those for claim 1.

Consider claim 17, Waibel discloses the first instance of text in the target language is generated using a first machine translation model and wherein the second instance of text in the target language is generated using a machine translation model (MT module, [0044]-[0045]). Waibel does not specifically mention: neural machine translation and a distinct second neural machine translation model. Pal discloses neural machine translation and a distinct second neural machine translation model (SS and MS models, Section 3.3.2, page 831). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Waibel by utilizing neural machine translation and a distinct second neural machine translation model for reasons similar to those for claim 1.

Consider claim 18, Waibel discloses the first instance of text in the target language is generated using a multilingual machine translation model and wherein the second instance of text in the target language is generated using the multilingual machine translation model (multilingual MT module, [0044]-[0045]). Waibel does not specifically mention: neural machine translation. Pal discloses neural machine translation (NMT Task, Section 3.3.2, page 831). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Waibel by utilizing neural machine translation for reasons similar to those for claim 1.

Consider claim 19, Waibel does not, but Pal discloses the one or more translation errors introduced by the neural machine translation model are one or more words incorrectly translated from the source language to the target language (word order errors, page 832, Section 4). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Waibel such that the one or more translation errors introduced by the neural machine translation model are one or more words incorrectly translated from the source language to the target language for reasons similar to those for claim 1.
Consider claim 20, Waibel does not, but Pal discloses the one or more translation errors introduced by the neural machine translation model are one or more words incorrectly translated from the source language to the target language (word order errors, page 832, Section 4). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Waibel such that the one or more translation errors introduced by the neural machine translation model are one or more words incorrectly translated from the source language to the target language for reasons similar to those for claim 1.

Claims 2, 9, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Waibel et al. (US 20110307241) in view of Pal et al. (“A Transformer-Based Multi-Source Automatic Post-Editing System,” Proceedings of the Third Conference on Machine Translation (WMT), Volume 2: Shared Task Papers, pages 827–835, Brussels, Belgium, October 31 - November 1, 2018), and further in view of Koh (US 20200152186).

Consider claim 2, Waibel and Pal do not, but Koh discloses causing the client device to perform one or more actions based on the edited text comprises: processing the edited text to determine one or more device actions of a device associated with the client device, wherein the device associated with the client device is a light, a thermostat, or a camera (correcting, i.e. editing “turn on the light above the refrigerator” to “kitchen light on”, [0011-0012], noting the claim language “or” only requiring one in the alternative); and causing the device to perform the one or more device actions (turning the kitchen light on, [0011]).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Waibel and Pal by causing the client device to perform one or more actions based on the edited text comprises: processing the edited text to determine one or more device actions of a device associated with the client device, wherein the device associated with the client device is a light, a thermostat, or a camera; and causing the device to perform the one or more device actions in order to allow control of devices even when a user does not know or remember the voice command, as suggested by Koh ([0011]), predictably resulting in an improved user experience in an unfamiliar situation such as a hotel room, as suggested by Koh ([0011]). The references cited are analogous art in the same field of post-editing.

Consider claim 9, Waibel and Pal do not, but Koh discloses causing the client device to perform one or more actions based on the edited text comprises: processing the edited text to determine one or more device actions of a device associated with the client device, wherein the device associated with the client device is a light, a thermostat, or a camera (correcting, i.e. editing “turn on the light above the refrigerator” to “kitchen light on”, [0011-0012]); and causing the device to perform the one or more device actions (turning the kitchen light on, [0011]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Waibel and Pal by causing the client device to perform one or more actions based on the edited text comprises: processing the edited text to determine one or more device actions of a device associated with the client device, wherein the device associated with the client device is a light, a thermostat, or a camera; and causing the device to perform the one or more device actions for reasons similar to those for claim 2.

Consider claim 16, Waibel and Pal do not, but Koh discloses causing the client device to perform one or more actions based on the edited text comprises: processing the edited text to determine one or more device actions of a device associated with the client device, wherein the device associated with the client device is a light, a thermostat, or a camera (correcting, i.e. editing “turn on the light above the refrigerator” to “kitchen light on”, [0011-0012]); and causing the device to perform the one or more device actions (turning the kitchen light on, [0011]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Waibel and Pal by causing the client device to perform one or more actions based on the edited text comprises: processing the edited text to determine one or more device actions of a device associated with the client device, wherein the device associated with the client device is a light, a thermostat, or a camera; and causing the device to perform the one or more device actions for reasons similar to those for claim 2.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
US 20170323203 (Matusov) discloses using meta-information in neural machine translation.
US 9606988 (Andreoli) discloses predicting the quality of automatic translation of an entire document.
US 10235362 (Boynes) discloses continuous translation refinement with automated delivery of re-translated content.
US 20170286376 (Mugan) discloses neural grammar checking using an encoder and decoder.
US 10248651 (Fuerstenau) discloses separating translation correction post-edits from content improvement post-edits in machine translated content.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Jesse Pullias, whose telephone number is 571/270-5135. The examiner can normally be reached on M-F 8:00 AM - 4:30 PM. The examiner's fax number is 571/270-6135.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Andrew Flanders, can be reached on 571/272-7516.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Jesse S Pullias/
Primary Examiner, Art Unit 2655
03/10/26

Prosecution Timeline

Jul 15, 2024
Application Filed
Mar 10, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596885: Automatically Labeling Items using a Machine-Trained Language Model
2y 5m to grant • Granted Apr 07, 2026

Patent 12573378: Speech Tendency Classification
2y 5m to grant • Granted Mar 10, 2026

Patent 12572740: Multi-Language Document Field Extraction
2y 5m to grant • Granted Mar 10, 2026

Patent 12566929: Combining Data Selection and Reward Functions for Tuning Large Language Models using Reinforcement Learning
2y 5m to grant • Granted Mar 03, 2026

Patent 12536389: Translation System
2y 5m to grant • Granted Jan 27, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 83%
With Interview: 96% (+13.0%)
Median Time to Grant: 2y 8m
PTA Risk: Low

Based on 1052 resolved cases by this examiner. Grant probability derived from career allow rate.
