DETAILED ACTION
This Action is responsive to claims filed 09/16/2025.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 09/16/2025 has been entered.
Status of the Claims
Claims 1 and 11 have been amended. Claims 1-3, 5-13, and 15-20 are pending.
Response to Arguments
Applicant's arguments, see Pages 6-11, filed 09/16/2025, with regard to the 35 U.S.C. 101 Rejection of claims 1-3, 5-13, and 15-20 have been fully considered but they are not persuasive.
The Applicant argues that the current drafting of the claims is analogous to Example 39; the Examiner respectfully disagrees. Example 39 pertains to the collection and transformation of digital facial images and the subsequent training of a neural network, none of which is practically performed within the human mind or with the aid of pen and paper. Conversely, there is no specific structure or implementation precluding the independent claims’ series of “determining…” steps from being practically performed within the human mind or with the aid of pen and paper. Determining a sequence based on averages calculated from scores can be practically performed within the human mind or with the aid of pen and paper. If the Applicant’s proposed improvement comes from the output determined by this order, the Examiner reminds the Applicant that, per MPEP 2106.05(a), the specific improvement cannot come from the abstract idea itself, but rather must come from an additional element integrating the abstract idea into a practical application. See the updated 35 U.S.C. 101 Rejection below.
Applicant’s arguments, see Pages 6-11, filed 09/16/2025, with regard to the 35 U.S.C. 102(a)(1) Rejection of claims 1-7, 10-17, and 20 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of 35 U.S.C. 103.
Claim Rejections - 35 USC § 101
The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.
Claims 1-3, 5-13, and 15-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more, and because the claims as a whole, considering all claim elements both individually and in combination, do not amount to significantly more than the abstract idea. See Alice Corporation Pty. Ltd. v. CLS Bank International, et al., 573 U.S. 208 (2014). In determining whether the claims are subject matter eligible, the Examiner applies the 2019 USPTO Patent Eligibility Guidelines. (2019 Revised Patent Subject Matter Eligibility Guidance, 84 Fed. Reg. 50, Jan. 7, 2019.)
Step 1:
Claims 1-3 and 5-10 recite a computer-implemented method for translating a source sequence with improved quality, which falls under the statutory category of a process. Claims 11-13 and 15-20 recite a computing system, comprising: one or more processors and one or more non-transitory, computer-readable media storing instructions, which falls under the statutory category of a machine.
Step 2A – Prong 1:
Claim 1 recites an abstract idea. The limitations of “determining, by the computing system, a plurality of reference utilities for each candidate output by a neural utility metric model and based on a reference set comprising a plurality of reference translations, the neural utility metric model configured to determine a utility of a candidate translation based at least in part on a reference translation, wherein the plurality of reference utilities comprise a plurality of neural metrics output by the neural utility metric model;”, “determining, by the computing system, an average utility of each candidate output of the plurality of candidate outputs based at least in part on the plurality of reference utilities;”, “determining, by the computing system, respective utility scores output by the neural utility metric model for each candidate output as the candidate translation paired with each of the other candidate outputs as the reference translation;”, “and averaging the respective utility scores respective to each candidate output to determine the average utility of each candidate output;”, and “and determining, by the computing system, an output sequence based at least in part on the average utility of each candidate output of the plurality of candidate outputs.”, under the broadest reasonable interpretation, cover a mental process, including an observation, evaluation, judgment, or opinion that could be performed in the human mind or with the aid of pencil and paper. These limitations therefore fall within the mental process grouping.
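By way of a purely hypothetical illustration (the candidates and scores below are invented for illustration and appear nowhere in the claims or the record), the recited pairwise scoring and averaging amounts to arithmetic of the following kind, readily performed with pencil and paper:

```latex
\text{Candidates } A, B, C;\quad
u(A;B)=0.8,\ u(A;C)=0.6,\ u(B;A)=0.8,\ u(B;C)=0.4,\ u(C;A)=0.6,\ u(C;B)=0.4
\\
\bar{u}(A)=\tfrac{0.8+0.6}{2}=0.7,\qquad
\bar{u}(B)=\tfrac{0.8+0.4}{2}=0.6,\qquad
\bar{u}(C)=\tfrac{0.6+0.4}{2}=0.5
\\
\text{output sequence}=\arg\max\nolimits_{c}\,\bar{u}(c)=A
```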
Step 2A – Prong 2:
The additional elements of claim 1 do not integrate the abstract idea into a practical application. Claim 1 recites the additional elements “a computing system”, “computing devices”, “outputs”, “a source sequence”, “reference utilities”, and “reference translations”, which are recognized as generic computer components recited at a high level of generality. Although the computing system stores and executes instructions to perform the abstract idea itself, this does not serve to integrate the abstract idea into a practical application, as it merely amounts to instructions to "apply it." (See MPEP 2106.04(d)(2), indicating that mere instructions to apply an abstract idea do not amount to integrating the abstract idea into a practical application).
The additional element recited in the limitations, “a neural utility metric model”, is recognized as a non-generic computer component; however, it is found to generally link the abstract idea to a particular technological environment or field of use (See MPEP 2106.05(h)).
The limitation of “obtaining, by a computing system comprising one or more computing devices, a plurality of candidate outputs based at least in part on a source sequence;” is recognized as insignificant pre- or post-solution activity, a mere data gathering or transmittal step recited at a high level of generality without significantly more (See MPEP 2106.05(g)).
Step 2B:
The only limitations on the performance of the described method beyond the abstract idea are those reciting “a computing system”, “computing devices”, “outputs”, “a source sequence”, “reference utilities”, and “reference translations”. These elements are insufficient to transform a judicial exception into a patentable invention because the recited elements are considered insignificant extra-solution activity (a generic computer system and processing resources that link the judicial exception to a particular technological environment). The claim thus recites computing components only at a high level of generality, such that it amounts to no more than mere instructions to apply the exception using generic computer components; mere instructions to apply an exception using a generic computer component cannot provide an inventive concept (see MPEP 2106.05(f)).
The additional element recited in the limitations, “a neural utility metric model”, is recognized as a non-generic computer component; however, it is found to generally link the abstract idea to a particular technological environment or field of use (See MPEP 2106.05(h)).
The limitation of “obtaining, by a computing system comprising one or more computing devices, a plurality of candidate outputs based at least in part on a source sequence;” is recognized as well-understood, routine, or conventional activity (See the WURC examples at MPEP 2106.05(d)(II)(i), first list).
Taken alone or in ordered combination, these additional elements do not amount to significantly more than the above-identified abstract idea. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely provide conventional computer implementation.
For the reasons above, claim 1 is rejected as being directed to non-patentable subject matter under §101. This rejection applies equally to independent claim 11.
Claim 11 recites “A computing system, comprising: one or more processors; and one or more non-transitory, computer-readable media storing instructions that, when implemented, cause the one or more processors to perform operations, the operations comprising:” (generic computer components). The additional elements found in claim 1 and repeated in claim 11 are addressed as in the analysis of claim 1 above; the non-generic “neural utility metric model” is found to generally link the abstract idea to a particular technological environment or field of use (See MPEP 2106.05(h)).
Dependent Claims:
Claim 2 (Claim 12) recites refinements to the “obtaining…” abstract idea of claim 1 with a mental process step, “inputting, by the computing system, the source sequence into a machine-learned translation model configured to estimate a probability of a target segment given a source segment;”, as well as a mere data transmittal, extra-solution activity step, “receiving, by the computing system, the plurality of candidate outputs as output from the machine-learned translation model.”
Claim 3 (Claim 13) recites refinements to the additional elements of claims 1 and 2. “a transformer model” is found to generally link the abstract idea to a particular technological environment or field of use (See MPEP 2106.05(h)).
Claim 5 (Claim 15) recites refinements to the “determining…” mental process step of claim 1 with a further mental process step, “…selecting the candidate output of the plurality of candidate outputs with the highest average utility as the output sequence.”
Claim 6 (Claim 16) recites a data type.
Claim 7 (Claim 17) recites a data type.
Claim 8 (Claim 18) recites additional elements found to generally link the abstract idea to a particular technological environment or field of use (See MPEP 2106.05(h)).
Claim 9 (Claim 19) recites additional elements found to generally link the abstract idea to a particular technological environment or field of use (See MPEP 2106.05(h)).
Claim 10 (Claim 20) recites refinements to the additional elements.
Claim Rejections - 35 USC § 103
The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-3, 5-13, and 15-20 are rejected under 35 U.S.C. 103 as being unpatentable over Eikema et al. (Is MAP Decoding All You Need? The Inadequacy of the Mode in Neural Machine Translation, 2020), hereinafter Eikema, and Federmann et al. (To Ship or Not to Ship: An Extensive Evaluation of Automatic Metrics for Machine Translation, 2021), hereinafter Federmann.
In regards to claim 1: The present invention claims: “A computer-implemented method for translating a source sequence with improved quality, the computer-implemented method comprising:” Eikema highlights and analyzes several machine translation decoding methods for quality (Abstract).
“obtaining, by a computing system comprising one or more computing devices, a plurality of candidate outputs based at least in part on a source sequence;” See Eikema Sections 7.1-7.4, where Eikema evaluates Minimum Bayes Risk (MBR) decoding (Section 7.1: “NMT, by the nature of its model specification, assigns probability mass to each and every possible sequence consisting of tokens in its vocabulary. Ideally, however, a well-trained NMT model assigns the bulk of its probability mass to good translations of the input sequence. We take 1,000 unbiased samples from the model for each input sequence and count the cumulative probability mass of the unique translation sampled.”), mapping the taking of samples from a probability mass spread over translations to the “candidate outputs”.
“determining, by the computing system, a plurality of reference utilities for each candidate output by a neural utility metric model and based on a reference set comprising a plurality of reference translations, the neural utility metric model configured to determine a utility of a candidate translation based at least in part on a reference translation;” See Eikema Sections 7.1-7.4, where Eikema evaluates MBR decoding. In Section 7.3, METEOR is calculated multiple times across experiments for an average, with METEOR mapped to the claimed utility metric model; Section 7.1, covering evaluation both within a language and across multiple languages, reads on the “plurality of reference translations”.
“determining, by the computing system, an average utility of each candidate output of the plurality of candidate outputs based at least in part on the plurality of reference utilities;” See Eikema Sections 7.1-7.4, where Eikema evaluates MBR decoding; Table 1 and Section 7.4 read on the “average utility of each candidate output”.
“wherein determining the average utility of each candidate output comprises: determining, by the computing system, respective utility scores output by the neural utility metric model for each candidate output as the candidate translation paired with each of the other candidate outputs as the reference translation;” See Eikema Sections 7.1-7.4, where Eikema evaluates MBR decoding: “Next, we investigate the performance we would achieve if we could select the best sample from a set. For that, we employ an oracle selection procedure using sentence-level METEOR with the reference translation to select the best sample from a set of samples. We vary sample size from 5 to 30 samples and repeat each experiment four times. Figure 3 plots the results in terms of corpus-level METEOR. Average METEOR scores for oracle selection out of 30 samples are shown in Table 1.” (Section 7.3, Page 4513) (determining utility scores based on each output against the reference).
“and averaging the respective utility scores respective to each candidate output to determine the average utility of each candidate output;” See Eikema Sections 7.1-7.4, where Eikema evaluates MBR decoding: “Figure 2 shows the average cumulative probability mass for all test sentences with 1 standard deviation around it, as well as the final cumulative probability values for each input sequence.” (averaging the reference utilities).
“and determining, by the computing system, an output sequence based at least in part on the average utility of each candidate output of the plurality of candidate outputs.” See Eikema Sections 7.1-7.4, where Eikema evaluates MBR decoding (Section 7.4, mapping MBR’s selection of a maximum-utility translation to determining an output sequence based on average utility).
While Eikema teaches the above, Eikema fails to explicitly teach “, wherein the plurality of reference utilities comprise a plurality of neural metrics output by the neural utility metric model;” (METEOR being a non-neural metric, in contrast with the instant application). However, Federmann (Table 2) analyzes the use of neural metrics such as BLEURT and COMET.
Federmann shows in Tables 2, 3, 4, and 6 the performance benefits of metrics such as BLEURT and COMET over multiple other metrics, including METEOR; see Section 5, especially Page 5, right column, and Page 6, right column. Section 6, Page 9, right column, shows BLEURT and COMET directly performing better than METEOR for machine translation. It would have been obvious to one of ordinary skill in the art at the time of the Applicant’s filing to use better-performing metrics such as BLEURT or COMET, rather than METEOR, to realize known benefits from known methods in a system such as the one taught in Eikema.
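For context only, the MBR selection procedure described in the passages of Eikema cited above can be sketched as follows. This sketch forms no part of the claims or the cited references, and the `token_overlap` utility is an invented stand-in for a learned metric such as BLEURT or COMET:

```python
# Illustrative sketch of Minimum Bayes Risk (MBR) candidate selection:
# each candidate is scored against every other candidate treated as a
# pseudo-reference, the scores are averaged, and the candidate with the
# highest average utility is selected as the output sequence.
# Assumes at least two candidates are provided.

def mbr_select(candidates, utility):
    """Return the candidate with the highest average pairwise utility."""
    best, best_avg = None, float("-inf")
    for hyp in candidates:
        others = [ref for ref in candidates if ref is not hyp]
        avg = sum(utility(hyp, ref) for ref in others) / len(others)
        if avg > best_avg:
            best, best_avg = hyp, avg
    return best

# Toy utility: Jaccard overlap of tokens (a stand-in for a learned metric).
def token_overlap(hyp, ref):
    h, r = set(hyp.split()), set(ref.split())
    return len(h & r) / max(len(h | r), 1)
```

In this framing, the pairwise scoring corresponds to the claimed “respective utility scores”, the per-candidate mean to the “average utility”, and the final selection to “determining … an output sequence”.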
In regards to claim 2: The present invention claims: “wherein obtaining the plurality of candidate outputs comprises: inputting, by the computing system, the source sequence into a machine-learned translation model configured to estimate a probability of a target segment given a source segment;” Eikema teaches how a typical NMT model receives source sequences as input (Introduction and Section 3). Eikema also teaches “We estimate expected utility using S = 30 ancestral samples, and use the translations we sample to make up an approximation to H(x).” (Section 7.4, mapping to obtaining source sequences for input into METEOR).
“and receiving, by the computing system, the plurality of candidate outputs as output from the machine-learned translation model.” See above, where Eikema teaches using MBR and multiple samples within a set of possible translations (candidate outputs) to arrive at a maximized-utility output.
In regards to claim 3: The present invention claims: “wherein the machine-learned translation model comprises a transformer model.” Eikema uses an NMT model for machine translation with sequence data (Abstract, Introduction, Section 5).
In regards to claim 5: The present invention claims: “wherein determining the output sequence comprises selecting the candidate output of the plurality of candidate outputs with the highest average utility as the output sequence.” Eikema teaches “For a given utility function u(y; h), which assesses a hypothesis h against a reference y, statistical decision theory…prescribes that the optimum decision y* is the one that maximises expected utility” (Section 7.4).
In regards to claim 6: The present invention claims: “wherein the source sequence comprises text data comprising one or more sentences.” Eikema Section 7.1 reads on the test data being comprised of sentences.
In regards to claim 7: The present invention claims: “wherein the output sequence comprises a translation of the text data.” Eikema reads on neural machine translation and translation of text (Abstract).
In regards to claim 8: The present invention claims: “wherein the neural utility metric model comprises a BLEURT metric.” While Eikema does teach the use of METEOR as a metric model, it fails to teach the use of BLEURT. However, based on the analysis done in Federmann (Table 2), it would have been obvious to one of ordinary skill in the art at the time of the applicant’s filing to use a metric model such as BLEURT, given its performance.
In regards to claim 9: The present invention claims: “wherein the neural utility metric model comprises a COMET metric.” While Eikema does teach the use of METEOR as a metric model, it fails to teach the use of COMET. However, based on the analysis done in Federmann (Table 2), it would have been obvious to one of ordinary skill in the art at the time of the applicant’s filing to use a metric model such as COMET, given its performance.
In regards to claim 10: The present invention claims: “wherein the reference set comprises the plurality of candidate outputs.” See above where Eikema teaches the use of ancestral samples.
In regards to claims 11-13 and 15-20: Claims 11-13 and 15-20 recite limitations similar to those of claims 1-3 and 5-10, save for the recitation of “A computing system, comprising: one or more processors; and one or more non-transitory, computer-readable media storing instructions that, when implemented, cause the one or more processors to perform operations…” However, given that the claimed system relies on the methods of claims 1-3 and 5-10, both sets of claims are similarly rejected.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to GRIFFIN T BEAN whose telephone number is (703)756-1473. The examiner can normally be reached M - F 7:30 - 4:30.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Li Zhen, can be reached at (571) 272-3768. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/GRIFFIN TANNER BEAN/Examiner, Art Unit 2121
/Li B. Zhen/Supervisory Patent Examiner, Art Unit 2121