Prosecution Insights
Last updated: April 19, 2026
Application No. 18/534,384

TRAVEL-SPECIFIC NATURAL LANGUAGE PROCESSING SYSTEM

Status: Final Rejection (§103)
Filed: Dec 08, 2023
Examiner: VOGT, JACOB BUI
Art Unit: 2653
Tech Center: 2600 — Communications
Assignee: Expedia Inc.
OA Round: 2 (Final)
Grant Probability: 57% (Moderate)
OA Rounds: 3-4
To Grant: 2y 10m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 57% (4 granted / 7 resolved; -4.9% vs TC avg) — grants 57% of resolved cases
Interview Lift: +100.0% (strong) — allowance with vs. without an interview, among resolved cases with an interview
Avg Prosecution: 2y 10m (typical timeline); 33 applications currently pending
Total Applications: 40 (career history, across all art units)

Statute-Specific Performance

§101: 35.1% (-4.9% vs TC avg)
§103: 43.8% (+3.8% vs TC avg)
§102: 8.7% (-31.3% vs TC avg)
§112: 10.6% (-29.4% vs TC avg)
TC avg = Tech Center average estimate • Based on career data from 7 resolved cases

Office Action

§103
DETAILED ACTION

This communication is in response to the Amendments and Arguments filed on 12/29/2025. Claims 1-24 are pending and have been examined. Hence, this action has been made FINAL.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Applicant claims the benefit of US Provisional Application No. 63/486,746, filed February 24, 2023. Claims 1-24 have been afforded the benefit of this filing date.

Claim Interpretation

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification. The following terms in the claims have been given the following interpretations in light of the specification: Noisy text data: ¶ [0024], “travel data may include data in a conversational format (e.g., grammatical errors, punctuation or capitalization errors, spelling errors, formatting errors, etc.). Such conversational data is sometimes referred to herein as "noisy" data.” Thus, noisy text data may include conversational or error-prone text. This definition is used for purposes of searching for prior art, but cannot be incorporated into the claims. Should Applicant wish different definitions, Applicant should point to the portions of the specification that clearly show a different definition.

Response to Arguments

The reply filed on 12/29/2025 has been entered. Applicant’s arguments with respect to claims 1-24 have been considered but are moot in view of new ground(s) of rejection caused by the amendments. With respect to Applicant’s arguments regarding the claim rejections under 35 U.S.C. § 101, Applicant has amended each of the independent claims and asserts that “The human mind is not practically capable of training or tuning parameters of a travel-specific natural language machine-learning model.” The examiner agrees that these newly added limitations overcome the rejection under 35 U.S.C. § 101. With respect to Applicant’s arguments regarding the claim rejections under 35 U.S.C. § 103, Applicant’s arguments with respect to claims 1-24 have been considered but are moot in view of the new ground(s) of rejection caused by the amendments.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-5, 7, 8, 12-16, 18, 19, and 23 are rejected under 35 U.S.C. 103 as obvious over "Pre-training Language Model Incorporating Domain-specific Heterogeneous Knowledge into A Unified Representation" (Zhu et al.) in view of US Patent Publication US 20250335720 A1 (Oh et al.), and further in view of "Parameter-Efficient Transfer Learning for NLP" (Houlsby et al.).
Claim 1 Regarding claim 1, Zhu et al. disclose a method comprising: maintaining, [by one or more processors,] a first dataset having a [standardized text format] and a second dataset [having a noisy text format], each of the first dataset and the second dataset corresponding to a travel-specific lexicon (Zhu et al. pg. 2, Section 1, Paragraph 6, "The main contributions of this paper are as follows: ... We construct 4 datasets for evaluating downstream tourism NLP tasks." See Table 5 for the list of travel-specific datasets); generating, [by the one or more processors,] training data corresponding to a first training objective and a second training objective (Zhu et al. pg. 2, Section 1, Paragraph 5, "We combine three objective functions to jointly pre-train the multi-format text. For unstructured text, we adopt the masked language model (MLM) objective to train the domain adaption model. For semi-structured text, we propose title matching training (TMT) to classify whether the title matches the paragraph. For well-structured text, we propose a triple classification (TC) task to classify whether the knowledge triple is modified.") based on the first dataset and the second dataset (Zhu et al. pg. 2, Section 1, Paragraph 5, "To obtain the aforementioned multi-format text, we construct a corpus in the tourism domain and pre-train our TravelBERT." See Table 5 for tourism corpora); training, [by the one or more processors,] a travel-specific natural language machine-learning model using the training data according to the first and second training objectives (Zhu et al. pg. 5, Section 2.3, Paragraph 1, "For the three tasks, we can optimize the combined objective function min_θ L = Σ_{i=1}^{|D|} (L_i^mlm + λ·L_i^tc + μ·L_i^tmt), where L_i^mlm, L_i^tc, and L_i^tmt are the objectives of three tasks respectively. ... θ is the model parameter."); receiving, by the one or more processors, [additional] text data having the noisy text format from one or more data sources, the [additional] text data corresponding to the travel-specific lexicon (Zhu et al. pg. 11, Appendix, Table 5, "[KdConv] is developed by (Zhou et al., 2020) through crowdsourcing to generate multi-turn conversations related to the domain-specific knowledge graph. ... Considering the scope of this paper, we only use the travel domain dataset for evaluation."); and tuning, by the one or more processors, a second set of parameters of [at least one adapter layer of] the travel-specific natural language machine-learning model using the [updated] training data (Zhu et al. pg. 5, Section 2.4, Paragraph 1, "When fine-tuning downstream tasks, our model does not need to change the input text because the model can learn heterogeneous knowledge in the pretraining stage and learn better parameters, like GPT-3 (Brown et al., 2020) and WKLM (Xiong et al., 2020). Then the model uses the learned knowledge (parameters) to better solve downstream tasks." Learned parameters are considered analogous to a second set of parameters). Zhu et al. do not explicitly disclose all of updating datasets. However, Oh et al. disclose a method, comprising: maintaining, by one or more processors (Oh et al. 
¶ [0115], "CPU 990 fetches an instruction from RAM 998 at an address indicated by a register therein (not shown) referred to as a program counter, interprets the instruction, reads data necessary to execute the instruction from RAM 998, SSD 1000 or from other device in accordance with an address specified by the instruction, and executes a process designated by the instruction."), a first dataset having a standardized text format and a second dataset having a noisy text format (Oh et al. ¶ [0045], "fourth storage 126 [stores] the noise-added word sequence/HIRAGANA pairs output from noise-adding unit 124 and the original word sequence/HIRAGANA pairs before adding the noise, respectively." Original word sequence pairs are considered analogous to standardized text. Noise-added word sequence pairs are considered analogous to noisy text.), [each of the first dataset and the second dataset corresponding to a travel-specific lexicon]; generating, by the one or more processors, training data corresponding to a first training objective and a second training objective (Oh et al. ¶ [0051], "In the pre-training according to the present embodiment, MLM and NSP (Next Sentence Prediction), both well-known as the manner of pre-training BERT, are used. As shown in FIG. 5, in the present embodiment, in MLM 226, both the word sequence and the phonetic letter sequence are masked, and BERT 170 is trained through inference of word or reading of the masked portion.") based on the first dataset and the second dataset (Oh et al. ¶ [0045], "noise-adding unit 124 [adds] noise to each of the word sequence/phonetic letter sequence pairs stored in the second storage 115 and [outputs] the noise-added pairs as noise-added word sequence/HIRAGANA pairs; and fourth storage 126 [stores] the noise-added word sequence/HIRAGANA pairs output from noise-adding unit 124 and the original word sequence/HIRAGANA pairs before adding the noise, respectively."); training, by the one or more processors, [a first set of parameters of] a travel-specific natural language machine-learning model (Oh et al. ¶ [0049], "in order to commonly represent the pre-trained language model 122 and additionally pre-trained language model 134, the language model as the object of training is represented by BERT 170.") using the training data (Oh et al. ¶ [0076], "In the embodiment above, first, BERT is pre-trained to obtain pre-trained language model 122. ... The entire pre-training may be done by using noise-added training data." Pre-training is considered analogous to training a first set of parameters) according to the first and second training objectives (Oh et al. ¶ [0051], "In the pre-training according to the present embodiment, MLM and NSP (Next Sentence Prediction), both well-known as the manner of pre-training BERT, are used."); receiving, by the one or more processors, additional text data having the noisy text format from one or more data sources (Oh et al. ¶ [0084]-[0086], "Referring to FIG. 11, this example 450 assumes a situation in which a robot asks an elderly person about living conditions. ... In this example, an answer, for which a YES/NO response is expected, is given and the response thereto is classified into any of five categories ... 
labels corresponding to these five categories may be added to the noise-added training data for fine-tuning."), [the additional text data corresponding to the travel-specific lexicon]; updating, by the one or more processors, the training data in real-time or near real-time using the additional text data (Oh et al. ¶ [0046], "additional pre-training data generator 128 [generates] training data for additional pre-training from each of the word sequence/phonetic letter sequence pairs stored in the fourth storage 126; and fifth storage 130 for storing the training data generated by additional pre-training data generator 128." Storing additional pre-training data in fifth storage 130 is considered analogous to updating training data); and tuning, by the one or more processors, [a second set of parameters of at least one adapter layer of] the [travel-specific] natural language machine-learning model using the updated training data (Oh et al. ¶ [0078], "The training data to which noise is added by the same method as in the first embodiment may be used for fine-tuning in order to adapt a pre-trained language model to a specific application, rather than for the additional pre-training." Fine-tuning a pre-trained language model is considered analogous to tuning a second set of parameters of a natural language ML model using updated training data). It would have been obvious to a person having ordinary skill in the art before the time of the effective filing date of the claimed invention of the instant application to modify Zhu et al.’s travel-specific natural language processing system to include Oh et al.’s dataset updating because such a modification is the result of combining prior art elements according to known methods to yield predictable results. More specifically, Zhu et al.’s static datasets as modified by Oh et al.’s dataset updating can yield a predictable result of improving model usability since fine-tuning on up-to-date data will help the model stay relevant to current times. Thus, a person of ordinary skill would have appreciated including in Zhu et al.’s travel-specific natural language processing system the ability to do Oh et al.’s dataset updating since the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. Zhu et al. in view of Oh et al. do not disclose all of tuning an adapter layer. However, Houlsby et al. disclose training, by the one or more processors, a first set of parameters of a travel-specific natural language machine-learning model (Houlsby et al. pg. 3, Section 3.1, Paragraph 2, "Our training procedure also follows Devlin et al. (2018). We optimize using Adam (Kingma & Ba, 2014), whose learning rate is increased linearly over the first 10% of the steps, and then decayed linearly to zero. All runs are trained on 4 Google Cloud TPUs with a batch size of 32.") [using the training data according to the first and second training objectives]; receiving, by the one or more processors, additional text data having the noisy text format (Houlsby et al. pg. 5, Table 2. The dataset "Crowdflower corporate messaging" is considered analogous to additional text data corresponding to having noisy text format) from one or more data sources (Houlsby et al. pg. 
4, Section 3.3, Paragraph 1, "To further validate that adapters yields compact, performant, models, we test on additional, publicly available, text classification tasks."), the additional text data corresponding to the travel-specific lexicon (Houlsby et al. pg. 5, Table 2. The dataset "Crowdflower airline" is considered analogous to additional text data corresponding to travel-specific lexicon); updating, by the one or more processors, the training data in real-time or near real-time using the additional text data (Houlsby et al. pg. 4, Section 3.3, Paragraph 1, "To further validate that adapters yields compact, performant, models, we test on additional, publicly available, text classification tasks." Pulling additional classification tasks is considered analogous to updating training data); and tuning, by the one or more processors, a second set of parameters of at least one adapter layer of the [travel-specific] natural language machine-learning model using the updated training data (Houlsby et al. pg. 2, Section 2, Paragraph 2, "Adapter modules perform more general architectural modifications to re-purpose a pretrained network for a downstream task. In particular, the adapter tuning strategy involves injecting new layers into the original network. The weights of the original network are untouched, whilst the new adapter layers are initialized at random." pg. 3, Figure 2, "During adapter tuning, the green layers are trained on the downstream data, this includes the adapter, the layer normalization parameters, and the final classification layer (not shown in the figure)." See Figure 2 for adapter layer architecture). It would have been obvious to a person having ordinary skill in the art before the time of the effective filing date of the claimed invention of the instant application to modify Zhu et al. in view of Oh et al. to incorporate Houlsby et al.’s adapter layer. The suggestion/motivation for doing so would have been that, “Adapters attain near state-of-the-art performance, whilst adding only a few parameters per task,” as noted by Houlsby et al. in the abstract. Claim 2 Regarding claim 2, the rejection of claim 1 is incorporated. Zhu et al. in view of Oh et al. in view of Houlsby et al. disclose all the elements of the claimed invention as stated above. Zhu et al. further disclose tuning, by the one or more processors, the travel-specific natural language machine-learning model using an additional training set according to a third training objective (Zhu et al. pg. 2, Section 1, Paragraph 5, "We combine three objective functions to jointly pre-train the multi-format text. For unstructured text, we adopt the masked language model (MLM) objective to train the domain adaption model. For semi-structured text, we propose title matching training (TMT) to classify whether the title matches the paragraph. For well-structured text, we propose a triple classification (TC) task to classify whether the knowledge triple is modified." Well-structured text is considered analogous to the additional training set. The triple classification (TC) task is considered analogous to a third training objective.). Claim 3 Regarding claim 3, the rejection of claim 1 is incorporated. Zhu et al. in view of Oh et al. in view of Houlsby et al. disclose all the elements of the claimed invention as stated above. Zhu et al. 
further disclose wherein maintaining the first dataset comprises scraping, by the one or more processors, a plurality of webpages hosted by one or more second data sources to retrieve data having the standardized text format (Zhu et al. pg. 7, Section 4.1.2, Paragraph 3, "For the TravelQA dataset, we use a crawler to get user questions from the travel guide channel of ctrip.com. We select the answer adopted by the questioner as the gold answer from the replies, and the negative answers come from replies to other questions." A Q&A structure is considered analogous to a standardized text format.). Claim 4 Regarding claim 4, the rejection of claim 1 is incorporated. Zhu et al. in view of Oh et al. in view of Houlsby et al. disclose all the elements of the claimed invention as stated above. Zhu et al. further disclose wherein maintaining the second dataset comprises accessing, by the one or more processors, one or more chat logs, social media sources, or peer-to-peer communications maintained by the one or more data sources to retrieve the additional text data having the noisy text format (Zhu et al. pg. 6, Section 3.1, Paragraph 2, “For downstream tourism NLP tasks, due to the lack of sufficient evaluation datasets in the tourism domain, we adopt a well-known dialogue dataset KdConv (Zhou et al., 2020) and construct 4 tourism NLP datasets.”; pg. 11, Section A, Table 5, “[KdConv] is developed by (Zhou et al., 2020) through crowdsourcing to generate multi-turn conversations related to the domain-specific knowledge graph.” Crowdsourced multi-turn conversations are considered analogous to peer-to-peer communication. Conversation data is considered analogous to noisy text, as per ¶ [0024]-[0027] in the specification of the instant application). Claim 5 Regarding claim 5, the rejection of claim 1 is incorporated. Zhu et al. in view of Oh et al. in view of Houlsby et al. disclose all the elements of the claimed invention as stated above. Oh et al. further disclose wherein the first training objective is masked-language modeling (MLM) and the second training objective is next sentence prediction (NSP) (Oh et al. ¶ [0051], "In the pre-training according to the present embodiment, MLM and NSP (Next Sentence Prediction), both well-known as the manner of pre-training BERT, are used."). Claim 7 Regarding claim 7, the rejection of claim 1 is incorporated. Zhu et al. in view of Oh et al. in view of Houlsby et al. disclose all the elements of the claimed invention as stated above. Zhu et al. further disclose wherein training the travel-specific natural language machine-learning model comprises: training, by the one or more processors, using the training data, the travel-specific natural language machine-learning model according to the first training objective and the second training objective concurrently (Zhu et al. pg. 6, Section 3.3, Paragraph 1, "For the three tasks, we can optimize the combined objective function: min_θ L = Σ_{i=1}^{|D|} (L_i^mlm + λ·L_i^tc + μ·L_i^tmt)"). Claim 8 Regarding claim 8, the rejection of claim 1 is incorporated. Zhu et al. in view of Oh et al. in view of Houlsby et al. disclose all the elements of the claimed invention as stated above. Zhu et al. further disclose wherein the travel-specific natural language machine-learning model comprises a deep learning model (Zhu et al. pg. 4, Section 3.1, Paragraph 2, "Given that BERT is a representative PLM, all studies in this paper use BERT as the backbone." BERT is considered analogous to a deep learning model). 
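
To make the technical distinction at the heart of the claim 1 rejection concrete, the following is a minimal, hypothetical sketch, not taken from the Office Action or from any cited reference, of the two regimes the examiner maps onto the claim language: pre-training a first set of backbone parameters against a combined multi-objective loss (as the examiner reads Zhu et al. and Oh et al.), versus tuning only a second set of adapter-layer parameters while the backbone stays frozen (as the examiner reads Houlsby et al.). All module names, dimensions, and hyperparameters below are assumptions.

```python
# Illustrative sketch only; not the claimed method or any cited reference's code.
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Small residual bottleneck layer in the style described by Houlsby et al."""
    def __init__(self, hidden: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden, bottleneck)
        self.up = nn.Linear(bottleneck, hidden)

    def forward(self, x):
        # Residual connection: the frozen backbone's representation passes through.
        return x + self.up(torch.relu(self.down(x)))

class TravelLM(nn.Module):
    """Toy stand-in for a travel-specific language-model backbone (hypothetical)."""
    def __init__(self, vocab: int = 30522, hidden: int = 256):
        super().__init__()
        self.embed = nn.Embedding(vocab, hidden)
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(hidden, nhead=4, batch_first=True),
            num_layers=2)
        self.adapter = Adapter(hidden)            # the "second set" of parameters
        self.mlm_head = nn.Linear(hidden, vocab)

    def forward(self, ids):
        h = self.encoder(self.embed(ids))
        return self.mlm_head(self.adapter(h))

model = TravelLM()

# (a) Pre-training: every backbone parameter (the "first set") is trainable, and the
# per-batch loss combines several objectives, echoing a weighted sum of the form
# L = sum_i (L_mlm + lambda * L_tc + mu * L_tmt).
lam, mu = 1.0, 1.0
pretrain_opt = torch.optim.Adam(model.parameters(), lr=1e-4)
# loss = loss_mlm + lam * loss_tc + mu * loss_tmt   # combined objective per batch

# (b) Adapter tuning: freeze the first set and update only adapter parameters.
for name, p in model.named_parameters():
    p.requires_grad = "adapter" in name
adapter_opt = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4)
```

The design point the rejection leans on is that the adapter is a small residual bottleneck injected into an otherwise untouched network, so only a few parameters per task are updated during tuning.
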
Claim 12 Regarding claim 12, Oh et al. disclose a system, comprising: one or more processors coupled to non-transitory memory (Oh et al. ¶ [0115], "CPU 990 fetches an instruction from RAM 998 at an address indicated by a register therein (not shown) referred to as a program counter, interprets the instruction, reads data necessary to execute the instruction from RAM 998, SSD 1000 or from other device in accordance with an address specified by the instruction, and executes a process designated by the instruction."). The remaining limitations of claim 12 are similar in scope to that of claim 1 and therefore are rejected for similar reasons as described above. Claim 13 Regarding claim 13, the rejection of claim 12 is incorporated. The limitations of claim 13 are similar in scope to that of claim 2 and therefore are rejected for similar reasons as described above. Claim 14 Regarding claim 14, the rejection of claim 12 is incorporated. The limitations of claim 14 are similar in scope to that of claim 3 and therefore are rejected for similar reasons as described above. Claim 15 Regarding claim 15, the rejection of claim 12 is incorporated. The limitations of claim 15 are similar in scope to that of claim 4 and therefore are rejected for similar reasons as described above. Claim 16 Regarding claim 16, the rejection of claim 12 is incorporated. The limitations of claim 16 are similar in scope to that of claim 5 and therefore are rejected for similar reasons as described above. Claim 18 Regarding claim 18, the rejection of claim 12 is incorporated. The limitations of claim 18 are similar in scope to that of claim 7 and therefore are rejected for similar reasons as described above. Claim 19 Regarding claim 19, the rejection of claim 12 is incorporated. The limitations of claim 19 are similar in scope to that of claim 8 and therefore are rejected for similar reasons as described above. Claim 23 Regarding claim 23, Oh et al. disclose a non-transitory computer-readable medium with instructions embodied thereon (Oh et al. ¶ [0115], "CPU 990 fetches an instruction from RAM 998 at an address indicated by a register therein (not shown) referred to as a program counter, interprets the instruction, reads data necessary to execute the instruction from RAM 998, SSD 1000 or from other device in accordance with an address specified by the instruction, and executes a process designated by the instruction."). The remaining limitations of claim 23 are similar in scope to that of claim 1 and claim 2, and therefore are rejected for similar reasons as described above. Claims 6 and 17 are rejected under 35 U.S.C. 103 as obvious over Zhu et al. in view of Oh et al. in view of Houlsby et al. as applied to claims 1 and 12 above, and further in view of "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding" (Devlin et al.). Claim 6 Regarding claim 6, the rejection of claim 1 is incorporated. Zhu et al. in view of Oh et al. in view of Houlsby et al. disclose all the elements of the claimed invention as stated above. Zhu et al. in view of Oh et al. in view of Houlsby et al. do not explicitly disclose all of training a model by applying training objectives in sequence. However, Devlin et al. disclose wherein training the travel-specific natural language machine-learning model comprises: training, by the one or more processors, the travel-specific natural language machine-learning model using a first training set according to the first training objective (Devlin et al. pg. 
4, Section 3.1, Paragraph 2, "we pre-train BERT using two unsupervised tasks ... In order to train a deep bidirectional representation, we simply mask some percentage of the input tokens at random, and then predict those masked tokens. We refer to this procedure as a “masked LM” (MLM)," Masked LM (MLM) is considered analogous to the first training objective. Input tokens to BERT are considered analogous to a first training set); and subsequently training, by the one or more processors, the travel-specific natural language machine-learning model using a second training set according to the second training objective (Devlin et al. pg. 4, Section 3.1, Paragraph 2, "In order to train a model that understands sentence relationships, we pre-train for a binarized next sentence prediction task that can be trivially generated from any monolingual corpus." See Figure 1 and Appendix A.1, which both display NSP as a subsequent step to MLM. The output of the first task (MLM) is therefore considered analogous to a second training set). It would have been obvious to a person having ordinary skill in the art before the time of the effective filing date of the claimed invention of the instant application to modify Zhu et al. to incorporate Devlin et al.’s sequenced training objectives. The suggestion/motivation for doing so would have been that, “Despite its simplicity, we demonstrate in Section 5.1 that pre-training towards [next sentence prediction] is very beneficial to both QA and NLI,” as noted by Devlin et al. in pg. 4, Section 3.1, Paragraph 5. Claim 17 Regarding claim 17, the rejection of claim 12 is incorporated. The limitations of claim 17 are similar in scope to that of claim 6 and therefore are rejected for similar reasons as described above. Claims 9, 10, 20, 21, and 24 are rejected under 35 U.S.C. 103 as obvious over Zhu et al. in view of Oh et al. in view of Houlsby et al. as applied to claims 1 and 12 above, and further in view of US Patent Publication 20190065462 A1 (Salloum et al.). Claim 9 Regarding claim 9, the rejection of claim 1 is incorporated. Zhu et al. in view of Oh et al. in view of Houlsby et al. disclose all the elements of the claimed invention as stated above. Zhu et al. in view of Oh et al. in view of Houlsby et al. do not explicitly disclose all of sentence pairs. However, Salloum et al. disclose wherein generating the training data comprises generating, by the one or more processors, a plurality of sentence pairs using text data from the first dataset and the second dataset (Salloum et al. ¶ [0031]-[0032], "FIG. 1 is a flowchart depicting contemplated steps in preferred methods of generating bitexts for training a SMT system. In Step 101, a corpus comprising automated speech recognition (ASR) dictations and formatted reports is obtained. ... SMT requires sentence-aligned data, or bitexts: parallel pairs of translational equivalent sentences in source and target languages. The source language is the output from a speech recognition system, usually unformatted text transcripts (i.e., hypotheses) ... The target language (output of the machine translation postprocessor) is a fully formatted report" Fully formatted reports are considered analogous to the first dataset. Retrieved ASR output is considered analogous to the second dataset). It would have been obvious to a person having ordinary skill in the art before the time of the effective filing date of the claimed invention of the instant application to modify Zhu et al. in view of Oh et al. in view of Houlsby et al. 
to include Salloum et al.’s sentence pairs because such a modification is the result of simple substitution of one known element for another producing a predictable result. More specifically, Zhu et al.’s travel-specific datasets and Salloum et al.’s sentence pairs perform the same general and predictable function, the predictable function being providing training data for training a natural language machine-learning model. Since each individual element and its function are shown in the prior art, albeit shown in separate references, the difference between the claimed subject matter and the prior art rests not on any individual element or function but in the very combination itself - that is in the substitution of Zhu et al.’s travel-specific datasets by replacing it with Salloum et al.’s sentence pairs. Thus, the simple substitution of one known element for another producing a predictable result renders the claim obvious. Claim 10 Regarding claim 10, the rejection of claim 9 is incorporated. Zhu et al. in view of Oh et al. in view of Houlsby et al. in view of Salloum et al. disclose all the elements of the claimed invention as stated above. Salloum et al. further disclose wherein the plurality of sentence pairs comprise a first pair having a first sentence in the standardized text format and a second sentence in the noisy text format (Salloum et al. ¶ [0031]-[0032], "FIG. 1 is a flowchart depicting contemplated steps in preferred methods of generating bitexts for training a SMT system. In Step 101, a corpus comprising automated speech recognition (ASR) dictations and formatted reports is obtained. ... SMT requires sentence-aligned data, or bitexts: parallel pairs of translational equivalent sentences in source and target languages. The source language is the output from a speech recognition system, usually unformatted text transcripts (i.e., hypotheses) ... The target language (output of the machine translation postprocessor) is a fully formatted report" Fully formatted reports are considered analogous to a standardized text format. Retrieved ASR output is considered analogous to noisy data, since ¶ [0024] of the instant application states that "noisy" data can include conversational data.). Claim 20 Regarding claim 20, the rejection of claim 12 is incorporated. The limitations of claim 20 are similar in scope to that of claim 9 and therefore are rejected for similar reasons as described above. Claim 21 Regarding claim 21, the rejection of claim 20 is incorporated. The limitations of claim 21 are similar in scope to that of claim 10 and therefore are rejected for similar reasons as described above. Claim 24 Regarding claim 24, the rejection of claim 23 is incorporated. The limitations of claim 24 are similar in scope to that of claim 9 and claim 10, and therefore are rejected for similar reasons as described above. Claims 11 and 22 are rejected under 35 U.S.C. 103 as obvious over Zhu et al. in view of Oh et al. in view of Houlsby et al. as applied to claims 1 and 12 above, and further in view of "Deep Sentence Denoising beyond Grammatical Error Correction" (Liang et al.). Claim 11 Regarding claim 11, the rejection of claim 9 is incorporated. Zhu et al. in view of Oh et al. in view of Houlsby et al. in view of Salloum et al. disclose all the elements of the claimed invention as stated above. Salloum et al. further disclose wherein the plurality of sentence pairs comprise a first [predetermined] number of sentence pairs having at least one sentence in the noisy text format (Salloum et al. 
¶ [0032], “SMT requires sentence-aligned data, or bitexts: parallel pairs of translational equivalent sentences in source and target languages. The source language is the output from a speech recognition system, usually unformatted text transcripts (i.e., hypotheses) … Hypotheses and reports cannot be naïvely used as translational equivalents because there are too many discontinuities between the two. For example: … errors and corrections spoken in the dictation (“scratch that,” e.g.) are not present in the report" ASR dictations that contain speech errors (“scratch that,” e.g.) are considered analogous to noisy text; see ¶ [0024] of the instant application). Zhu et al. in view of Oh et al. in view of Houlsby et al. in view of Salloum et al. do not explicitly disclose all of generating a predetermined number of sentence pairs. However, Liang et al. disclose wherein the plurality of sentence pairs comprise a first predetermined number of sentence pairs (Liang et al. pg. 3, Section 3B, Paragraph 1, "We use SimpleWiki as the seed corpus for verbosity noise, misordering noise, and synonym noise. A cleaning process is performed and 680k sentences are kept, which are further split into training/validation/test sets by 90:5:5. Noise generation scripts are applied after the data split. As a result, verbosity noise, misordering noise, and synonym noise will share the same seed corpus for train/validation/test sets, even though they are noised in different ways." Splitting up a set number of sentence pairs using a predetermined ratio is analogous to generating a predetermined number of sentence pairs) having at least one sentence in the noisy text format (Liang et al. pg. 3, Section 3A, Paragraph 1, "Every dataset consists of pairs of two sentences: (noisy sentence, noise-free sentence)"). It would have been obvious to a person having ordinary skill in the art before the time of the effective filing date of the claimed invention of the instant application to modify Zhu et al.’s travel-specific NLP model to include Liang et al.’s predetermined number of sentence pairs because such a modification is the result of combining prior art elements according to known methods to yield predictable results. More specifically, Zhu et al.’s travel-specific NLP model as modified by Liang et al.’s predetermined number of sentence pairs can yield a predictable result of reducing model variability since using a constant number of inputs would make the input to the machine learning model more consistent over multiple tests. Thus, a person of ordinary skill would have appreciated including in Zhu et al.’s travel-specific NLP model the ability to do Liang et al.’s predetermined generation since the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. Claim 22 Regarding claim 22, the rejection of claim 20 is incorporated. The limitations of claim 22 are similar in scope to that of claim 9 and therefore are rejected for similar reasons as described above. Conclusion Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). 
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to JACOB B VOGT whose telephone number is (571)272-7028. The examiner can normally be reached Monday - Friday 9:30am - 7pm EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Paras D Shah can be reached at (571)270-1650. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /JACOB B VOGT/Examiner, Art Unit 2653 /Paras D Shah/Supervisory Patent Examiner, Art Unit 2653 02/17/2026
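
As a companion to the claim 9-11 discussion above (sentence pairs that match a standardized sentence with a noisy, conversational variant, generated in a predetermined number), here is a small, hypothetical sketch of one way such pairs could be produced and split by a fixed ratio. The noise rules, sample sentences, and function names are assumptions and do not come from the Office Action or from the cited references.

```python
# Illustrative sketch only; not the claimed method or any cited reference's pipeline.
import random

def add_noise(sentence: str) -> str:
    """Crude stand-in for conversational noise: lowercase the text, drop commas
    and periods, and occasionally repeat a word (disfluency-style)."""
    words = sentence.lower().replace(",", "").replace(".", "").split()
    if len(words) > 2 and random.random() < 0.5:
        i = random.randrange(len(words))
        words.insert(i, words[i])          # simulated stutter / repetition
    return " ".join(words)

def make_pairs(clean_sentences, train_ratio=0.9, seed=0):
    """Return (train_pairs, held_out_pairs) of (standardized, noisy) tuples,
    split at a predetermined ratio."""
    random.seed(seed)
    pairs = [(s, add_noise(s)) for s in clean_sentences]
    random.shuffle(pairs)
    cut = int(len(pairs) * train_ratio)    # predetermined split point
    return pairs[:cut], pairs[cut:]

corpus = [
    "The hotel offers free breakfast and an airport shuttle.",
    "Flights from Seattle to Tokyo depart daily at 10 a.m.",
    "Check-in begins at 3 p.m., and late check-out is available on request.",
]
train_pairs, held_out_pairs = make_pairs(corpus)
for standardized, noisy in train_pairs:
    print(standardized, "->", noisy)       # each pair: (standardized, noisy variant)
```
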

Prosecution Timeline

Dec 08, 2023: Application Filed
Aug 27, 2025: Non-Final Rejection — §103
Nov 26, 2025: Examiner Interview Summary
Nov 26, 2025: Applicant Interview (Telephonic)
Dec 29, 2025: Response Filed
Feb 17, 2026: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12505279
METHOD AND SYSTEM FOR DOMAIN ADAPTATION OF SOCIAL MEDIA TEXT USING LEXICAL DATA TRANSFORMATIONS
Granted Dec 23, 2025 (2y 5m to grant)
Study what changed to get past this examiner. Based on the 1 most recent grant.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 57%
With Interview: 99% (+100.0%)
Median Time to Grant: 2y 10m
PTA Risk: Moderate
Based on 7 resolved cases by this examiner. Grant probability derived from career allow rate.
