DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statements (IDS) submitted on 7/8/2025 and 9/24/2025 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.
Status of Claims
Claims 1-20 are pending in this application.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-7, 11-17 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Lewis et al. (Non-Patent Literature “Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks”, cited in IDS dated 9/24/2025) in view of Neerukonda et al. (U.S. Patent Application Publication 2025/0156644).
As per claims 1, 11 and 20, Lewis et al. discloses:
A computer system comprising:
a processing unit configured to execute computer-readable instructions to cause the system (Implementation section C – Training setup details – shows code executing on a CPU for implementation) to:
responsive to a user input, obtain an input embedding associated with the user input (Figure 1, Query Encoder and section 2 - Methods);
retrieve a source text based on a similarity to the input embedding (Figure 1, Retriever pn & Document Index and Section 2.2);
using a large language model (LLM), generate a textual response to the user input, based on the user input and the at least one source text (Figure 1, pre-trained seq2seq model (Generator) and Section 2.3); and
provide the generated textual response for display via a user device (Figure 1, Question Answering: Answer Generation and Section 3).
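For the convenience of the reader, the retrieval-augmented flow recited in the limitations above (embed the input, retrieve a source text by similarity, generate a response) may be sketched as follows. This sketch is illustrative only and is not drawn from the cited references; the word-count embedding and the templated response are hypothetical stand-ins for the query encoder and seq2seq generator of Lewis et al.

```python
import math

def embed(text: str) -> dict:
    """Toy embedding: lowercase word-count vector (stand-in for a learned query encoder)."""
    vec = {}
    for word in text.lower().split():
        vec[word] = vec.get(word, 0) + 1
    return vec

def cosine(a: dict, b: dict) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b.get(k, 0) for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def answer(user_input: str, sources: list) -> str:
    """Embed the input, retrieve the most similar source text, and return a response."""
    q = embed(user_input)
    best = max(sources, key=lambda s: cosine(q, embed(s)))
    # A real system would prompt an LLM with the input and the retrieved text here.
    return f"Based on: {best}"

sources = ["the capital of france is paris", "python is a programming language"]
print(answer("what is the capital of france", sources))
```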
Lewis et al. fails to disclose, but Neerukonda et al., in the same field of endeavor, teaches:
retrieve a synthetic question embedding from an embeddings database based on a similarity to the input embedding (Paragraphs [0017-0019], [0022], [0026-0027], [0033], [0035-0036] & [0039-0044] – synthetic questions, answers and sources are stored as a triplet and can be retrieved based on similarity); and
identify at least one source text based on a mapping that maps synthetic question embeddings to corresponding source texts (Paragraphs [0017-0019], [0022], [0026-0027], [0033], [0035-0036] & [0039-0044] – synthetic questions, answers and sources are stored as a triplet and can be retrieved based on similarity).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the system, method and computer-readable medium of Lewis et al. with the synthetic questions of Neerukonda et al., because doing so balances the precision of exact matching with the flexibility of approximate matching, ensuring relevance and accuracy in real-time processing.
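The triplet-based retrieval characterized above (match the user input to a stored synthetic question, then follow the mapping to the source text) may be sketched as follows. This sketch is illustrative only; the Jaccard word-overlap score is a hypothetical stand-in for embedding similarity, and the stored questions and sources are invented examples, not content of Neerukonda et al.

```python
def overlap(a: str, b: str) -> float:
    """Toy similarity: Jaccard overlap of word sets (stand-in for embedding similarity)."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

# Store: each synthetic question is mapped to the source text it was generated from.
store = [
    ("who wrote hamlet", "Hamlet was written by William Shakespeare around 1600."),
    ("what is the boiling point of water", "Water boils at 100 degrees Celsius at sea level."),
]

def retrieve_source(user_input: str) -> str:
    """Find the most similar synthetic question, then follow its mapping to the source text."""
    best_question, best_source = max(store, key=lambda t: overlap(user_input, t[0]))
    return best_source

print(retrieve_source("who was the author of hamlet"))
```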
Claim 1 is directed to the method of using the system of claim 11, so is rejected for similar reasons.
Claim 20 is directed to a computer readable medium containing instructions that cause a processing unit to act as the system of claim 11, so is rejected for similar reasons.
As per claims 2 and 12, the combination of Lewis et al. and Neerukonda et al. teaches all of the limitations of claims 1 and 11 above. Neerukonda et al. in the combination further discloses:
the synthetic question embedding is an embedding representation of a corresponding synthetic question, and wherein the mapping includes information identifying one or more source texts from which the synthetic question was generated (Paragraphs [0017-0019], [0022], [0026-0027], [0033], [0035-0036] & [0039-0044] – synthetic questions, answers and sources are stored as a triplet and can be retrieved based on similarity).
As per claims 3 and 13, the combination of Lewis et al. and Neerukonda et al. teaches all of the limitations of claims 2 and 12 above. Neerukonda et al. in the combination further discloses:
prior to receiving the user input: using the LLM, generating a set of synthetic questions based on the source text; applying an embedding transformation to generate a set of synthetic question embeddings; and storing the set of synthetic question embeddings in the embeddings database (Paragraphs [0017-0019], [0022], [0026-0027], [0033], [0035-0036] & [0039-0044] – synthetic questions, answers and sources are stored as a triplet and can be retrieved based on similarity).
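The offline indexing step recited above (generate synthetic questions from each source text, embed them, and store the embeddings) may be sketched as follows. This sketch is illustrative only; the naive per-sentence question generator and the letter-frequency embedding are hypothetical stand-ins for the LLM and the embedding transformation.

```python
def generate_synthetic_questions(source: str) -> list:
    """Stand-in for LLM question generation: one naive question per sentence."""
    return [f"what does this say about {s.split()[0].lower()}"
            for s in source.split(". ") if s]

def embed(text: str) -> tuple:
    """Toy embedding transformation: fixed-length letter-frequency vector."""
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    return tuple(vec)

def build_index(sources: list) -> list:
    """Offline step: generate questions per source, embed them, store (embedding, source)."""
    index = []
    for src in sources:
        for question in generate_synthetic_questions(src):
            index.append((embed(question), src))
    return index

index = build_index(["Paris is the capital of France. It lies on the Seine."])
print(len(index))  # one stored embedding per generated question
```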
As per claims 4 and 14, the combination of Lewis et al. and Neerukonda et al. teaches all of the limitations of claims 1 and 11 above. Neerukonda et al. in the combination further discloses:
the embeddings database stores a plurality of embeddings defining an embedding space and wherein retrieving the synthetic question embedding from the embeddings database comprises: performing a vector similarity search operation within the embedding space to identify the synthetic question embedding, based on similarity measures between embeddings of the plurality of synthetic question embeddings and the input embedding (Paragraph [0039] – synthetic questions are selected based on cosine similarity).
As per claims 5 and 15, the combination of Lewis et al. and Neerukonda et al. teaches all of the limitations of claims 4 and 14 above. Neerukonda et al. in the combination further discloses:
ranking a plurality of candidate synthetic question embeddings based on contextual information for the user (Paragraphs [0014-0019]).
As per claims 6 and 16, the combination of Lewis et al. and Neerukonda et al. teaches all of the limitations of claims 4 and 14 above. Neerukonda et al. in the combination further discloses:
the similarity measure is a cosine similarity (Paragraph [0039] – synthetic questions are selected based on cosine similarity).
As per claims 7 and 17, the combination of Lewis et al. and Neerukonda et al. teaches all of the limitations of claims 1 and 11 above. Lewis et al. in the combination further discloses:
obtaining an input embedding associated with the user input comprises: applying an embedding transformation to the user input to generate the input embedding (Figure 1, Query Encoder and section 2 – Methods).
Claims 8 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Lewis et al. (Non-Patent Literature “Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks”, cited in IDS dated 9/24/2025) and Neerukonda et al. (U.S. Patent Application Publication 2025/0156644) in view of Wang et al. (Chinese Patent Application Publication 116932708, cited in IDS dated 9/24/2025).
As per claims 8 and 18, the combination of Lewis et al. and Neerukonda et al. teaches all of the limitations of claims 7 and 17 above. The combination fails to explicitly disclose, but Wang et al. in the same field of endeavor teaches:
prior to applying the embedding transformation to generate the input embedding: determine whether the user input is phrased in a question format; generate, based on the determining, a prompt to the LLM including the user input, the prompt for instructing the LLM to generate an updated user input that is phrased in a question format; and provide the prompt to the LLM to generate the updated user input (Paragraph [0075] - “question rewriting module 1 is mainly responsible for rewriting an inputted question. After the user inputs the questions, the question and answer system firstly translates the Chinese questions into English, then uses the question rewrite module 1 to process the input questions, rewrites the questions into a form which is convenient for the question and answer module to process, and uses the questions as the inputs of the central computing management module and each question and answer module”).
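The question-rewriting step recited above (determine whether the input is phrased as a question, and if not, prompt the LLM to rephrase it) may be sketched as follows. This sketch is illustrative only and does not reproduce the rewriting module of Wang et al.; the keyword heuristic and prompt wording are hypothetical.

```python
QUESTION_WORDS = ("who", "what", "when", "where", "why", "how",
                  "which", "is", "are", "do", "does", "can")

def is_question(user_input: str) -> bool:
    """Heuristic check for question format (stand-in for a learned classifier)."""
    text = user_input.strip().lower()
    return text.endswith("?") or text.split()[0] in QUESTION_WORDS

def rewrite_prompt(user_input: str):
    """If the input is not phrased as a question, build an LLM prompt asking for a rewrite."""
    if is_question(user_input):
        return None  # already a question; no rewrite needed
    return ("Rewrite the following user input as a clearly phrased question, "
            f"preserving its meaning: \"{user_input}\"")

print(rewrite_prompt("tell me about the eiffel tower"))
```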
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the system, method and computer-readable medium of Lewis et al. and Neerukonda et al. with the question rewriting capabilities of Wang et al., because it is a case of combining prior art elements according to known methods to yield predictable results.
Claims 9-10 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Lewis et al. (Non-Patent Literature “Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks”, cited in IDS dated 9/24/2025) and Neerukonda et al. (U.S. Patent Application Publication 2025/0156644) in view of Cui et al. (U.S. Patent Application Publication 2025/0077940).
As per claims 9 and 19, the combination of Lewis et al. and Neerukonda et al. teaches all of the limitations of claims 1 and 11 above. The combination fails to explicitly disclose, but Cui et al. in the same field of endeavor teaches:
generating a prompt to the LLM, the prompt including the user input and the source text; and providing the prompt to the LLM to generate the textual response (Paragraphs [0020-0023]).
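The prompt-construction step recited above (assemble a prompt containing both the user input and the retrieved source text, then provide it to the LLM) may be sketched as follows. This sketch is illustrative only; the prompt template is hypothetical and is not taken from Cui et al.

```python
def build_prompt(user_input: str, source_text: str) -> str:
    """Assemble an LLM prompt that grounds the answer in the retrieved source text."""
    return (
        "Answer the question using only the context below. "
        "If the context is insufficient, say so.\n\n"
        f"Context: {source_text}\n"
        f"Question: {user_input}\n"
        "Answer:"
    )

prompt = build_prompt("what is the capital of france", "Paris is the capital of France.")
print(prompt)
```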
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the system, method and computer-readable medium of Lewis et al. and Neerukonda et al. with the context provisions of Cui et al. because it helps catch potential hallucinations. See abstract of Cui et al.
As per claim 10, the combination of Lewis et al., Neerukonda et al. and Cui et al. teaches all of the limitations of claim 9 above. Cui et al. in the combination further discloses:
the prompt includes information about the user’s recent viewing or search history (Paragraphs [0032-0037]).
Response to Arguments
Applicant’s arguments, see pages 7-10, filed 2/3/2026, with respect to the rejections of claims 1-20 under 35 USC 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Neerukonda et al.
Examiner Notes
The Examiner cites particular columns and line numbers in the references as applied to the claims above for the convenience of the Applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claim, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the Applicant fully consider each reference in its entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or as disclosed by the Examiner.
Communications via Internet e-mail are at the discretion of the applicant and require written authorization. Should the Applicant wish to communicate via e-mail, including the following paragraph in the response will allow the Examiner to do so:
“Recognizing that Internet communications are not secure, I hereby authorize the USPTO to communicate with me concerning any subject matter of this application by electronic mail. I understand that a copy of these communications will be made of record in the application file.”
Should e-mail communication be desired, the Examiner can be reached at Edwin.Leland@USPTO.gov.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to EDWIN S LELAND III whose telephone number is (571)270-5678. The examiner can normally be reached 8:00 - 5:00 M-F.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Hai Phan can be reached at 571-272-6338. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/EDWIN S LELAND III/Primary Examiner, Art Unit 2654