DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/29/2025 has been entered.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-2, 4-9, 11-16, and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Cook (US 2023/0169276 A1), hereinafter “Cook”, in view of Japa et al. (US 2022/0292262 A1), hereinafter “Japa”.
As per claim 1, Cook teaches a method comprising:
“generating a question-answer mapping by reading a question from a question bank”;
(Cook teaches generating a mapping of questions and answers, storing the mapping in the database 108, and also updating the search index to include the updated questions and answers. The switchboard servers utilize the search index and the database 108 to ensure that future user questions are mapped to the updated data accordingly)
“receiving, by a processor, a query from a user device” at [0038] and Figs. 2, 5;
(Cook teaches at 200, the system obtains a question from a user device)
“generating, by the processor, a query embedding representing the query” at [0039]-[0040];
(Cook teaches at 204, the Model 1 transforms the user question into one or more question embeddings)
“identifying, by the processor, at least one question corresponding to the query by comparing the query embedding to a plurality of embeddings of prior questions in the cached question-answer mapping” at [0041]-[0043];
(Cook teaches at 206, the embeddings of the user question are compared against a search index. The search index contains all previously submitted questions that are stored in embedded form. At 208, the system identifies prior questions in the search index that are closely related to the user question)
“transmitting, by the processor to the user device in response to the query, an answer extracted from the candidate documents by the transformer model, the answer corresponding to the at least one question” at [0047].
(Cook teaches the system identifies an answer associated with the related question, and returns the answer to the user that submitted the question)
Cook does not teach generating a question-answer mapping by “identifying candidate documents for the question, and inputting candidate documents into a transformer model to extract answers from the candidate documents” as claimed. However, Japa teaches a method for generating a question-answer mapping including the step of “identifying candidate documents for the question, and inputting candidate documents into a transformer model to extract answers from the candidate documents” at [0050]-[0058] and Figs. 2B, 2D. Particularly, Japa teaches at [0050] the step of receiving a natural language question and identifying a candidate set of answers responsive to the question by querying a knowledge graph. Japa then teaches at [0055]-[0057] the step of inputting the candidate set of answers into a transformer model to extract particular answers; the output of the transformer model is a set of question-answer pairs (q, a1), (q, a2)… and their associated similarity scores.
Thus, it would have been obvious to one of ordinary skill in the art to combine Japa's teaching with Cook's in order to provide an automated method for generating the question-answer mapping using the knowledge graph and the BERT machine learning transformer model, as suggested by Japa, which is much faster and more efficient as compared to Cook's human-authored answers.
As per claim 2, Cook and Japa teach the method of claim 1 discussed above. Cook also teaches: wherein “comparing the query embedding to a plurality of embeddings of questions comprises determining distances between the query embedding and each of the plurality of embeddings” at [0012], [0041]-[0045].
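For illustration only (this sketch is not part of the record, and the vectors, helper names, and toy questions are hypothetical), the embedding-comparison step described by Cook at [0041]-[0045], i.e. determining distances between the query embedding and the embeddings of prior questions in a search index, might be sketched as follows:

```python
import math

def cosine_distance(a, b):
    """Cosine distance between two embedding vectors (0.0 means same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / norm

def nearest_prior_question(query_emb, question_index):
    """Return the stored prior question whose embedding lies closest to the query."""
    return min(question_index, key=lambda q: cosine_distance(query_emb, question_index[q]))

# Toy search index: previously submitted questions stored in embedded form.
index = {
    "how do I reset my password?": [0.9, 0.1, 0.0],
    "what are your business hours?": [0.0, 0.2, 0.9],
}
query_embedding = [0.8, 0.2, 0.1]  # embedding of the incoming user query
match = nearest_prior_question(query_embedding, index)  # → "how do I reset my password?"
```

Any distance metric over the embedding space (cosine, Euclidean, etc.) would serve the same comparison role described in the reference.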
As per claim 4, Cook and Japa teach the method of claim 1 discussed above. Japa also teaches:
“reading a question from a question bank” at [0050];
(Japa teaches a question module 212 reads a natural language question)
“identifying a plurality of candidate documents for a given question in the question bank” at [0050];
(Japa teaches the candidate set module 212 generates a candidate set of answers responsive to a natural language question by querying a knowledge base, such as knowledge graph 201)
“inputting the plurality of candidate documents into a transformer model, the transformer model outputting one or more answers present in the plurality of candidate documents” at [0055]-[0057];
(Japa teaches inputting the question and the candidate set of answers into a BERT language model. The BERT language model outputs a set of answers and their ranking according to similarity score between the question and each corresponding candidate answer)
“selecting a subset of the one or more answers as answers to the given question, storing the question and the subset of the one or more answers as a mapping for the question” at [0057]-[0058].
(Japa teaches the candidates with the highest score will be selected as the final answer)
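For illustration only (not part of the record), the rank-and-select step attributed to Japa at [0055]-[0058], i.e. scoring candidate answers against the question and storing the highest-scoring pairs as the mapping, might be sketched as follows. The word-overlap scorer and the example strings are hypothetical stand-ins; a real system would use the transformer model's similarity score:

```python
def build_qa_mapping(question, candidate_answers, score_fn, top_k=1):
    """Rank candidate answers by score and keep the top_k as the stored mapping."""
    ranked = sorted(candidate_answers, key=lambda ans: score_fn(question, ans), reverse=True)
    return [(question, ans) for ans in ranked[:top_k]]

def overlap_score(question, answer):
    """Stand-in scorer: shared-word count. A BERT-style model's
    question/answer similarity score would be used in practice."""
    return len(set(question.lower().split()) & set(answer.lower().split()))

candidates = [
    "The patent office was founded in 1790.",
    "Trademarks are handled by a different office.",
]
mapping = build_qa_mapping("when was the patent office founded?", candidates, overlap_score)
```

The resulting list of (question, answer) pairs corresponds to the (q, a1), (q, a2)… output described above.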
As per claim 5, Cook and Japa teach the method of claim 4 discussed above. Japa also teaches: wherein “identifying a plurality of candidate documents comprises performing a search on a document corpus using the question” at [0050].
As per claim 6, Cook and Japa teach the method of claim 4 discussed above. Japa also teaches: wherein “the transformer model outputs a location of the answer within a given candidate document” at [0057]-[0058].
As per claim 7, Cook and Japa teach the method of claim 4 discussed above. Japa also teaches: “training the transformer model by loading a generic transformer model, annotating a knowledge base with questions and answers, and re-training the generic transformer model using the knowledge base” at [0022]-[0025], [0055]-[0082].
Claims 8-9, 11-16, and 18-20 recite similar limitations as claims 1-2 and 4-7 and are therefore rejected for the same reasons.
Response to Arguments
Applicant’s arguments with respect to claims 1, 8 and 15 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Conclusion
Examiner's Note: Examiner has cited particular paragraphs and figures in the references applied to the claims above for the convenience of the applicant. Although the specified citations are representative of the teachings of the art and are applied to specific limitations within the individual claims, other passages and figures may apply as well. Applicant, in preparing responses, is respectfully requested to fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the Examiner.
If amending the claimed invention, Applicant is respectfully requested to indicate the portion(s) of the specification that dictate(s) the structure relied upon for proper interpretation, and also to verify and ascertain the metes and bounds of the claimed invention.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to KHANH B PHAM whose telephone number is (571)272-4116. The examiner can normally be reached Monday - Friday, 8am to 4pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Sanjiv Shah can be reached at (571)272-4098. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/KHANH B PHAM/Primary Examiner, Art Unit 2166
February 23, 2026