DETAILED ACTION
Introduction
This office action is in response to applicant’s claims filed 3/18/2024. Claims 1-20 are currently pending and have been examined. Applicant’s IDS have been considered. There is no claim to foreign priority.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1, 4, 5, 8, 9, 11, 12, and 15-18 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Brown et al. (Brown, US 2024/0377792).
As per claim 1, Brown teaches a method to facilitate building a knowledge-retrieval augmented Large Language Model pipeline to automate maintenance, repair, and overhaul recommendations for parts of an apparatus, comprising:
by a control circuit (paragraphs [0032], [0144]: his control circuit):
receiving as input a textual description of an issue pertaining to at least part of the apparatus (paragraph [0136]: his input prompt, such as text, indicative of a condition, error, etc., of the equipment/apparatus);
accessing at least one data store and retrieving, as a function, at least in part, of information that corresponds to the input, a plurality of knowledge documents (ibid; paragraphs [0097], [0100], [0077], [0078], [0046]: his retrieval augmented generation (RAG), and data from a data repository which includes a plurality of knowledge documents);
generating, as a function, at least in part, of both the input and the plurality of knowledge documents, a language generation prompt (ibid; see the above RAG discussion, paragraphs [0097], [0093]-[0098]: his prompt, including retrieved data from the repository, user input, and generated prompt); and
outputting the language generation prompt to a task-specific decoder that generates, as a function, at least in part, of the language generation prompt, at least one candidate recommendation to address the issue that pertains to at least part of the apparatus (ibid; see his RAG discussion, Figs. 3-8, paragraphs [0004], [0035]-[0039], [0093]-[0098], [0105], [0018]: his prompt, decoder, role/task-specific learning model, and generated response, including his generated recommendations, technical reports, and services to be performed among a plurality of recommendations), which at least one candidate recommendation is output to at least one human reviewer who reviews the at least one candidate recommendation as a function, at least in part, of at least part of the plurality of knowledge documents and who provides a corresponding human-validated recommendation to address the issue that pertains to at least part of the apparatus (ibid; see also paragraphs [0024], [0118]: his output to the service technician, or human experts, from the generative model, wherein the human evaluates/validates the data/documentation and recommendation via feedback, which is used to update the learning model);
wherein the task-specific decoder is re-trained, at least in part, using the corresponding human-validated recommendation coupled with the corresponding textual description input (ibid; see his updating of the trained model based on the human recommendation/validation information, as discussed).
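For illustration only, the claim-1 pipeline as mapped above (textual input, document retrieval, prompt generation, task-specific decoding, human validation, and collection of a re-training pair) may be sketched as follows. All function and variable names are hypothetical, and the keyword-overlap retriever and stub decoder stand in for the retrieval model and LLM; this sketch does not represent Brown's disclosure or the applicant's implementation.

```python
# Hypothetical sketch of a knowledge-retrieval augmented pipeline with a
# human-in-the-loop validation step; not Brown's or the applicant's code.

def retrieve_documents(data_store, issue_text):
    """Return knowledge documents whose words overlap the input text."""
    words = set(issue_text.lower().split())
    return [doc for doc in data_store if words & set(doc.lower().split())]

def build_prompt(issue_text, documents):
    """Combine the textual issue description with retrieved documents."""
    context = "\n".join(documents)
    return f"Issue: {issue_text}\nContext:\n{context}\nRecommend a repair:"

def decode(prompt):
    """Stand-in for the task-specific decoder (an LLM in practice)."""
    return "Candidate recommendation based on: " + prompt.splitlines()[0]

def pipeline(data_store, issue_text, human_review):
    docs = retrieve_documents(data_store, issue_text)
    prompt = build_prompt(issue_text, docs)
    candidate = decode(prompt)
    validated = human_review(candidate, docs)  # human-in-the-loop review
    training_pair = (issue_text, validated)    # pair used for re-training
    return validated, training_pair

store = ["turbine blade wear inspection manual", "oil pressure fault guide"]
validated, pair = pipeline(store, "turbine blade wear detected",
                           lambda cand, docs: cand + " (validated)")
```

The human reviewer is modeled here as a callback so that the validated recommendation, coupled with the original textual input, flows back out as the re-training pair recited in the claim.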
As per claims 4, 11, and 17, Brown teaches the method of claim 1 wherein the control circuit comprises, at least in part, a part of a retrieval augmented generation model (ibid; paragraphs [0097], [0100], [0077], [0078], [0046]: his retrieval augmented generation (RAG)).
As per claims 5, 12, and 18, Brown teaches the method of claim 4 wherein outputting the language generation prompt to the task-specific decoder comprises outputting the language generation prompt and at least some of the plurality of knowledge documents to the task-specific decoder (ibid; paragraphs [0097], [0100], [0077], [0078], [0046]: his retrieval augmented generation (RAG), and data from a data repository which includes a plurality of knowledge documents; see his RAG discussion, in which the generated prompt, which comprises the data repository documents, is sent to the role/task-specific model to generate the recommendations; see also Figs. 3-8, paragraphs [0004], [0035]-[0039], [0093]-[0098], [0105], [0018]: his prompt, decoder, role/task-specific learning model, and generated response, including his generated recommendations, technical reports, and services to be performed among a plurality of recommendations).
As per claims 8 and 15, Brown teaches the method of claim 1 further comprising:
extracting semantic context information from, at least in part, the input, to provide extracted semantic context information (ibid; see the claim 1 input discussion; see also paragraph [0025]: his “extracting semantic information” and corresponding causal and semantic associations, as the extracted semantic context information from the input); and
wherein accessing the at least one data store and retrieving, as a function, at least in part, of the information that corresponds to the input, a plurality of knowledge documents comprises accessing the at least one data store and retrieving, as a function, at least in part, of the extracted semantic context information, the plurality of knowledge documents (ibid; see the claim 1 input, knowledge documents, and recommendations discussion; see also paragraphs [0025], [0097]: his target semantic concepts, from inputs, used to retrieve the documents from the data repository, together used as a prompt to the language model).
As per claim 9, claim 9 sets forth limitations similar to claim 1 and is thus rejected under similar reasons and rationale, wherein Brown teaches a method to facilitate building a knowledge-retrieval augmented Large Language Model pipeline to automate maintenance, repair, and overhaul recommendations for parts of an apparatus, comprising:
by a control circuit (ibid-see claim 1, corresponding and similar limitation):
receiving as input a textual description of an issue pertaining to at least part of the apparatus (ibid);
accessing at least one data store and retrieving, as a function, at least in part, of information that corresponds to the input, a plurality of knowledge documents (ibid);
generating, as a function, at least in part, of both the input and the plurality of knowledge documents, a language generation prompt (ibid); and
outputting the language generation prompt to a task-specific decoder that generates, as a function, at least in part, of the language generation prompt, at least one candidate recommendation to address the issue that pertains to at least part of the apparatus (ibid);
by a human reviewer (ibid-see claim 1, human reviewer):
accessing the at least one candidate recommendation and reviewing the at least one candidate recommendation as a function, at least in part, of at least part of the plurality of knowledge documents (ibid-see claim 1, corresponding and similar limitation);
providing a corresponding human-validated recommendation to address the issue that pertains to at least part of the apparatus (ibid-see claim 1, corresponding and similar limitation); and
re-training the task-specific decoder, at least in part, using the corresponding human-validated recommendation coupled with the corresponding textual description input (ibid).
As per claim 16, claim 16 sets forth limitations similar to claim 1 and is thus rejected under similar reasons and rationale, wherein the apparatus is deemed to embody the method, such that Brown teaches an apparatus to facilitate building a knowledge-retrieval augmented Large Language Model pipeline to automate maintenance, repair, and overhaul recommendations for parts of an apparatus, comprising (Fig. 1, paragraph [0144]): a control circuit configured to (ibid; see claim 1, corresponding and similar limitation):
receive as input a textual description of an issue pertaining to at least part of the apparatus (ibid; see claim 1, corresponding and similar limitation); access at least one data store and retrieve, as a function, at least in part, of information that corresponds to the input, a plurality of knowledge documents (ibid);
generate, as a function, at least in part, of both the input and the plurality of knowledge documents, a language generation prompt (ibid);
output the language generation prompt (ibid); and
a task-specific decoder configured to receive the language generation prompt and to responsively generate (ibid), as a function, at least in part, of the language generation prompt, at least one candidate recommendation to address the issue that pertains to at least part of the apparatus, which at least one candidate recommendation is output to at least one human reviewer who reviews the at least one candidate recommendation as a function, at least in part, of at least part of the plurality of knowledge documents and who provides a corresponding human-validated recommendation to address the issue that pertains to at least part of the apparatus (ibid);
wherein the task-specific decoder is re-trained, at least in part, using the corresponding human-validated recommendation coupled with the corresponding textual description input (ibid).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 2, 3, and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Brown as applied to claim 1 above, and further in view of Meyers et al. (Meyers, US 2018/0126489).
As per claims 2 and 10, Brown teaches the method of claim 1 but does not teach that which Meyers teaches, wherein the apparatus comprises a jet turbine engine (Meyers, paragraphs [0024]-[0026]: his jet turbine engine).
Thus, it would have been obvious to one of ordinary skill in the linguistics art, before the effective filing date of the invention, in view of the teachings of Brown and Meyers, to combine the prior art element of a textual prompt input with respect to servicing equipment, as taught by Brown, with jet components as the equipment, such as a jet turbine engine, as taught by Meyers. All of the claimed elements were known in the prior art, and one skilled in the art could have combined the elements as claimed by known methods (computer-implemented techniques and algorithms combining processes and steps in natural language processing), with each element performing the same function as it does separately, such that the combination would have yielded predictable results. KSR Int'l Co. v. Teleflex Inc., 550 U.S. 398, 82 USPQ2d 1385 (2007). The predictable result would be generating recommendations for equipment that requires maintenance, repair, or modification (see Brown, claim 1: maintenance, repair, servicing, and recommendations with respect to equipment issues; and Meyers, paragraphs [0003]-[0004]: his jet requiring maintenance, repair, or modification).
As per claim 3, Brown in view of Meyers makes obvious the method of claim 2, wherein Meyers further teaches that which Brown lacks, wherein the parts of the apparatus include at least some of a compressor, a heat exchanger, a turbine, and an exhaust nozzle (paragraphs [0024]-[0026]: his compressor, heat exchanger, turbine, and exhaust nozzle). The Examiner notes that these claims are similarly motivated and combined, wherein the additional parts of the apparatus are also included as parts requiring maintenance (the motivation and combination would also apply to any other parts, assembly, or equipment, broadly defined as the necessary items for a particular purpose; see the claim 2 combination and motivation).
Claims 6, 13, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Brown as applied to claim 1 above, and further in view of Mukherjee et al. (Mukherjee, US 2024/0354436).
As per claims 6, 13, and 19, Brown teaches the method of claim 5 but lacks teaching that which Mukherjee teaches, wherein outputting at least some of the plurality of knowledge documents to the task-specific decoder comprises outputting the at least some of the plurality of knowledge documents without having specified any length limits (paragraphs [0007], [0008], [0013]: his LLMs and large set of documents, wherein the prompts to the LLM, based on the knowledge documents, are “without being constrained by a size limit”, and the corresponding output generated).
Thus, it would have been obvious to one of ordinary skill in the linguistics art, before the effective filing date of the invention, in view of the teachings of Brown and Mukherjee, to combine the prior art element of a textual prompt input and RAG, having corresponding source documents used to generate an output by the task-specific decoder, as taught by Brown, with not specifying the length of the documents being output and presented to the user, as taught by Mukherjee. All of the claimed elements were known in the prior art, and one skilled in the art could have combined the elements as claimed by known methods (computer-implemented techniques and algorithms combining processes and steps in natural language processing), with each element performing the same function as it does separately, such that the combination would have yielded predictable results. KSR Int'l Co. v. Teleflex Inc., 550 U.S. 398, 82 USPQ2d 1385 (2007). The predictable result would be generating an output using a large language model that is not constrained by size limits, thus retrieving results that are “more relevant” and with “similarity” (see Brown, claim 1: maintenance, repair, servicing, and recommendations with respect to equipment issues; and Mukherjee: similarity discussion).
Claims 7, 14, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Brown as applied to claim 1 above, and further in view of Massie et al. (Massie, US 2025/027581).
As per claims 7, 14, and 20, Brown teaches the method of claim 5 wherein outputting at least some of the plurality of knowledge documents to the task-specific decoder comprises [outputting the at least some of the plurality of knowledge documents each as a single large language model knowledge item] (ibid; see the claim 1 outputting discussion).
Massie teaches that which Brown lacks, outputting the at least some of the plurality of knowledge documents each as a single large language model knowledge item (paragraphs [0081], [0037]: his document, from his knowledge base of documents, as a single document vector, i.e., the single large language model knowledge item; see his LLM receiving the single language model knowledge item and utilizing the item in generating a response, paragraphs [0081]-[0088]).
Thus, it would have been obvious to one of ordinary skill in the linguistics art, before the effective filing date of the invention, in view of the teachings of Brown and Massie, to combine the prior art element of a textual prompt input and RAG, having corresponding source documents used to generate an output by the task-specific decoder, as taught by Brown, with utilizing knowledge documents as a prompt, wherein the documents are input into an LLM as a single large language model knowledge item, as taught by Massie. All of the claimed elements were known in the prior art, and one skilled in the art could have combined the elements as claimed by known methods (computer-implemented techniques and algorithms combining processes and steps in natural language processing), with each element performing the same function as it does separately, such that the combination would have yielded predictable results. KSR Int'l Co. v. Teleflex Inc., 550 U.S. 398, 82 USPQ2d 1385 (2007). The predictable result would be generating an output using a single large language model item that is not constrained by size limits, by reducing the entire document to a single item (vector), thus retrieving improved results and responses (see Brown, claim 1: maintenance, repair, servicing, and recommendations with respect to equipment issues; and ibid, Massie: improved response discussion).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure (See PTO-892).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to LAMONT M SPOONER, whose telephone number is (571) 272-7613. The examiner can normally be reached from 8:00 AM to 5:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Daniel Washburn, can be reached at (571) 272-5551. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/LAMONT M SPOONER/ Primary Examiner, Art Unit 2657
1/10/2026