DETAILED ACTION
This Final Office Action is in response to the argument and amendment filed December 09, 2025.
Claims 1, 5, 6, 8, 12, 13 and 15 are amended.
Claims 2, 3, 4, 7, 9, 10, 11 and 14 are original.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-15 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 (The Statutory Categories): Is the claim to a process, machine, manufacture, or composition of matter? MPEP 2106.03.
Per Step 1: Claims 1-15 are directed to a method, system, and non-transitory computer readable medium. Thus, each of the claims falls within one of the four statutory categories (Step 1). However, the claims also fall within the judicial exception of an abstract idea (Step 2). While claims 1, 8, and 15 are directed to different categories, the language and scope are substantially the same, and they have been addressed together below.
The analysis proceeds to Step 2A Prong One.
Step 2A Prong One: Does the claim recite an abstract idea, law of nature, or natural phenomenon? MPEP 2106.04.
The abstract idea of claims 1, 8, and 15 is (claim 1 being representative):
A method, comprising: for receipt of a text input requesting a recommendation for an equipment based on underlying conditions:
processing the text input with a fine-tuned generative language model (GLM) that is fine-tuned to the equipment, wherein the fine-tuned GLM leverages pre-existing knowledge encoded during initial training to enable equipment-specific optimization with fewer labeled examples of historical repairs compared to training from scratch, the fine-tuned GLM configured to output raw text representative of the recommendation for the equipment based on the underlying conditions; and
processing the output raw text into a recommendation mapping process configured to map the output raw text to one or more of a checklist of recommendations conforming to end user standard, or a list of codes related to the recommendation for the equipment conforming to the end user standard.
The abstract idea steps recited above are those which could be performed mentally, including with pen and paper. The steps describe, at a high level, receiving, recommending, processing, storing, fine-tuning, converting, repairing, and providing an output that facilitates service repair operations for the equipment. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind, including observations, evaluations, judgments, and/or opinions, then it falls within the Mental Processes – Concepts Performed in the Human Mind grouping of abstract ideas. Accordingly, the claim recites an abstract idea.
Additionally, and alternatively, the abstract idea steps recited above relate to receiving, recommending, processing, storing, fine-tuning, converting, repairing, and providing an output that facilitates service repair operations for the equipment, which constitutes a process that, under its broadest reasonable interpretation, covers commercial activity. This is further supported by [0014] of applicant’s specification as filed. If a claim limitation, under its broadest reasonable interpretation, covers commercial interactions, including contracts, legal obligations, advertising, marketing, sales activities or behaviors, and/or business relations, then it falls within the Certain Methods of Organizing Human Activity – Commercial or Legal Interactions grouping of abstract ideas. Accordingly, the claim recites an abstract idea.
Step 2A Prong Two: Does the claim recite additional elements that integrate the judicial exception into a practical application? MPEP 2106.04.
This judicial exception is not integrated into a practical application because the additional elements are merely instructions to apply the abstract idea to a computer, as described in MPEP 2106.05(f).
Claim 1 recites the following additional elements: a fine-tuned generative language model (GLM), a list of codes.
Claim 8 recites the following additional elements: non-transitory computer readable medium, a fine-tuned generative language model (GLM), a list of codes.
Claim 15 recites the following additional elements: an apparatus, a processor, a fine-tuned generative language model (GLM), a list of codes.
These elements are merely instructions to apply the abstract idea to a computer, per MPEP 2106.05(f). Applicant has only described generic computing elements in their specification, as seen in [0014] of applicant’s specification as filed, for example.
Further, the combination of these elements is nothing more than a generic computing system applied to the tasks of the abstract idea. Because the additional elements are merely instructions to apply the abstract idea to a generic computing system, they do not integrate the abstract idea into a practical application, when viewed in combination. See MPEP 2106.05(f).
Therefore, per Step 2A Prong Two, the additional elements, alone and in combination, do not integrate the judicial exception into a practical application. The claim is directed to an abstract idea.
Step 2B (The Inventive Concept): Does the claim recite additional elements that amount to significantly more than the judicial exception? MPEP 2106.05.
Step 2B involves evaluating the additional elements to determine whether they amount to significantly more than the judicial exception itself.
The examination process involves carrying over identification of the additional element(s) in the claim from Step 2A Prong Two and carrying over conclusions from Step 2A Prong Two pertaining to MPEP 2106.05(f).
The additional elements and their analysis are therefore carried over: applicant has merely recited elements that facilitate the tasks of the abstract idea, as described in MPEP 2106.05(f).
Further, the combination of these elements is nothing more than a generic computing system. When the claim elements above are considered, alone and in combination, they do not amount to significantly more.
Therefore, per Step 2B, the additional elements, alone and in combination, are not significantly more. The claims are not patent eligible.
The analysis takes into consideration all dependent claims as well:
Dependent claims 2-7 and 9-14 contain additional steps that further narrow the abstract idea above.
Claim 2 recites the following additional elements: audio input. Applicant has only described generic computing elements in their specification, as seen in [0037] of applicant’s specification as filed. This does not integrate the abstract idea into a practical application and/or add significantly more. The claim is ineligible. Refer to MPEP 2106.05(f).
Claims 3-7 recite the following additional elements: a fine-tuned generative language model (GLM), codes. Applicant has only described generic computing elements in their specification, as seen in [0022] of applicant’s specification as filed. This does not integrate the abstract idea into a practical application and/or add significantly more. The claims are ineligible. Refer to MPEP 2106.05(f).
Claim 9 recites the following additional elements: a non-transitory computer readable medium, a fine-tuned generative language model (GLM). Applicant has only described generic computing elements in their specification, as seen in [0015] of applicant’s specification as filed. This does not integrate the abstract idea into a practical application and/or add significantly more. The claim is ineligible. Refer to MPEP 2106.05(f).
Claim 13 recites the following additional elements: natural language. Applicant has only described generic computing elements in their specification, as seen in [0022] of applicant’s specification as filed. This does not integrate the abstract idea into a practical application and/or add significantly more. The claim is ineligible. Refer to MPEP 2106.05(f).
In conclusion, the claims do not provide an inventive concept because they do not recite additional elements, or a combination of elements, that amount to significantly more than the judicial exception. Therefore, whether taken individually or as an ordered combination, the claims are rejected under 35 U.S.C. 101 as being directed to non-statutory subject matter.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-15 are rejected under 35 U.S.C. 103 as being unpatentable over Gardner et al. [US 12,008,332], hereafter Gardner, in view of Cella et al. [US 2023/0246437], hereafter Cella, in further view of McQuown et al. [US 2005/0144183], hereafter McQuown.
As per claims 1, 8, and 15 (similar scope and language):
Gardner does disclose the processing of text input with a fine-tuned generative language model (GLM):
processing the text input with a fine-tuned generative language model (GLM) that is fine-tuned to the equipment,
{[Col.6 Line 36-39] LLMs like GPT-3 expose their generative capabilities through APIs that allow sending a text prompt and receiving back model-generated completions. The prompts can provide context and specify desired attributes of the output text.
[Col. 9 Line 44-54] The training module 280 enables fine-tuning or training of the language models 230 on custom datasets relevant to the summarization task and domain. While not required, additional training can improve performance by adapting the models to the terminology, writing style, and abstraction levels preferred for the target use case. For example, legal summarization may benefit from models fine-tuned on legal documents to better handle long, complex content with precise definitions and citations. The training module 280 may be configured to fine-tune language models 230 on domain-specific datasets.}
Gardner discloses:
wherein the fine-tuned GLM leverages pre-existing knowledge encoded during initial training to enable equipment-specific optimization with fewer labeled examples of historical repairs compared to training from scratch, the fine-tuned GLM configured to output raw text representative of the recommendation for the equipment based on the underlying conditions; and
{[Col. 10, Line 17 – 28] After pre-training, the LLM can be fine-tuned on a dataset of document-summary pairs in the target domain. For legal summarization, thousands of contracts, cases, and filings with human-written abstracts provide examples for specialization. Data cleaning prepares the domain documents. During fine-tuning, the model may be trained to generate summaries similar to the examples using maximum likelihood estimation and backpropagation. Hyperparameters like learning rate, dropout, and epochs may be optimized for the dataset. Augmentation techniques like paraphrasing may be used to expand the training set. The optimized model architecture may be selected by evaluating on a holdout set.}
Cella discloses:
A method, comprising: A non-transitory computer readable medium: An apparatus, comprising: a processor, configured to, for receipt of a text input requesting a recommendation for an equipment based on underlying conditions:
{[0098] A user of the set of enterprise energy digital twin solutions 520 may, for example, view a set of factories that are consuming energy and be presented with a view that indicates the relative efficiency of each factory, of individual machines within the factory, or of components of the machines, such as to identify inefficient assets or components that should be replaced because the cost of replacement would be rapidly recouped by reduced energy usage. The digital twin, in such example, may provide a visual indicator of inefficient assets, such as a red flag, may provide an ordered list of the assets most benefiting from replacement, may provide a recommendation that can be accepted by the user (e.g., triggering an order for replacement), or the like. Digital twins may be role-based, adaptive based on context or market conditions, personalized, augmented by artificial intelligence, and the like, in the many ways described herein and, in the documents, incorporated by reference herein.
[0119] In embodiments, energy consumption orchestration may take inputs from operational prioritization to provide a set of recommendations or control instructions to optimize energy consumption by a machine, components, a set of machines, a factory, or a fleet of assets.}
Cella discloses the output raw text representative of the recommendation for the equipment based on the underlying conditions:
{[0098] A user of the set of enterprise energy digital twin solutions 520 may, for example, view a set of factories that are consuming energy and be presented with a view that indicates the relative efficiency of each factory, of individual machines within the factory, or of components of the machines, such as to identify inefficient assets or components that should be replaced because the cost of replacement would be rapidly recouped by reduced energy usage. The digital twin, in such example, may provide a visual indicator of inefficient assets, such as a red flag, may provide an ordered list of the assets most benefiting from replacement, may provide a recommendation that can be accepted by the user (e.g., triggering an order for replacement), or the like. Digital twins may be role-based, adaptive based on context or market conditions, personalized, augmented by artificial intelligence, and the like, in the many ways described herein and, in the documents, incorporated by reference herein.}
Cella discloses:
processing the output raw text into a recommendation mapping process configured to map the output raw text to one or more of a checklist of recommendations conforming to end user standard, or a list of codes related to the recommendation for the equipment conforming to the end user standard.
{[0098] The digital twin, in such example, may provide a visual indicator of inefficient assets, such as a red flag, may provide an ordered list of the assets most benefiting from replacement, may provide a recommendation that can be accepted by the user (e.g., triggering an order for replacement), or the like. Digital twins may be role-based, adaptive based on context or market conditions, personalized, augmented by artificial intelligence, and the like, in the many ways described herein and, in the documents, incorporated by reference herein.}
Cella discloses the mapping process:
{[0164] A set of neurons may learn to map points in an input space to coordinates in an output space. The input space can have different dimensions and topology from the output space, and the SOM may preserve these while mapping phenomena into groups.}
Motivation: It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to combine/modify Gardner’s generative language model to include Cella’s receipt of a text input requesting a recommendation for an equipment based on underlying conditions, output raw text representative of the recommendation, and checklist of recommendations conforming to an end user standard, since Gardner teaches processing the text input with a fine-tuned generative language model (GLM) that is fine-tuned to the equipment (see Gardner [Col. 6, Lines 36-39; Col. 9, Lines 44-54; Col. 10, Lines 17-28]). The combination would have been obvious to a person of ordinary skill in the art because the system and method provide a recommendation solution using generative language models, and the inclusion of Cella’s teachings would provide a visual indicator of inefficient assets, such as a red flag, an ordered list of the assets most benefiting from replacement, and a set of recommendations or control instructions to optimize energy consumption by a machine, components, or a set of machines. See Cella [0098, 0119].
Cella does not explicitly disclose the codes; however, McQuown does disclose the following limitations:
{[0055] The portable unit 14 also offers an instant messaging feature allowing the technician to quickly communicate repair information (for example, fault codes, diagnostic readings, or simple descriptive text) to a repair expert at the monitoring and diagnostic service center 20. The repair expert can respond directly to the technician through the portable unit 14. This feature is intended for use during the collection of additional diagnostic information or when problems are encountered during the course of a repair.
[0056] The portable unit 14 includes a graphical user interface. An exemplary screen is shown in FIG. 4. The information is presented in a clear and concise style so that users with all ranges of experience can adequately use and understand the displayed information. The portable unit 14 offers short cut links to commonly used data and functions for experienced users, with more detailed instructional links for less experienced users.}
Motivation: It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to combine/modify Gardner’s generative language model to include McQuown’s codes related to the recommendation, since Gardner teaches processing the text input with a fine-tuned generative language model (GLM) that is fine-tuned to the equipment (see Gardner [Col. 6, Lines 36-39; Col. 9, Lines 44-54; Col. 10, Lines 17-28]). The combination would have been obvious to a person of ordinary skill in the art because the system and method provide a recommendation solution using generative language models, and the inclusion of communicating repair information (for example, fault codes, diagnostic readings, or simple descriptive text) to a repair expert at the monitoring and diagnostic service center enables collection of additional diagnostic information encountered during repair. See McQuown [0056].
As per claims 2 and 9 (similar scope and language):
Gardner discloses:
The method of claim 1, wherein the text input is converted from audio input from the end user.
{[Col. 12 Line 28-30] The system may employ optimized speech-to-text models that are fine-tuned on the target audio domains to maximize transcription accuracy.
[Col. 12 Line 31-33] For image and video inputs, multimedia analysis techniques may be applied to extract salient objects, people, scenes, and text segments from the visual content.}
As per claims 3 and 10 (similar scope and language):
Gardner discloses:
The method of claim 1, further comprising processing the text input with a pre-processing layer configured to remove edge cases, vagueness and implicitness from the text input and is further configured to formulate the text input into a specific problem and repair case of the equipment for input into the fine-tuned GLM.
{[Col. 9 Line 54-59] Data preprocessing may include techniques like formatting, cleaning, and sampling representative content. Various model architectures like BERT or GPT-3 may be used for abstractive summarization. Models may be trained using maximum likelihood estimation and backpropagation to optimize abstraction performance.
[Col. 46 Line 30-32] Raw input documents are cleaned, parsed, segmented, tokenized etc. to normalize them for downstream summarization steps.
[Col. 10 Line 9 -16] For pre-training, billions of text documents are utilized to develop general language understanding. Data sources include web pages, books, academic papers, news articles, forums, and more. Documents are diversity-filtered to improve coverage of topics, writing styles, and terminology. Cleaning, shuffling, and sampling techniques prepare the pre-training data.}
As per claims 4 and 11 (similar scope and language):
Gardner discloses:
The method of claim 1, wherein the fine-tuned GLM is curated with structured and unstructured data related to the equipment.
{[Col. 8 Line 12-16] During further training and fine-tuning of the models for the summarization task, curated datasets of document-summary pairs in the target domain can be used. These provide examples for training the models to produce summaries with the appropriate length, perspective, and/or other attributes.}
Gardner does not disclose the structured and unstructured data; however, Cella does disclose the following limitations:
{[0402] Problem domains include nearly any situation where structured data can be collected, and includes natural language processing (NLP), computer vision (CV), classification, image recognition, etc.}
Motivation: It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to combine/modify Gardner’s generative language model to include Cella’s structured and unstructured data, since Gardner teaches processing the text input with a fine-tuned generative language model (GLM) that is fine-tuned to the equipment (see Gardner [Col. 6, Lines 36-39; Col. 9, Lines 44-54; Col. 10, Lines 17-28]). The combination would have been obvious to a person of ordinary skill in the art because the system and method provide a recommendation solution using generative language models, and the inclusion of collecting structured data and natural language processing (NLP) enables producing a raw text repair recommendation. See Cella [0402].
As per claims 5 and 12 (similar scope and language):
Cella discloses the classifier:
The method of claim 1, wherein the recommendation mapping process comprises a classifier configured to output the codes in response to the output raw text based on a temporal state of the equipment,
{[0142] Training may include training in optimization, such as training a neural network to optimize one or more systems based on one or more optimization approaches, such as Bayesian approaches, parametric Bayes classifier approaches, k-nearest-neighbor classifier approaches, iterative approaches, interpolation approaches, Pareto optimization approaches, algorithmic approaches, and the like. Feedback may be provided in a process of variation and selection, such as with a genetic algorithm that evolves one or more solutions based on feedback through a series of rounds.}
Cella discloses the recommendation:
{[0184] The DPLF 902 provides a framework by which the ANN creates outputs such as predictions, classifications, recommendations, conclusions and/or other outputs based on a historic inputs, new inputs, and new outputs (including outputs configured for specific use cases, including ones determined by parameters of the context of utilization (which may include performance parameters such as latency parameters, accuracy parameters, consistency parameters, bandwidth utilization parameters, processing capacity utilization parameters, prioritization parameters, energy utilization parameters, and many others).}
Cella discloses the mapping process:
{[0164] A set of neurons may learn to map points in an input space to coordinates in an output space. The input space can have different dimensions and topology from the output space, and the SOM may preserve these while mapping phenomena into groups.}
Motivation: It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to combine/modify Gardner’s generative language model to include Cella’s recommendation mapping process comprising a classifier configured to output the codes in response to the output raw text based on a temporal state of the equipment, since Gardner teaches processing the text input with a fine-tuned generative language model (GLM) that is fine-tuned to the equipment (see Gardner [Col. 6, Lines 36-39; Col. 9, Lines 44-54; Col. 10, Lines 17-28]). The combination would have been obvious to a person of ordinary skill in the art because the system and method provide a recommendation solution using generative language models, and the inclusion of a classifier producing recommendations, conclusions, and/or other outputs based on historic inputs, new inputs, and new outputs enables an output of diagnostic information encountered during repair in the form of codes. See Cella [0142, 0184].
McQuown discloses the output of codes:
{[0032] The on-board monitoring system identifies faulty components and provides fault codes for use by the repair technician in diagnosing the problem.}
McQuown discloses:
wherein the classifier is trained against historical repairs and historical temporal states of the equipment to output repair codes based on temporal state and symptom correlation, and
{[0072] As illustrated in FIG. 7, the recommendation authoring subsystem 182 also interfaces with the repair status subsystem 184. The recommendation authoring subsystem 182 allows selection of existing general repair recommendations for a specific problem or repair code. Also, the recommendation authoring subsystem 182 inputs a summary of the repair recommendation to the repair status subsystem 184 so that the latter can create an entry in the repair status database for each repair. The repair status subsystem 184 responds to the recommendation authoring subsystem 182 when the repair entry is created.}
Motivation: It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to combine/modify Gardner’s generative language model to include McQuown’s output of codes, recommendation mapping process comprising a classifier configured to output the codes in response to the output raw text, and classifier trained against historical repairs and historical temporal states of the equipment to output repair codes based on temporal state and symptom correlation, since Gardner teaches processing the text input with a fine-tuned generative language model (GLM) that is fine-tuned to the equipment (see Gardner [Col. 6, Lines 36-39; Col. 9, Lines 44-54; Col. 10, Lines 17-28]). The combination would have been obvious to a person of ordinary skill in the art because the system and method provide a recommendation solution using generative language models, and the inclusion of identifying faulty components, providing fault codes for diagnosing a problem, and a recommendation authoring subsystem enables selection of a repair recommendation for a specific problem. See McQuown [0032, 0072].
Gardner discloses:
a summarizer configured to summarize the codes into the checklist of recommendations and is configured to provide a measure of similarity to known recommendations.
{[Col 10 Line 22- 28] During fine-tuning, the model may be trained to generate summaries similar to the examples using maximum likelihood estimation and backpropagation. Hyperparameters like learning rate, dropout, and epochs may be optimized for the dataset. Augmentation techniques like paraphrasing may be used to expand the training set. The optimized model architecture may be selected by evaluating on a holdout set.}
As per claims 6 and 13 (similar scope and language):
Gardner discloses:
The method of claim 1, further comprising converting structured, non-textual equipment data into natural language descriptions and combining with textual fault codes for input to the fine-tuned GLM, wherein the GLM produces raw text repair recommendations from both structured and unstructured data.
{[Col. 6, line 26-33] Over many iterations through the huge dataset, the network learns complex statistical relationships and nuances of natural language to build a robust internal representation of linguistic context, grammar, meaning, and fluency.
The trained model can generate coherent, human-like text by iteratively sampling from its predicted next word distributions to continue growing new sequences.}
Cella discloses the raw text repair recommendations from both structured and unstructured data:
{[0098] A user of the set of enterprise energy digital twin solutions 520 may, for example, view a set of factories that are consuming energy and be presented with a view that indicates the relative efficiency of each factory, of individual machines within the factory, or of components of the machines, such as to identify inefficient assets or components that should be replaced because the cost of replacement would be rapidly recouped by reduced energy usage. The digital twin, in such example, may provide a visual indicator of inefficient assets, such as a red flag, may provide an ordered list of the assets most benefiting from replacement, may provide a recommendation that can be accepted by the user (e.g., triggering an order for replacement), or the like. Digital twins may be role-based, adaptive based on context or market conditions, personalized, augmented by artificial intelligence, and the like, in the many ways described herein and, in the documents, incorporated by reference herein.}
Motivation: It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to combine/modify Gardner’s generative language model to include Cella’s raw text repair recommendations from both structured and unstructured data, since Gardner teaches processing the text input with a fine-tuned generative language model (GLM) that is fine-tuned to the equipment (see Gardner [Col. 6, Lines 36-39; Col. 9, Lines 44-54; Col. 10, Lines 17-28]). The combination would have been obvious to a person of ordinary skill in the art because the system and method provide a recommendation solution using generative language models, and the inclusion of Cella’s teachings would provide a visual indicator of inefficient assets, such as a red flag, an ordered list of the assets most benefiting from replacement, and a set of recommendations or control instructions to optimize energy consumption by a machine, components, or a set of machines. See Cella [0098].
As per claims 7 and 14 (similar scope and language):
Cella discloses:
The method of claim 1, wherein the checklist of recommendations or the list of codes is output according to a sequence to conduct the recommendations or the list of codes.
{[0098] The digital twin, in such example, may provide a visual indicator of inefficient assets, such as a red flag, may provide an ordered list of the assets most benefiting from replacement, may provide a recommendation that can be accepted by the user (e.g., triggering an order for replacement), or the like. Digital twins may be role-based, adaptive based on context or market conditions, personalized, augmented by artificial intelligence, and the like, in the many ways described herein and, in the documents, incorporated by reference herein.}
Motivation: It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to combine/modify Gardner et al.'s generative language model to include Cella et al.'s checklist of recommendations or list of codes output according to a sequence to conduct the recommendations, since Gardner teaches processing the text input with a fine-tuned generative language model (GLM) that is fine-tuned to the equipment (see Gardner [Col. 6, Lines 36-39; Col. 9, Lines 44-54; Col. 10, Lines 17-28]). The combination would have been obvious to a person of ordinary skill in the art because, since the system and method provide a recommendation solution using generative language models, the inclusion would provide a visual indicator of inefficient assets, such as a red flag, an ordered list of the assets most benefiting from replacement, and a set of recommendations or control instructions to optimize energy consumption by a machine, components, or a set of machines. See Cella [0098].
Response to Arguments
In response to the arguments filed December 09, 2025, regarding the 101 rejections, the Examiner respectfully disagrees.
Applicant argues that the claims are not directed to a commercial activity, mathematical concepts, or an abstract idea, but rather address technical problems in equipment repair systems.
The Examiner respectfully disagrees.
The Examiner notes that the aspects pertaining to the fine-tuning, converting, mapping, codes, and provision of an output of raw text were viewed as steps of the identified abstract idea in the Step 2A Prong One analysis, and the codes as an additional element in the Step 2A Prong Two analysis. Therefore, the Examiner maintains that the claims recite the Certain Methods of Organizing Human Activity - Commercial or Legal Interactions grouping of abstract ideas. See Applicant's specification [0014].
Applicant states that the claims recite technical improvements related to the repair domain, the generative language model (GLM), encoding and fine-tuning generative language models, and training to enable equipment-specific optimization. Applicant further argues that the claimed invention provides an improvement to how the machine learning model operates through equipment-specific fine-tuning.
The Examiner respectfully disagrees.
The Examiner notes that the machine learning model, repair domain, generative language model (GLM), encoding and fine-tuning of generative language models, and training to enable equipment-specific optimization are merely generic technology providing no technical improvement, but rather an improvement to the abstract idea itself using generic technology. See specification [0029].
The Examiner notes that fine-tuning for customer-specific applications, converting structured, non-textual equipment data into natural language, and the pre-training of the GLM are all directed to a mental process.
The courts consider a mental process (thinking) that "can be performed in the human mind, or by a human using a pen and paper" to be an abstract idea. CyberSource Corp. v. Retail Decisions, Inc., 654 F.3d 1366, 1372, 99 USPQ2d 1690, 1695 (Fed. Cir. 2011). As the Federal Circuit explained, "methods which can be performed mentally, or which are the equivalent of human mental work, are unpatentable abstract ideas, the 'basic tools of scientific and technological work' that are open to all." 654 F.3d at 1371, 99 USPQ2d at 1694 (citing Gottschalk v. Benson, 409 U.S. 63, 175 USPQ 673 (1972)). See also Mayo Collaborative Servs. v. Prometheus Labs. Inc., 566 U.S. 66, 71, 101 USPQ2d 1961, 1965 (2012) ("'[M]ental processes [] and abstract intellectual concepts are not patentable, as they are the basic tools of scientific and technological work'" (quoting Benson, 409 U.S. at 67, 175 USPQ at 675)); Parker v. Flook, 437 U.S. 584, 589, 198 USPQ 193, 197 (1978).
The Examiner maintains these claims recite an abstract idea.
Therefore, for the foregoing reasons, the Examiner maintains the 35 USC § 101 rejection.
Regarding the prior art rejections, the Examiner respectfully disagrees.
Applicant argues that the prior art of record fails to teach the claimed "recommendation of mapping process that transforms raw GLM output into structured equipment repair recommendations conforming to the end standards."
The Examiner respectfully disagrees.
In response to the amended claims, Cella does disclose the recommendation mapping process (see Cella [0098, 0164]), and Gardner discloses the transformation of the GLM output (see Gardner [Col. 9, Lines 44-54]).
In terms of the arguments, the combination of Cella, Gardner, and McQuown does teach the specific limitations as amended. The prior art discloses the checklist of recommendations (see Cella [0098]) and a list of codes related to the recommendation (see McQuown [0055]).
Applicant argues that the Office has engaged in hindsight. According to Applicant, one of ordinary skill in the art, before the effective filing date of the claimed invention, would not have modified Gardner with the comparison process to arrive at the claimed recommendation mapping process.
The Examiner respectfully disagrees. In response to Applicant's argument that the Examiner's conclusion of obviousness is based upon improper hindsight reasoning, it must be recognized that any judgment on obviousness is in a sense necessarily a reconstruction based upon hindsight reasoning. But so long as it takes into account only knowledge which was within the level of ordinary skill at the time the claimed invention was made, and does not include knowledge gleaned only from the applicant's disclosure, such a reconstruction is proper. See In re McLaughlin, 443 F.2d 1392, 170 USPQ 209 (CCPA 1971).
Based on the amendments as considered above, the cited 35 USC § 103 references have been applied to teach the claimed invention (claims 1, 8, and 15).
Absent any further argument, the 35 USC § 103 rejection of claims 1-15 is maintained, as set forth above in light of the amended claim limitations.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to VICTOR CHIGOZIRIM ESONU whose telephone number is (571) 272-4883. The examiner can normally be reached Monday - Friday, 9:00 am - 5:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using
a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is
encouraged to use the USPTO Automated Interview Request (AIR) at
http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s
supervisor, Sarah Monfeldt can be reached on (571) 270-1833. The fax phone number for the
organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/VICTOR CHIGOZIRIM ESONU/
Examiner, Art Unit 3629
/SARAH M MONFELDT/Supervisory Patent Examiner, Art Unit 3629