Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
This Final Office Action is in response to the arguments and amendments filed September 9, 2025.
Claims 2-7, 9-14, 16-18, and 20 are original.
Claims 1, 8 and 15 have been amended.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture,
or composition of matter, or any new and useful improvement thereof, may obtain
a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to
an abstract idea without significantly more.
Step 1 (The Statutory Categories): Is the claim to a process, machine, manufacture, or composition of matter? MPEP 2106.03.
Per Step 1, claims 1-7 are directed to a method (i.e., a process), claims 8-14 are directed to a system (i.e., a machine), and claims 15-20 are directed to a non-transitory computer-readable medium (i.e., a manufacture or machine). Thus, the claims are directed to statutory categories of invention. However, the claims are rejected under 35 U.S.C. 101 because they are directed to an abstract idea, a judicial exception, without reciting additional elements that integrate the judicial exception into a practical application.
The analysis proceeds to Step 2A Prong One.
Step 2A – Prong One: Does the claim recite an abstract idea, law of nature, or natural phenomenon? MPEP 2106.04.
The abstract idea of claims 1, 8, and 15 is as follows (claim 1 being representative):
receiving an input including at least one of [textual data and error code] type data associated with a problem for an equipment into a [user interface];
inputting the input into a domain-specific [machine learning model] trained based on a structured and anonymized data set including timestamped service repair records, technician-authored annotations, and part replacement histories, wherein in response to receiving the input the domain-specific [machine learning model] is configured to:
grouping components of the equipment into groupings based on historical co-repair patterns and part dependencies;
portraying potential solutions for the problem as a decision tree;
identifying a particular solution of the potential solutions as output by traversing the decision tree based on the groupings; and
providing the output that facilitates service repair operations, wherein the outputs include at least one of ranked solution sets, suggested actions, and suggested parts for replacement.
The abstract idea steps identified above are steps that could be performed mentally, including with pen and paper. At a high level, the steps describe collecting data, analyzing it, and displaying the results. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind, as is the case for the recited determining, recording, identifying, grouping, portraying, analyzing, and providing output that facilitates service repair operations for the equipment (i.e., observations, evaluations, judgments, and/or opinions), then it falls within the Mental Processes – Concepts Performed in the Human Mind grouping of abstract ideas. Accordingly, the claim recites an abstract idea.
Additionally and alternatively, the abstract idea steps identified above pertain to improving efficiencies in troubleshooting and to providing solutions to equipment problems for service technicians, which, under the broadest reasonable interpretation, covers fundamental economic principles or practices. This is further supported by {[0042]} of Applicant's specification as filed. If a claim limitation, under its broadest reasonable interpretation, covers concepts relating to hedging, insurance, and/or mitigating risk, then it falls within the Certain Methods of Organizing Human Activity – Fundamental Economic Principles or Practices grouping of abstract ideas. Accordingly, the claim recites an abstract idea.
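For illustration only, the recited grouping and decision-tree-traversal steps can be expressed as a short routine operating on hypothetical data. The part names, repair history, and tree below are the Examiner's own assumptions for purposes of illustration and do not appear in Applicant's disclosure:

from collections import defaultdict

# Hypothetical co-repair history: parts observed to be replaced together in past service events.
co_repair_history = [
    {"compressor", "fan_motor"},
    {"compressor", "fan_motor", "relay"},
    {"thermostat", "relay"},
]

# Group components that appear together in the same repair event (historical co-repair patterns).
groupings = defaultdict(set)
for event in co_repair_history:
    for part in event:
        groupings[part] |= event

# A small decision tree of potential solutions, keyed on whether the compressor grouping is implicated.
decision_tree = {
    "question": "Does the fault involve the compressor grouping?",
    "yes": {"solution": "Inspect compressor and fan motor; replace relay if worn."},
    "no": {"solution": "Check thermostat calibration."},
}

def traverse(tree, implicated_parts):
    # Walk the tree, using the groupings to select a branch until a solution is reached.
    if "solution" in tree:
        return tree["solution"]
    branch = "yes" if implicated_parts & groupings["compressor"] else "no"
    return traverse(tree[branch], implicated_parts)

print(traverse(decision_tree, {"fan_motor"}))  # reaches the compressor-grouping solution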
Step 2A Prong Two: Does the claim recite additional elements that integrate the judicial exception into a practical application? MPEP 2106.04.
This judicial exception is not integrated into a practical application because the additional elements are merely instructions to apply the abstract idea to a computer, as described in MPEP 2106.05(f).
Claim 1 recites the following additional elements: a machine learning model, a user interface, and textual and error code data.
Claim 2 recites the following additional element: crowdsourced data.
Claims 8 and 15 recite the following additional elements: a processor and a memory storing computer-executable instructions.
Claim 15 recites the following additional element: a non-transitory computer-readable medium.
These elements are merely instructions to apply the abstract idea to a computer, per MPEP 2106.05(f). Applicant has described only generic computing elements, as seen in {[0035]-[0036]} of the specification as filed.
Further, these additional elements provide no technical improvement and merely implement the abstract idea using generic technology. As such, the additional elements are not significantly more and do not transform the claim into a practical application. (See MPEP 2106.05(f).)
Therefore, per Step 2A Prong Two, the additional elements, alone and in combination, do not integrate the judicial exception into a practical application. The claim is directed to an abstract idea.
Step 2B (The Inventive Concept): Does the claim recite additional elements that amount to significantly more than the judicial exception? MPEP 2106.05.
Step 2B involves evaluating the additional elements to determine whether they amount to significantly more than the judicial exception itself.
The additional element(s) identified in the claim, and the conclusions reached under MPEP 2106.05(f), are carried over from Step 2A Prong Two.
That carried-over analysis applies here: Applicant has merely recited elements that facilitate the tasks of the abstract idea, as described in MPEP 2106.05(f).
Further, the combination of these elements is nothing more than a generic computing system with machine learning models. When the claim elements above are considered, alone and in combination, they do not amount to significantly more. See {[Specification 0042, Figure 2]}.
Therefore, per Step 2B, the additional elements, alone and in combination, are not significantly more. The claims are not patent eligible.
The dependent claims, claims 2-7, 9-14, and 16-20, have also been analyzed. They recite further details of the abstract idea and/or the same generic additional elements identified above, and therefore likewise do not integrate the abstract idea into a practical application or amount to significantly more.
Accordingly, claims 1-20 are rejected under 35 USC § 101 as being directed to non-statutory subject matter.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-6, 8-13, and 15-19 are rejected under 35 U.S.C. 103 as being unpatentable over Trinh et al. [2020/0379454], hereafter Trinh, in view of McQuown et al. [2005/0144183], hereafter McQuown, further in view of Graham et al. [2020/0213006], hereafter Graham, and further in view of Kalinski et al. [2023/0394411], hereafter Kalinski.
As per claims 1, 8, and 15 (similar scope and language):
Trinh discloses:
A system comprising: a processor; and a memory storing computer-executable instructions, which, when executed by the processor, cause the processor to perform operations comprising: (Claim 8):
And
A non-transitory computer-readable medium storing instructions thereon, wherein the instructions, when executed by one or more processors, cause the one or more processors to perform operations comprising: (Claim 15):
{[0007] In one embodiment, a non-transitory computer readable medium that is configured to store instructions is described. The instructions, when executed by one or more processors, cause the one or more processors to perform a process that includes steps described in the above computer-implemented methods or described in any embodiments of this disclosure
{[0035] Parts of the predictive maintenance server 110 may also include a memory that stores computer code including instructions that may cause the processors to perform certain actions when the instructions are executed, directly or indirectly by the processors.}
inputting the input into a domain-specific machine learning model trained based on
{[0049] After a machine learning model is trained, the model scoring engine 240
may use the trained machine learning model to determine a score associated with an input
dataset. An input dataset may be a set of newly generated sensor data from a piece of
equipment 150. Based on the trained model, an anomaly score may be generated using the sensor
and setting data from the equipment 150 as the input of the trained model}
timestamped service repair records, technician-authored annotations, and part replacement histories, wherein in response to receiving the input the domain-specific machine learning model is configured to:
{[0044] The predictive maintenance server 110 receives and analyzes the data transmitted from various sensors 154 and settings 152. The predictive maintenance server 110 may train one or more machine learning models that assign anomaly scores to a piece of equipment 150. The anomaly scores may include an overall anomaly score and individual anomaly scores each corresponding to a component, a measurement, or an aspect of the equipment 150. When the anomaly scores are determined to be beyond a specific range such as above a predetermined threshold, the predictive maintenance server 110 identifies a particular facility site 140 and a particular piece of equipment 150 and provides an indication that the equipment 150 may need an inspection and possible repair. The predictive maintenance server 110 may also train additional models such as classifiers and regressors that can identify a specific component of the equipment 150 that may need an inspection, repair and/or replacement.
{[0067] FIG. 5 is a flowchart depicting a process of generating a first example model of anomaly detection, according to an embodiment. The first example model of anomaly detection may be referred to as a predictive power parity (PPP) model. The PPP model may be an unsupervised learning model that is trained based on training data that does not include labels or only includes a small number of labels on whether a piece of equipment is normal or defective or on the repair history of the equipment.
{[0081] FIG. 9 is a block diagram illustrating a structure of a second example model of anomaly detection, according to an embodiment. The second example model of anomaly detection may be referred to as a variational autoencoder (VAE) model. The VAE model may be an unsupervised learning model that is trained based on training data that does not include labels or only includes a small number of labels on whether a piece of equipment is normal or defective or on the repair history of the equipment.}
grouping components of the equipment into groupings based on historical co-repair patterns and part dependencies;
{[0109] FIGS. 19A-19C illustrate user interfaces for displaying anomalies, according to an embodiment. FIG. 19A shows a list of anomalies in a tabular form at store/equipment level. The table shows fields including store name 1900, a channel id 1902 (identifying a sensor), a device type group 1904 (metadata describing device), duration 1904 (time interval associated with the anomaly), status 1908 indicating whether the anomaly is on-going, an average risk score 1910, the last risk score 1912 that was determined for the anomaly, and diagnosis status 1914.
{[0049] An input dataset may be a set of newly generated sensor data from a piece of equipment 150. Based on the trained model, an anomaly score may be generated using the sensor and setting data from the equipment 150 as the input of the trained model. In some cases, the trained model may be a classifier or a regression model. For example, a classifier may be trained to determine which component of the equipment 150 may need an inspection, repair or general follow up. A regression model may provide a prediction of the score that corresponds to
the likelihood of a piece of equipment (or a component thereof) is abnormal.
{[0051] The failure classification and prediction model store 260 stores machine learning models that are used to identify specific components or aspects of a piece of equipment 150 that may need inspection and/or repair. For example, a trained classifier model may be stored in the failure classification and prediction model store 260. The trained classifier model such as a neural network or a random forest model may receive newly generated sensor data as input and determine a component that most likely needs further inspection. Another trained classifier model may also determine the type of defect of an identified component. The models that are trained to classify failures may estimate failure probabilities of the equipment 150 or of a particular component of the equipment 150.}
providing the output that facilitates service repair operations, wherein the outputs include at least one of ranked solution sets, suggested actions, and suggested parts for replacement.
{[0035] A predictive maintenance server 110 provides predictive maintenance
information to various operators of the facility sites 140. A predictive maintenance server 110 may simply be referred to as a computing server 110. Maintenance information may include information on diagnostics, anomaly, inspection, repair, replacement, etc. The predictive maintenance server may generate one or more metrics that quantify anomaly of a piece of equipment at a facility site 140, may identify one or more pieces of equipment and/or the equipment's components that may need maintenance or repair, and may provide recommendations on recourses and actions that should be taken for particular equipment.}
{[0044] When the anomaly scores are determined to be beyond a specific range such as above a predetermined threshold, the predictive maintenance server 110 identifies a particular facility site 140 and a particular piece of equipment 150 and provides an indication that the equipment 150 may need an inspection and possible repair. The predictive maintenance server 110 may also train additional models such as classifiers and regressors that can identify a specific component of the equipment 150 that may need an inspection, repair and/or replacement.}
Trinh does not disclose the following limitations; however, McQuown discloses:
receiving an input including at least one of textual data and error code type data associated with a problem for an equipment into a user interface;
{[0055] The portable unit 14 also offers an instant messaging feature allowing the technician to quickly communicate repair information (for example, fault codes, diagnostic readings, or simple descriptive text) to a repair expert at the monitoring and diagnostic service center 20. The repair expert can respond directly to the technician through the portable unit 14. This feature is intended for use during the collection of additional diagnostic information or when problems are encountered during the course of a repair.}
Motivation: It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system for troubleshooting and error analysis as disclosed by Trinh with the inclusion of a fault code as taught by McQuown, to provide repair solutions, thereby allowing the technician to quickly communicate repair information ({[0055]} of McQuown).
The combination of Trinh and McQuown does not disclose the following limitations; however, Kalinski discloses:
a structured and anonymized data set including
{[0211] In step 506, at least a portion of the data received by the PSCI risk modeling engine 402 in step 504 (e.g., the information received from third party databases and/or other external sources) is anonymized and stored in one or more PSCI databases 150 a- 150 n, and/or one or more remote PSCI servers 175 to be utilized by the PSCI machine learning engine 405 in conjunction with artificial intelligence (e.g., a ANN 480) to generate increasingly better dynamic pricing models.}
Motivation: It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system for troubleshooting and error analysis as disclosed by Trinh and McQuown with the inclusion of anonymizing the data as taught by Kalinski, to secure the information ({[0211]} of Kalinski).
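For illustration only, one common way to anonymize identifying fields in a service record before storage, consistent at a general level with the anonymization cited from Kalinski, is sketched below; the field names, record contents, and salt value are the Examiner's assumptions and do not appear in the cited references or in Applicant's disclosure:

import hashlib

def anonymize(record, fields=("technician_name", "customer_id"), salt="example-salt"):
    # Replace identifying fields with a short, salted hash token before storage.
    out = dict(record)
    for field in fields:
        if field in out:
            digest = hashlib.sha256((salt + str(out[field])).encode()).hexdigest()
            out[field] = digest[:12]
    return out

record = {"technician_name": "J. Doe", "customer_id": 4471, "repair": "replaced relay"}
print(anonymize(record))  # identifying fields replaced; repair detail retained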
The combination of Trinh, McQuown, and Kalinski does not disclose the following limitations; however, Graham discloses:
portraying potential solutions for the problem as a decision tree;
identifying a particular solution of the potential solutions as output by traversing the decision tree based on the groupings; and
{[0242] In some embodiments of the present invention, decision trees are used. Decision tree algorithms belong to the class of supervised learning algorithms. The aim of a decision tree is to induce a classifier (a tree) from real-world example data. This tree can be used to classify unseen examples which have not been used to derive the decision tree. A decision tree is derived from training data. An example contains values for the different attributes and what class the example belongs. In one embodiment, the training data is data representative of a plurality of users for whom a mental health status is known with respect to associated user behaviors.}
Motivation: It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system for troubleshooting and error analysis as disclosed by Trinh, McQuown, and Kalinski with the inclusion of decision trees for portraying and identifying potential solutions as taught by Graham, to identify a repair solution ({[0242]} of Graham).
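For context only, the general decision-tree approach described in the cited Graham paragraph (inducing a classifier tree from example data and classifying unseen examples) can be sketched as follows; the feature values, labels, and library choice are the Examiner's assumptions and are not drawn from Graham or from Applicant's disclosure:

from sklearn.tree import DecisionTreeClassifier

# Hypothetical training examples: [error_code, hours_since_last_service] with a repair-action label.
X_train = [[101, 40], [101, 400], [205, 10], [205, 350]]
y_train = ["reset_controller", "replace_sensor", "reset_controller", "replace_valve"]

# Induce the decision tree from the example data.
clf = DecisionTreeClassifier(max_depth=3, random_state=0)
clf.fit(X_train, y_train)

# Classify an unseen example that was not used to derive the tree.
print(clf.predict([[205, 380]])[0])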
As per claims 2, 9, and 16 (similar scope and language):
McQuown discloses:
The method of claim 1, wherein the service repair records are crowdsourced data from
different users and stored in a shared database.
{[0036] An expert repository 42 stores the repair recommendations authored at the MDSC 20. These recommendations include: suggested repairs based on operational and/or
failure information extracted from the on-board monitoring system of the locomotive derived
from symptoms reported by the repair technician, or planned maintenance actions, or field
modifications or upgrades. The recommendation can include suggested trouble shooting actions
to further refine the repair recommendation and links to appropriate repair instructions,
schematics, wiring diagrams, parts catalogs, and troubleshooting guides to make the diagnosis and repair process easier.}
Motivation: It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system for troubleshooting and error analysis as disclosed by Trinh, Kalinski, and Graham with the inclusion of crowdsourced data as taught by McQuown, to identify a repair solution and recommendation ({[0036]} of McQuown).
As per claims 3, 10, and 17 (similar scope and language):
Trinh discloses:
The method of claim 2, wherein the machine learning model is updated based on additional
service repair records stored in the shared database.
{[0064] The predictive maintenance server 110 may also retrieve the whole dataset on-demand at the beginning of the training stage or when summary statistics 434 needs to be updated (e.g., a new column is added). The training data 422 is used to train 439 various machine learning models. The predictive maintenance server 110 uses the scoring data 424 to generate 448 one or more anomaly scores 450 for equipment 150.
{[0100] The predictive maintenance server 110 may update the reference
histogram based on the received second set of sensor data, wherein the updated histogram is used
for subsequent anomaly detection. Accordingly, the histogram-based model can be updated
constantly as new sensor data is received and does not require a periodic training step that is
required, for example, by machine learning based models.
{[0050] The models may be trained using unsupervised learning techniques
or semi-supervised learning techniques. For example, in semi-supervised learning, as additional
repair and operator inspection records are available, more labeled training data may be provided
to a machine learning model to improve the score prediction results.}
As per claims 4, 11, and 18 (similar scope and language):
Trinh discloses:
The method of claim 1, wherein each service repair record includes at least one of a type of
problem, a length of time for addressing the problem, steps for troubleshooting the
problem, steps for solving the problem, parts used to address the problem, and an error
code associated with the problem.
{[0051] Another trained classifier model may also determine the type of defect of an identified component. The models that are trained to classify failures may estimate failure probabilities of the equipment 150 or of a particular component of the equipment 150. The models may also provide priority rankings among different pieces of equipment 150 and among different components of a piece of equipment 150.
{[0057] While not all repair data 320 might include the repaired component 322 or the repair reason 326, some of the repair data 320 may include repair date and time 324. Combining the sensor data 310 and the repair data and time 324, the predictive maintenance server 110 may generate some training data 340 that includes repair date and time.}
As per claims 5, 12, and 19 (similar scope and language):
Trinh discloses:
The method of claim 1, wherein the machine learning model is periodically re-trained
based on an amount of data in the data set.
{[0064] FIG. 4B is another example of training and scoring pipeline, according to an embodiment. The pipeline shown in FIG. 4B is similar to the pipeline shown in FIG. 4A.
Similar blocks are not repeatedly discussed. The pipeline shown in FIG. 4B may include a more
thorough and computing-intensive data pre-processing such as data cleaning and time-window
creation. The pipeline shown in FIG. 4B differs from the pipeline shown in FIG. 4A in that
the predictive maintenance server 110 may separate data using a sliding window approach
(in blocks 436 and 442) for both training data 422 and scoring data 424 for particular time
frames corresponding to windows. The predictive maintenance server 110 may also store
processed and windowed data as datasets in data store 438 and data store 444. The predictive
maintenance server 110 may perform merging of data 446 when newly generated scoring
data 424 is received. Newly generated scoring data 424 can become part of the training
data 422 to reinforce the training of the machine learning models.}
As per claims 6, 13, and 20 (similar scope and language):
Trinh discloses:
The method of claim 1, wherein the machine learning model is configured to adjust the
groupings based on particular parts or combination of parts for replacement.
{[0049] An input dataset may be a set of newly generated sensor data from a piece of equipment 150. Based on the trained model, an anomaly score may be generated using the sensor and setting data from the equipment 150 as the input of the trained model. In some cases, the trained model may be a classifier or a regression model. For example, a classifier may be trained to determine which component of the equipment 150 may need an inspection, repair or general follow up. A regression model may provide a prediction of the score that corresponds to
the likelihood of a piece of equipment (or a component thereof) is abnormal.
{[0051] The failure classification and prediction model store 260 stores machine learning models that are used to identify specific components or aspects of a piece of equipment 150 that may need inspection and/or repair. For example, a trained classifier model may be stored in the failure classification and prediction model store 260. The trained classifier model such as a neural network or a random forest model may receive newly generated sensor data as input and determine a component that most likely needs further inspection. Another trained classifier model may also determine the type of defect of an identified component. The models that are trained to classify failures may estimate failure probabilities of the equipment 150 or of a particular component of the equipment 150.
{[0052] The maintenance recommendation engine 270 may provide one or more alerts (e.g., in the form of recommendations) for inspecting or repairing of pieces of equipment 150. For example, for a particular equipment 150 that newly generates a set of sensor data, the predictive maintenance server 110 may retrieve one or more machine learning models stored in the anomaly detection model store 250 and/or in the failure classification and prediction model store 260. One or more anomaly scores may be determined for the particular equipment 150.}
Claims 7 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Trinh et al. [2020/0379454], hereafter Trinh, in view of Cella et al. [2019/0129404], hereafter Cella, further in view of McQuown et al. [2005/0144183], hereafter McQuown, further in view of Graham et al. [2020/0213006], hereafter Graham, and further in view of Kalinski et al. [2023/0394411], hereafter Kalinski.
As per claims 7 and 14 (similar scope and language):
Trinh discloses:
The method of claim 1, wherein the machine learning model is further configured to
predict parts consumption, repair time, and a difficulty associated with the output.
{[0035] A predictive maintenance server 110 provides predictive maintenance information to various operators of the facility sites 140. A predictive maintenance server 110 may simply be referred to as a computing server 110. Maintenance information may include information on diagnostics, anomaly, inspection, repair, replacement, etc. The predictive maintenance server may generate one or more metrics that quantify anomaly of a piece of equipment at a facility site 140, may identify one or more pieces of equipment and/or the equipment's components that may need maintenance or repair, and may provide recommendations on recourses and actions that should be taken for particular equipment. The predictive maintenance server 110 may take the form of software, hardware, or a combination thereof (e.g., a computing machine of FIG. 20).
The combination above does not disclose the repair time and the difficulty associated with the output; however, Cella discloses:
Repair time
{[0500] The monitoring application 8150 may provide recommendations regarding scheduling repairs and/or maintenance. The monitoring application 8150 may provide recommendations regarding replacing a sensor. The replacement sensor may match the sensor being replaced or the replacement sensor may have a different range, sensitivity, sampling frequency, and the like.
Difficulty associated with the output
{[0580] The impact of a failure, time response of a failure (e.g., warning time and/or off-optimal modes occurring before failure), likelihood of failure, extent of impact of failure, and/or sensitivity required and/or difficulty to detection failure conditions may drive the extent to which a component or piece of equipment is monitored with more sensors and/or higher capability sensors being dedicated to systems where unexpected or undetected failure would be costly or have severe consequences.}
Motivation: It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system to identify and predict one or more pieces of equipment for repair as disclosed by the combination of Trinh, McQuown, Graham, and Kalinski with the inclusion of a repair time and a difficulty associated with the output as taught by Cella, to monitor, identify, and recommend repair solutions ({[0500]} and {[0580]} of Cella).
Response to Arguments
In response to the arguments filed September 9, 2025, regarding the 35 U.S.C. 101 rejections, the Examiner respectfully disagrees.
Applicant argues that grouping, identifying, and portraying the decision tree are not mental processes because they require computational analysis of large-scale structured data, which cannot be performed manually: "The model's configuration and training data are specific to the domain of equipment repair, and the output is generated in real-time, reflecting adaptive learning from prior repair outcomes."
The Examiner respectfully disagrees.
The Examiner notes that the aspects pertaining to determining, recording, identifying, grouping, portraying, analyzing, and providing output that facilitates service repair operations for the equipment recite an abstract idea consistent with the "mental processes" grouping and are the product of human mental work. The Examiner addressed these steps as part of the identified abstract idea in the Step 2A Prong One analysis and addressed the additional elements in the Step 2A Prong Two analysis.
Applicant argues that the claims do not recite an abstract idea and that, as amended, the claims are directed to a technological solution in the field of equipment diagnostics and repair. Applicant argues that the claims recite a domain-specific machine learning model trained on a structured and anonymized dataset comprising timestamped service records, technician-authored annotations, and part replacement histories. According to Applicant, the model performs a series of computational steps including:
Grouping components based on historical co-repair patterns and part dependencies.
Portraying potential solutions as a decision tree.
Identifying a solution by traversing the decision tree.
Applicant further states that these steps and the use of machine learning are not mental processes.
The Examiner respectfully disagrees.
The Examiner notes that the claimed invention is directed to a mental process. The courts consider a mental process (thinking) that "can be performed in the human mind, or by a human using a pen and paper" to be an abstract idea. CyberSource Corp. v. Retail Decisions, Inc., 654 F.3d 1366, 1372, 99 USPQ2d 1690, 1695 (Fed. Cir. 2011). As the Federal Circuit explained, "methods which can be performed mentally, or which are the equivalent of human mental work, are unpatentable abstract ideas, the 'basic tools of scientific and technological work' that are open to all." 654 F.3d at 1371, 99 USPQ2d at 1694 (citing Gottschalk v. Benson, 409 U.S. 63, 175 USPQ 673 (1972)). See also Mayo Collaborative Servs. v. Prometheus Labs., Inc., 566 U.S. 66, 71, 101 USPQ2d 1961, 1965 (2012) ("[M]ental processes . . . and abstract intellectual concepts are not patentable, as they are the basic tools of scientific and technological work" (quoting Benson, 409 U.S. at 67, 175 USPQ at 675)); Parker v. Flook, 437 U.S. 584, 589, 198 USPQ 193, 197 (1978).
Furthermore, the Examiner notes that the machine learning model is used for training, grouping, portraying, refining, analyzing, repairing, decision tree generation, refining an output, displaying, and updating service repair operations and potential solutions. These elements are not part of the abstract idea but are additional elements considered under the Step 2A Prong Two and Step 2B analyses. They are merely generic technology providing no technical improvement; at best they improve the abstract idea itself using generic technology. The specification states: "Furthermore, AI/ML model 110 can be periodically retrained or continuously learning. For example, AI/ML model 110 can be updated with additional user repair data 116 that is received over time periodically or continuously. [0036] API 112 provides an interface for a user device 114 to provide a query or input to AI/ML model 110. For example, a technician using user device 114 can send a query or input identifying symptoms of a problem for a machine through API 112 and receive an output from AI/ML model 110." See {[0035]-[0036]} of the specification.
The Examiner maintains that the claims recite an abstract idea.
Therefore, for the foregoing reasons, the Examiner maintains the 35 U.S.C. 101 rejection.
Regarding the prior art rejections, the Examiner respectfully disagrees.
Applicant argues that the prior art of record fails to teach the amended claim limitations, and specifically that the claims are directed to a technological solution in the field of equipment diagnostics and repair, "reciting a domain-specific machine learning model, trained on a structured and anonymized dataset comprising timestamped service repair records, technician-authored annotations, [and] part replacement histories. This model performs a series of computational steps that include: [grouping] components based on historical co-repair patterns and part dependencies, portraying potential solutions as a decision tree, [and] identifying a solution by traversing the decision tree." Applicant argues that these features are not disclosed or suggested in the cited prior art and represent a non-obvious integration of domain-specific data, machine learning, and decision logic.
The Examiner respectfully disagrees.
In response to Applicant's argument, the prior art of record does teach these limitations. Trinh teaches technician-authored annotations and part replacement histories. See Trinh {[0067] and [0081]}. Trinh also teaches grouping components based on co-repair patterns and a decision tree. See Trinh {[0049], [0051], and [0109]}.
The Examiner cites McQuown for its teaching of technician messaging, fault code communication, and repair solutions; the provision of a database for servicing a selected equipment and system; and an expert rule-based troubleshooting wizard for eliciting information regarding the selected equipment and system and for providing troubleshooting instructions to determine the nature of the equipment fault and the servicing required. McQuown discloses an instant messaging feature allowing the technician to quickly communicate repair information (for example, fault codes, diagnostic readings, or simple descriptive text) to a repair expert at the monitoring and diagnostic service center 20. See McQuown {[0055]}.
The Examiner cites Graham for its teaching of an intuitive and semi-automated means of collecting and managing crowdsourced data via a crowdsourcing platform, leveraging the crowdsourced data to provide various features and services in the field of workplace and building automation and management, and the use of decision trees. Graham teaches portraying potential solutions using a decision tree. See Graham {[0242]}.
The Examiner cites Kalinski for its teaching of anonymized data for risk modeling, and Cella for its teaching of repair scheduling and repair time, to identify a repair solution and secure the information. See Kalinski {[0211]} and Cella {[0500] and [0580]}.
In view of the above, Trinh, McQuown, Graham, Kalinski, and Cella teach the specific limitations as amended.
Applicant argues that the Office has engaged in hindsight. According to Applicant, one of ordinary skill in the art before the effective filing date of the claimed invention would not modify Trinh, as the cited combination does not remedy the lack of disclosure of technician-authored annotations or part replacement histories, grouping components based on co-repair patterns, or decision tree traversal.
The Examiner respectfully disagrees.
In response to applicant's argument that the examiner's conclusion of obviousness is based upon improper hindsight reasoning; it must be recognized that any judgment on obviousness is in a sense necessarily a reconstruction based upon hindsight reasoning. But so long as it takes into account only knowledge which was within the level of ordinary skill at the time the claimed invention was made, and does not include knowledge gleaned only from the applicant's disclosure, such a reconstruction is proper. See In re McLaughlin, 443 F.2d 1392, 170 USPQ 209 (CCPA 1971).
Based on the amendments and for the reasons considered above, the cited 35 U.S.C. 103 references teach the claimed invention (claims 1-20). As such, the 35 U.S.C. 103 rejection of claims 1, 8, and 15 is maintained in light of the amended claim limitations. Lacking any further argument, the 35 U.S.C. 103 rejection of claims 1-20 is likewise maintained.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to VICTOR CHIGOZIRIM ESONU whose telephone number is (571) 272-4883. The examiner can normally be reached Monday through Friday, 9:00 am to 5:00 pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Sarah Monfeldt, can be reached at (571) 270-1833. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from
Patent Center. Unpublished application information in Patent Center is available
to registered users. To file and manage patent submissions in Patent Center, visit:
https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more
information about Patent Center and https://www.uspto.gov/patents/docx for information about
filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC)
at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service
Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/VICTOR CHIGOZIRIM ESONU/
Examiner, Art Unit 3629
/SARAH M MONFELDT/Supervisory Patent Examiner, Art Unit 3629