Prosecution Insights
Last updated: April 19, 2026
Application No. 18/723,984

FAILURE PREDICTION DEVICE

Final Rejection: §101, §103
Filed: Jun 25, 2024
Examiner: MOLNAR, SIDNEY LEIGH
Art Unit: 3656
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Fanuc Corporation
OA Round: 2 (Final)

Predictions
Grant Probability: 54% (Moderate)
Predicted OA Rounds: 3-4
Predicted Time to Grant: 2y 4m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 54% (7 granted / 13 resolved; +1.8% vs TC avg)
Interview Lift: +85.7% (resolved cases with interview)
Avg Prosecution: 2y 4m (typical timeline)
Currently Pending: 31
Total Applications: 44 (across all art units)

Statute-Specific Performance

§101: 8.7% (-31.3% vs TC avg)
§103: 42.2% (+2.2% vs TC avg)
§102: 22.3% (-17.7% vs TC avg)
§112: 26.1% (-13.9% vs TC avg)

Tech Center averages are estimates. Based on career data from 13 resolved cases.

Office Action

Rejections: §101, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

This correspondence is in response to amendments filed on December 29, 2025. Claims 1, 3, 5-7, and 9 are amended. Claims 2, 4, and 8 are cancelled. The amendments obviate the 35 U.S.C. 112(f) claim interpretations, and those interpretations are therefore withdrawn. Arguments with respect to the 35 U.S.C. 101 rejection and the prior art rejections are addressed below in "Response to Arguments".

Response to Arguments

In the Remarks, Applicant argues that Examiner should withdraw the associated 35 U.S.C. 101 rejections because the invention of the instant application allegedly integrates a judicial exception into a practical application. Specifically, Applicant argues that the limitations of the claims do not recite concepts which can be practically performed in the human mind (Remarks pages 7-10), do not recite explicit mathematical relationships, formulae, or calculations (Remarks pages 10-12), recite features which reflect improvements to the relevant existing technology (Remarks pages 12-22), and that the additional elements of the claims are not well-understood, routine, and conventional (Remarks pages 22-25). Given the length of these arguments, Examiner has elected to reiterate their position with respect to the claim limitations and to support their assertions with excerpts from the MPEP.

Regarding Step 2A, Prong 1, Examiner maintains that limitations of the claim recite mathematical concepts.
Claim 1 recites "…derive a first evaluation equation and a second evaluation equation for evaluating the evaluation data…; derive a first threshold based on a difference between the evaluation data and a value of the first evaluation equation, and derive a second threshold based on a difference between the evaluation data and a value of the second evaluation equation…" (emphasis added) in lines 8-15.

Per MPEP 2106.04(a)(2), "It is important to note that a mathematical concept need not be expressed in mathematical symbols, because '[w]ords used in a claim operating on data to solve a problem can serve the same purpose as a formula.'"

Regarding mathematical formulas or equations, the MPEP further recites: "A claim that recites a numerical formula or equation will be considered as falling within the 'mathematical concepts' grouping. In addition, there are instances where a formula or equation is written in text format that should also be considered as falling within this grouping. For example, the phrase 'determining a ratio of A to B' is merely using a textual replacement for the particular equation (ratio = A/B). Additionally, the phrase 'calculating the force of the object by multiplying its mass by its acceleration' is using a textual replacement for the particular equation (F = ma)."

Further, with regard to mathematical calculations, the MPEP recites: "A claim that recites a mathematical calculation, when the claim is given its broadest reasonable interpretation in light of the specification, will be considered as falling within the 'mathematical concepts' grouping. A mathematical calculation is a mathematical operation (such as multiplication) or an act of calculating using mathematical methods to determine a variable or number, e.g., performing an arithmetic operation such as exponentiation. There is no particular word or set of words that indicates a claim recites a mathematical calculation.
That is, a claim does not have to recite the word 'calculating' in order to be considered a mathematical calculation. For example, a step of 'determining' a variable or number using mathematical methods or 'performing' a mathematical operation may also be considered mathematical calculations when the broadest reasonable interpretation of the claim in light of the specification encompasses a mathematical calculation."

Thus, Examiner considers the derivation of equations, and additionally the derivation of threshold values based on a difference between data and values, to be clear textual replacements for mathematical concepts. To "derive" is merely a synonymous placeholder for "calculate", and generic terms such as "equation", "threshold", and "difference" are textual descriptors indicative of a formula, result, or mathematical operation. Therefore, the claim recites limitations which, at their broadest reasonable interpretation, recite explicit calculations and formulae through textual descriptors, not merely allusions to such concepts.

Further, regarding additional limitations which are considered under Step 2A, Prong 1 as reciting judicial exceptions directed to abstract ideas, Examiner maintains that the limitations recite mental processes. Claim 1 recites "…determine that the evaluation data is an evaluation data value due to a factor other than the failure of the robot when the first threshold is equal to or greater than the second threshold, and determine that the evaluation data is an evaluation data value due to a factor causing the failure of the robot when the first threshold is smaller than the second threshold; predict a failure of the drive shaft based on the evaluation data…" (emphasis added) in lines 16-20.
Per MPEP 2106.04(a)(2), "The courts consider a mental process (thinking) that 'can be performed in the human mind, or by a human using a pen and paper' to be an abstract idea… Accordingly, the 'mental processes' abstract idea grouping is defined as concepts performed in the human mind, and examples of mental processes include observations, evaluations, judgments, and opinions." Specifically with regard to limitations that require a computer, the MPEP reads: "In evaluating whether a claim that requires a computer recites a mental process, examiners should carefully consider the broadest reasonable interpretation of the claim in light of the specification. For instance, examiners should review the specification to determine if the claimed invention is described as a concept that is performed in the human mind and applicant is merely claiming that concept performed 1) on a generic computer, or 2) in a computer environment, or 3) is merely using a computer as a tool to perform the concept."

The above-cited limitation, which determines whether a result is due to a factor causing a failure of the robot or to something else, exemplifies a simple judgment comparing two values. A human, through the power of their own mind, can observe a first and second value and make a determination as to whether the first value is greater than, equal to, or less than the second value. In the limitation, the computer which applies the concept is merely given rules, decided upon by the human programmer, to assess whether one outcome is determined over the other based on the relationship between the first and second value. Additionally, with regard to the prediction step, a human can merely observe a data set and, through an observation, predict whether or not a failure will occur.
This prediction is a mere opinion and evaluation; when applied to a computer, the computer is given rules, implemented by a human programmer, embodying the observed trends which predict failures, i.e., the rules of the determination step which identify an outlier between threshold values. Therefore, it would be improper to state that the provided limitations cannot be practically performed by the human mind, or that the invention does more than merely use a computer as a tool to perform such concepts.

With regard to Step 2A, Prong 2, Examiner determined that the steps of "…collect[ing] evaluation data of at least a drive shaft of a robot working based on a work program…" and "…notify[ing] a user that a failure is predicted…" are additional elements which do not integrate the judicial exceptions (i.e., mathematical concepts and mental processes) into practical applications. Applicant argues that such features are "improvements" to the technology. However, per MPEP 2106.04(d), "The courts have also identified limitations that did not integrate a judicial exception into a practical application: Merely reciting the words 'apply it' (or an equivalent) with the judicial exception, or merely including instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea, as discussed in MPEP § 2106.05(f); Adding insignificant extra-solution activity to the judicial exception, as discussed in MPEP § 2106.05(g); and Generally linking the use of a judicial exception to a particular technological environment or field of use, as discussed in MPEP § 2106.05(h)."

Specifically, the steps of collecting data and notifying a user of the result are extra-solution activity, as described in MPEP 2106.05(g): "The term 'extra-solution activity' can be understood as activities incidental to the primary process or product that are merely a nominal or tangential addition to the claim.
Extra-solution activity includes both pre-solution and post-solution activity. An example of pre-solution activity is a step of gathering data for use in a claimed process, e.g., a step of obtaining information about credit card transactions, which is recited as part of a claimed process of analyzing and manipulating the gathered information by a series of steps in order to detect whether the transactions were fraudulent. An example of post-solution activity is an element that is not integrated into the claim as a whole, e.g., a printer that is used to output a report of fraudulent transactions, which is recited in a claim to a computer programmed to analyze and manipulate information about credit card transactions in order to detect whether the transactions were fraudulent."

Thus, collecting data may be considered pre-solution activity of data gathering, and producing a notification may be considered post-solution activity of data output. No improvements have been made to these specific steps that would differentiate the features from those of the current technology. Additionally, the claim as a whole is not considered as integrating the judicial exception into a practical application, as the claim merely uses a computer as a tool to implement the claimed features. Given that Examiner additionally found art which maps to the claims, the solutions presented are not considered "improvements" to the art, rendering Applicant's assertions that such improvements are made insufficient to overcome the rejection (see MPEP 2106.05(a) for guidance on improvements to technology and computers). Features specific to failure analysis of a robotic arm merely link the judicial exceptions to a particular field of use (see MPEP 2106.05(h), Example (vi), which generally links a specific collection, analysis, and display of data to a specific technological field).
Additionally, Examiner did not indicate any specific limitations as relevant under Step 2B, only that the "apply it" logic used in Step 2A, Prong 2 would also be relevant in Step 2B. Examiner thus considers Applicant's argument regarding the rejection under 35 U.S.C. 101 to be UNPERSUASIVE, and the 35 U.S.C. 101 rejection is therefore maintained. Amendments to the previous 101 rejection have been made only in light of the amendments to the claims, to include the newly added limitations in the analysis.

With regard to the prior art, Applicant further argues that it would not be obvious to combine the teachings of Fortuny with the teachings of Hatanaka because Fortuny requires operation of two redundant devices, while Hatanaka does not teach such redundant devices (see Remarks page 26). Examiner respectfully disagrees. In response to Applicant's argument that Fortuny teaches an analysis of two redundant devices while Hatanaka merely teaches a single device, the test for obviousness is not whether the features of a secondary reference may be bodily incorporated into the structure of the primary reference; nor is it that the claimed invention must be expressly suggested in any one or all of the references. Rather, the test is what the combined teachings of the references would have suggested to those of ordinary skill in the art. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981).

Fortuny is merely relied upon for its exemplified data analysis teachings which predict a failure of a machine. In Fortuny, there is a data set which is analyzed by linear regression, and additionally a stored history of data collected by the system when the device(s) is not in a failed state. Using this linear regression, thresholds based on the operating parameter(s) evaluated for the equipment are determined.
Various thresholds are then compared to one another to determine whether outlying data is merely noise contained within a determined threshold range, or whether such outlying data is indicative of a malfunctioning device. Thus, for the system of Hatanaka, which collects a history of data and superimposes the data and trends between cycles, it would have been obvious to one of ordinary skill in the art to extend the analysis of the superimposed data to include a linear regression analysis, by which threshold values determine whether a failure is occurring in the robot or whether such noise is related to a different factor. Therefore, Applicant's argument is NOT PERSUASIVE.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1 and 9 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. On January 7, 2019, the USPTO released new examination guidelines setting forth a two-step inquiry for determining whether a claim is directed to non-statutory subject matter. According to the guidelines, a claim is directed to non-statutory subject matter if:

STEP 1: the claim does not fall within one of the four statutory categories of invention (process, machine, manufacture, or composition of matter), or

STEP 2: the claim recites a judicial exception, e.g., an abstract idea, without reciting additional elements that amount to significantly more than the judicial exception, as determined using the following analysis:

STEP 2A (PRONG 1): Does the claim recite an abstract idea, law of nature, or natural phenomenon?

STEP 2A (PRONG 2): Does the claim recite additional elements that integrate the judicial exception into a practical application?
STEP 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception?

Using the two-step inquiry, it is clear that claims 1 and 9 are directed toward non-statutory subject matter, as shown below.

STEP 1: Do claims 1 and 9 fall within one of the statutory categories? Claims 1 and 9 are each directed to a device and as such fall within one of the statutory categories.

STEP 2A (PRONG 1): Is the claim directed to a law of nature, a natural phenomenon, or an abstract idea? Yes, claim 1 is directed to mathematical concepts and mental processes. With regard to STEP 2A (PRONG 1), the guidelines provide three groupings of subject matter that are considered abstract ideas:

Mathematical concepts – mathematical relationships, mathematical formulas or equations, mathematical calculations;

Certain methods of organizing human activity – fundamental economic principles or practices (including hedging, insurance, mitigating risk); commercial or legal interactions (including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations); managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions); and

Mental processes – concepts that are practicably performed in the human mind (including an observation, evaluation, judgment, opinion).

Regarding claim 1, the limitations which recite "…derive a first evaluation equation and a second evaluation equation for evaluating the evaluation data using the collected evaluation data; derive a first threshold based on a difference between the evaluation data and a value of the first equation, and derive a second threshold based on a difference between the evaluation data and a value of the second evaluation equation…" are exemplary of mathematical concepts.
Deriving a first and second equation, and deriving a first and second threshold based on a difference, are clear mathematical operations which determine formulae and values from input data and thereby construe mathematical relationships. Further, the equations are used to derive values which in turn indicate the threshold values determined by a difference. Thus, the equations referred to are textual replacements for what would otherwise be an alphanumeric expression. As such, these limitations are considered, at their broadest reasonable interpretation, mathematical concepts of deriving (i.e., calculating), not mere allusions to such concepts.

Additionally, with regard to claim 1, the limitations which recite "…determine that the evaluation data is an evaluation data value due to a factor other than the failure of the robot when the first threshold is equal to or greater than the second threshold, and determine that the evaluation data is an evaluation data value due to a factor causing the failure of the robot when the first threshold is smaller than the second threshold; predict a failure of the drive shaft based on the evaluation data…" are mental processes. A human, through simple observation and judgment of the human mind, could first determine that a first threshold is greater than or equal to a second threshold value. Then, based on an opinion, i.e., a rule-based determination, a human could, practically with their mind, evaluate the results and the data, determine based on that judgment that a failure is either due to a factor other than the failure of the robot or due to a factor causing the failure of the robot, and from this opinion further predict that a failure is going to occur. Therefore, the steps of determining a failure based on observations of threshold values, and of further predicting a failure based on observation of the data, are mere judgments and opinions which can be practically performed by the human mind.
The use of a computer to implement such steps is merely that of a tool used to mitigate human error; the computer is not necessary to making these determinations. Thus, for the reasons stated above, claim 1 recites limitations which are directed to a combination of mathematical concepts and mental processes.

STEP 2A (PRONG 2): Does the claim recite additional elements that integrate the judicial exception into a practical application? No, the claims do not recite additional elements that integrate the judicial exception into a practical application. With regard to STEP 2A (PRONG 2), the guidelines provide the following exemplary considerations that are indicative that an additional element (or combination of elements) may have integrated the judicial exception into a practical application:

an additional element reflects an improvement in the functioning of a computer, or an improvement to other technology or technical field;

an additional element applies or uses a judicial exception to effect a particular treatment or prophylaxis for a disease or medical condition;

an additional element implements a judicial exception with, or uses a judicial exception in conjunction with, a particular machine or manufacture that is integral to the claim;

an additional element effects a transformation or reduction of a particular article to a different state or thing; and

an additional element applies or uses the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception.
While the guidelines further state that the exemplary considerations are not an exhaustive list and that there may be other examples of integrating the exception into a practical application, the guidelines also list examples in which a judicial exception has not been integrated into a practical application:

an additional element merely recites the words "apply it" (or an equivalent) with the judicial exception, or merely includes instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea;

an additional element adds insignificant extra-solution activity to the judicial exception; and

an additional element does no more than generally link the use of a judicial exception to a particular technological environment or field of use.

Claim 1 does not recite any of the exemplary considerations that are indicative of an abstract idea having been integrated into a practical application. Also, as noted above, merely including instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea, is indicative that the judicial exception has not been integrated into a practical application.

Claim 1 recites "…collect evaluation data of at least a drive shaft of a robot working based on a work program…". In this limitation, collecting evaluation data of a robot and/or a drive shaft of a robot is mere data gathering for the purpose of providing inputs for the mathematical concepts described in the rejections under Step 2A (Prong 1), which has been demonstrated to be insignificant extra-solution activity, specifically pre-solution activity (see MPEP 2106.05(g)).

Claim 1 further recites "…when it is determined that the evaluation data is the evaluation data value due to the factor causing the failure of the robot and when the failure of the drive shaft is predicted, notify a user that failure is predicted…".
Similar to the collection of data, the notification of an output (the output having been determined by a mental process of evaluation and judgment) is mere data output for the purpose of providing results determined by a judicial exception, and is likewise insignificant extra-solution activity, specifically post-solution activity (see MPEP 2106.05(g)).

STEP 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception? No, the claims do not recite additional elements that amount to significantly more than the judicial exception. With regard to STEP 2B, whether the claims recite additional elements that provide significantly more than the recited judicial exception, the guidelines specify that the pre-guideline procedure is still in effect. Specifically, examiners should continue to consider whether an additional element or combination of elements: adds a specific limitation or combination of limitations that are not well-understood, routine, conventional activity in the field, which is indicative that an inventive concept may be present; or simply appends well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception, which is indicative that an inventive concept may not be present.

Claim 1 does not recite any specific limitation or combination of limitations that are not well-understood, routine, conventional (WURC) activity in the field. Limitations identified as "apply it" in Step 2A qualify as "apply it" in Step 2B as well. As described above, the collection of data from a robot and the outputting of the results of the evaluation are also determined to be WURC activities, as they are examples of receiving and transmitting data over a network, i.e., the passing of information between the control device, the failure prediction device, and the monitor (see MPEP 2106.05(d)).
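The two-threshold logic recited in claim 1 can be sketched in a few lines of code, illustrating the examiner's point that the claimed steps reduce to ordinary arithmetic and a simple comparison. This is a hypothetical sketch only: the residual measure, the example evaluation equations, and the data values below are assumptions for illustration, not the applicant's disclosed implementation.

```python
# Hypothetical sketch (not the applicant's actual implementation): the claimed
# "derive" and "determine" steps expressed as plain arithmetic and comparison.

def mean_abs_residual(data, model):
    # A threshold "based on a difference between the evaluation data and a
    # value of the evaluation equation" -- here, the mean absolute residual.
    return sum(abs(d - model(i)) for i, d in enumerate(data)) / len(data)

def classify(evaluation_data, first_eq, second_eq):
    # Derive a first and second threshold from the differences between the
    # evaluation data and each evaluation equation's values.
    first_threshold = mean_abs_residual(evaluation_data, first_eq)
    second_threshold = mean_abs_residual(evaluation_data, second_eq)
    # The comparison the Office Action says a human could perform mentally.
    if first_threshold >= second_threshold:
        return "factor other than robot failure"
    return "factor causing robot failure"

# Illustrative data and two illustrative evaluation equations (linear fits).
data = [1.0, 2.1, 2.9, 4.2]
first_eq = lambda i: 1.0 + 1.0 * i   # close fit -> small first threshold
second_eq = lambda i: 0.0            # poor fit -> large second threshold
print(classify(data, first_eq, second_eq))  # -> factor causing robot failure
```

The sketch makes no claim about how the actual device derives its evaluation equations; it only shows that, once two residual-based thresholds exist, the determination step is a single inequality.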
CONCLUSION

Thus, since claim 1: (a) is directed toward an abstract idea, (b) does not recite additional elements that integrate the judicial exception into a practical application, and (c) does not recite additional elements that amount to significantly more than the judicial exception, claim 1 is directed toward non-statutory subject matter.

DEPENDENT CLAIMS

Dependent claims 3, 5-7, and 9 do not recite any further limitations that cause the claims to be patent eligible. Rather, the limitations of the dependent claims are directed toward additional aspects of the judicial exception and/or well-understood, routine, and conventional additional elements that do not integrate the judicial exception into a practical application, as each of these claims further provides abstract ideas and/or data gathering processes. Therefore, dependent claims 3, 5-7, and 9 are not patent eligible under the same rationale as provided in the rejection of claim 1. Accordingly, claims 1, 3, 5-7, and 9 are ineligible under 35 U.S.C. § 101.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3.
Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 3, 5-7, and 9 are rejected under 35 U.S.C. 103 as being unpatentable over Hatanaka (US 2020/0198128 A1) in view of Fortuny (US 2021/0232133 A1).

Regarding claim 1, Hatanaka teaches a failure prediction device ("In the embodiment described above, the robot control device 2, the learning data confirmation support device 3, the machine learning device 4, and the failure predicting device 5 are established as respectively independent devices; however, the present invention is not limited thereto. For example, the robot control device 2 may include the learning data confirmation support device 3, the machine learning device 4, and the failure predicting device 5" [0061]. Thus, any function of the control device, machine learning device, learning data confirmation support device, and failure predicting device will be considered as a single failure predicting device.) comprising: a processor; a non-volatile storage device; program instructions stored in the non-volatile storage device that, when executed by the processor ("The components included in the robot control device 2, the learning data confirmation support device 3, the machine learning device 4, and the failure predicting device 5 can be realized by hardware, software, or combinations thereof… Here, the term "realized by software" indicates being realized by a computer reading and executing programs" [0056]. "The programs may be stored and supplied to the computer using various types of non-transitory computer readable medium" [0057].
Thus, there is a computer, i.e., processor, and a non-volatile storage device, wherein there are programs stored on the storage device that are executed by the computer to perform the functions of the disclosure.), cause the failure prediction device to: collect evaluation data of at least a drive shaft of a robot working based on a work program (“As described above, when the robot control device 2 causes the robot 1 to perform a certain operation in accordance with the health check program according to a preset schedule or the like, the data acquisition unit 31 acquires measurement data including time-series data representing at least one of a predetermined state quantity and control quantity relating to the control when the operation is performed, and stores the acquired time-series data as learning data or failure diagnosis data in the measurement data storage unit 361 together with the acquisition time (time stamp)” [0036]. Thus, the data acquisition unit acquires measurement data of state quantities and control quantities, inclusive of those of the drive shaft, which relate to the control when the operation is being performed, i.e., collects evaluation data of at least the drive shaft of a robot working based on a work program.); derive a first evaluation equation for evaluating the evaluation data using the collected evaluation data (Fig. 2A-4B show the alignment of the acquired data in waveforms which are linear estimates of the groups of data. The alignment/linear estimation/waveform is the derived equation in this case.), … predict a failure of the drive shaft based on the evaluation data (“…an anomaly diagnosis unit (for example, an anomaly diagnosis unit 51) that performs, in response to an input of the measurement data acquired by the data acquisition unit, anomaly diagnosis of the industrial machine on a basis of a learning model created by the learning unit” [0019]. 
Thus, the anomaly diagnosis unit, i.e., prediction determination unit, which is a part of the failure prediction device as indicated in rejection of claim 1, predicts a failure of the robot, inclusive of failures of the drive shaft (disclosed as a drive unit), based on the learned model acquired by training the evaluation data.); and when it is determined that the evaluation data is the evaluation data value due to the factor causing the failure of the robot and when the failure of the drive shaft is predicted, notify a user that failure is predicted (“The anomaly notification unit 52 outputs the diagnosis information of the robot 1 to, for example, the display unit 57 on the basis of the anomaly diagnosis result by the anomaly diagnosis unit 51. With such a configuration, the failure predicting device 5 can output the anomaly diagnosis information relating to the presence or absence of an anomaly of, for example, the drive unit of the robot 1, that is, the anomaly diagnosis information as to whether there is a defect, a failure, or a sign of a defect or a failure, by inputting the failure diagnosis data on the basis of the learned model (normal model)” [0048]. Thus, there is a notification to a user via a display that failure of the drive unit or other part of the robot is predicted when it is determined that the evaluation data is due to a factor causing failure of the robot, i.e., defect/failure or sign of such defect failure, and that failure of the drive unit, i.e., drive shaft, is predicted.). 
However, Hatanaka does not explicitly teach …derive a first evaluation equation and a second evaluation equation for evaluating the evaluation data using the collected evaluation data, derive a first threshold based on a difference between the evaluation data and a value of the first evaluation equation, and derive a second threshold based on a difference between the evaluation data and a value of the second evaluation equation; determine that the evaluation data is an evaluation data value due to a factor other than the failure of the robot when the first threshold is equal to or greater than the second threshold, and determine that the evaluation data is an evaluation data value due to a factor causing the failure of the robot when the first threshold is smaller than the second threshold; … Fortuny, pertinent to the problem at hand, teaches …derive a first evaluation equation (“the method further comprises the steps of determining the equation of the linear regression between the first operating parameter of the first equipment and the first operating parameter of the second equipment for one or several operating cycles, or one or several parts of the cycle(s)” [0021]. Thus, there is a first equation which is obtained by linear regression of the evaluation data, i.e., operating parameter.) and a second evaluation equation (“This first determined threshold is a function of the redundant equipments that are tracked. It is preferably established based on one or several operating cycles in which neither of the two redundant equipments have experienced a malfunction or a failure. Preferably, this first determined threshold is updated continuously, as a function of the first coefficients of determination established for operating cycles without failures” [0058]. 
Thus, given that the threshold is a constant which is updated continuously as a function of the first coefficients of determination for operating cycles without failure, this threshold value is obtained over time through a step function processing, in which the value is updated between constant threshold values after cycles without failure.) for evaluating the evaluation data using the collected evaluation data (All derived equations are those due to a collection of data over one or multiple cycles.), derive a first threshold based on a difference between the evaluation data and a value of the first evaluation equation (The coefficient of determination, i.e., first threshold, is derived from the square of the coefficient of correlation, which is the difference between the evaluation data and the value of the linear regression, i.e., first equation. See [0053].), and derive a second threshold based on a difference between the evaluation data and a value of the second evaluation equation (The threshold, i.e., second threshold, is derived based on a comparison, i.e., difference, between the evaluation data and the continuous updating of the coefficients of determination from previous cycles which did not experience failure, i.e., the second evaluation equation. See [0058].); determine that the evaluation data is an evaluation data value due to a factor other than the failure of the robot when the first threshold is equal to or greater than the second threshold (“…if the first coefficient of determination is greater than or equal to the first threshold, emitting a notification indicating an absence of malfunction of the first and/or second equipment item(s)…” [0018]. Thus, in the event that the resulting coefficient, i.e., first threshold, is greater than or equal to the threshold, i.e., second threshold, then the value is due to an absence of a malfunction, i.e., a factor other than the failure of the robot. 
), and determine that the evaluation data is an evaluation data value due to a factor causing the failure of the robot when the first threshold is smaller than the second threshold (“…if, for one or several operating cycles, or one or several parts of the cycle(s), the first coefficient of determination is below a first determined threshold, emitting a notification indicating the malfunction of the first and second equipment(s)…” [0018]. Thus, in the event that the resulting coefficient, i.e., first threshold, is smaller than the determined threshold, i.e., second threshold, then the value is due to a malfunction of the equipment, i.e., a factor causing the failure of the robot.)… Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the false detection determination unit as taught by Hatanaka and instead implement the data analysis methods for false detection determination as taught by Fortuny with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make this modification because by updating data for analysis through every cycle of operation and making a direct comparison to the continuous distribution of the data, such a method for predicting failures may improve reliability and performance of the system/assembly, as well as reduce the rate of false failure notifications (Fortuny, [0016-0017]). Further motivation for using the methods of determination as described by Fortuny would be through a simple substitution of known determination methods to obtain predictable results (see MPEP 2143.I(B)). 
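The claimed two-equation, two-threshold determination that the Examiner maps onto Fortuny can be illustrated with a minimal sketch. Everything below is an assumption for illustration only, not a feature of either reference: the function name, the choice of RMS residual as each "threshold," and the median-split binarization rule.

```python
import numpy as np

def determine_failure_factor(t, data):
    """Illustrative sketch of the claimed comparison, not Fortuny's method.

    First evaluation equation: least-squares line through the data.
    Second evaluation equation: two-level (binarized) step fit.
    Each "threshold" is the RMS residual of the corresponding fit.
    """
    # First evaluation equation and first threshold.
    slope, intercept = np.polyfit(t, data, 1)
    linear_fit = slope * t + intercept
    first_threshold = np.sqrt(np.mean((data - linear_fit) ** 2))

    # Second evaluation equation and second threshold: binarize around the
    # median (assumes the data falls on both sides of its median).
    hi = data >= np.median(data)
    step_fit = np.where(hi, data[hi].mean(), data[~hi].mean())
    second_threshold = np.sqrt(np.mean((data - step_fit) ** 2))

    # Claimed logic: first >= second -> value due to a factor other than
    # failure; first < second -> value due to a factor causing failure.
    if first_threshold >= second_threshold:
        return "other_factor"
    return "failure_factor"
```

On this reading, data that is well explained by a trend line but poorly by a two-level step is attributed to a failure-causing factor, while data that a step fit captures at least as well is attributed to some other factor.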
Regarding claim 3, Hatanaka as modified by Fortuny (references made to Hatanaka) teaches the failure prediction device according to claim 1, wherein the program instructions further cause the failure prediction device to: predict a failure of the drive shaft based on the evaluation data when it is determined that the evaluation data is the evaluation data value due to the factor causing the failure of the robot (“…an anomaly diagnosis unit (for example, an anomaly diagnosis unit 51) that performs, in response to an input of the measurement data acquired by the data acquisition unit, anomaly diagnosis of the industrial machine on a basis of a learning model created by the learning unit” [0019]. The anomaly diagnosis unit predicts a failure of the drive shaft based on the evaluation data when it is provided a model derived from the data which is deemed to be appropriate, i.e., when it is determined that the evaluation data is an evaluation data value due to the factor causing the failure of the robot.).

Regarding claim 5, Hatanaka as modified by Fortuny (references made to Fortuny) teaches the failure prediction device according to claim 1, wherein the evaluation equation includes a first evaluation equation obtained by linear regression of the evaluation data (“the method further comprises the steps of determining the equation of the linear regression between the first operating parameter of the first equipment and the first operating parameter of the second equipment for one or several operating cycles, or one or several parts of the cycle(s)” [0021]. Thus, there is a first equation which is obtained by linear regression of the evaluation data, i.e., operating parameter.), and a second evaluation equation obtained by performing step function or binarization processing on the evaluation data (“This first determined threshold is a function of the redundant equipments that are tracked.
It is preferably established based on one or several operating cycles in which neither of the two redundant equipments have experienced a malfunction or a failure. Preferably, this first determined threshold is updated continuously, as a function of the first coefficients of determination established for operating cycles without failures” [0058]. Thus, given that the threshold is a constant which is updated continuously as a function of the first coefficients of determination for operating cycles without failure, this threshold value is obtained over time through a step function processing, in which the value is updated between constant threshold values after cycles without failure.).

Regarding claim 6, Hatanaka as modified by Fortuny (references made to Hatanaka) teaches the failure prediction device according to claim 3, wherein the program instructions further cause the failure prediction device to: skip the prediction of the failure of the drive shaft when it is determined that the evaluation data is the evaluation data value due to the factor other than the failure of the robot (“The data selection unit 33 excludes, from the measurement data storage unit 361, the time-series data which is determined to be inappropriate data as the learning data from the plurality of time-series data displayed by the display control unit 32” [0042]. Thus, the inappropriate data, i.e., the evaluation data in which the value of the data is due to a factor other than the failure of the robot, is skipped in the prediction determination which determines a failure of the drive shaft.).
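The Examiner's reading of Fortuny [0058] treats the first determined threshold as a piecewise-constant value that steps to a new level after each failure-free cycle, with the coefficient of determination computed per cycle (per [0053], as the square of the correlation coefficient). A minimal sketch of that reading follows; the running-mean-minus-margin update rule, the margin value, and all names are illustrative assumptions, not Fortuny's disclosure:

```python
import numpy as np

def r_squared(x, y):
    # Coefficient of determination as the square of the Pearson correlation
    # between the two redundant equipments' operating parameters.
    return float(np.corrcoef(x, y)[0, 1] ** 2)

class SteppedThreshold:
    """Piecewise-constant threshold, updated only after failure-free cycles."""

    def __init__(self, initial=0.9, margin=0.05):
        self.history = []      # R^2 values from cycles without failure
        self.margin = margin
        self.value = initial

    def update(self, r2, cycle_had_failure):
        if not cycle_had_failure:
            self.history.append(r2)
            # Step to a new constant level: mean of the failure-free R^2
            # values minus a safety margin (illustrative rule only).
            self.value = float(np.mean(self.history)) - self.margin
        return self.value
```

Because the value changes only at cycle boundaries and holds constant in between, the resulting threshold traces out a step function over time, which is the "step function processing" characterization relied on above.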
Regarding claim 7, Hatanaka as modified by Fortuny (references made directly in citation) teaches the failure prediction device according to claim 1, wherein the program instructions further cause the failure prediction device to: predict a failure of the drive shaft based on the evaluation data (“…an anomaly diagnosis unit (for example, an anomaly diagnosis unit 51) that performs, in response to an input of the measurement data acquired by the data acquisition unit, anomaly diagnosis of the industrial machine on a basis of a learning model created by the learning unit” (Hatanaka, [0019]). Thus, the anomaly diagnosis unit predicts a failure of the robot, inclusive of failures of the drive shaft (disclosed as a drive unit), based on the learned model acquired by training the evaluation data.), and determine whether the evaluation data is the evaluation data value due to the factor other than the failure of the robot or the evaluation data value due to the factor causing the failure of the robot after the failure of the drive shaft is predicted (“In this case, a second and third coefficients of determination are calculated and/or evaluated for one of the equipments, preferably both redundant equipments, in order to identify which of the two equipment items is not operating normally” [0062]. “If the second coefficient of determination is greater than or equal to a second threshold, which is a function of the first operating parameter of the equipment and the second considered parameter, and the third coefficient of determination is greater than or equal to a third threshold, which is a function of the first operating parameter of the equipment and the third considered parameter, this means that the evaluated equipment is operating normally. Otherwise, the evaluated equipment is suffering from a malfunction and a notification is emitted to report it and so that an inspection, maintenance or a replacement is done” (Fortuny, [0067]). 
Thus, the system, which determines whether or not failure is predicted for at least one of the equipments by use of a first coefficient of determination and a first threshold, thereafter determines a second and third coefficient of determination in order to evaluate whether the result is due to a malfunction present in the equipment, i.e., a value due to a factor causing a failure of the robot, or whether the result indicates normal operation, i.e., a value due to a factor other than the failure of the robot.). Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the prediction analysis processes of Hatanaka to include the prediction of failure before determining whether or not the evaluation values are due to a factor causing the failure of the robot, or to another factor, as taught by Fortuny, with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make such a modification because by determining the precise source/cause of the failure after a failure has been predicted, false alarms may be reduced or eliminated (Fortuny, [0060]), therefore increasing system efficiency and reducing downtime for repairs and inspection.

Regarding claim 9, Hatanaka as modified by Fortuny (references made to Hatanaka) teaches the failure prediction device according to claim 1, wherein the program instructions further cause the failure prediction device to: display, on a display device, at least one of a prediction result of a failure of the robot predicted based on the evaluation data (“The anomaly notification unit 52 outputs the diagnosis information of the robot 1 to, for example, the display unit 57 on the basis of the anomaly diagnosis result by the anomaly diagnosis unit 51” [0048].
Thus, the anomaly notification unit outputs the resulting diagnosis information derived from the learned model based on measurement data, i.e., prediction result of a failure of the robot predicted based on the evaluation data.) or information indicating the determination result (“The display control unit 32 aligns a plurality of pieces of time-series data acquired by the data acquisition unit 31 in the direction of the time axis and, in such a state, superimposes the same type of data thereof on the display unit 37 for display in a graph” [0038]. Thus, this superimposed data is information indicating a determination result of the false detection determination unit, and is caused to be displayed on the display unit by the display control unit such that an evaluation of the data can be made by the operator. Such information will indicate if the data is appropriate (see Fig. 2A and 3A) or inappropriate (see Fig. 2B and 3B).), wherein, when the evaluation data value due to the factor other than the failure of the robot is determined, preferentially display information indicating the evaluation data value due to the factor other than the failure of the robot (“More specifically, when time-series data (waveforms) relating to motor velocities, which are control quantities, stored in the measurement data storage unit 361 as learning data are superimposed and displayed on the display unit 37, and when different waveforms are displayed without all waveforms overlapping on about one line, all of the measurement data corresponding to the measurement times of the waveforms selected by the operator from among these waveforms are excluded from the measurement data storage unit 361 as inappropriate data as learning data” [0042]. Thus, when there exists inappropriate data which presents as multiple waveforms, the resulting waveforms are displayed on the display unit to indicate such data values which are inappropriate, i.e., due to factors other than the failure of the robot.). 
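The order of operations the Examiner reads into Fortuny [0062] and [0067] for claim 7, i.e., predict a failure first and only then classify the cause using the second and third coefficients of determination, reduces to a small decision cascade. Every name and threshold value below is an illustrative assumption:

```python
def classify_after_prediction(c1, c2, c3, t1=0.9, t2=0.9, t3=0.9):
    """Two-stage check: predict failure first, then determine the factor.

    c1..c3 are the first through third coefficients of determination;
    t1..t3 are the corresponding thresholds (values are placeholders).
    """
    if c1 >= t1:
        return "no_prediction"   # no failure predicted; no further check
    # Failure predicted: the second and third coefficients decide whether
    # the data value reflects normal operation or a genuine malfunction.
    if c2 >= t2 and c3 >= t3:
        return "other_factor"    # evaluated equipment operating normally
    return "failure_factor"      # malfunction; notify for inspection
```

The point of the sketch is the sequencing: the cause determination runs only after the first-stage prediction fires, which is the claimed "after the failure of the drive shaft is predicted" ordering.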
Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SIDNEY L MOLNAR whose telephone number is (571)272-2276. The examiner can normally be reached 8 A.M. to 3 P.M. EST Monday-Friday.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jonathan (Wade) Miles, can be reached at (571) 270-7777. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users.
To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/S.L.M./
Examiner, Art Unit 3656

/WADE MILES/
Supervisory Patent Examiner, Art Unit 3656

Prosecution Timeline

Jun 25, 2024
Application Filed
Sep 19, 2025
Non-Final Rejection — §101, §103
Dec 29, 2025
Response Filed
Feb 05, 2026
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12600039
ROBOT, CONVEYING SYSTEM, AND ROBOT-CONTROLLING METHOD
2y 5m to grant Granted Apr 14, 2026
Patent 12533807
ROBOTIC APPARATUS AND CONTROL METHOD THEREOF
2y 5m to grant Granted Jan 27, 2026
Patent 12479098
SURGICAL ROBOTIC SYSTEM WITH ACCESS PORT STORAGE
2y 5m to grant Granted Nov 25, 2025
Patent 12384048
TRANSFER APPARATUS
2y 5m to grant Granted Aug 12, 2025
Patent 12376922
TOOL HEAD POSTURE ADJUSTMENT METHOD, APPARATUS AND READABLE STORAGE MEDIUM
2y 5m to grant Granted Aug 05, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
54%
Grant Probability
99%
With Interview (+85.7%)
2y 4m
Median Time to Grant
Moderate
PTA Risk
Based on 13 resolved cases by this examiner. Grant probability derived from career allow rate.
