DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-8 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claims recite the abstract idea of a mathematical and/or mental algorithm for verifying and/or validating the performance of a machine learning model system relative to an ordinary model system.
This judicial exception is not integrated into a practical application because no specific improvement to the model system is realized through the verifying and/or validating process. Claims 5 and 6 recite that a model or component of the underlying system is “improved,” but no specific improvement is recited, so these limitations amount to a mere instruction to “use” the algorithm result [See MPEP 2106.05(h) – “As explained by the Supreme Court, a claim directed to a judicial exception cannot be made eligible ‘simply by having the applicant acquiesce to limiting the reach of the patent for the formula to a particular technological use.’ Diamond v. Diehr, 450 U.S. 175, 192 n.14, 209 USPQ 1, 10 n.14 (1981). Thus, limitations that amount to merely indicating a field of use or technological environment in which to apply a judicial exception do not amount to significantly more than the exception itself, and cannot integrate a judicial exception into a practical application.”]. The recitation of the “control signal” in Claim 7 is likewise a non-specific “field of use” limitation.
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. The recitation of “obtaining models” amounts to the mental activity of choosing models for the validation/verification. The recitations of “obtaining measurements” and “obtaining/determining test outputs” amount to necessary data gathering that must be performed in any implementation of the algorithm and do not amount to significantly more than the recitation of the abstract idea itself. The recitations of Claim 8 regarding the storage medium, computer program, and processor amount to general-purpose computer elements for implementing the algorithm and likewise do not amount to significantly more than the abstract idea itself (see Alice Corp. v. CLS Bank International, 573 U.S. 208 (2014)).
Claim 8 is rejected under 35 U.S.C. § 101 based upon consideration of the claim as a whole. Independent claim 8 is directed to non-statutory subject matter. Applicant's "machine-readable storage medium" encompasses both statutory and non-statutory media, including but not limited to carrier waves. Applicant's invention constitutes software per se, void of any hardware components, and as such fails to fall within any of the statutory classes of invention set forth by 35 U.S.C. § 101. Applicant is required to amend the claims to make clear that the machine-readable storage medium includes only non-transitory media. See 1351 Off. Gaz. Pat. Office 212 (February 23, 2010).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 2, and 5-8 are rejected under 35 U.S.C. 103 as being unpatentable over Aslandere (US 20220108172 A1) in view of Shoham et al. (US 20230004857 A1) [hereinafter “Shoham”].
Regarding Claims 1 and 8, Aslandere discloses a method (and corresponding machine-readable storage medium on which is stored a computer program [Paragraph [0027]]) for verifying and/or validating whether a technical system fulfills a desired criterion, wherein the technical system emits output signals based on input signals supplied to the technical system [Abstract – “Generating a simplified model for an XiL system includes determining a stipulated parameter characterizing model complexity, for a starting model; generating starting model input and output data; … testing the generated simplified model using a test set of the generated starting model input and output data, … if the determined reliability of the simplified model exceeds the stipulated threshold value, outputting the simplified model.”], the method comprising the following steps:
obtaining models for a plurality of components included in the technical system, wherein a connection between the obtained models characterizes which component passes which signal to which other component [Paragraph [0036] – “A method for generating a simplified model for use in an XiL system is explained in more detail below on the basis of FIG. 1. In a first step 1, at least one stipulated parameter, which quantitatively characterizes the complexity of a model, is determined for at least one starting model. In this case, the parameter may be stipulated during the method or may have already been stipulated and predefined. It is also possible to determine a plurality of stipulated parameters for the starting model which quantitatively characterize the complexity of the latter.” See Fig. 2. Paragraph [0041] – “Block 21 is an XiL system or an arrangement for carrying out XiL tests, which comprises a number of XiL models.” Paragraph [0042] – “An XiL model A is identified using the reference numeral 11. An XiL model B is identified using the reference numeral 12 and an XiL model C is identified using the reference numeral 13. Furthermore, the XiL system 21 comprises a vehicle hardware component 14. The XiL system 21 is configured in such a manner that output data of the XiL model A 11 constitute the input data of the XiL model B 12 and output data of the XiL model B 12 form the input data of the XiL model C 13. The output data of the XiL model C 13 form the input data for the hardware component 14.” Paragraph [0043] – “The XiL units shown and the underlying models 11, 12 and 13 can be assigned to specific vehicle components or vehicle functions.”];
obtaining a plurality of validation measurements, wherein each validation measurement includes a measurement input and a measurement output, wherein the measurement output is obtained from a component of the technical system for the measurement input when the measurement input is provided to the component [Paragraph [0037] – “In a next step 2, input data and output data of the at least one starting model are generated.”];
for each respective component of the components, training a respective machine learning model to predict outputs of the respective component based on inputs of the respective component, wherein at least parts of the validation measurements are used as training dataset and wherein the machine learning model corresponds to the model obtained for the respective component [Paragraph [0037] – “In step 3, a neural network is trained using a training set of the generated input data and output data of the at least one starting model in order to generate or develop a simplified model, wherein the simplified model has a lower degree of complexity than the at least one starting model, and wherein a stipulated lower threshold value for at least one parameter quantitatively characterizing the reliability of a model is exceeded.”];
obtaining first test outputs from a last model [Paragraph [0036] – “It is also possible to determine a plurality of stipulated parameters for the starting model which quantitatively characterize the complexity of the latter.” This discloses the testing of the starting models initially.] based on test inputs [Paragraph [0039] – “In a sixth step 6, a check is carried out in order to determine whether the determined complexity of the generated simple model is lower than that of the starting model. If this is the case, the generated simplified model is tested in step 7 using a test set of the generated input data and output data of the at least one starting model, which test set differs from the training set, and the at least one parameter of the generated simplified model, which characterizes the reliability, is determined.” Paragraph [0044] – “The model simplifier 22 is used to convert the complex XiL model B 12 into an AI-based model, that is to say a model based on artificial intelligence, which can be implemented in real time.”], wherein the first test outputs are obtained by propagating the test inputs through the connection of models [Paragraph [0042] – “The XiL system 21 is configured in such a manner that output data of the XiL model A 11 constitute the input data of the XiL model B 12 and output data of the XiL model B 12 form the input data of the XiL model C 13.”];
determining second test outputs from the machine learning model corresponding to the last model and based on the test inputs of the models [Paragraph [0050] – “After the method has been started 50, an XiL system is initialized in step 51, for example an XiL system identified using the reference numeral 21 in FIG. 2 or an XiL system identified using the reference numeral 31 in FIG. 3. In step 52, input and output data of the model are then generated, which model is too complex for a real-time application, that is to say for the XiL model B 12 in FIG. 2 or the XiL models B 42 and C 43 in FIG. 3, for example.”], wherein the second test outputs are obtained by propagating the test inputs through a connection of the machine learning models, wherein the connection of the machine learning models is according to the connection of the models the respective machine learning models correspond to [Paragraph [0039] – “In a sixth step 6, a check is carried out in order to determine whether the determined complexity of the generated simple model is lower than that of the starting model.” Paragraph [0051] – “In step 54, a model generated using the neural network is tested with respect to its reliability using the input and output data generated in step 52.” Paragraph [0042] – “The XiL system 21 is configured in such a manner that output data of the XiL model A 11 constitute the input data of the XiL model B 12 and output data of the XiL model B 12 form the input data of the XiL model C 13.”];
determining a deviation, wherein the deviation characterizes a difference between the first test outputs determined from the last model and the second test outputs determined by the machine learning model corresponding to the last model [Paragraph [0039] – “In a sixth step 6, a check is carried out in order to determine whether the determined complexity of the generated simple model is lower than that of the starting model.” Paragraph [0051] – “In step 55, the reliability is checked with regard to a defined limit value.”]; and
verifying and/or validating whether the technical system fulfills the criterion [Paragraph [0039] – “In a sixth step 6, a check is carried out in order to determine whether the determined complexity of the generated simple model is lower than that of the starting model. If this is the case, the generated simplified model is tested in step 7 using a test set of the generated input data and output data of the at least one starting model, which test set differs from the training set, and the at least one parameter of the generated simplified model, which characterizes the reliability, is determined. If the determined complexity of the generated simplified model is not lower than that of the starting model in step 6, the method jumps back to step 3 or 4.” Paragraph [0051] – “If the reliability undershoots the stipulated limit value, the method jumps back to step 52. If the reliability exceeds the stipulated limit value, the complex starting model, that is to say the XiL model B 12 in FIG. 2 or the XiL models B 42 and C 43 in FIG. 3 for example, is replaced in step 56 with the generated AI-based XiL model, that is to say the XiL model 15 in FIG. 2 or the XiL model 45 in FIG. 3 for example.”], but fails to disclose that the verifying and/or validating is characterized by determining a fraction of the first test outputs that fulfill an offset criterion, wherein the offset criterion is determined by offsetting the criterion by the determined deviation.
However, Shoham discloses a method for validating ML models [See Fig. 3] in which standard deviation thresholds for an underperforming model (an offset criterion determined based on result deviation that is used to evaluate other deviations) are used in validating the performance of the model [See Paragraphs [0041]-[0042]]. It would have been obvious to use such an approach to determine whether the simplified models have suitably simplified the complex models or whether further training/refinement of the simplified models is needed [Paragraph [0039] of Aslandere – “If the determined complexity of the generated simplified model is not lower than that of the starting model in step 6, the method jumps back to step 3 or 4.”].
Regarding Claim 2, the combination would disclose that the deviation is determined by determining differences for a plurality of the first test outputs and corresponding second test outputs and providing a predefined quantile of the differences as deviation [Use of the standard deviation thresholds of Shoham per Paragraphs [0041]-[0042] with regards to the complex model performance and simplified model performance of Aslandere].
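For context only, the claimed verification approach as construed above (Claims 1 and 2: a predefined quantile of the differences between first and second test outputs serves as the deviation, and the criterion offset by that deviation is evaluated over the first test outputs) can be sketched as follows. All variable names, data, and numeric values below are hypothetical illustrations and do not represent the applicant's or the cited references' actual implementations:

```python
import numpy as np

# Hypothetical data: outputs of the last component model ("first test outputs")
# and of the corresponding chained machine learning surrogate ("second test outputs").
rng = np.random.default_rng(0)
first_outputs = rng.normal(loc=1.0, scale=0.1, size=1000)
second_outputs = first_outputs + rng.normal(loc=0.0, scale=0.02, size=1000)

# Claim 2 (as construed): the deviation is a predefined quantile of the
# pointwise differences between the first and second test outputs.
differences = np.abs(first_outputs - second_outputs)
deviation = np.quantile(differences, 0.95)  # hypothetical quantile choice

# Claim 1 (as construed): offset the desired criterion (here, a hypothetical
# "output <= threshold") by the determined deviation, then determine the
# fraction of the first test outputs fulfilling the offset criterion.
threshold = 1.2                        # hypothetical desired criterion
offset_threshold = threshold - deviation
fraction = np.mean(first_outputs <= offset_threshold)

# Verification/validation: the technical system fulfills the criterion if the
# fraction is sufficiently high (hypothetical acceptance level).
fulfilled = fraction >= 0.9
```

This sketch is offered only to make the construed claim scope concrete; the quantile, threshold, and acceptance level are illustrative placeholders.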
Regarding Claim 5, Aslandere discloses that a model of the obtained models is improved if the criterion cannot be verified and/or validated [Paragraph [0039] – “If the determined complexity of the generated simplified model is not lower than that of the starting model in step 6, the method jumps back to step 3 or 4.” Paragraph [0051] – “If the reliability undershoots the stipulated limit value, the method jumps back to step 52.”].
Regarding Claim 6, Aslandere discloses that at least one of the components of the technical system is improved if the desired criterion cannot be verified and/or validated [Paragraph [0039] – “If the determined complexity of the generated simplified model is not lower than that of the starting model in step 6, the method jumps back to step 3 or 4.” Paragraph [0051] – “If the reliability undershoots the stipulated limit value, the method jumps back to step 52.”].
Regarding Claim 7, Aslandere discloses that the technical system is configured to provide a control signal to a manufacturing machine and/or a robot [Paragraph [0019] – “The method can be designed to generate a model for simulating at least one function of a motor vehicle, preferably a self-driving motor vehicle. This may be, for example, a driver assistance function and/or a function of the drive train of the motor vehicle.” An autonomous vehicle being a robot, consistent with Page 2 first paragraph of the instant Specification.].
Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Aslandere (US 20220108172 A1), Shoham et al. (US 20230004857 A1) [hereinafter “Shoham”], and Sinn et al. (US 20210312336 A1) [hereinafter “Sinn”].
Regarding Claim 3, Aslandere fails to disclose that at least one of the machine learning models is or includes a Gaussian process. However, Sinn discloses that such machine learning models are viable for performing machine learning [Paragraph [0065]]. It would have been obvious to employ and evaluate such a machine learning model, as appropriate, in order to simplify the complex models.
Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Aslandere (US 20220108172 A1), Shoham et al. (US 20230004857 A1) [hereinafter “Shoham”], and Vasisht et al. (US 20220012632 A1) [hereinafter “Vasisht”].
Regarding Claim 4, Aslandere fails to disclose that the test inputs and the first test outputs are determined by synthesizing inputs of the technical system and forwarding the synthesized inputs through the models. However, Vasisht discloses the use of synthesized inputs (and corresponding outputs) for evaluating model performance [See Abstract]. It would have been obvious to use synthesized inputs/outputs to evaluate the models of Aslandere, where the inputs are forwarded through the connected system, because doing so would have presented an effective manner of evaluating such models without the need for an extensive collection of real inputs/outputs.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Aichernig et al., Learning a Behavior Model of Hybrid Systems Through Combining Model-Based Testing and Machine Learning, arXiv, 2019
Vanslette, A General Model Validation and Testing Tool, arXiv, 2019
US 20220300857 A1 – SYSTEM AND METHOD FOR VALIDATING UNSUPERVISED MACHINE LEARNING MODELS
US 20230004856 A1 – TECHNIQUES FOR VALIDATING FEATURES FOR MACHINE LEARNING MODELS
US 20230280705 A1 – METHOD FOR VALIDATING OR VERIFYING A TECHNICAL SYSTEM
US 20230376837 A1 – DEPENDENCY CHECKING FOR MACHINE LEARNING MODELS
US 20220157085 A1 – METHOD AND DEVICE FOR CREATING AN EMISSIONS MODEL OF AN INTERNAL COMBUSTION ENGINE
US 20210089937 A1 – METHODS FOR AUTOMATICALLY CONFIGURING PERFORMANCE EVALUATION SCHEMES FOR MACHINE LEARNING ALGORITHMS
US 20240085897 A1 (for double patenting concerns) – METHOD FOR VALIDATING OR VERIFYING A TECHNICAL SYSTEM
US 20200210848 A1 – DEEP LEARNING TESTING
Any inquiry concerning this communication or earlier communications from the examiner should be directed to KYLE ROBERT QUIGLEY whose telephone number is (313)446-4879. The examiner can normally be reached 9AM-5PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Arleen Vazquez, can be reached at (571) 272-2619. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/KYLE R QUIGLEY/Primary Examiner, Art Unit 2857