Prosecution Insights
Last updated: April 19, 2026
Application No. 17/854,722

DEVICE, MEMORY MEDIUM, COMPUTER PROGRAM AND COMPUTER-IMPLEMENTED METHOD FOR VALIDATING A DATA-BASED MODEL

Non-Final OA (§101, §103)

Filed: Jun 30, 2022
Examiner: KARAVIAS, DENISE R
Art Unit: 2857
Tech Center: 2800 — Semiconductors & Electrical Systems
Assignee: Robert Bosch GmbH
OA Round: 3 (Non-Final)

Grant Probability: 63% (Moderate)
Expected OA Rounds: 3-4
Time to Grant: 3y 0m
Grant Probability With Interview: 98%

Examiner Intelligence

Career Allow Rate: 63% (grants 63% of resolved cases; 84 granted / 134 resolved; -5.3% vs TC avg)
Interview Lift: +34.9% (strong; allow rate of resolved cases with interview vs without)
Typical Timeline: 3y 0m avg prosecution; 17 currently pending
Career History: 151 total applications across all art units

Statute-Specific Performance

§101: 16.8% (-23.2% vs TC avg)
§103: 50.5% (+10.5% vs TC avg)
§102: 5.3% (-34.7% vs TC avg)
§112: 24.2% (-15.8% vs TC avg)

Deltas are relative to a Tech Center average estimate • Based on career data from 134 resolved cases
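As a sanity check, the headline rate can be reproduced from the raw counts reported above; note that backing out the Tech Center average from the stated -5.3% delta is an inference, not a figure the dashboard reports:

```python
# Reproduce the career allow rate from the raw counts above.
granted = 84
resolved = 134

allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")  # 62.7%, displayed as 63%

# The dashboard reports this rate as -5.3% versus the Tech Center
# average, implying a TC average of roughly:
implied_tc_avg = allow_rate + 0.053
print(f"Implied TC average: {implied_tc_avg:.1%}")  # ~68.0%
```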

Office Action

§101 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Application 17/854,722, filed on 06/30/2022, claims foreign priority to GERMANY 10 2021 207 246.1 filed on 07/08/2021 and GERMANY 10 2021 207 008.6 filed on 07/05/2021.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 10/5/2025 has been entered.

Response to Amendment

This Office action is in response to amendments submitted on 10/15/2025, wherein claims 14-29 are pending and ready for examination. Claims 1-13 were previously canceled.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 14-29 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claims recite an abstract idea as discussed below. This abstract idea is not integrated into a practical application for the reasons discussed below. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception for the reasons discussed below.

Step 1 of the 2019 Guidance requires the examiner to determine whether the claims are directed to one of the statutory categories of invention.
Applied to the present application, the claims belong to one of the statutory classes of a process or a product, as a computer-implemented method or a computer system/product.

Step 2A of the 2019 Guidance is divided into two Prongs. Prong 1 requires the examiner to determine whether the claims recite an abstract idea, and further requires that the abstract idea belong to one of three enumerated groupings: mathematical concepts, mental processes, and certain methods of organizing human activity.

Claim 14 is copied below, with the limitations belonging to an abstract idea being underlined.

A computer-implemented method for validating a data-based model for classifying an object into a class for an object type or a function type for a driver assistance system of a vehicle, the method comprising the following steps: determining the classification as a function of a digital signal, using the data-based model, the digital signal being a digital image or a radar spectrum or a LIDAR spectrum or a segment of a radar spectrum or a segment of a LIDAR spectrum; determining, using the data-based model, a reference classification for the object as a function of the digital signal; checking, using a reference model, as a function of the classification and the reference classification, whether or not the classification of the data-based model for the object is correct; and validating or not validating the data-based model, depending on whether or not the classification of the data-based model for the object is correct; wherein the classification and the reference classification are determined for a set of digital signals that are associated with different distances between the object and a reference point, and wherein for each digital signal from the set, a measure of confidence is determined, and the data-based model being validated when the classification of the data-based model for the object in the digital signals is correct, whose measure of confidence meets a condition, wherein the measure of confidence is a distance of the object from the reference point, and wherein the condition is that the distance is within a reference distance from the reference point, wherein the measure of confidence is provided by the reference model, and wherein the measure of confidence is based on a duration of the reference classification being greater than a threshold.

Claim 25 is copied below, with the limitations belonging to an abstract idea being underlined.

A device for validating a data-based model for classifying an object, the device comprising: at least one processor; and at least one memory; wherein the device is configured to: determine a classification of the object as a function of a digital signal, using the data-based model, the digital signal being a digital image or a radar spectrum or a LIDAR spectrum or a segment of a radar spectrum or a segment of a LIDAR spectrum; determine, using the data-based model, a reference classification for the object as a function of the digital signal; check, using a reference model, as a function of the classification and the reference classification, whether or not the classification of the data-based model for the object is correct; and validate or not validate the data-based model, depending on whether or not the classification of the data-based model for the object is correct; wherein the classification and the reference classification are determined for a set of digital signals that are associated with different distances between the object and a reference point, and wherein for each digital signal from the set, a measure of confidence is determined, and the data-based model being validated when the classification of the data-based model for the object in the digital signals is correct, whose measure of confidence meets a condition, wherein the measure of confidence is a distance of the object from the reference point, and wherein the condition is that the distance is within a reference distance from the reference point, wherein the measure of confidence is provided by the reference model, and wherein the measure of confidence is based on a duration of the reference classification being greater than a threshold.

Claim 26 is copied below, with the limitations belonging to an abstract idea being underlined.

A non-transitory memory medium on which is stored a computer program for validating a data-based model for classifying an object into a class for an object type or a function type for a driver assistance system of a vehicle, the computer program, when executed by a computer, causing the computer to perform the following steps: determining the classification as a function of a digital signal, using the data-based model, the digital signal being a digital image or a radar spectrum or a LIDAR spectrum or a segment of a radar spectrum or a segment of a LIDAR spectrum; determining, using the data-based model, a reference classification for the object as a function of the digital signal; checking, using a reference model, as a function of the classification and the reference classification, whether or not the classification of the data-based model for the object is correct; and validating or not validating the data-based model, depending on whether or not the classification of the data-based model for the object is correct; wherein the classification and the reference classification are determined for a set of digital signals that are associated with different distances between the object and a reference point, and wherein for each digital signal from the set, a measure of confidence is determined, and the data-based model being validated when the classification of the data-based model for the object in the digital signals is correct, whose measure of confidence meets a condition, wherein the measure of confidence is a distance of the object from the reference point, and wherein the condition is that the distance is within a reference distance from the reference point, wherein the measure of confidence is provided by the reference model, and wherein the measure of confidence is based on a duration of the reference classification being greater than a threshold.

The limitations underlined can be considered to describe a series of mental and/or mathematical concepts, as "determining/determined/determine," "checking/check," and "validating/validate" disclose a set of programming routines and patterns which are algorithms or instructions, which are mathematical routines or sets of mental steps. They may include a series of calculations leading to one or more numerical results or answers, obtained by a sequence of mathematical operations on numbers, or may include an observation, evaluation, judgment, and/or opinion, which are concepts performed in the human mind. The lack of a specific equation in the claim merely points out that the claim would monopolize all possible appropriate equations/two-group significance tests for accomplishing this purpose in all possible systems. These steps recited by the claim therefore amount to a series of mathematical and/or mental steps, making these limitations amount to an abstract idea. In summary, the underlined steps in the claim above therefore recite an abstract idea at Prong 1 of the 101 analysis.

The additional elements in the claim have been left in normal font. The additional limitations in relation to the computer, computer product, or computer system do not offer a meaningful limitation beyond generally linking the use of the method to a computer (see Alice Corp. Pty. Ltd. v. CLS Bank Int'l, 573 U.S. 208, 217, 110 USPQ2d 1976, 1981 (2014)). The claim does not recite a particular machine applying or being used by the abstract idea. The claims do not integrate the abstract idea into a practical application. Various considerations are used to determine whether the additional elements are sufficient to integrate the abstract idea into a practical application.
The claim does not effect a real-world transformation or reduction of any particular article to a different state or thing. (Manipulating data from one form to another or obtaining a mathematical answer using input data does not qualify as a transformation in the sense of Prong 2.) The claim does not contain additional elements which describe the functioning of a computer, or which describe a particular technology or technical field, being improved by the use of the abstract idea. (This is understood in the sense of the claimed invention from Diamond v. Diehr, in which the claim as a whole recited a complete rubber-curing process including a rubber-molding press, a timer, a temperature sensor adjacent the mold cavity, and the steps of closing and opening the press, in which the recited use of a mathematical calculation served to improve that particular technology by providing a better estimate of the time when curing was complete. Here, the claim does not recite carrying out any comparable particular technological process.) In all of these respects, the claim fails to recite additional elements which might possibly integrate the claim into a particular practical application. Instead, based on the above considerations, the claim would tend to monopolize the abstract idea itself, rather than integrate the abstract idea into a practical application.

Step 2B of the 2019 Guidance requires the examiner to determine whether the additional elements cause the claim to amount to significantly more than the abstract idea itself. The considerations for this particular claim are essentially the same as the considerations for Prong 2 of Step 2A, and the same analysis leads to the conclusion that the claim does not amount to significantly more than the abstract idea. Therefore, claims 14, 25, and 26 are rejected under 35 U.S.C. 101 as directed to an abstract idea without significantly more.

Dependent claims 15-24 and 27-29 are similarly ineligible. The dependent claims merely add limitations which further detail the abstract idea, with limitations such as "detecting," "stored," "determined," and "validated" constituting extra-solution activity. These do not help to integrate the claim into a practical application or make it significantly more than the abstract idea (which is recited in slightly more detail, but not in enough detail to be considered to narrow the claim to a particular practical application itself). Considering all the limitations individually and in combination, the claimed additional elements do not show any inventive concept in applying algorithms, such as improving the performance of a computer or any technology, and do not meaningfully limit the performance of the application.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 14-15, 17, 20, and 22-26 are rejected under 35 U.S.C.
103 as being unpatentable over Surnilla et al., hereinafter Surnilla, U.S. Pub. No. 2018/0121763 A1, in view of Gyllenhammar et al., hereinafter Gyllenhammar, U.S. Pub. No. 2022/0270356 A1.

Regarding independent claim 14, Surnilla teaches:

"A computer-implemented method for validating a data-based model for classifying an object into a class for an object type or a function type for a driver assistance system of a vehicle" (Surnilla, Abstract).

"the method comprising the following steps: determining the classification as a function of a digital signal, using the data-based model, the digital signal being a digital image or a radar spectrum or a LIDAR spectrum or a segment of a radar spectrum or a segment of a LIDAR spectrum" (Surnilla teaches "An example disclosed vehicle includes a camera to detect an object and an object classifier that determine first classification data associated with the object" (¶ 0015), where the "object classifier," "via a processor" (¶ 0006), uses algorithms (¶ 0034) with data from the sensor, camera, and/or GPS receiver to identify and classify an object (¶ 0027), and where the sensor may be radar or lidar, disclosing an "image" or a "radar spectrum or a LIDAR spectrum or a segment of a radar spectrum or a segment of a LIDAR spectrum," as it is well known that radar is radio detection and ranging, which uses radio waves of the electromagnetic spectrum, and LIDAR is light detection and ranging, which uses near infrared waves of the electromagnetic spectrum. Surnilla does not explicitly teach a digital signal; however, a person of ordinary skill in the art would understand that signals manipulated by a computer must be digital signals. As Surnilla teaches computer instructions that determine, by a processor, first classification data and a classification confidence from data collected by the camera (¶ 0006, ¶ 0015), the data must be digital. Additionally, the processor uses data for the classification and confidence score; therefore there must be a "data-based model" that is used.)

If the above explanation with regard to "digital signal" were to be challenged, then the following rejection should be applied in order to ensure compact prosecution: Gyllenhammar teaches in ¶ 0036-¶ 0037 that the sensor data from cameras and radars are stored digitally. It would have been obvious to one of ordinary skill in the art to use digital data for manipulating the data from sensors using a computer, as it is well known that computers use digital data, which is precise and accurate and more easily stored than analog data.

Surnilla teaches: "determining, using the data-based model, a reference classification for the object as a function of the digital signal" (Surnilla teaches "the communications module of the vehicle collects, retrieves and/or otherwise obtains the additional classification data from the other nearby vehicles. For example the vehicle collects second classification data" (¶ 0016), where the "object associator 204," which is part of the "object classifier" (see fig. 2), uses, via a processor, algorithms to identify the classification data retrieved from other sources (¶ 0035), which is the "second classification data" of ¶ 0016, thereby disclosing "a reference classification." As the algorithms in ¶ 0034 and ¶ 0035 are the same, the "object classifier" with the processor acts as the "data-based model.")

"checking, using a reference model, as a function of the classification and the reference classification, whether or not the classification of the data-based model for the object is correct" (Surnilla teaches "the object classifier of the vehicle associates the first classification data with the additional classification data" (¶ 0017), where the "additional classification data" discloses the "reference classification." The "object classifier of the vehicle correlates the first classification data with the other classification data based on a comparison of location data of the classification data" (¶ 0017), and "Based on the additional classification data of the nearby vehicles, the classification identifier of the vehicle adjusts the classification and/or the classification confidence of the first classification data associated with the object" (¶ 0018), where "the object classifier," "via a processor" (¶ 0006), uses algorithms (¶ 0036) to "incorporate the classification data of the other vehicles 104, 106 into the classification data of the vehicle 102" (¶ 0036), and where the algorithms of ¶ 0036 are different from the algorithms of ¶ 0034. A person of ordinary skill in the art would understand that one processor can be used to perform the work of two processors; the "object classifier" discloses a "reference model" which "provides the classification data to the systems of the vehicle if the adjusted classification confidence is greater than a threshold value" (¶ 0038), disclosing using a "reference classification" to determine if the "classification" "for the object is correct").

"validating or not validating the data-based model, depending on whether or not the classification of the data-based model for the object is correct" (Surnilla, ¶ 0017-¶ 0018, ¶ 0032-¶ 0033: Surnilla teaches determining whether the additional classification data from a nearby vehicle is reliable. If the additional classification data is determined to be reliable, the adjustment to the classification confidence score is increased, and if the additional classification data is determined to be unreliable, the adjustment to the classification confidence score is reduced (¶ 0018); if the classification confidence is above a threshold, the object is classified, and if the classification confidence is below a threshold, the object is not classified (¶ 0033), where classifying or not classifying discloses "validating or not validating the data-based model").

"wherein the classification and the reference classification are determined for a set of digital signals that are associated with different distances between the object and a reference point, and wherein for each digital signal from the set, a measure of confidence is determined" (Surnilla teaches the classification data includes a classification, a classification confidence score, and location data (¶ 0015), which is based on data from a camera (118) and sensors (116) (fig. 1, ¶ 0021) of vehicle 102 of fig. 1. Additional classification data, which discloses the "reference classification," is based on data from a camera (118) and sensors (116) of different vehicles (104 and 106 of fig. 1) and includes a classification, a classification confidence score, and location data. Vehicles 102, 104, and 106 are at "different distances" from the object (126 of fig. 1). Data is digital, see above.)

"the data-based model being validated when the classification of the data-based model for the object in the digital signals is correct, whose measure of confidence meets a condition" (Surnilla teaches "the object classifier of the vehicle associates the first classification data with the additional classification data" (¶ 0017), where "Based on the additional classification data of the nearby vehicles, the classification identifier of the vehicle adjusts the classification and/or classification confidence of the first classification data associated with the object" (¶ 0018). Moreover, "the classification determiner classifies the detected object when the classification confidence is greater than or equal to a confidence threshold" (¶ 0033), where the "classification confidence" discloses the "measure of confidence" and the "confidence threshold" discloses a "condition." Therefore, Surnilla discloses the classification of the object is correct when the "measure of confidence meets a condition.")

"wherein the measure of confidence is provided by the reference model" (Surnilla teaches "the object classifier 124 may increase the classification confidence above a confidence threshold based on the additional data from the other vehicles" (¶ 0029), where the "object classifier" using the "additional data from the other vehicles" discloses the "reference model" and the "classification confidence" discloses the "measure of confidence.")

While Surnilla teaches a "reliability score of a vehicle based on past history," including "a distance between the vehicle and the detected object" (¶ 0037), Surnilla does not explicitly teach "the measure of confidence is based on a duration of the reference classification being greater than a threshold."

Gyllenhammar teaches: "the measure of confidence is based on a duration of the reference classification being greater than a threshold" (Gyllenhammar teaches that when the vehicle is sufficiently close to an object, the accuracy of the classification and position increases (¶ 0043), and that increasing knowledge of the surroundings of the vehicle as time passes increases the accuracy of determining classes and positions of objects (¶ 0046), where an increase in accuracy discloses an increase in the "measure of confidence." Moreover, "the area within which the evaluations process 105 is performed, such that one only compares the two datasets represented an area within a specific viewing frustum or within a certain distance from the vehicle"; therefore, the classification is performed only when the objects are "within a certain distance from the vehicle," disclosing "being greater than a threshold.")

It would have been obvious to one of ordinary skill in the art to have modified the method for classifying objects in the surroundings of a vehicle, as taught by Surnilla, by including a threshold dependent on the distance between the vehicle and the object, as taught by Gyllenhammar. Surnilla teaches accurate classification is difficult when the objects are beyond a certain distance (Surnilla, ¶ 0014). Gyllenhammar teaches the importance of accurately identifying objects that are close to the vehicle (Gyllenhammar, ¶ 0063), as it is more important to respond correctly to objects close to the vehicle. One way Gyllenhammar does this is by setting a threshold dependent on the distance between the vehicle and the object, which improves the system by achieving both "cost reductions as well as performance improvements" (Gyllenhammar, ¶ 0005).
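The claim-14 limitations mapped above describe a checkable procedure, which can be paraphrased as a minimal Python sketch. All names, the toy models, and the numeric values are hypothetical illustrations of the claim language, not the applicant's or either reference's implementation; for simplicity, the reference classification is produced here directly by a separate reference model.

```python
# Hypothetical sketch of the claim-14 validation loop: a data-based model's
# classification is checked against a reference classification, but only
# signals whose measure of confidence (here, per the claim, the object's
# distance from the reference point) falls within a reference distance
# count toward validation.

def validate_model(signals, model, reference_model, reference_distance):
    """signals: list of (digital_signal, distance_to_reference_point)."""
    for signal, distance in signals:
        if distance > reference_distance:
            continue  # confidence condition not met; signal not considered
        classification = model(signal)
        reference_classification = reference_model(signal)
        # Check correctness as a function of both classifications.
        if classification != reference_classification:
            return False  # classification incorrect -> model not validated
    return True  # model validated

# Toy usage with trivial stand-in models:
model = lambda s: "pedestrian" if s > 0.5 else "vehicle"
reference = lambda s: "pedestrian" if s > 0.5 else "vehicle"
signals = [(0.9, 10.0), (0.2, 25.0), (0.7, 60.0)]  # third is beyond 50.0
print(validate_model(signals, model, reference, reference_distance=50.0))  # True
```

Note how the third signal, lying beyond the reference distance, is excluded from the check entirely, which is the gating behavior the rejection reads onto Gyllenhammar's "within a certain distance from the vehicle."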
Regarding claim 15, Surnilla as modified teaches:

"the reference point is the vehicle or a sensor for detecting the set of digital signals" (Surnilla, fig. 1, ¶ 0027-¶ 0028: Fig. 1 depicts a vehicle 102 with a sensor 116 and a camera 118 that collect data which is transferred to the object classifier 124).

Regarding claim 17, Surnilla as modified teaches:

"wherein for the set, a value pair that includes a first value and a second value is determined, the first value indicating a distance within which the reference classification for the object is correct, and the second value indicating a distance within which the classification of the data-based model for the object is correct, or a spacing of the distance from the reference distance" (Surnilla, ¶ 0017: Surnilla teaches first classification data that includes the location of an object and second classification data, the reference classification data, that includes the location of the object, where the "object classifier associates the first classification data of the vehicle with the classification data of the other vehicle (the second classification data) if the location data of the other vehicle is similar to that of the first classification data" (¶ 0017). A person of ordinary skill in the art would understand that location information would include a distance. The "object classifier," using a "data-based model," determines a classification of the object only if the location information of both classifications is "similar," increasing the confidence score if the source is "reliable," disclosing a correct distance in reference to the "reference classification" and the "data-based model").

Regarding claim 20, Surnilla as modified teaches:

"for a plurality of sets of digital signals, their classifications and their reference classifications are determined, and it is checked whether or not the classification of the data-based model for the object is correct" (Surnilla, fig. 4, ¶ 0054: Surnilla teaches "the object classifier 124 determines whether there is another object detected by the sensors, cameras, and/or receivers of the vehicle 102" (¶ 0054) and then repeats the process of steps 404-422 of fig. 4, disclosing classifying multiple sets of signals, where the signals are "digital signals" (see claim 14 above), and checking to determine if the classifications are correct (see claim 14 above)).

Regarding claim 22, Surnilla as modified teaches:

"for each digital signal, a position is detected and/or stored using a system for satellite navigation, the distance being determined as a function of the position" (Surnilla, fig. 1, ¶ 0021: Surnilla teaches using a GPS receiver to monitor the location of the vehicles 102, 104, 106).

Regarding claim 23, Surnilla as modified does not teach: "when the validation of the data-based model fails: (i) the data-based model is retrained or trained with different data, and/or (ii) a different data-based model is used."

Gyllenhammar teaches: "when the validation of the data-based model fails: (i) the data-based model is retrained or trained with different data, and/or (ii) a different data-based model is used" (Gyllenhammar, ¶ 0098: Gyllenhammar teaches that when a "perfect match" for the detected objects is not obtained, the parameters of the "perception model" are updated by using a "weakly supervised learning algorithm," where "the second set of perception data and the baseline worldview together form a 'training dataset'" (¶ 0100), disclosing retraining).

It would have been obvious to one of ordinary skill in the art to have modified the method for classifying objects in the surroundings of a vehicle, as taught by Surnilla as modified, by including a machine learning model and retraining the machine learning model when validation fails, as taught by Gyllenhammar, to "improve the system in a safe and efficient manner" and to provide "performance improvements" (Gyllenhammar, ¶ 0005), as machine learning aims to predict outcomes with higher accuracy and discerns trends humans would likely miss when relying solely on conventional statistical methods.

Regarding claim 24, Surnilla as modified teaches:

"when the validation of the data-based model is successful, the data-based model is used in a system for classifying objects in the driver assistance system" (Surnilla, ¶ 0038: Surnilla teaches that when the adjusted classification confidence is greater than a threshold value, disclosing the validation of the data-based model, the "classification determiner 202 of the object classifier 124 provides the classification data" (which includes the classification, see claim 14 above) "to the systems of the vehicle 102" (¶ 0038), disclosing the "data-based model" is used for "classifying objects in the driver assistance system").

Regarding independent claim 25, Surnilla teaches:

"A device for validating a data-based model for classifying an object, the device comprising: at least one processor; and at least one memory" (Surnilla, Abstract, ¶ 0041).
“wherein the device is configured to: determine a classification of the object as a function of a digital signal, using the data-based model, the digital signal being a digital image or a radar spectrum or a LIDAR spectrum or a segment of a radar spectrum or a segment of a LIDAR spectrum” (Surnilla teaches “An example disclosed vehicle includes a camera to detect an object and an object classifier that determine first classification data associated with the object” (¶ 0015) where the “object classifier” uses data from the sensor, camera, and/or GPS receiver to identify and classify an object ¶ 0027) where the sensor may be radar or lidar disclosing an “image” or a “radar spectrum or a LIDAR spectrum or a segment of a radar spectrum or a segment of a LIDAR spectrum” as it is well known that radar is radio detection and ranging which uses radio waves of the electromagnetic spectrum and LIDAR is light detection and ranging using near infrared waves of the electromagnetic spectrum. Surnilla does not explicitly teach a digital signal however, a person of ordinary skill in the art would understand signals manipulated by a computer must be digital signals. As Surnilla teaches computer instructions that determine, by a processor, first classification data and a classification confidence from data collect by the camera (¶ 0006, ¶ 0015) therefore the data must be digital. Additionally, the processor uses data therefore for the classification and confidence score therefore there must be a “data-based model” that is used. If the above explanation with regard to “digital signal” were to be challenged then the following rejection should be applied in order to ensure compact prosecution. Gyllenhammar teaches in ¶ 0036-¶ 0037 the sensor data from cameras and radars are stored digitally. 
It would have been obvious to one of ordinary skill in the art to use digital data when manipulating sensor data with a computer, as it is well known that computers use digital data, which is precise, accurate, and more easily stored than analog data. Surnilla teaches: “determine, using the data-based model, a reference classification for the object as a function of the digital signal” (Surnilla teaches “the communications module of the vehicle collects, retrieves and/or otherwise obtains the additional classification data from the other nearby vehicles. For example the vehicle collects second classification data” (¶ 0016) where the “object associator 204,” which is part of the “object classifier” (see fig. 2), uses, via a processor, algorithms to identify the classification data retrieved from other sources (¶ 0035), which is the “second classification data” of ¶ 0016, thereby disclosing “a reference classification.” As the algorithms in ¶ 0034 and ¶ 0035 are the same, the “object classifier” with the processor acts as the “data-based model.”) “check, using a reference model, as a function of the classification and the reference classification, whether or not the classification of the data-based model for the object is correct” (Surnilla teaches “the object classifier of the vehicle associates the first classification data with the additional classification data” (¶ 0017) where the “additional classification data” discloses the “reference classification.” Further, “the object classifier of the vehicle correlates the first classification data with the other classification data based on a comparison of location data of the classification data” (¶ 0017) and “Based on the additional classification data of the nearby vehicles, the classification identifier of the vehicle adjusts the classification and/or the classification confidence of the first classification data associated with the object” (¶ 0018), where “the object classifier,” “via a processor” (¶ 0006), uses algorithms (¶ 0036) to “incorporate the classification data of the other vehicles 104, 106 into the classification data of the vehicle 102” (¶ 0036), the algorithms of ¶ 0036 being different from the algorithms of ¶ 0034. A person of ordinary skill in the art would understand that one processor can be used to perform the work of two processors. The “object classifier” discloses a “reference model,” which “provides the classification data to the systems of the vehicle if the adjusted classification confidence is greater than a threshold value” (¶ 0038), disclosing using a “reference classification” to determine if the “classification” “for the object is correct.”) “validate or not validate the data-based model, depending on whether or not the classification of the data-based model for the object is correct” (Surnilla, ¶ 0017-¶ 0018, ¶ 0032-¶ 0033: Surnilla teaches determining whether the additional classification data from a nearby vehicle is reliable. If the additional classification data is determined to be reliable, the adjustment to the classification confidence score is increased, and if it is determined to be unreliable, the adjustment is reduced (¶ 0018). If the classification confidence is above a threshold, the object is classified, and if it is below the threshold, the object is not classified (¶ 0033); classifying or not classifying discloses “validating or not validating the data-based model”).
“wherein the classification and the reference classification are determined for a set of digital signals that are associated with different distances between the object and a reference point, and wherein for each digital signal from the set, a measure of confidence is determined” (Surnilla teaches the classification data includes a classification, a classification confidence score, and location data (¶ 0015), which is based on data from a camera (118) and sensors (116) (fig. 1, ¶ 0021) of vehicle 102 of fig. 1. Additional classification data, which discloses the “reference classification,” is based on data from a camera (118) and sensors (116) of different vehicles (104 and 106 of fig. 1) and includes a classification, a classification confidence score, and location data. Vehicles 102, 104, and 106 are at “different distances” from the object (126 of fig. 1). The data is digital; see above). “the data-based model being validated when the classification of the data-based model for the object in the digital signals is correct, whose measure of confidence meets a condition” (Surnilla teaches “the object classifier of the vehicle associates the first classification data with the additional classification data” (¶ 0017) where “Based on the additional classification data of the nearby vehicles, the classification identifier of the vehicle adjusts the classification and/or classification confidence of the first classification data associated with the object” (¶ 0018). Moreover, “the classification determiner classifies the detected object when the classification confidence is greater than or equal to a confidence threshold” (¶ 0033), where the “classification confidence” discloses “measure of confidence” and “confidence threshold” discloses a “condition.” Therefore, Surnilla discloses the classification of the object is correct when the “measure of confidence meets a condition”).
“wherein the measure of confidence is provided by the reference model” (Surnilla teaches “the object classifier 124 may increase the classification confidence above a confidence threshold based on the additional data from the other vehicles” (¶ 0029), where the “object classifier” using the “additional data from the other vehicles” discloses the “reference model” and “classification confidence” discloses the “measure of confidence.”) While Surnilla teaches a “reliability score of a vehicle based on past history,” including “a distance between the vehicle and the detected object” (¶ 0037), Surnilla does not explicitly teach “the measure of confidence is based on a duration of the reference classification being greater than a threshold.” Gyllenhammar teaches: “the measure of confidence is based on a duration of the reference classification being greater than a threshold” (Gyllenhammar teaches that when the vehicle is sufficiently close to an object, accuracy of the classification and position increases (¶ 0043), and increasing knowledge of the surroundings of the vehicle as time passes increases the accuracy of determining classes and positions of objects (¶ 0046), where an increase in accuracy discloses an increase in the “measure of confidence.” Moreover, Gyllenhammar limits “the area within which the evaluations process 105 is performed, such that one only compares the two datasets represented an area within a specific viewing frustum or within a certain distance from the vehicle”; therefore, the classification is performed only when the objects are “within a certain distance from the vehicle,” disclosing “being greater than a threshold.”) It would have been obvious to one of ordinary skill in the art to have modified the method for classifying objects in the surroundings of a vehicle as taught by Surnilla by including a threshold dependent on the distance between the vehicle and the object as taught by Gyllenhammar.
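As context for the mapping above, the confidence-adjustment and thresholding behavior that this rejection attributes to Surnilla (¶ 0018, ¶ 0033) can be sketched in a few lines. This is an illustrative sketch only; the function names, data shapes, and numeric values are hypothetical and are not code from either cited reference.

```python
# Sketch of the logic the rejection attributes to Surnilla (¶ 0018, ¶ 0033):
# peer classification reports adjust the ego vehicle's confidence, and the
# object is classified only when the adjusted confidence meets a threshold.
# All names and values below are hypothetical illustrations.

CONFIDENCE_THRESHOLD = 0.8  # assumed value for illustration only

def adjust_confidence(own_confidence, peer_reports):
    """Raise or lower the classification confidence based on classification
    data received from nearby vehicles (per ¶ 0018 as characterized)."""
    confidence = own_confidence
    for peer in peer_reports:
        # A reliable peer produces a larger adjustment than an unreliable
        # one; agreement raises confidence, disagreement lowers it.
        step = 0.1 if peer["reliable"] else 0.02
        confidence += step if peer["agrees"] else -step
    return max(0.0, min(1.0, confidence))

def classify(own_confidence, peer_reports):
    """Classify the object only when the adjusted confidence meets the
    threshold (per ¶ 0033 as characterized); otherwise leave it unclassified."""
    adjusted = adjust_confidence(own_confidence, peer_reports)
    return "classified" if adjusted >= CONFIDENCE_THRESHOLD else "unclassified"
```

Under the rejection's reading, the "classified" versus "unclassified" outcome of such a routine is what is mapped to validating or not validating the data-based model.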
Surnilla teaches that accurate classification is difficult when the objects are beyond a certain distance (Surnilla, ¶ 0014). Gyllenhammar teaches the importance of accurately identifying objects that are close to the vehicle (Gyllenhammar, ¶ 0063), as it is more important to respond correctly to objects close to the vehicle. One way Gyllenhammar does this is by setting a threshold dependent on the distance between the vehicle and the object, which improves the system by achieving both “cost reductions as well as performance improvements” (Gyllenhammar, ¶ 0005). Regarding independent claim 26, Surnilla teaches: “A non-transitory memory medium on which is stored a computer program for validating a data-based model for classifying an object into a class for an object type or a function type for a driver assistance system of a vehicle, the computer program, when executed by a computer” (Surnilla, Abstract). “causing the computer to perform the following steps: determining the classification as a function of a digital signal, using the data-based model, the digital signal being a digital image or a radar spectrum or a LIDAR spectrum or a segment of a radar spectrum or a segment of a LIDAR spectrum” (Surnilla teaches “An example disclosed vehicle includes a camera to detect an object and an object classifier that determine first classification data associated with the object” (¶ 0015) where the “object classifier,” “via a processor” (¶ 0006), uses algorithms (¶ 0034) with data from the sensor, camera, and/or GPS receiver to identify and classify an object (¶ 0027), and where the sensor may be radar or lidar, disclosing an “image” or a “radar spectrum or a LIDAR spectrum or a segment of a radar spectrum or a segment of a LIDAR spectrum,” as it is well known that radar is radio detection and ranging, which uses radio waves of the electromagnetic spectrum, and LIDAR is light detection and ranging, using near-infrared waves of the electromagnetic spectrum.
Surnilla does not explicitly teach a digital signal; however, a person of ordinary skill in the art would understand that signals manipulated by a computer must be digital signals. As Surnilla teaches computer instructions that determine, by a processor, first classification data and a classification confidence from data collected by the camera (¶ 0006, ¶ 0015), the data must be digital. Additionally, the processor uses the data for the classification and confidence score; therefore, there must be a “data-based model” that is used. If the above explanation with regard to “digital signal” were to be challenged, then the following rejection should be applied in order to ensure compact prosecution. Gyllenhammar teaches in ¶ 0036-¶ 0037 that the sensor data from cameras and radars are stored digitally. It would have been obvious to one of ordinary skill in the art to use digital data when manipulating sensor data with a computer, as it is well known that computers use digital data, which is precise, accurate, and more easily stored than analog data. Surnilla teaches: “determining, using the data-based model, a reference classification for the object as a function of the digital signal” (Surnilla teaches “the communications module of the vehicle collects, retrieves and/or otherwise obtains the additional classification data from the other nearby vehicles. For example the vehicle collects second classification data” (¶ 0016) where the “object associator 204,” which is part of the “object classifier” (see fig. 2), uses, via a processor, algorithms to identify the classification data retrieved from other sources (¶ 0035), which is the “second classification data” of ¶ 0016, thereby disclosing “a reference classification.” As the algorithms in ¶ 0034 and ¶ 0035 are the same, the “object classifier” with the processor acts as the “data-based model.”) “checking, using a reference model, as a function of the classification and the reference classification, whether or not the classification of the data-based model for the object is correct” (Surnilla teaches “the object classifier of the vehicle associates the first classification data with the additional classification data” (¶ 0017) where the “additional classification data” discloses the “reference classification.” Further, “the object classifier of the vehicle correlates the first classification data with the other classification data based on a comparison of location data of the classification data” (¶ 0017) and “Based on the additional classification data of the nearby vehicles, the classification identifier of the vehicle adjusts the classification and/or the classification confidence of the first classification data associated with the object” (¶ 0018), where “the object classifier,” “via a processor” (¶ 0006), uses algorithms (¶ 0036) to “incorporate the classification data of the other vehicles 104, 106 into the classification data of the vehicle 102” (¶ 0036), the algorithms of ¶ 0036 being different from the algorithms of ¶ 0034.
A person of ordinary skill in the art would understand that one processor can be used to perform the work of two processors. The “object classifier” discloses a “reference model,” which “provides the classification data to the systems of the vehicle if the adjusted classification confidence is greater than a threshold value” (¶ 0038), disclosing using a “reference classification” to determine if the “classification” “for the object is correct.”) “validating or not validating the data-based model, depending on whether or not the classification of the data-based model for the object is correct” (Surnilla, ¶ 0017-¶ 0018, ¶ 0032-¶ 0033: Surnilla teaches determining whether the additional classification data from a nearby vehicle is reliable. If the additional classification data is determined to be reliable, the adjustment to the classification confidence score is increased, and if it is determined to be unreliable, the adjustment is reduced (¶ 0018). If the classification confidence is above a threshold, the object is classified, and if it is below the threshold, the object is not classified (¶ 0033); classifying or not classifying discloses “validating or not validating the data-based model”). “wherein the classification and the reference classification are determined for a set of digital signals that are associated with different distances between the object and a reference point, and wherein for each digital signal from the set, a measure of confidence is determined” (Surnilla teaches the classification data includes a classification, a classification confidence score, and location data (¶ 0015), which is based on data from a camera (118) and sensors (116) (fig. 1, ¶ 0021) of vehicle 102 of fig. 1.
Additional classification data, which discloses the “reference classification,” is based on data from a camera (118) and sensors (116) of different vehicles (104 and 106 of fig. 1) and includes a classification, a classification confidence score, and location data. Vehicles 102, 104, and 106 are at “different distances” from the object (126 of fig. 1). The data is digital; see above.) “the data-based model being validated when the classification of the data-based model for the object in the digital signals is correct, whose measure of confidence meets a condition” (Surnilla teaches “the object classifier of the vehicle associates the first classification data with the additional classification data” (¶ 0017) where “Based on the additional classification data of the nearby vehicles, the classification identifier of the vehicle adjusts the classification and/or classification confidence of the first classification data associated with the object” (¶ 0018). Moreover, “the classification determiner classifies the detected object when the classification confidence is greater than or equal to a confidence threshold” (¶ 0033), where the “classification confidence” discloses “measure of confidence” and “confidence threshold” discloses a “condition.” Therefore, Surnilla discloses the classification of the object is correct when the “measure of confidence meets a condition”).
“wherein the measure of confidence is provided by the reference model” (Surnilla teaches “the object classifier 124 may increase the classification confidence above a confidence threshold based on the additional data from the other vehicles” (¶ 0029), where the “object classifier” using the “additional data from the other vehicles” discloses the “reference model” and “classification confidence” discloses the “measure of confidence.”) While Surnilla teaches a “reliability score of a vehicle based on past history,” including “a distance between the vehicle and the detected object” (¶ 0037), Surnilla does not explicitly teach “the measure of confidence is based on a duration of the reference classification being greater than a threshold.” Gyllenhammar teaches: “the measure of confidence is based on a duration of the reference classification being greater than a threshold” (Gyllenhammar teaches that when the vehicle is sufficiently close to an object, accuracy of the classification and position increases (¶ 0043), and increasing knowledge of the surroundings of the vehicle as time passes increases the accuracy of determining classes and positions of objects (¶ 0046), where an increase in accuracy discloses an increase in the “measure of confidence.” Moreover, Gyllenhammar limits “the area within which the evaluations process 105 is performed, such that one only compares the two datasets represented an area within a specific viewing frustum or within a certain distance from the vehicle”; therefore, the classification is performed only when the objects are “within a certain distance from the vehicle,” disclosing “being greater than a threshold.”) It would have been obvious to one of ordinary skill in the art to have modified the method for classifying objects in the surroundings of a vehicle as taught by Surnilla by including a threshold dependent on the distance between the vehicle and the object as taught by Gyllenhammar.
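The distance gate that the rejection draws from Gyllenhammar, comparing the two classification datasets only for objects within a certain distance of the vehicle, can be illustrated with a short sketch. The function names, coordinate representation, and the 50 m limit are hypothetical choices for illustration, not details from the reference.

```python
# Sketch of a distance-gated evaluation (Gyllenhammar as characterized):
# the model-vs-reference comparison runs only for objects within a certain
# distance of the vehicle. Names and the threshold value are hypothetical.

import math

MAX_EVALUATION_DISTANCE_M = 50.0  # assumed gate for illustration

def within_gate(vehicle_pos, object_pos, limit=MAX_EVALUATION_DISTANCE_M):
    """True when the object is close enough for the comparison to run."""
    dx = object_pos[0] - vehicle_pos[0]
    dy = object_pos[1] - vehicle_pos[1]
    return math.hypot(dx, dy) <= limit

def evaluate(vehicle_pos, detections):
    """Compare model vs. reference classification only inside the gate."""
    results = []
    for det in detections:
        if not within_gate(vehicle_pos, det["pos"]):
            continue  # beyond the distance threshold: skip the comparison
        results.append((det["id"], det["model_class"] == det["ref_class"]))
    return results
```

The gate is the "certain distance from the vehicle" that the rejection maps to the claimed threshold; detections outside it are simply never evaluated.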
Surnilla teaches that accurate classification is difficult when the objects are beyond a certain distance (Surnilla, ¶ 0014). Gyllenhammar teaches the importance of accurately identifying objects that are close to the vehicle (Gyllenhammar, ¶ 0063), as it is more important to respond correctly to objects close to the vehicle. One way Gyllenhammar does this is by setting a threshold dependent on the distance between the vehicle and the object, which improves the system by achieving both “cost reductions as well as performance improvements” (Gyllenhammar, ¶ 0005). Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over Surnilla in view of Gyllenhammar as applied to claim 14 above, and further in view of Toth et al., hereinafter Toth, U.S. Pub. No. 2021/0201054 A1, and further in view of Sudhaker, U.S. Pub. No. 2018/0321377 A1. Regarding claim 16, Surnilla as modified teaches: “the set of digital signals and the reference classifications are stored in association with one another when the measure of confidence meets the condition and the classification deviates from the reference classification” (Surnilla teaches providing classification data to the systems of the vehicle if “the adjusted classification confidence is greater than a threshold” (¶ 0038), disclosing the “measure of confidence meets the condition,” and “when the classification confidence is less than the confidence threshold, the detected object remains unclassified” (¶ 0033), where the “classification confidence” is based on first classification data and additional classification data, disclosing “the classification deviates from the reference classification.” Moreover, Surnilla teaches the “previously unclassified object” may be classified (¶ 0029), where, in order to classify a “previously unclassified object,” it must first be stored. The signals are “digital signals” (see claim 1 above)).
While Surnilla teaches providing classification to other systems, which would use a memory, Surnilla does not explicitly teach the “digital signals and reference classifications are stored in association with one another.” Toth teaches “memory 206’ storing instructions 208’ and data 210’ such as vehicle diagnostics, detected sensor data and/or one or more behavior/classification models used in conjunction with object detection and classification” (¶ 0054, fig. 2B), disclosing storing the “digital signals and reference classifications.” It would have been obvious to one of ordinary skill in the art to have modified the method for classifying objects in the surroundings of a vehicle as taught by Surnilla as modified by storing data and models as taught by Toth for quick and easy data recovery for future data analysis or study. Surnilla as modified by Toth does not teach: “the digital signals are discarded and/or not stored when the measure of confidence does not meet the condition or the classification does not deviate from the reference classification.” Sudhaker teaches: “the digital signals are discarded and/or not stored when the measure of confidence does not meet the condition or the classification does not deviate from the reference classification” (Sudhaker, ¶ 0018: Sudhaker teaches that if an object has been “reliably identified (classified) as a static object” (¶ 0018), where the classification is dependent on the differences from “a comparison of the curve of the distance values as a function of time with the reference curve” (¶ 0013), the object can be deleted (discarded) from the “digital map of the surroundings” (¶ 0018), disclosing “digital signals are discarded and/or not stored when. . . the classification does not deviate from the reference classification.”) It would have been obvious to one of ordinary skill in the art to have modified the method for classifying objects in the surroundings of a vehicle as taught by Surnilla as modified by deleting data that is not needed, as doing so reduces the overall cost of storing unnecessary data. Claims 18, 19, and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Surnilla as modified by Gyllenhammar as applied to claims 17 and 20 above, and further in view of Jiang et al., hereinafter Jiang, U.S. Pub. No. 2018/0248780 A1. Regarding claim 18, Surnilla as modified does not teach: “for the value pair, a memory location in a memory is determined, a value that is stored at the determined memory location being changed as a function of the values of the value pair.” Jiang teaches: “for the value pair, a memory location in a memory is determined, a value that is stored at the determined memory location being changed as a function of the values of the value pair” (Jiang, fig. 4, ¶ 0046: Jiang teaches storing “intermediate or final results of the processing as an output vector 71 comprised of 1 to M data elements” after “some processing” has been performed, where “intermediate or final results” disclose the changed values stored in a memory location.) Surnilla and Jiang both process large amounts of data; therefore, it would have been obvious to one of ordinary skill in the art to have modified the method for classifying objects in the surroundings of a vehicle as taught by Surnilla as modified by changing the value stored in memory as disclosed by Jiang in order to provide a system with a reduced amount of data in memory, reducing the overall cost of storing data.
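The claim 18 mechanism as mapped above, determining a memory location for a value pair and changing the value stored there as a function of the pair, can be sketched as a quantized lookup table. The dictionary-based memory, the bin size, and the count-increment update rule are hypothetical illustrations, not details from Jiang or the claims.

```python
# Sketch of the claim 18 mechanism as the rejection reads it: a memory
# location is determined for each value pair, and the value stored there is
# changed as a function of the pair's values. Here "memory" is a dictionary
# keyed by quantized pair coordinates and the update is a count increment;
# both choices are hypothetical illustrations.

BIN_SIZE = 0.1  # assumed quantization step

def memory_location(pair, bin_size=BIN_SIZE):
    """Map a (first, second) value pair to a discrete memory location."""
    return (int(pair[0] / bin_size), int(pair[1] / bin_size))

def record(memory, pair):
    """Change the value stored at the determined location as a function of
    the pair (here: increment an occurrence count)."""
    loc = memory_location(pair)
    memory[loc] = memory.get(loc, 0) + 1
    return loc

memory = {}
record(memory, (0.25, 0.91))
record(memory, (0.27, 0.95))  # quantizes to the same location as the first pair
record(memory, (0.55, 0.40))
```

Accumulating counts per location rather than storing every raw pair is one plausible reading of the cost-reduction rationale cited for the combination.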
Regarding claim 19, Surnilla as modified teaches: “the data-based model is validated as a function of the value” (Surnilla, ¶ 0017-¶ 0018, ¶ 0032-¶ 0033: Surnilla teaches determining whether the additional classification data from a nearby vehicle is reliable. If the additional classification data is determined to be reliable, the adjustment to the classification confidence score is increased, and if it is determined to be unreliable, the adjustment is reduced (¶ 0018), where the adjustment to the classification confidence score is dependent on the location data (see claim 17 above). If the classification confidence is above a threshold, the object is classified, and if it is below the threshold, the object is not classified (¶ 0033), where classifying or not classifying discloses “validated”). Surnilla as modified does not teach: a value “stored at the determined memory location.” Jiang teaches: a value “stored at the determined memory location” (Jiang, fig. 4, ¶ 0046: Jiang teaches storing “intermediate or final results of the processing as an output vector 71 comprised of 1 to M data elements,” disclosing the changed values stored in a memory location.) Surnilla and Jiang both process large amounts of data; therefore, it would have been obvious to one of ordinary skill in the art to have modified the method for classifying objects in the surroundings of a vehicle as taught by Surnilla as modified by changing the value stored in memory as disclosed by Jiang in order to provide a system with a reduced amount of data in memory, reducing the overall cost of storing data.
Regarding claim 21, Surnilla as modified teaches: “for each set from the plurality of sets, a value pair that includes a first value and a second value for the set is determined” (Surnilla, ¶ 0017: Surnilla teaches first classification data, a “first value,” and second classification data, a “second value,” disclosing a “set,” where many detected objects generate many sets of “first value” and “second value”). Surnilla as modified does not teach: “for each set, a memory location for the value pair determined for the set is determined, and a value stored at the determined memory location is changed as a function of the values of the value pair.” Jiang teaches: “for each set, a memory location for the value pair determined for the set is determined, and a value stored at the determined memory location is changed as a function of the values of the value pair” (Jiang, fig. 4, ¶ 0046: Jiang teaches storing “intermediate or final results of the processing as an output vector 71 comprised of 1 to M data elements” after “some processing” has been performed, where “intermediate or final results” disclose the changed values stored in a memory location). Surnilla and Jiang both process large amounts of data; therefore, it would have been obvious to one of ordinary skill in the art to have modified the method for classifying objects in the surroundings of a vehicle as taught by Surnilla as modified by changing the value stored in memory as disclosed by Jiang in order to provide a system with a reduced amount of data in memory, reducing the overall cost of storing data. Claims 27-29 are rejected under 35 U.S.C. 103 as being unpatentable over Surnilla in view of Gyllenhammar as applied to claims 14, 25, and 26, respectively, above, and further in view of Khadloya et al., hereinafter Khadloya, U.S. Pub. No. 2019/0303684 A1.
Regarding claim 27, Surnilla as modified does not teach: “the ccc value of the reference model and a difference between the ccc value of the reference model and the ccc value of the data-based model are stored in a two-dimensional array.” Khadloya teaches: “the ccc value of the reference model and a difference between the ccc value of the reference model and the ccc value of the data-based model are stored in a two-dimensional array” (Khadloya, fig. 7, ¶ 0126-¶ 0127: Khadloya teaches “the cost function can additionally or alternatively be based on a physical distance between the first object and the candidate objects,” where “Values corresponding to the cost functions can be computed and stored, such as in a two dimensional array in a memory circuit” (¶ 0126).) It would have been obvious to one of ordinary skill in the art to have modified the method for classifying objects in the surroundings of a vehicle as taught by Surnilla as modified by storing distance values (ccc values) in a two-dimensional array, as matrices are a versatile tool for data analysis to improve a system where “Various algorithms can be employed by computers (devices, machines, systems, etc.) to automatically and accurately detect objects” (Khadloya, ¶ 0073). Regarding claim 28: Claim 28 recites limitations analogous to claim 27 above and is therefore rejected on the same basis. Regarding claim 29: Claim 29 recites limitations analogous to claim 27 above and is therefore rejected on the same basis. Response to Arguments Applicant’s arguments (remarks) filed on 10/15/2025 have been fully considered. Regarding the Section 101 rejection, at pages 8-14 of Applicant’s arguments, Applicant argues: “In short, the lesson that Desjardins draws from Enfish is that "Examiners and panels should not evaluate claims at such a high level of generality." Id. at 9.
This lesson has particular salience in the field of artificial intelligence: "overbroad reasoning" that is typical of Section 101 rejections not informed by Enfish "is... troubling, because... [c]ategorically excluding AI innovations from patent protection in the United States jeopardizes America's leadership in this critical emerging technology." With this in mind, Desjardins applied Prong Two by looking at the specification for a description of a technological improvement and then looking at the claims to see if this improvement is reflected in the claims. In particular, after noting that "one improvement identified in the Specification is to 'effectively learn new tasks in succession whilst protecting knowledge about previous tasks'" (quoting from page 21 of the specification at issue), as well as noting that "[t]he Specification also recites that the claimed improvement allows artificial intelligence (AI) systems to 'us[e] less of their storage capacity' and enables 'reduced system complexity,'" the Desjardins ARP concluded that "at least the following limitation of independent claim 1 that reflects the improvement: 'adjust the first values of the plurality of parameters to optimize performance of the machine learning model on the second machine learning task while protecting performance of the machine learning model on the first machine learning task.'" (remarks, page 10). Examiner respectfully disagrees. Examiner consulted a superior regarding Applicant’s arguments with respect to the 101 rejection and was advised to uphold the 101 rejection. Examiner finds no machine learning, no training, and no adjusting of data within the claim language; therefore, Desjardins does not apply. The claims merely classify data, which can be done with a set of programming routines and patterns, that is, algorithms or instructions that amount to mathematical routines or sets of mental steps. Regardless of whether the abstract idea is novel, it is still an abstract idea.
Additionally, the claim language does not provide an intended use for the claimed invention. Examiner suggests, with the proper support, reciting a braking, steering, or other improvement due to the claimed invention. Regarding the Section 103 rejection of claims 14-15, 17, 20, and 22-26 based on Surnilla and Gyllenhammar, at pages 14-15 of Applicant’s remarks, Applicant argues: “The Patent Office asserts at page 12 of the Office Action that [0017] of Surnilla discloses the claimed reference model on account of [0038] supposedly disclosing "using a 'reference classification' to determine if the 'classification' for the object is correct." However, there is no disclosure in [0038] of a reference model. Moreover, the explanation provided in page 12 of the Office Action does not identify how [0038] discloses a reference model. Since the claim recites "reference classification" separately from "reference model," even if [0038] genuinely discloses the claimed reference classification, such a purported disclosure would not meet the claimed reference model. As to the claimed "measure of confidence," page 14 of the Office Action asserts that the "classification confidence" disclosed in [0033] of Surnilla meets this claim element. Even if the Patent Office were correct on this point, Surnilla would not disclose "wherein the measure of confidence is provided by the reference model" because Surnilla does not disclose a reference model. Moreover, even if the Patent Office were correct that Surnilla discloses a reference model and that the classification confidence that supposedly meets the claimed "measure of confidence" were provided by this alleged reference model in Surnilla, it still would not be the case that Surnilla discloses that its alleged measure of confidence ("classification confidence") "is based on a duration of the reference classification being greater than a threshold."” (remarks, page 14). Examiner respectfully disagrees.
Examiner has clarified the rejection regarding the “reference model” as taught by Surnilla in the rejections above. With respect to the amended language, “wherein the measure of confidence is based on a duration of the reference classification being greater than a threshold,” Examiner agrees with Applicant, and a new rejection is submitted above using Surnilla as modified by Gyllenhammar: Surnilla teaches a reliability score based on a distance between the detected object and the vehicle, and Gyllenhammar teaches that the classification occurs only if the objects are within a certain distance of the vehicle (see rejection above). Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Moustafa et al., U.S. Pub. No. 2022/0126864 A1, teaches more fully verifying data before using the data in an autonomous vehicle. Any inquiry concerning this communication or earlier communications from the examiner should be directed to Denise R Karavias whose telephone number is (469)295-9152. The examiner can normally be reached 7:00 - 3:00 M-F. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Arleen M. Vazquez, can be reached at 571-272-2619. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/DENISE R KARAVIAS/
Examiner, Art Unit 2857

/MICHAEL J DALBO/
Primary Examiner, Art Unit 2857

Prosecution Timeline

Jun 30, 2022
Application Filed
Oct 11, 2024
Non-Final Rejection — §101, §103
Feb 24, 2025
Response Filed
May 12, 2025
Final Rejection — §101, §103
Oct 15, 2025
Request for Continued Examination
Oct 21, 2025
Response after Non-Final Action
Mar 03, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12571867
NUCLEAR MAGNETIC RESONANCE ANALYSIS SYSTEMS AND METHODS
2y 5m to grant Granted Mar 10, 2026
Patent 12535809
MODULAR, GENERAL PURPOSE, AUTOMATED, ANOMALOUS DATA SYNTHESIZERS FOR ROTARY PLANTS
2y 5m to grant Granted Jan 27, 2026
Patent 12535374
SENSOR FOR PARALLEL MEASUREMENT OF PRESSURE AND ACCELERATION AND USE OF THE SENSOR IN A VEHICLE BATTERY
2y 5m to grant Granted Jan 27, 2026
Patent 12529625
IMPROVING DATA MONITORING AND QUALITY USING AI AND MACHINE LEARNING
2y 5m to grant Granted Jan 20, 2026
Patent 12461165
METHOD FOR BALANCING BATTERY MODULES
2y 5m to grant Granted Nov 04, 2025
Based on the 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
63%
Grant Probability
98%
With Interview (+34.9%)
3y 0m
Median Time to Grant
High
PTA Risk
Based on 134 resolved cases by this examiner. Grant probability derived from career allow rate.
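The headline projections above follow from simple arithmetic on the examiner's career statistics (84 allowances out of 134 resolved cases, plus the +34.9% interview lift). A minimal sketch of that derivation, assuming the tool simply adds the lift to the base allow rate (the dashboard's actual method is not documented here):

```python
# Sketch of how the dashboard's headline figures could be derived
# from the examiner's career statistics. The additive treatment of
# the interview lift is an assumption, not the tool's stated method.

def grant_probability(granted: int, resolved: int) -> float:
    """Career allow rate as a percentage."""
    return 100.0 * granted / resolved

base = grant_probability(84, 134)   # 84 granted / 134 resolved
with_interview = base + 34.9        # apply the +34.9% interview lift

print(round(base))            # -> 63 (matches the 63% shown)
print(round(with_interview))  # -> 98 (matches the 98% shown)
```

This reproduces the 63% and 98% figures shown above; the point is only that the two numbers are internally consistent, not that this is how the tool computes them.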
