Prosecution Insights
Last updated: April 19, 2026
Application No. 17/984,413

MULTI-MODALITY DATA ANALYSIS ENGINE FOR DEFECT DETECTION

Non-Final OA: §103, §112
Filed: Nov 10, 2022
Examiner: SANTOS, AARRON EDUARDO
Art Unit: 3663
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: NEC Laboratories America Inc.
OA Round: 4 (Non-Final)
Grant Probability: 45% (Moderate)
Expected OA Rounds: 4-5
Time to Grant: 3y 4m
Grant Probability with Interview: 58%

Examiner Intelligence

Career Allow Rate: 45% (grants 45% of resolved cases; 59 granted / 131 resolved; -7.0% vs TC avg)
Interview Lift: +12.8% on resolved cases with interview (moderate, approximately +13% lift with vs. without)
Typical Timeline: 3y 4m avg prosecution; 63 applications currently pending
Career History: 194 total applications across all art units
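The headline figures above can be cross-checked with a few lines of arithmetic. This is a sketch using only the numbers reported in this dashboard; the helper names are ours, not part of any analytics tool:

```python
def allow_rate(granted: int, resolved: int) -> float:
    """Career allowance rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

career = allow_rate(59, 131)     # 59 granted of 131 resolved -> ~45.0%
with_interview = career + 12.8   # dashboard's +12.8% interview lift

print(f"Career allow rate: {career:.1f}%")        # ~45.0%, the 45% shown
print(f"With interview:    {with_interview:.1f}%")  # ~57.8%, shown rounded to 58%
```

The 58% "with interview" figure is consistent with adding the reported +12.8-point lift to the 45% baseline and rounding.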

Statute-Specific Performance

§101: 12.0% (-28.0% vs TC avg)
§103: 58.6% (+18.6% vs TC avg)
§102: 5.3% (-34.7% vs TC avg)
§112: 21.5% (-18.5% vs TC avg)
Baseline is the Tech Center average estimate; based on career data from 131 resolved cases.

Office Action

§103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 06-03-2025 has been entered.

Response to Amendment

No claims have been amended. There are no new claims. Claims 1-6, 8-13, 15-19, and 21 are currently pending.

Interview Summary

The examiner thanks the applicant’s representative for the interview held on 09-18-2025, which provided clarification regarding the claimed function Defect Score = max (0, Residual A - Residual V). To the examiner’s best understanding, the function does not relate to an absolute value, wherein the farther the value is from 0, the greater the likelihood of a defect. Rather, A is real-time data (sensory/maneuvering) and V is predictive data. Whenever A > V there is a fault because the value is greater than 0, and whenever V > A there is no fault because the value is less than 0. Further, the function is not allowed to go below zero; otherwise, the evaluation itself is considered to be a defective score. Thus, all negative readings are disregarded because they fail to exceed a 0 threshold value.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim 6 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Claim 6 is rejected as failing to point out or define the claimed invention. The indefinite language or relationship is: “generating one or more keys, with the keys being matched with a query in a temporal attention stage”. The language as stated does not distinctly define what is meant by “generating one or more keys, with the keys being matched with a query in a temporal attention stage” or its essential quality, and does not clearly state the limitation of the claimed invention. Hereinafter, “generating one or more keys, with the keys being matched with a query in a temporal attention stage” will be interpreted as “labeling, classifying, type”.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-6, 8-13, and 15-19 are rejected under 35 U.S.C. 103 as being unpatentable over Avadhanam (US 20220012988 A1) in view of Qiu (US 20210048823 A1), and further in view of Li (US 20190191230 A1).

REGARDING CLAIM 1, Avadhanam discloses, collecting a multiple modality input data stream from a plurality of different types of vehicle sensors (Avadhanam: [0026]); extracting one or more features from the input data stream using a grid-based feature extractor (Avadhanam: [FIG. 3C]; [0044]); retrieving spatial attributes of objects positioned in any of a plurality of cells of the grid-based feature extractor (Avadhanam: [0027]; [0040]; [0042]); detecting one or more anomalies based on residual scores generated by each of cross attention-based anomaly detection and time-series-based anomaly detection (Avadhanam: [0111]; see at least [0130], [0161] for updating original training); identifying one or more defects based on a generated overall defect score determined by integrating the residual scores for the cross attention-based anomaly detection and the time-series-based anomaly detection being above a predetermined defect score threshold (Avadhanam: [0152-0153]; see at least [0130], [0161] for updating original training); and controlling operation of the vehicle based on the one or more defects identified (Avadhanam: [0154]; [0157]; [0164]).

Avadhanam does not explicitly disclose that the threshold is a zero or greater value. However, Avadhanam, at ¶[0111], discloses, “This confidence value enables the system to make further decisions regarding which detections should be considered as true positive detections rather than false positive detections” and “the system may set a threshold value for the confidence and consider only the detections exceeding the threshold value as true positive detections”. In considering the disclosure of a reference, it is proper to take into account not only specific teachings of the reference but also the inferences which one skilled in the art would reasonably be expected to draw therefrom. To the examiner’s best understanding, Avadhanam discloses the above claimed defect function in word form while leaving the threshold value to be chosen by one of ordinary skill.
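The examiner's reading of the claimed function, as recorded in the Interview Summary above, can be sketched in a few lines. The formula itself comes from the claim; the function and variable names are ours, for illustration only:

```python
def defect_score(residual_a: float, residual_v: float) -> float:
    """Claimed function: Defect Score = max(0, Residual A - Residual V).

    Per the Interview Summary, Residual A comes from real-time
    (sensory/maneuvering) data and Residual V from predictive data.
    When A > V the score is positive (a fault); when V > A the raw
    difference is negative and is clamped to zero, i.e., negative
    readings are disregarded because they fail to exceed the 0 threshold.
    """
    return max(0.0, residual_a - residual_v)

print(defect_score(1.0, 0.25))  # 0.75 -> A > V: positive score, a fault
print(defect_score(0.25, 1.0))  # 0.0  -> V > A: clamped to zero, no fault
```

This matches the interview characterization that the function is not an absolute value: the sign of A - V, not its magnitude alone, determines whether a defect registers.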
Avadhanam does not explicitly disclose, wherein the overall defect score is determined as: Defect Score = max (0, Residual A - Residual V), where Residual A represents output of the cross attention-based anomaly detection, and Residual V represents output of the time-series-based anomaly detection (examiner: comparing a prediction data against actual data). However, in the same field of endeavor, Qiu discloses, wherein the overall defect score is determined as: Defect Score = max (0, Residual A - Residual V), where Residual A represents output of the cross attention-based anomaly detection, and Residual V represents output of the time-series-based anomaly detection (examiner: comparing a prediction data against actual data) (Qiu: see [0017-0019] … (ii) determining maximum likelihood outcomes for modes of the multi-modal latent state associated with the initial observation; (iii) determining maximum likelihood … (ii) sampling possible outcomes from a belief distribution associated with the initial observation; (iii) sampling possible observations associated with the possible outcomes ... adjusting the current value of the continuous control based at least on a difference between the current observation and a possible value of the continuous observation associated with the particular node; see [0060] for matching precision), for the benefit of determining a control model within a state of uncertainties and imperfect sensors.

It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by Avadhanam to include comparing predictions to actual data as taught by Qiu. One of ordinary skill in the art would have been motivated to make this modification, with a reasonable expectation of success, in order to determine a control model within a state of uncertainties and imperfect sensors.
The examiner respectfully submits, Avadhanam, as modified, discloses, wherein the overall defect score is determined as: Defect Score = max (0, Residual A - Residual V), where Residual A represents output of the cross attention-based anomaly detection, and Residual V represents output of the time-series-based anomaly detection (examiner: comparing a prediction data against actual data) (see Qiu above). However, should it be determined that Avadhanam, as modified, fails to disclose, wherein the overall defect score is determined as: Defect Score = max (0, Residual A - Residual V), where Residual A represents output of the cross attention-based anomaly detection, and Residual V represents output of the time-series-based anomaly detection (examiner: comparing a prediction data against actual data), in the same field of endeavor, Li discloses, wherein the overall defect score is determined as: Defect Score = max (0, Residual A - Residual V), where Residual A represents output of the cross attention-based anomaly detection, and Residual V represents output of the time-series-based anomaly detection (examiner: comparing a prediction data against actual data) (Li: [0074] a prediction model for video may be used to remove intra-frame or inter-frame redundancy. The residual data, which is the difference between the actual data and the predicted data, may be mostly close to zero; [0076] the prediction errors, i.e. the difference between the prediction of either model and the actual data. If image compression encoder 215B2 determines that a frame is an I-frame, difference unit 630 determines a difference 631 (i.e., the residual) between the prediction 624 generated by intra-frame prediction model 620 and the actual frame data; [0078] Object database may include two categories of objects: pre-defined static objects and real-time adapted objects. An object-searching algorithm 730 may first search the real-time objects and then the predefined objects. 
An update algorithm 735 may update the real-time objects. The output of the intra-frame prediction model 620 may comprise a predicted block which best matches objects from the database or the predicted block output by linear prediction model 720, based on whichever minimizes total prediction errors. As is known in the art, a segment may be an object with specific delineations and bounds, and a partition may be a portion of the image data that includes multiple objects that is used to calculate compression across two segments), for the benefit of determining a confidence using a prediction model when real-time data is limited.

It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by a modified Avadhanam to include comparing predictions to actual data as taught by Li. One of ordinary skill in the art would have been motivated to make this modification, with a reasonable expectation of success, in order to determine a confidence using a prediction model when real-time data is limited.

REGARDING CLAIM 2, Avadhanam, as modified, remains as applied above to claim 1, and further, Avadhanam also discloses, the cross attention-based anomaly detection utilizes the spatial attributes of the objects (Avadhanam: [0027]) and vehicle system data (Avadhanam: [0026]; [0045-0046]; [0055]), and the time-series-based anomaly detection utilizes vehicle system data during the detecting (Avadhanam: [0027]; [0164]).

REGARDING CLAIM 3, Avadhanam, as modified, remains as applied above to claim 1, and further, Avadhanam also discloses, the objects are environmental objects representing one or more hazardous conditions (Avadhanam: [0024]; [0045]; [0070]; [0156]).
REGARDING CLAIM 4, Avadhanam, as modified, remains as applied above to claim 1, and further, Avadhanam also discloses, the grid-based feature extractor includes nine of the cells, with a vehicle being positioned in a center cell of the grid-based feature extractor (Avadhanam: [0070] location of other vehicles (e.g., an occupancy grid); [0076] to help identify forward-facing paths and obstacles, as well aid in, with the help of one or more controllers 536 and/or control SoCs, providing information critical to generating an occupancy grid and/or determining the preferred vehicle paths; [0079] update the occupancy grid, as well as to generate side impact collision warnings [FIG. 3C]). In this case, applying the Broadest Reasonable Interpretation (BRI) doctrine, an "object" being centered is interpreted as parallel to a "vehicle" being centered. REGARDING CLAIM 5, Avadhanam, as modified, remains as applied above to claim 1, and further, Avadhanam also discloses, additional defects are continuously detected in real-time during operation of the vehicle by iteratively repeating the collecting, the extracting, the retrieving, the detecting, and the identifying during the operation of the vehicle (Avadhanam: [0070]; [0078]; [0086]; [0111]). 
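The nine-cell, ego-centered grid recited in claim 4 above can be pictured with a minimal sketch. This is our own illustration of the claim language, not code from Avadhanam or the application; the object labels are hypothetical:

```python
def make_grid():
    """3x3 occupancy grid for the claimed grid-based feature extractor."""
    grid = [[set() for _ in range(3)] for _ in range(3)]
    grid[1][1].add("vehicle")  # the vehicle occupies the center cell
    return grid

def place(grid, row, col, label):
    """Record a spatial attribute: an object positioned in one of the cells."""
    grid[row][col].add(label)

g = make_grid()
place(g, 0, 1, "pedestrian")  # object in the cell ahead of the vehicle
place(g, 1, 2, "cyclist")     # object in the cell to the vehicle's right

for row in g:
    print([sorted(cell) for cell in row])
```

Under this picture, "retrieving spatial attributes of objects positioned in any of a plurality of cells" amounts to reading out the contents of the eight cells surrounding the center.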
REGARDING CLAIM 6, as best understood, Avadhanam, as modified, remains as applied above to claim 1, and further, Avadhanam also discloses, generating environmental attention weights in an attention computation stage by encoding received environmental data (Avadhanam: see [0012-0013, 0036-0037] risk assessment and attention; [0043] the pedestrian is at risk because they are very close to the edge of the road but lack the intention to cross the road) and generating one or more keys (Avadhanam: [0029] A machine learning model may be trained with a training dataset including a myriad of body poses, body types (including age variance), clothing or other articles, gestures, accessories, or environmental attributes … [0030] The processing circuitry may further determine the brand and type of clothing worn by the teenage boy as well as a backpack; see [0032] for classification of persons and objects via machine learning and [0033-0034, 0042] for further classifications), with the keys being matched with a query in a temporal attention stage (Avadhanam: [0045] a machine learning model may be trained with a dataset including combinations of trajectories and attribute classifications for various objects. A machine learning model may be further trained with safety data associated with each combination. The safety data may, for each combination, specify data (e.g., specific velocities and directionality of vehicles and objects) relating to a likelihood or potential for collision outcomes and other dangerous outcomes. The machine learning model may be able to determine a risk level associated for each of these combinations. In some embodiments, the processing circuitry may calculate the risk level based on a look-up table having corresponding risk level output based on trajectory and the one or more attributes); cross-applying the environmental attention weights to historical system data of the vehicle (Avadhanam: see [0045, 0048-0052] for applying weights by referencing training data (i.e., historical data)) to generate a prediction of a value at a next timestep (Avadhanam: see "table" cited above and pedestrian trajectory, risk level, and determining safest maneuvering; [0056] association between magnitude and risk level may be based on a predefined table stored in memory); and training a model by adjusting one or more parameters for the prediction to minimize a loss function between a real value and the predicted value (Avadhanam: [0079] side-view cameras may be used for surround view, providing information used to create and update the occupancy grid, as well as to generate side impact collision warnings … [0080] Cameras with a field of view that include portions of the environment to the rear of the vehicle 500 (e.g., rear-view cameras) may be used for park assistance, surround view, rear collision warnings, and creating and updating the occupancy grid ... [0130] to train and/or update neural networks based on input (e.g., sensor data) from sensors of the vehicle; also see [0161-0164]).

In this case, “attention weights” is interpreted as importance or relevance. In this case, “temporal attention stage” is not clearly defined in the specification and is thus interpreted with its common meaning as determining a relevance at any step in a sequence and not as a single stage.
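For readers unfamiliar with the key/query language at issue in claim 6, the claimed sequence of steps (encode environmental data into keys, match the keys against a query in a temporal attention stage, weight historical system data to predict the next timestep, and score the prediction with a loss) can be sketched generically. This is a textbook-style attention sketch under our own assumptions, not an assertion about how the application or any cited reference implements it; all values are made up:

```python
import math

def softmax(xs):
    """Normalize raw match scores into attention weights that sum to 1."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def temporal_attention(query, keys, history):
    """Match each key against the query, then weight historical values."""
    scores = [dot(k, query) for k in keys]   # keys matched with a query
    weights = softmax(scores)                # environmental attention weights
    prediction = dot(weights, history)       # predicted value at next timestep
    return prediction, weights

keys = [[0.2, 0.1], [0.9, -0.3], [0.4, 0.5]]   # encoded environmental data
query = [1.0, 0.5]
history = [0.8, 1.1, 0.9]                      # historical vehicle system data

pred, w = temporal_attention(query, keys, history)
loss = (pred - 1.0) ** 2   # loss between a real value and the predicted value
print(round(sum(w), 6))    # 1.0 — the attention weights form a distribution
```

Training, in this picture, would adjust the parameters that produce the keys and query so as to minimize the loss; the prediction is always a weighted average of the historical values.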
REGARDING CLAIM 8, Avadhanam discloses, one or more processors operatively coupled to a non-transitory computer-readable storage medium, the processors being configured for (Avadhanam: [0082]): collecting a multiple modality input data stream from a plurality of different types of vehicle sensors (Avadhanam: [0026]); extracting one or more features from the input data stream using a grid-based feature extractor (Avadhanam: [FIG. 3C]; [0044]); retrieving spatial attributes of objects positioned in any of a plurality of cells of the grid-based feature extractor (Avadhanam: [0027]; [0040]; [0042]); detecting one or more anomalies based on residual scores generated by each of cross attention-based anomaly detection and time-series-based anomaly detection (Avadhanam: [0111]; see at least [0130], [0161] for updating original training); identifying one or more defects based on a generated overall defect score determined by integrating the residual scores for the cross attention-based anomaly detection and the time-series based anomaly detection being above a predetermined defect score threshold (Avadhanam: [0152-0153]; see at least [0130], [0161] for updating original training); and controlling operation of the vehicle based on the one or more defects identified (Avadhanam: [0154]; [0157]; [0164]). Avadhanam does not explicitly disclose that the threshold is a zero or greater value. However, Avadhanam, at ¶[0111], discloses, “This confidence value enables the system to make further decisions regarding which detections should be considered as true positive detections rather than false positive detections” and “the system may set a threshold value for the confidence and consider only the detections exceeding the threshold value as true positive detections”. 
In considering the disclosure of a reference, it is proper to take into account not only specific teachings of the reference but also the inferences which one skilled in the art would reasonably be expected to draw therefrom. To the examiner’s best understanding, Avadhanam discloses the above claimed defect function in word form while leaving the threshold value to be chosen by one of ordinary skill. Avadhanam does not explicitly disclose, wherein the overall defect score is determined as: Defect Score = max (0, Residual A - Residual V), where Residual A represents output of the cross attention-based anomaly detection, and Residual V represents output of the time-series-based anomaly detection (examiner: comparing a prediction data against actual data). However, in the same field of endeavor, Qiu discloses, wherein the overall defect score is determined as: Defect Score = max (0, Residual A - Residual V), where Residual A represents output of the cross attention-based anomaly detection, and Residual V represents output of the time-series-based anomaly detection (examiner: comparing a prediction data against actual data) (Qiu: see [0017-0019] … (ii) determining maximum likelihood outcomes for modes of the multi-modal latent state associated with the initial observation; (iii) determining maximum likelihood … (ii) sampling possible outcomes from a belief distribution associated with the initial observation; (iii) sampling possible observations associated with the possible outcomes ... adjusting the current value of the continuous control based at least on a difference between the current observation and a possible value of the continuous observation associated with the particular node; See [0060] for matching precision), for the benefit of determining a control model within a state of uncertainties and imperfect sensors. 
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by Avadhanam to include comparing predictions to actual data as taught by Qiu. One of ordinary skill in the art would have been motivated to make this modification, with a reasonable expectation of success, in order to determine a control model within a state of uncertainties and imperfect sensors.

The examiner respectfully submits, Avadhanam, as modified, discloses, wherein the overall defect score is determined as: Defect Score = max (0, Residual A - Residual V), where Residual A represents output of the cross attention-based anomaly detection, and Residual V represents output of the time-series-based anomaly detection (examiner: comparing a prediction data against actual data) (see Qiu above). However, should it be determined that Avadhanam, as modified, fails to disclose, wherein the overall defect score is determined as: Defect Score = max (0, Residual A - Residual V), where Residual A represents output of the cross attention-based anomaly detection, and Residual V represents output of the time-series-based anomaly detection (examiner: comparing a prediction data against actual data), in the same field of endeavor, Li discloses, wherein the overall defect score is determined as: Defect Score = max (0, Residual A - Residual V), where Residual A represents output of the cross attention-based anomaly detection, and Residual V represents output of the time-series-based anomaly detection (examiner: comparing a prediction data against actual data) (Li: [0074] a prediction model for video may be used to remove intra-frame or inter-frame redundancy. The residual data, which is the difference between the actual data and the predicted data, may be mostly close to zero; [0076] the prediction errors, i.e. the difference between the prediction of either model and the actual data.
If image compression encoder 215B2 determines that a frame is an I-frame, difference unit 630 determines a difference 631 (i.e., the residual) between the prediction 624 generated by intra-frame prediction model 620 and the actual frame data; [0078] Object database may include two categories of objects: pre-defined static objects and real-time adapted objects. An object-searching algorithm 730 may first search the real-time objects and then the predefined objects. An update algorithm 735 may update the real-time objects. The output of the intra-frame prediction model 620 may comprise a predicted block which best matches objects from the database or the predicted block output by linear prediction model 720, based on whichever minimizes total prediction errors. As is known in the art, a segment may be an object with specific delineations and bounds, and a partition may be a portion of the image data that includes multiple objects that is used to calculate compression across two segments), for the benefit of determining a confidence using a prediction model when real-time data is limited.

It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by a modified Avadhanam to include comparing predictions to actual data as taught by Li. One of ordinary skill in the art would have been motivated to make this modification, with a reasonable expectation of success, in order to determine a confidence using a prediction model when real-time data is limited.

REGARDING CLAIM 9, Avadhanam, as modified, remains as applied above to claim 8, and further, Avadhanam also discloses, the cross attention-based anomaly detection utilizes the spatial attributes of the objects (Avadhanam: [0027]) and vehicle system data (Avadhanam: [0026]; [0045-0046]; [0055]), and the time-series-based anomaly detection utilizes vehicle system data during the detecting (Avadhanam: [0027]; [0164]).
REGARDING CLAIM 10, Avadhanam, as modified, remains as applied above to claim 8, and further, Avadhanam also discloses, the objects are environmental objects representing one or more hazardous conditions (Avadhanam: [0024]; [0045]; [0070]; [0156]). REGARDING CLAIM 11, Avadhanam, as modified, remains as applied above to claim 8, and further, Avadhanam also discloses, the grid-based feature extractor includes nine of the cells, with a vehicle being positioned in a center cell of the grid-based feature extractor (Avadhanam: [0070]; [0076]; [0079]). REGARDING CLAIM 12, Avadhanam, as modified, remains as applied above to claim 8, and further, Avadhanam also discloses, additional defects are continuously detected in real-time during operation of the vehicle by iteratively repeating the collecting, the extracting, the retrieving, the detecting, and the identifying during the operation of the vehicle (Avadhanam: [0070]; [0078]; [0086]; [0111]). REGARDING CLAIM 13, Avadhanam, as modified, remains as applied above to claim 8, and further, Avadhanam also discloses, generating environmental attention weights in an attention computation stage by encoding received environmental data (Avadhanam: [0012]; [0013]; [0036-0037]; [0043]) and generating one or more keys (Avadhanam: [0045]), with the keys being matched with a query in a temporal attention stage (Avadhanam: [0056]); cross-applying the environmental attention weights to historical system data of the vehicle to generate a prediction of a value at a next timestep (Avadhanam: see "table" cited above and pedestrian trajectory, risk level, and determining safest maneuvering); and training a model by adjusting one or more parameters for the prediction to minimize a loss function between a real value and the predicted value (Avadhanam: [0130]; [0156]). 
REGARDING CLAIM 15, Avadhanam discloses, a computer readable program operatively coupled to a processor device for defect detection for vehicle operations, wherein the computer readable program when executed on a computer causes the computer to perform the steps of (Avadhanam: [0082]): collecting a multiple modality input data stream from a plurality of different types of vehicle sensors (Avadhanam: [0026]); extracting one or more features from the input data stream using a grid-based feature extractor (Avadhanam: [FIG. 3C]; [0044]); retrieving spatial attributes of objects positioned in any of a plurality of cells of the grid-based feature extractor (Avadhanam: [0027]; [0040]; [0042]); detecting one or more anomalies based on residual scores generated by each of cross attention-based anomaly detection and time-series-based anomaly detection (Avadhanam: [0111]; see at least [0130], [0161] for updating original training); identifying one or more defects based on a generated overall defect score determined by integrating the residual scores for the cross attention-based anomaly detection and the time-series based anomaly detection being above a predetermined defect score threshold (Avadhanam: [0152-0153]; see at least [0130], [0161] for updating original training); and controlling operation of the vehicle based on the one or more defects identified (Avadhanam: [0154]; [0157]; [0164]). Avadhanam does not explicitly disclose that the threshold is a zero or greater value. However, Avadhanam, at ¶[0111], discloses, “This confidence value enables the system to make further decisions regarding which detections should be considered as true positive detections rather than false positive detections” and “the system may set a threshold value for the confidence and consider only the detections exceeding the threshold value as true positive detections”. 
In considering the disclosure of a reference, it is proper to take into account not only specific teachings of the reference but also the inferences which one skilled in the art would reasonably be expected to draw therefrom. To the examiner’s best understanding, Avadhanam discloses the above claimed defect function in word form while leaving the threshold value to be chosen by one of ordinary skill. Avadhanam does not explicitly disclose, wherein the overall defect score is determined as: Defect Score = max (0, Residual A - Residual V), where Residual A represents output of the cross attention-based anomaly detection, and Residual V represents output of the time-series-based anomaly detection (examiner: comparing a prediction data against actual data). However, in the same field of endeavor, Qiu discloses, wherein the overall defect score is determined as: Defect Score = max (0, Residual A - Residual V), where Residual A represents output of the cross attention-based anomaly detection, and Residual V represents output of the time-series-based anomaly detection (examiner: comparing a prediction data against actual data) (Qiu: see [0017-0019] … (ii) determining maximum likelihood outcomes for modes of the multi-modal latent state associated with the initial observation; (iii) determining maximum likelihood … (ii) sampling possible outcomes from a belief distribution associated with the initial observation; (iii) sampling possible observations associated with the possible outcomes ... adjusting the current value of the continuous control based at least on a difference between the current observation and a possible value of the continuous observation associated with the particular node; See [0060] for matching precision), for the benefit of determining a control model within a state of uncertainties and imperfect sensors. 
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by Avadhanam to include comparing predictions to actual data as taught by Qiu. One of ordinary skill in the art would have been motivated to make this modification, with a reasonable expectation of success, in order to determine a control model within a state of uncertainties and imperfect sensors.

The examiner respectfully submits, Avadhanam, as modified, discloses, wherein the overall defect score is determined as: Defect Score = max (0, Residual A - Residual V), where Residual A represents output of the cross attention-based anomaly detection, and Residual V represents output of the time-series-based anomaly detection (examiner: comparing a prediction data against actual data) (see Qiu above). However, should it be determined that Avadhanam, as modified, fails to disclose, wherein the overall defect score is determined as: Defect Score = max (0, Residual A - Residual V), where Residual A represents output of the cross attention-based anomaly detection, and Residual V represents output of the time-series-based anomaly detection (examiner: comparing a prediction data against actual data), in the same field of endeavor, Li discloses, wherein the overall defect score is determined as: Defect Score = max (0, Residual A - Residual V), where Residual A represents output of the cross attention-based anomaly detection, and Residual V represents output of the time-series-based anomaly detection (examiner: comparing a prediction data against actual data) (Li: [0074] a prediction model for video may be used to remove intra-frame or inter-frame redundancy. The residual data, which is the difference between the actual data and the predicted data, may be mostly close to zero; [0076] the prediction errors, i.e. the difference between the prediction of either model and the actual data.
If image compression encoder 215B2 determines that a frame is an I-frame, difference unit 630 determines a difference 631 (i.e., the residual) between the prediction 624 generated by intra-frame prediction model 620 and the actual frame data; [0078] the object database may include two categories of objects: pre-defined static objects and real-time adapted objects. An object-searching algorithm 730 may first search the real-time objects and then the pre-defined objects. An update algorithm 735 may update the real-time objects. The output of the intra-frame prediction model 620 may comprise the predicted block which best matches objects from the database, or the predicted block output by linear prediction model 720, based on whichever minimizes total prediction errors. As is known in the art, a segment may be an object with specific delineations and bounds, and a partition may be a portion of the image data that includes multiple objects and is used to calculate compression across two segments), for the benefit of determining a confidence using a prediction model when real-time data is limited.

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by the modified Avadhanam to include comparing predictions against actual data as taught by Li. One of ordinary skill in the art would have been motivated to make this modification, with a reasonable expectation of success, in order to determine a confidence using a prediction model when real-time data is limited.

REGARDING CLAIM 16, Avadhanam, as modified, remains as applied above to claim 15, and further, Avadhanam also discloses that the cross attention-based anomaly detection utilizes the spatial attributes of the objects (Avadhanam: [0027]) and vehicle system data (Avadhanam: [0026]; [0045-0046]; [0055]), and the time-series-based anomaly detection utilizes vehicle system data during the detecting (Avadhanam: [0027]; [0164]).
REGARDING CLAIM 17, Avadhanam, as modified, remains as applied above to claim 15, and further, Avadhanam also discloses that the grid-based feature extractor includes nine of the cells, with a vehicle being positioned in a center cell of the grid-based feature extractor (Avadhanam: [0070]; [0076]; [0079]).

REGARDING CLAIM 18, Avadhanam, as modified, remains as applied above to claim 15, and further, Avadhanam also discloses that additional defects are continuously detected in real time during operation of the vehicle by iteratively repeating the collecting, the extracting, the retrieving, the detecting, and the identifying during the operation of the vehicle (Avadhanam: [0070]; [0078]; [0086]; [0111]).

REGARDING CLAIM 19, Avadhanam, as modified, remains as applied above to claim 15, and further, Avadhanam also discloses generating environmental attention weights in an attention computation stage by encoding received environmental data (Avadhanam: [0012]; [0013]; [0036-0037]; [0043]) and generating one or more keys (Avadhanam: [0045]), with the keys being matched with a query in a temporal attention stage (Avadhanam: [0056]); cross-applying the environmental attention weights to historical system data of the vehicle to generate a prediction of a value at a next timestep (Avadhanam: see the "table" cited above and the pedestrian trajectory, risk level, and determination of the safest maneuvering); and training a model by adjusting one or more parameters for the prediction to minimize a loss function between a real value and the predicted value (Avadhanam: [0130]; [0156]).

Claim 21 is rejected under 35 U.S.C. 103 as being unpatentable over Avadhanam (US 20220012988 A1) in view of Qiu (US 20210048823 A1) and Li (US 20190191230 A1) as applied to claim 4 above, and further in view of Kim (KR 20190043035 A).
REGARDING CLAIM 21, Avadhanam, as modified, remains as applied above to claim 4, and further, Avadhanam does not explicitly disclose that the feature extractor generates a fixed number of features for the cells regardless of the number of objects detected. However, in the same field of endeavor, Kim discloses that the feature extractor generates a fixed number of features for the cells regardless of the number of objects detected (Kim: see at least [0023-0024] for threshold points per cell and determining and removing background points), for the benefit of clustering objects to remove background objects. In this case, "points" are interpreted as parallel to "features". It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by the modified Avadhanam to include the threshold points and removal of background points taught by Kim. One of ordinary skill in the art would have been motivated to make this modification, with a reasonable expectation of success, in order to cluster objects to remove background objects.

Response to Arguments

Applicant’s arguments, beginning on page 10, filed 06-03-2025, with respect to the rejection of record under 35 USC §112(a) have been fully considered and are persuasive. The rejection of record under 35 USC §112(a) has been withdrawn.

Applicant’s arguments with respect to the rejection of the independent claims and claim 21 under 35 USC §103 (obviousness) have been considered but are moot because the new ground of rejection does not rely on the reference combination applied in the prior rejection of record for the matter specifically challenged in the argument.

Applicant’s arguments regarding the rejection of claim 4 under 35 USC §103 (obviousness) have been fully considered but are not persuasive. As cited above, Avadhanam discloses a grid feature extractor ([0070]; [0076]; [0079]; FIG. 3C), wherein a centered "pedestrian" is interpreted as parallel to a centered "vehicle", and a plurality of cells. The examiner respectfully submits that, to the examiner’s best understanding, "nine cells" is a routine customization. Where the general conditions of a claim are disclosed in the prior art, it is not inventive to expand (or, in this case, contract) the number of cells. In this case, Avadhanam disclosing more than nine cells is interpreted as a parallel disclosure.

Applicant’s arguments regarding claim 6 have been fully considered but are not persuasive. As cited above, Avadhanam discloses "attention weights" (see risk, attention, and relevance levels above), keys (see class, types, labels), cross-applying (see comparing historical data, query, and matching), and determining a prediction value at a next step (see predicting the magnitude of risk by matching a table stored in memory).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure:
Gonzalez (US 20190135300 A1)
Bush (US 20200307561 A1)
Clayton (US 20170206464 A1)
Hawley (US 20100305806 A1)
Wang (US 20230139772 A1)

Any inquiry concerning this communication or earlier communications from the examiner should be directed to AARRON SANTOS, whose telephone number is (571) 272-5288. The examiner can normally be reached Monday - Friday, 8:00am - 4:30pm.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, ANGELA ORTIZ, can be reached at (571) 272-1206. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/A.S./ Examiner, Art Unit 3663
/ANGELA Y ORTIZ/ Supervisory Patent Examiner, Art Unit 3663

Prosecution Timeline

Nov 10, 2022: Application Filed
Sep 30, 2024: Non-Final Rejection (§103, §112)
Dec 11, 2024: Interview Requested
Jan 03, 2025: Applicant Interview (Telephonic)
Jan 03, 2025: Examiner Interview Summary
Jan 06, 2025: Response Filed
Feb 19, 2025: Final Rejection (§103, §112)
May 22, 2025: Interview Requested
May 28, 2025: Applicant Interview (Telephonic)
May 28, 2025: Examiner Interview Summary
Jun 03, 2025: Request for Continued Examination
Jun 09, 2025: Response after Non-Final Action
Sep 18, 2025: Non-Final Rejection (§103, §112)
Sep 18, 2025: Examiner Interview (Telephonic)
Dec 12, 2025: Interview Requested
Dec 18, 2025: Examiner Interview Summary
Dec 18, 2025: Response Filed
Dec 18, 2025: Applicant Interview (Telephonic)
Feb 20, 2026: Non-Final Rejection (§103, §112) (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12482356: TRANSPORT MANAGEMENT DEVICE, TRANSPORT MANAGEMENT METHOD, AND TRANSPORT SYSTEM (granted Nov 25, 2025; 2y 5m to grant)
Patent 12454311: STEER-BY-WIRE STEERING DEVICE AND METHOD FOR CONTROLLING THE SAME (granted Oct 28, 2025; 2y 5m to grant)
Patent 12428170: METHODS AND APPARATUS FOR AUTOMATIC DRONE RESUPPLY OF A PRODUCT TO AN INDIVIDUAL BASED ON GPS LOCATION, WITHOUT HUMAN INTERVENTION (granted Sep 30, 2025; 2y 5m to grant)
Patent 12427974: MULTIPLE MODE BODY SWING COLLISION AVOIDANCE SYSTEM AND METHOD (granted Sep 30, 2025; 2y 5m to grant)
Patent 12372360: Methods and Systems for Generating Alternative Routes (granted Jul 29, 2025; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 4-5
Grant Probability: 45%
With Interview: 58% (+12.8%)
Median Time to Grant: 3y 4m
PTA Risk: High
Based on 131 resolved cases by this examiner. Grant probability derived from career allow rate.
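The "with interview" figure is consistent with a simple additive lift applied to the base rate; a minimal sketch of that arithmetic (the additive model is an assumption inferred from the reported numbers, not a documented methodology):

```python
# Assumption: the "with interview" probability is the base grant
# probability (career allow rate, 59 granted / 131 resolved ~ 45%)
# plus the observed interview lift.
base_grant_probability = 0.45   # career allow rate
interview_lift = 0.128          # +12.8% lift in resolved cases with interview

with_interview = base_grant_probability + interview_lift
print(f"{with_interview:.1%}")  # 57.8%, reported as ~58%
```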
