Prosecution Insights
Last updated: April 19, 2026
Application No. 18/528,445

Plausibility And Consistency Checkers For Vehicle Apparatus Cameras

Final Rejection: §103, §DP
Filed: Dec 04, 2023
Examiner: XIE, EDGAR WANGSHU
Art Unit: 2433
Tech Center: 2400 — Computer Networks
Assignee: Qualcomm Incorporated
OA Round: 2 (Final)
Grant Probability: 82% (Favorable)
Projected OA Rounds: 3-4
Projected Time to Grant: 2y 6m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 82% (14 granted / 17 resolved; +24.4% vs TC avg, above average)
Interview Lift: +37.5% across resolved cases with interview
Avg Prosecution: 2y 6m (typical timeline)
Currently Pending: 15
Total Applications: 32 (across all art units)

Statute-Specific Performance

§101: 15.3% (-24.7% vs TC avg)
§103: 58.0% (+18.0% vs TC avg)
§102: 8.5% (-31.5% vs TC avg)
§112: 11.9% (-28.1% vs TC avg)
TC averages are estimates • Based on career data from 17 resolved cases
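The headline examiner figures above are simple ratios over the career counts. A minimal sketch, in Python, reproduces them; note the TC-average baseline (0.5795) is back-solved here from the reported +24.4% delta, so it is an assumption for illustration rather than a published number:

```python
# Reproduce the examiner summary stats from the raw career counts shown above.
granted, resolved = 14, 17
allow_rate = granted / resolved          # ~0.8235, displayed as 82%

# Baseline implied by the reported +24.4% delta (assumption, not a published figure).
tc_average = 0.5795
delta = allow_rate - tc_average

print(f"Career allow rate: {allow_rate:.0%} ({delta:+.1%} vs TC avg)")
# → Career allow rate: 82% (+24.4% vs TC avg)
```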

Office Action

§103, §DP
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Detailed Action

Claims filed on 11/18/2025 for patent application 18/528,445 have been acknowledged. Claims 1-20 are currently pending and have been considered below. Claims 1, 10, and 19 are independent claims. Claims 1-8, 10-17, and 19-20 have been amended. No new claims have been added.

Applicant’s request to hold the non-statutory double patenting rejection in abeyance, on page 11 of the Remarks filed on 11/18/2025, is acknowledged. Thus, the double patenting rejection is reproduced below. Applicant’s amendments to address the claim objections in the non-final rejection are acknowledged.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 11/18/2025 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Response to Arguments

Applicant’s arguments filed on 11/18/2025 have been fully considered but they are not persuasive, for the reasons set forth below.

On page 14 of the remarks, filed on 11/18/2025, applicant argues: “However, Goel relates to a "training" of a model. Therefore, rather than "obtain[ing] a plurality of image processing outputs associated with the image," Goel (individually or when combined with Koseki) at best obtains "image processing outputs" from the disclosed "outputs" and the "ground truth information," e.g. different images.”

The examiner respectfully disagrees.
Firstly, while Goel discusses training of a model within the specification, the abstract of Goel states that the primary inventive concept relates to: “A machine-learning (ML) architecture for determining three or more outputs, such as a two and/or three-dimensional region of interest, semantic segmentation, direction logits, depth data, and/or instance segmentation associated with an object in an image.” (Goel, Abstract)

Secondly, Goel explicitly teaches the claim limitation of "obtain[ing] a plurality of image processing outputs associated with the image," and does not at best obtain "image processing outputs" from the disclosed "outputs" and the "ground truth information," e.g. different images. Goel teaches, in ¶[0037], “The ML architecture 114 discussed herein may be configured to receive an image and output a two-dimensional region of interest (ROI) associated with an object in the image, a semantic segmentation associated with the image, directional data associated with the image (e.g., which may comprise a vector per pixel pointing to the center of a corresponding object), depth data associated with the image (which may be in the form of a depth bin and an offset), an instance segmentation associated with the object, and/or a three-dimensional ROI.”

In summary, Goel’s teachings are not limited to training an ML architecture, and Goel explicitly teaches obtaining a plurality of image processing outputs associated with an image input, rather than different images. The 35 U.S.C. 103 rejection of claims 1, 10, and 19 has been updated with the newly cited paragraphs from Goel et al.
On pages 14-15 of the remarks, filed on 11/18/2025, applicant argues: “Koseki (individually or when combined with Goel) does not disclose the features "detecting an attack on the camera based on the inconsistency between the plurality of image processing outputs"” and “as a threshold matter, Koseki's "score value" is not an "image processing output" at least because "reliability" is not an "image processing output.”

Upon further review of the cited reference, the examiner respectfully agrees with this argument. Therefore, the 35 U.S.C. 103 rejection of claims 1, 10, and 19 has been updated with a new prior art reference of Hofman et al. Thus, the 35 U.S.C. 103 rejection is maintained.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission.
For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 1, 10, and 19 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claim 1 of copending Application No. 18/470,924 (reference application). Although the claims at issue are not identical, they are not patentably distinct from each other because US Patent Application No. 18/470,924 discloses the following limitations of Claims 1, 10, and 19, as shown in the comparison below:

Current Application No. 18/528,445, Claim 1: A method for detecting vision attacks performed by a processing system on an apparatus, the method comprising: processing an image received from a camera of the apparatus using a plurality of trained image processing models to obtain a plurality of image processing outputs; performing a plurality of consistency checks on the plurality of image processing outputs, wherein a consistency check of the plurality of consistency checks compares each of the plurality of image processing outputs to detect an inconsistency; detecting an attack on the camera based on the inconsistency; and performing a mitigation action in response to recognizing the attack.

Reference Application No. 18/470,924, Claim 1: A method for detecting vision attacks performed by a processing system on an apparatus, the method comprising: receiving a plurality of images from one or more cameras of the apparatus; performing a plurality of different processes on the plurality of images to detect different types of image inconsistencies; using results of the plurality of different processes on the plurality of images to recognize a vision attack; and performing one or more mitigation actions in response to recognizing the vision attack.

This is a provisional nonstatutory double patenting rejection because the patentably indistinct claims have not in fact been patented.
Claim Objections

Claims 2, 11, and 20 are objected to because of the following informalities. The following limitations, in bold, are recited in variations that may alter claim interpretation while creating grammatical errors; appropriate correction or explanatory remarks to provide clarity are required:

Claim 2 recites, “performing object detection processing on the image using a trained object detection model to identify objects image and …” Whereas claim 11 recites, “perform object detection processing on the image using a trained object detection model to identify objects in the image and …” Whereas claim 20 recites, “performing object detection processing on the image using a trained object detection model to identify the objects and …” Please provide appropriate correction so that the above limitation clause is consistent across claims 2, 11, and 20.

Additionally, the following limitations, in bold, are recited in different variations that may alter claim interpretation; appropriate correction or explanatory remarks to provide clarity are required:

Claim 2 recites, “performing object classification processing on the image using a trained object classification model to classify objects in the .” Whereas claim 11 recites, “perform object classification processing on the image using a trained object classification model to classify objects in the .” Whereas claim 20 recites, “performing object classification processing on the image using a trained object classification model to classify the objects .” Please provide appropriate correction so that the above limitation clause is consistent across claims 2, 11, and 20.

Similarly, claims 4 and 13 are objected to because of the same inconsistency informality as claims 2, 11, and 20. Appropriate correction or explanatory remarks to provide clarity are required. Similarly, claims 5 and 14 are objected to because of the same inconsistency informality as claims 2, 11, and 20.
Appropriate correction or explanatory remarks to provide clarity are required. Similarly, claims 7 and 16 are objected to because of the same inconsistency informality as claims 2, 11, and 20. Appropriate correction or explanatory remarks to provide clarity are required.

Claims 8 and 17 are objected to because of the following informalities: in claims 8 and 17, “adding indications of inconsistencies from each of the plurality of consistency checks to information regarding each detected object that is provided is to an autonomous driving system” should be corrected to “… is provided to …” Appropriate correction or explanatory remarks to provide clarity are required.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-7, 10-16, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Goel et al. (US Patent Application Publication No. US 2021/0181757 A1, hereinafter, Goel) in view of Hofman et al. (US Patent Application Publication No. US 2024/0233328 A1, hereinafter, Hofman) and further in view of Koseki et al. (US Patent Application Publication No. US 2025/0028821 A1, hereinafter, Koseki).
Regarding Claim 1, Goel discloses: A method for detecting vision attacks performed by a processing system on an apparatus, the method comprising:

processing an image received from a camera of the apparatus using a plurality of trained image processing models to obtain a plurality of image processing outputs associated with the image (Goel, ¶[0036], “ML architecture 114 may receive one or more images, such as image 120, from one or more image sensors of the sensor(s) 104.” ¶[0037], “The ML architecture 114 discussed herein may be configured to receive an image and output a two-dimensional region of interest (ROI) associated with an object in the image, a semantic segmentation associated with the image, directional data associated with the image (e.g., which may comprise a vector per pixel pointing to the center of a corresponding object), depth data associated with the image (which may be in the form of a depth bin and an offset), an instance segmentation associated with the object, and/or a three-dimensional ROI.”);

performing a plurality of consistency checks on the plurality of image processing outputs associated with the image, wherein a consistency check of the plurality of consistency checks compares each of the plurality of image processing outputs to detect an inconsistency between the plurality of image processing outputs (Goel, ¶[0019], “the ML architecture, which may comprise a backbone ML model that comprises a set of neural network layers and respective components for determining an ROI (e.g., two-dimensional and/or three-dimensional), semantic segmentation, direction logits, depth data, and/or instance segmentation. For simplicity, each of the outputs discussed herein are referenced in sum as “tasks.”” ¶[0021], “enforcing consistency may comprise determining an uncertainty associated with a task … examples include, but are not limited to, comparing (e.g., determining a difference between) the ROI output by the network a bounding region determined based on one or more of the instance segmentation, semantic segmentation, and/or direction data; projecting a three-dimensional ROI into the image frame and comparing the resulting projected region with the two-dimensional ROI; determining a difference between lidar data and depth data output by the ML architecture; determining a difference between lidar data, depth data, and/or a bounding region associated with a three-dimensional ROI, and the like.”).

Goel does not explicitly teach the following limitations that Hofman teaches:

detecting an attack on the camera based on the inconsistency between the plurality of image processing outputs (Hofman, ¶[0029], “Then, the information processing apparatus 10 determines whether or not an adversarial patch is included in the input scene that has been input to the OD model by using, in combination, each of the extraction result obtained from the OD model performed with respect to the input scene, the classification result of the objects obtained from the OED model, and the classification result obtained from the OD model performed with respect to the plurality of scenes.”).

Goel in view of Hofman is analogous art because the references are from the “same field of endeavor” and are from the same “problem solving area.” Namely, they pertain to the field of “image processing systems.” It would have been obvious for one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Goel with Hofman to “detect an attack on the camera based on the inconsistency between the plurality of image processing outputs” because the disclosure relates to a determination of whether or not an adversarial patch is included in the object by comparing the first classification result with a second classification result that is a result of classification of the objects obtained by inputting the input scene to a predetermined object detection model (Hofman, Abstract).

Goel in view of Hofman does not explicitly teach the following limitations that Koseki teaches:

performing a mitigation action in response to recognizing the attack (Koseki, ¶[0095], “The determination unit 140 transmits a determination flag and a detection result to the output unit 150. The output unit 150 accepts the determination flag and the detection result from the determination unit 140.” ¶[0097], “In step S160, the output unit 150 outputs a processing result 192. For example, the output unit 150 displays the processing result 192 onto a display.”).

Goel in view of Hofman and further in view of Koseki is analogous art because the references are from the “same field of endeavor” and are from the same “problem solving area.” Namely, they pertain to the field of “image processing systems.” It would have been obvious for one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Goel in view of Hofman with Koseki to “perform a mitigation action in response to recognizing the attack” because the disclosure relates to countermeasure techniques against an adversarial example patch attack (Koseki, ¶[0010]).
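The cross-output consistency checking mapped above can be made concrete. The sketch below is purely illustrative; the function names and the 0.5 IoU threshold are invented for the example and are not drawn from Goel, Hofman, or Koseki. It flags an inconsistency when a detector's bounding box and the box implied by a segmentation mask disagree:

```python
# Illustrative cross-output consistency check: compare a detection bounding
# box against the tight box implied by a segmentation mask, and flag low
# overlap as an inconsistency. All names and thresholds are hypothetical.

def mask_to_box(mask):
    """Tight bounding box (x0, y0, x1, y1) around the truthy pixels of a mask."""
    rows = [r for r, row in enumerate(mask) if any(row)]
    cols = [c for c in range(len(mask[0])) if any(row[c] for row in mask)]
    return (min(cols), min(rows), max(cols) + 1, max(rows) + 1)

def iou(a, b):
    """Intersection-over-union of two (x0, y0, x1, y1) boxes."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def consistency_check(det_box, seg_mask, threshold=0.5):
    """Return (is_consistent, score) for one detection/segmentation pair."""
    score = iou(det_box, mask_to_box(seg_mask))
    return score >= threshold, score
```

In a deployed checker of the kind the claim recites, a run of low-overlap frames, rather than a single one, would more plausibly trigger the mitigation action.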
Regarding Claim 2, Goel in view of Hofman and further in view of Koseki teaches: The method of claim 1, wherein processing the image received from the camera of the apparatus using the plurality of trained image processing models to obtain the plurality of image processing outputs comprises: performing semantic segmentation processing on the image using a trained semantic segmentation model to associate masks of groups of pixels in the image with classification labels (Goel, ¶[0038], “An ROI may comprise a bounding box, some other bounding shape, and/or a mask. A semantic segmentation may comprise a per-pixel indication of a classification associated therewith (e.g., semantic label, such as “pedestrian,” “vehicle,” “cyclist,” “oversized vehicle,” “articulated vehicle,” “animal), although a semantic label may be associated with any other discrete portion of the image and/or feature maps (e.g., a region, a cluster of pixels).” ¶[0073], “The semantic segmentation component 402 may determine a semantic segmentation 410 of the image 120 and/or confidence(s) 412 associated therewith. For example, the semantic segmentation may comprise a semantic label associated with a discrete portion of the image 120 (e.g., a per-pixel classification label) and/or a confidence indicating a likelihood that the classification is correct. For example, FIG. 4B depicts an example semantic segmentation 414 associated with a portion of image 120.”); performing depth estimation processing on the image using a trained depth estimation model to identify distances to objects in the image (Goel, ¶[0038], “An ROI may comprise a bounding box, some other bounding shape, and/or a mask. 
… Depth data may comprise an indication of a distance from an image sensor to a surface associated with a portion of the image which, in some examples, may comprise an indication of a depth “bin” and offset.” ¶[0075], “The depth component 406 may determine a depth bin 420 and/or a depth residual 422 associated with a discrete portion of the image 120.”); performing object detection processing on the image using a trained object detection model to identify objects image and define bounding boxes around the identified objects (Hofman, ¶[0025], “a computer device that performs detection of an object included in an input scene that is an example of image data that has been captured in real time by using a camera or the like at various locations, such as a store or an airport, or pieces of image data that have been captured and accumulated. For example, the information processing apparatus 10 uses an object detection model (object detector (OD) model), and performs detection of a bounding box”); and performing object classification processing on the image using a trained object classification model to classify objects in the image (Goel, ¶[0065], “The ROI component(s) 312-316 may each be trained to determine an ROI and/or classification associated with an object. The ROI component(s) 312-316 may comprise a same ML model structure, such as a YOLO structure, and/or the same hyperparameters, although in additional or alternate examples, they may comprise different structure(s) and/or hyperparameters.” ¶[0068], “FIG. 3C depicts an example of an ROI and classification 322 associated with a vehicle detected from image 120.”). 
Regarding Claim 3, Goel in view of Hofman and further in view of Koseki teaches: The method of claim 2, wherein performing the plurality of consistency checks on the plurality of image processing outputs comprises: performing a semantic consistency check comparing the classification labels with the bounding boxes to identify inconsistencies between mask classifications and the detected objects (Goel, ¶[0021], “For example, enforcing consistency may comprise determining an uncertainty associated with a task. … examples include, but are not limited to, comparing (e.g., determining a difference between) the ROI output by the network a bounding region determined based on one or more of the instance segmentation, semantic segmentation, and/or direction data; projecting a three-dimensional ROI into the image frame and comparing the resulting projected region with the two-dimensional ROI; determining a difference between lidar data and depth data output by the ML architecture; determining a difference between lidar data, depth data, and/or a bounding region associated with a three-dimensional ROI, and the like.”); and providing an indication (Koseki, ¶[0095], “The determination unit 140 transmits a determination flag and a detection result to the output unit 150. The output unit 150 accepts the determination flag and the detection result from the determination unit 140.” ¶[0097], “In step S160, the output unit 150 outputs a processing result 192. For example, the output unit 150 displays the processing result 192 onto a display.”) of detected classification inconsistencies in response to a mask classification being inconsistent with a detected object (Goel, ¶[0021], “For example, enforcing consistency may comprise determining an uncertainty associated with a task. 
… examples include, but are not limited to, comparing (e.g., determining a difference between) the ROI output by the network a bounding region determined based on one or more of the instance segmentation, semantic segmentation, and/or direction data; projecting a three-dimensional ROI into the image frame and comparing the resulting projected region with the two-dimensional ROI; determining a difference between lidar data and depth data output by the ML architecture; determining a difference between lidar data, depth data, and/or a bounding region associated with a three-dimensional ROI, and the like.”).

Regarding Claim 4, Goel in view of Hofman and further in view of Koseki teaches: The method of claim 3, further comprising: in response to the classification being consistent with the bounding boxes, performing a location consistency check comparing locations within the image of the masks with locations within the image of the bounding boxes to identify inconsistencies in locations of classification masks with the bounding boxes (Goel, ¶[0021], “For example, enforcing consistency may comprise determining an uncertainty associated with a task.
… examples include, but are not limited to, comparing (e.g., determining a difference between) the ROI output by the network a bounding region determined based on one or more of the instance segmentation, semantic segmentation, and/or direction data; projecting a three-dimensional ROI into the image frame and comparing the resulting projected region with the two-dimensional ROI; determining a difference between lidar data and depth data output by the ML architecture; determining a difference between lidar data, depth data, and/or a bounding region associated with a three-dimensional ROI, and the like.”); and providing an indication of detected classification inconsistencies if locations of classification masks are inconsistent with locations of the bounding boxes (Koseki, ¶[0095], “The determination unit 140 transmits a determination flag and a detection result to the output unit 150. The output unit 150 accepts the determination flag and the detection result from the determination unit 140.” ¶[0097], “In step S160, the output unit 150 outputs a processing result 192. For example, the output unit 150 displays the processing result 192 onto a display.”).

Regarding Claim 5, Goel in view of Hofman and further in view of Koseki teaches: The method of claim 2, wherein performing the plurality of consistency checks on the plurality of image processing outputs comprises: performing depth plausibility checks comparing depth estimations of the objects with depth estimates of individual pixels or groups of pixels from depth estimation processing to identify distributions in depth estimations of pixels across a detected object that are inconsistent with depth distributions associated with a classification of a mask encompassing the detected object from semantic classification processing (Goel, ¶[0021], “For example, enforcing consistency may comprise determining an uncertainty associated with a task.
… examples include, but are not limited to, comparing (e.g., determining a difference between) the ROI output by the network a bounding region determined based on one or more of the instance segmentation, semantic segmentation, and/or direction data; projecting a three-dimensional ROI into the image frame and comparing the resulting projected region with the two-dimensional ROI; determining a difference between lidar data and depth data output by the ML architecture; determining a difference between lidar data, depth data, and/or a bounding region associated with a three-dimensional ROI, and the like.”); and providing an indication of a detected depth inconsistency if distributions in depth estimations of pixels across a detected object differ from depth distributions associated with a classification of a mask (Koseki, ¶[0095], “The determination unit 140 transmits a determination flag and a detection result to the output unit 150. The output unit 150 accepts the determination flag and the detection result from the determination unit 140.” ¶[0097], “In step S160, the output unit 150 outputs a processing result 192. For example, the output unit 150 displays the processing result 192 onto a display.”).

Regarding Claim 6, Goel in view of Hofman and further in view of Koseki teaches: The method of claim 2, wherein performing the plurality of consistency checks on the plurality of image processing outputs comprises: performing a context consistency check comparing depth estimations of a bounding box encompassing a detected object with depth estimations of a mask encompassing the detected object from semantic segmentation processing to determine whether distributions of depth estimations of the mask differ from depth estimations of the bounding box (Goel, ¶[0021], “For example, enforcing consistency may comprise determining an uncertainty associated with a task.
… examples include, but are not limited to, comparing (e.g., determining a difference between) the ROI output by the network a bounding region determined based on one or more of the instance segmentation, semantic segmentation, and/or direction data; projecting a three-dimensional ROI into the image frame and comparing the resulting projected region with the two-dimensional ROI; determining a difference between lidar data and depth data output by the ML architecture; determining a difference between lidar data, depth data, and/or a bounding region associated with a three-dimensional ROI, and the like.”); and providing an indication of a detected context inconsistency if the distributions of depth estimations of the mask differ from distributions of depth estimations of the bounding box (Koseki, ¶[0095], “The determination unit 140 transmits a determination flag and a detection result to the output unit 150. The output unit 150 accepts the determination flag and the detection result from the determination unit 140.” ¶[0097], “In step S160, the output unit 150 outputs a processing result 192. For example, the output unit 150 displays the processing result 192 onto a display.”).

Regarding Claim 7, Goel in view of Hofman and further in view of Koseki teaches: The method of claim 2, wherein performing the plurality of consistency checks on the plurality of image processing outputs comprises: performing a label consistency check comparing a detected object from object detection processing with a label of the detected object from object classification processing to determine whether the object classification label is consistent with the detected object (Goel, ¶[0021], “For example, enforcing consistency may comprise determining an uncertainty associated with a task.
… examples include, but are not limited to, comparing (e.g., determining a difference between) the ROI output by the network a bounding region determined based on one or more of the instance segmentation, semantic segmentation, and/or direction data; projecting a three-dimensional ROI into the image frame and comparing the resulting projected region with the two-dimensional ROI; determining a difference between lidar data and depth data output by the ML architecture; determining a difference between lidar data, depth data, and/or a bounding region associated with a three-dimensional ROI, and the like.”); and providing an indication of detected label inconsistencies if the object classification label is inconsistent with the detected object (Koseki, ¶[0095], “The determination unit 140 transmits a determination flag and a detection result to the output unit 150. The output unit 150 accepts the determination flag and the detection result from the determination unit 140.” ¶[0097], “In step S160, the output unit 150 outputs a processing result 192. For example, the output unit 150 displays the processing result 192 onto a display.”). 
Regarding Claim 10, Goel discloses: An apparatus, comprising: a processing system including one or more processors configured to: process an image received from a camera of the apparatus using a plurality of trained image processing models to obtain a plurality of image processing outputs associated with the image (Goel, ¶[0036], “ML architecture 114 may receive one or more images, such as image 120, from one or more image sensors of the sensor(s) 104.” ¶[0037], “The ML architecture 114 discussed herein may be configured to receive an image and output a two-dimensional region of interest (ROI) associated with an object in the image, a semantic segmentation associated with the image, directional data associated with the image (e.g., which may comprise a vector per pixel pointing to the center of a corresponding object), depth data associated with the image (which may be in the form of a depth bin and an offset), an instance segmentation associated with the object, and/or a three-dimensional ROI.”); perform a plurality of consistency checks on the plurality of image processing outputs associated with the image, wherein a consistency check of the plurality of consistency checks compares each of the plurality of image processing outputs to detect an inconsistency between the plurality of image processing outputs (Goel, ¶[0019], “the ML architecture, which may comprise a backbone ML model that comprises a set of neural network layers and respective components for determining an ROI (e.g., two-dimensional and/or three-dimensional), semantic segmentation, direction logits, depth data, and/or instance segmentation. 
For simplicity, each of the outputs discussed herein are referenced in sum as “tasks.”” ¶[0021], “enforcing consistency may comprise determining an uncertainty associated with a task … examples include, but are not limited to, comparing (e.g., determining a difference between) the ROI output by the network a bounding region determined based on one or more of the instance segmentation, semantic segmentation, and/or direction data; projecting a three-dimensional ROI into the image frame and comparing the resulting projected region with the two-dimensional ROI; determining a difference between lidar data and depth data output by the ML architecture; determining a difference between lidar data, depth data, and/or a bounding region associated with a three-dimensional ROI, and the like.”) Goel does not explicitly teach the following limitations that Hofman teaches: detect an attack on the camera based on the inconsistency between the plurality of image processing outputs (Hofman, ¶[0029], “Then, the information processing apparatus 10 determines whether or not an adversarial patch is included in the input scene that has been input to the OD model by using, in combination, each of the extraction result obtained from the OD model performed with respect to the input scene, the classification result of the objects obtained from the OED model, and the classification result obtained from the OD model performed with respect to the plurality of scenes.”); and Goel in view of Hofman is analogous art because the references are from the “same field of endeavor” and are from the same “problem solving area.” Namely, they pertain to the field of “image processing systems.” It would have been obvious for one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Goel with Hofman to “detect an attack on the camera based on the inconsistency between the plurality of image processing outputs” because the disclosure relates to a determination of whether 
or not an adversarial patch is included in the object by comparing the first classification result with a second classification result that is a result of classification of the objects obtained by inputting the input scene to a predetermined object detection model (Hofman, Abstract). Goel in view of Hofman does not explicitly teach the following limitations that Koseki teaches: perform a mitigation action in response to recognizing the attack (Koseki, ¶[0095], “The determination unit 140 transmits a determination flag and a detection result to the output unit 150. The output unit 150 accepts the determination flag and the detection result from the determination unit 140.” ¶[0097], “In step S160, the output unit 150 outputs a processing result 192. For example, the output unit 150 displays the processing result 192 onto a display.”). Goel in view of Hofman and further in view of Koseki is analogous art because the references are from the “same field of endeavor” and are from the same “problem solving area.” Namely, they pertain to the field of “image processing systems.” It would have been obvious for one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Goel in view of Hofman with Koseki to “perform a mitigation action in response to recognizing the attack” because the disclosure relates to countermeasure techniques against an adversarial example patch attack (Koseki, ¶[0010]). 
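The combined Goel–Hofman–Koseki mapping for claim 10 amounts to a three-stage flow: cross-check the model outputs, treat an inconsistency as a suspected attack, and take a mitigation action. The following minimal Python sketch illustrates that flow only; the class, function names, and the single label check are hypothetical illustrations, not disclosures of the cited references:

```python
from dataclasses import dataclass, field

@dataclass
class Frame:
    # e.g. {"detector": "car", "segmenter": "person"} -- toy stand-ins for
    # the plurality of image processing outputs recited in claim 10
    outputs: dict = field(default_factory=dict)
    alerts: list = field(default_factory=list)

def consistency_checks(outputs):
    """Return the names of checks whose outputs disagree (illustrative only)."""
    failed = []
    if outputs.get("detector") != outputs.get("segmenter"):
        failed.append("label_consistency")
    return failed

def process_frame(frame: Frame) -> Frame:
    failed = consistency_checks(frame.outputs)
    if failed:
        # inconsistency => suspected attack; the "mitigation action" here is
        # simply recording an alert (Koseki's output of a processing result
        # is analogous, but this code is not Koseki's)
        frame.alerts.append(f"possible camera attack: {failed}")
    return frame

frame = process_frame(Frame(outputs={"detector": "car", "segmenter": "person"}))
```

In this toy run the detector and segmenter disagree, so an alert is appended; agreeing outputs produce no alert.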
Regarding Claim 11, Goel in view of Hofman and further in view of Koseki teaches: The apparatus of claim 10, wherein to process the image received from the camera of the apparatus, the one or more processors are further configured to: perform semantic segmentation processing on the image using a trained semantic segmentation model to associate masks of groups of pixels in the image with classification labels; perform depth estimation processing on the image using a trained depth estimation model to identify distances to objects in the images (Goel, ¶[0038], “An ROI may comprise a bounding box, some other bounding shape, and/or a mask. A semantic segmentation may comprise a per-pixel indication of a classification associated therewith (e.g., semantic label, such as “pedestrian,” “vehicle,” “cyclist,” “oversized vehicle,” “articulated vehicle,” “animal), although a semantic label may be associated with any other discrete portion of the image and/or feature maps (e.g., a region, a cluster of pixels).” ¶[0073], “The semantic segmentation component 402 may determine a semantic segmentation 410 of the image 120 and/or confidence(s) 412 associated therewith. For example, the semantic segmentation may comprise a semantic label associated with a discrete portion of the image 120 (e.g., a per-pixel classification label) and/or a confidence indicating a likelihood that the classification is correct. For example, FIG. 
4B depicts an example semantic segmentation 414 associated with a portion of image 120.”); perform object detection processing on the image using a trained object detection model to identify objects in the images and define bounding boxes around identified objects (Hofman, ¶[0025], “a computer device that performs detection of an object included in an input scene that is an example of image data that has been captured in real time by using a camera or the like at various locations, such as a store or an airport, or pieces of image data that have been captured and accumulated. For example, the information processing apparatus 10 uses an object detection model (object detector (OD) model), and performs detection of a bounding box”); and perform object classification processing on the image using a trained object classification model to classify objects in the images (Goel, ¶[0065], “The ROI component(s) 312-316 may each be trained to determine an ROI and/or classification associated with an object. The ROI component(s) 312-316 may comprise a same ML model structure, such as a YOLO structure, and/or the same hyperparameters, although in additional or alternate examples, they may comprise different structure(s) and/or hyperparameters.” ¶[0068], “FIG. 3C depicts an example of an ROI and classification 322 associated with a vehicle detected from image 120.”). 
Regarding Claim 12, Goel in view of Hofman and further in view of Koseki teaches: The apparatus of claim 11, wherein to perform the plurality of consistency checks on the plurality of image processing outputs, the one or more processors are further configured to: perform a semantic consistency check comparing classification labels associated with masks from semantic segmentation processing with bounding boxes of object detections in the image from object detection processing to identify inconsistencies between mask classifications and detected objects (Goel, ¶[0021], “For example, enforcing consistency may comprise determining an uncertainty associated with a task. … examples include, but are not limited to, comparing (e.g., determining a difference between) the ROI output by the network a bounding region determined based on one or more of the instance segmentation, semantic segmentation, and/or direction data; projecting a three-dimensional ROI into the image frame and comparing the resulting projected region with the two-dimensional ROI; determining a difference between lidar data and depth data output by the ML architecture; determining a difference between lidar data, depth data, and/or a bounding region associated with a three-dimensional ROI, and the like.”); and provide an indication (Koseki, ¶[0095], “The determination unit 140 transmits a determination flag and a detection result to the output unit 150. The output unit 150 accepts the determination flag and the detection result from the determination unit 140.” ¶[0097], “In step S160, the output unit 150 outputs a processing result 192. 
For example, the output unit 150 displays the processing result 192 onto a display.”) of detected classification inconsistencies in response to a mask classification being inconsistent with a detected object in the image (Goel, ¶[0021], “For example, enforcing consistency may comprise determining an uncertainty associated with a task. … examples include, but are not limited to, comparing (e.g., determining a difference between) the ROI output by the network a bounding region determined based on one or more of the instance segmentation, semantic segmentation, and/or direction data; projecting a three-dimensional ROI into the image frame and comparing the resulting projected region with the two-dimensional ROI; determining a difference between lidar data and depth data output by the ML architecture; determining a difference between lidar data, depth data, and/or a bounding region associated with a three-dimensional ROI, and the like.”). Regarding Claim 13, Goel in view of Hofman and further in view of Koseki teaches: The apparatus of claim 12, wherein in response to classification labels associated with masks from semantic segmentation processing being consistent with bounding boxes of object detections from object detection processing, the one or more processors are further configured to: perform a location consistency check comparing locations within the image of classification masks from semantic segmentation processing with locations within the image of bounding boxes of object detections in the images from object detection processing to identify inconsistencies in locations of classification masks with detected object bounding boxes (Goel, ¶[0021], “For example, enforcing consistency may comprise determining an uncertainty associated with a task. 
… examples include, but are not limited to, comparing (e.g., determining a difference between) the ROI output by the network a bounding region determined based on one or more of the instance segmentation, semantic segmentation, and/or direction data; projecting a three-dimensional ROI into the image frame and comparing the resulting projected region with the two-dimensional ROI; determining a difference between lidar data and depth data output by the ML architecture; determining a difference between lidar data, depth data, and/or a bounding region associated with a three-dimensional ROI, and the like.”); and provide an indication of detected classification inconsistencies if locations of classification masks are inconsistent with locations of detected object bounding boxes within the image (Koseki, ¶[0095], “The determination unit 140 transmits a determination flag and a detection result to the output unit 150. The output unit 150 accepts the determination flag and the detection result from the determination unit 140.” ¶[0097], “In step S160, the output unit 150 outputs a processing result 192. For example, the output unit 150 displays the processing result 192 onto a display.”). 
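For illustration only, the semantic consistency check of claim 12 (mask labels versus detection labels) and the location consistency check of claim 13 (mask location versus bounding-box location) can be sketched as follows. The function names, array layout, and 0.5 overlap threshold are assumptions for the sketch, not disclosures of Goel, Hofman, or Koseki:

```python
import numpy as np

def semantic_consistency(mask_labels, box, det_label):
    """Claim 12-style check: does the dominant segmentation label inside the
    detection box agree with the detector's class label? box = (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = box
    region = mask_labels[y0:y1, x0:x1]
    values, counts = np.unique(region, return_counts=True)
    dominant = values[np.argmax(counts)]
    return dominant == det_label

def location_consistency(mask, box, min_overlap=0.5):
    """Claim 13-style check: does the classification mask actually lie within
    the detection bounding box (at least min_overlap of its pixels)?"""
    x0, y0, x1, y1 = box
    inside = mask[y0:y1, x0:x1].sum()
    total = mask.sum()
    return total > 0 and inside / total >= min_overlap

# toy example: a 10x10 label map where label 1 (say, "vehicle") fills the box
labels = np.zeros((10, 10), dtype=int)
labels[2:6, 2:6] = 1
ok_sem = semantic_consistency(labels, (2, 2, 6, 6), det_label=1)
ok_loc = location_consistency(labels == 1, (2, 2, 6, 6))
```

When either check returns False, the claimed apparatus would provide the corresponding inconsistency indication.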
Regarding Claim 14, Goel in view of Hofman and further in view of Koseki teaches: The apparatus of claim 11, wherein to perform the plurality of consistency checks on the plurality of image processing outputs, the one or more processors are further configured to: perform depth plausibility checks comparing depth estimations of detected objects from object detection processing with depth estimates of individual pixels or groups of pixels from depth estimation processing to identify distributions in depth estimations of pixels across a detected object that are inconsistent with depth distributions associated with a classification of a mask encompassing the detected object from semantic classification processing (Goel, ¶[0021], “For example, enforcing consistency may comprise determining an uncertainty associated with a task. … examples include, but are not limited to, comparing (e.g., determining a difference between) the ROI output by the network a bounding region determined based on one or more of the instance segmentation, semantic segmentation, and/or direction data; projecting a three-dimensional ROI into the image frame and comparing the resulting projected region with the two-dimensional ROI; determining a difference between lidar data and depth data output by the ML architecture; determining a difference between lidar data, depth data, and/or a bounding region associated with a three-dimensional ROI, and the like.”); and provide an indication of a detected depth inconsistency if distributions in depth estimations of pixels across a detected object differ from depth distributions associated with a classification of a mask (Koseki, ¶[0095], “The determination unit 140 transmits a determination flag and a detection result to the output unit 150. The output unit 150 accepts the determination flag and the detection result from the determination unit 140.” ¶[0097], “In step S160, the output unit 150 outputs a processing result 192. 
For example, the output unit 150 displays the processing result 192 onto a display.”). Regarding Claim 15, Goel in view of Hofman and further in view of Koseki teaches: The apparatus of claim 11, wherein to perform the plurality of consistency checks on the plurality of image processing outputs, the one or more processors are further configured to: perform a context consistency check comparing depth estimations of a bounding box encompassing a detected object from object detection processing with depth estimations of a mask encompassing the detected object from semantic segmentation processing to determine whether distributions of depth estimations of the mask differ from depth estimations of the bounding box (Goel, ¶[0021], “For example, enforcing consistency may comprise determining an uncertainty associated with a task. … examples include, but are not limited to, comparing (e.g., determining a difference between) the ROI output by the network a bounding region determined based on one or more of the instance segmentation, semantic segmentation, and/or direction data; projecting a three-dimensional ROI into the image frame and comparing the resulting projected region with the two-dimensional ROI; determining a difference between lidar data and depth data output by the ML architecture; determining a difference between lidar data, depth data, and/or a bounding region associated with a three-dimensional ROI, and the like.”); and provide an indication of a detected context inconsistency if the distributions of depth estimations of the mask are the same as or similar to distributions of depth estimations of the bounding box (Koseki, ¶[0095], “The determination unit 140 transmits a determination flag and a detection result to the output unit 150. The output unit 150 accepts the determination flag and the detection result from the determination unit 140.” ¶[0097], “In step S160, the output unit 150 outputs a processing result 192. 
For example, the output unit 150 displays the processing result 192 onto a display.”). Regarding Claim 16, Goel in view of Hofman and further in view of Koseki teaches: The apparatus of claim 11, wherein to perform the plurality of consistency checks on the plurality of image processing outputs, the one or more processors are further configured to: perform a label consistency check comparing a detected object from object detection processing with a label of the detected object from object classification processing to determine whether the object classification label is consistent with the detected object (Goel, ¶[0021], “For example, enforcing consistency may comprise determining an uncertainty associated with a task. … examples include, but are not limited to, comparing (e.g., determining a difference between) the ROI output by the network a bounding region determined based on one or more of the instance segmentation, semantic segmentation, and/or direction data; projecting a three-dimensional ROI into the image frame and comparing the resulting projected region with the two-dimensional ROI; determining a difference between lidar data and depth data output by the ML architecture; determining a difference between lidar data, depth data, and/or a bounding region associated with a three-dimensional ROI, and the like.”); and provide an indication of detected label inconsistencies if the object classification label is inconsistent with the detected object (Koseki, ¶[0095], “The determination unit 140 transmits a determination flag and a detection result to the output unit 150. The output unit 150 accepts the determination flag and the detection result from the determination unit 140.” ¶[0097], “In step S160, the output unit 150 outputs a processing result 192. For example, the output unit 150 displays the processing result 192 onto a display.”). 
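The depth-based checks of claims 14 and 15 can likewise be sketched for illustration. Note that claim 15 flags an inconsistency when the mask's depth distribution is the same as or similar to the bounding box's (a flat printed patch, unlike a real three-dimensional object, leaves no depth contrast against its background). All names, tolerances, and data below are hypothetical assumptions, not taken from the cited references:

```python
import statistics

def depth_plausibility(pixel_depths, expected_range):
    """Claim 14-style check: per-pixel depths across a detected object should
    fall within a depth spread plausible for the mask's classification."""
    lo, hi = expected_range
    return lo <= min(pixel_depths) and max(pixel_depths) <= hi

def context_inconsistency(mask_depths, box_depths, tol=0.5):
    """Claim 15-style check: flag when the mask's mean depth matches the whole
    bounding box's mean depth within a tolerance (assumed threshold)."""
    return abs(statistics.mean(mask_depths) - statistics.mean(box_depths)) < tol

obj = [10.1, 10.3, 10.2, 10.4]   # toy per-pixel depths (m) inside the object mask
box = [10.2, 10.2, 10.3, 10.1]   # toy per-pixel depths across the full bounding box
plausible = depth_plausibility(obj, (9.0, 12.0))
suspicious = context_inconsistency(obj, box)  # mask and box depths match
```

Here the mask and box depth means agree within the tolerance, so the context check would trigger an inconsistency indication.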
Regarding Claim 19, Goel discloses: A non-transitory processor-readable medium having stored thereon processor-executable instructions configured to cause a processing system of an apparatus to perform operations comprising: processing an image received from a camera of the apparatus using a plurality of trained image processing models to obtain a plurality of image processing outputs associated with the image (Goel, ¶[0036], “ML architecture 114 may receive one or more images, such as image 120, from one or more image sensors of the sensor(s) 104.” ¶[0037], “The ML architecture 114 discussed herein may be configured to receive an image and output a two-dimensional region of interest (ROI) associated with an object in the image, a semantic segmentation associated with the image, directional data associated with the image (e.g., which may comprise a vector per pixel pointing to the center of a corresponding object), depth data associated with the image (which may be in the form of a depth bin and an offset), an instance segmentation associated with the object, and/or a three-dimensional ROI.”) performing a plurality of consistency checks on the plurality of image processing outputs associated with the image, wherein a consistency check of the plurality of consistency checks compares each of the plurality of image processing outputs to detect an inconsistency between the plurality of image processing outputs (Goel, ¶[0019], “the ML architecture, which may comprise a backbone ML model that comprises a set of neural network layers and respective components for determining an ROI (e.g., two-dimensional and/or three-dimensional), semantic segmentation, direction logits, depth data, and/or instance segmentation. 
For simplicity, each of the outputs discussed herein are referenced in sum as “tasks.”” ¶[0021], “enforcing consistency may comprise determining an uncertainty associated with a task … examples include, but are not limited to, comparing (e.g., determining a difference between) the ROI output by the network a bounding region determined based on one or more of the instance segmentation, semantic segmentation, and/or direction data; projecting a three-dimensional ROI into the image frame and comparing the resulting projected region with the two-dimensional ROI; determining a difference between lidar data and depth data output by the ML architecture; determining a difference between lidar data, depth data, and/or a bounding region associated with a three-dimensional ROI, and the like.”) Goel does not explicitly teach the following limitations that Hofman teaches: detecting an attack on the camera based on the inconsistency between the plurality of image processing outputs (Hofman, ¶[0029], “Then, the information processing apparatus 10 determines whether or not an adversarial patch is included in the input scene that has been input to the OD model by using, in combination, each of the extraction result obtained from the OD model performed with respect to the input scene, the classification result of the objects obtained from the OED model, and the classification result obtained from the OD model performed with respect to the plurality of scenes.”); and Goel in view of Hofman is analogous art because the references are from the “same field of endeavor” and are from the same “problem solving area.” Namely, they pertain to the field of “image processing systems.” It would have been obvious for one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Goel with Hofman to “detect an attack on the camera based on the inconsistency between the plurality of image processing outputs” because the disclosure relates to a determination of 
whether or not an adversarial patch is included in the object by comparing the first classification result with a second classification result that is a result of classification of the objects obtained by inputting the input scene to a predetermined object detection model (Hofman, Abstract). Goel in view of Hofman does not explicitly teach the following limitations that Koseki teaches: performing a mitigation action in response to recognizing the attack (Koseki, ¶[0095], “The determination unit 140 transmits a determination flag and a detection result to the output unit 150. The output unit 150 accepts the determination flag and the detection result from the determination unit 140.” ¶[0097], “In step S160, the output unit 150 outputs a processing result 192. For example, the output unit 150 displays the processing result 192 onto a display.”). Goel in view of Hofman and further in view of Koseki is analogous art because the references are from the “same field of endeavor” and are from the same “problem solving area.” Namely, they pertain to the field of “image processing systems.” It would have been obvious for one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Goel in view of Hofman with Koseki to “perform a mitigation action in response to recognizing the attack” because the disclosure relates to countermeasure techniques against an adversarial example patch attack (Koseki, ¶[0010]). 
Regarding Claim 20, Goel in view of Hofman and further in view of Koseki teaches: The non-transitory processor-readable medium of claim 19, wherein the processor-executable instructions are further configured to cause the processing system to perform operations such that processing the image received from the camera of the apparatus using a plurality of trained image processing models to obtain a plurality of image processing outputs comprises: performing semantic segmentation processing on the image using a trained semantic segmentation model to associate masks of groups of pixels in the image with classification labels (Goel, ¶[0038], “An ROI may comprise a bounding box, some other bounding shape, and/or a mask. A semantic segmentation may comprise a per-pixel indication of a classification associated therewith (e.g., semantic label, such as “pedestrian,” “vehicle,” “cyclist,” “oversized vehicle,” “articulated vehicle,” “animal), although a semantic label may be associated with any other discrete portion of the image and/or feature maps (e.g., a region, a cluster of pixels).” ¶[0073], “The semantic segmentation component 402 may determine a semantic segmentation 410 of the image 120 and/or confidence(s) 412 associated therewith. For example, the semantic segmentation may comprise a semantic label associated with a discrete portion of the image 120 (e.g., a per-pixel classification label) and/or a confidence indicating a likelihood that the classification is correct. For example, FIG. 4B depicts an example semantic segmentation 414 associated with a portion of image 120.”); performing depth estimation processing on the image using a trained depth estimation model to identify distances to objects in the images (Goel, ¶[0038], “An ROI may comprise a bounding box, some other bounding shape, and/or a mask. 
… Depth data may comprise an indication of a distance from an image sensor to a surface associated with a portion of the image which, in some examples, may comprise an indication of a depth “bin” and offset.” ¶[0075], “The depth component 406 may determine a depth bin 420 and/or a depth residual 422 associated with a discrete portion of the image 120.”); performing object detection processing on the image using a trained object detection model to identify objects in the images and define bounding boxes around identified objects (Hofman, ¶[0025], “a computer device that performs detection of an object included in an input scene that is an example of image data that has been captured in real time by using a camera or the like at various locations, such as a store or an airport, or pieces of image data that have been captured and accumulated. For example, the information processing apparatus 10 uses an object detection model (object detector (OD) model), and performs detection of a bounding box”); and performing object classification processing on the image using a trained object classification model to classify objects in the images (Goel, ¶[0065], “The ROI component(s) 312-316 may each be trained to determine an ROI and/or classification associated with an object. The ROI component(s) 312-316 may comprise a same ML model structure, such as a YOLO structure, and/or the same hyperparameters, although in additional or alternate examples, they may comprise different structure(s) and/or hyperparameters.” ¶[0068], “FIG. 3C depicts an example of an ROI and classification 322 associated with a vehicle detected from image 120.”). Claims 8, 9, 17 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Goel et al. (US Patent Application Publication No. US 2021/0181757 A1, hereinafter, Goel) in view of Hofman et al. (US Patent Application Publication No. US 2024/0233328 A1, hereinafter, Hofman) and further in view of Koseki (US Patent Application Publication No. 
US 2025/0028821 A1, hereinafter, Koseki) and Deegan et al. (US Patent Application Publication No. US 2021/0287387 A1, hereinafter, Deegan). Regarding Claim 8, Goel in view of Hofman and further in view of Koseki teaches: The method of claim 1, wherein performing a mitigation action in response to recognizing the attack comprises adding indications of inconsistencies from each of the plurality of consistency checks to information regarding each detected object that is provided (Goel, ¶[0021], “For example, enforcing consistency may comprise determining an uncertainty associated with a task. … examples include, but are not limited to, comparing (e.g., determining a difference between) the ROI output by the network a bounding region determined based on one or more of the instance segmentation, semantic segmentation, and/or direction data; projecting a three-dimensional ROI into the image frame and comparing the resulting projected region with the two-dimensional ROI; determining a difference between lidar data and depth data output by the ML architecture; determining a difference between lidar data, depth data, and/or a bounding region associated with a three-dimensional ROI, and the like.”). 
Goel in view of Hofman and further in view of Koseki does not explicitly teach the following limitation that Deegan teaches: to an autonomous driving system (Deegan, ¶[0041], “In some embodiments, one or more services of the internal computing system 510 are configured to send and receive communications to remote computing system 550 for such reasons as reporting data for training and evaluating machine learning algorithms, requesting assistance from remoting computing system or a human operator via remote computing system 550”). Goel in view of Hofman and further in view of Koseki and Deegan is analogous art because the references are from the “same field of endeavor” and are from the same “problem solving area.” Namely, they pertain to the field of “image processing systems.” It would have been obvious for one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Goel in view of Hofman and further in view of Koseki with Deegan to apply “to an autonomous driving system” because the subject disclosure relates to techniques for selecting points of an image for processing with LiDAR data (Deegan, Abstract) and FIG. 5 illustrates an example environment that includes an autonomous vehicle in communication with a remote computing system (Deegan, ¶[0008]). Regarding Claim 9, Goel in view of Hofman and further in view of Koseki and Deegan teaches: The method of claim 1, wherein performing a mitigation action in response to recognizing the attack comprises reporting the detected attack (Koseki, ¶[0095], “The determination unit 140 transmits a determination flag and a detection result to the output unit 150. The output unit 150 accepts the determination flag and the detection result from the determination unit 140.” ¶[0097], “In step S160, the output unit 150 outputs a processing result 192. 
For example, the output unit 150 displays the processing result 192 onto a display.”) to a remote system (Deegan, ¶[0041], “In some embodiments, one or more services of the internal computing system 510 are configured to send and receive communications to remote computing system 550 for such reasons as reporting data for training and evaluating machine learning algorithms, requesting assistance from remoting computing system or a human operator via remote computing system 550”). Regarding Claim 17, Goel in view of Hofman and further in view of Koseki and Deegan teaches: The apparatus of claim 10, wherein the one or more processors are further configured to perform a mitigation action in response to recognizing the attack that adds indications of inconsistencies from each of the plurality of consistency checks to information regarding each detected object that is provided (Goel, ¶[0021], “For example, enforcing consistency may comprise determining an uncertainty associated with a task. … examples include, but are not limited to, comparing (e.g., determining a difference between) the ROI output by the network a bounding region determined based on one or more of the instance segmentation, semantic segmentation, and/or direction data; projecting a three-dimensional ROI into the image frame and comparing the resulting projected region with the two-dimensional ROI; determining a difference between lidar data and depth data output by the ML architecture; determining a difference between lidar data, depth data, and/or a bounding region associated with a three-dimensional ROI, and the like.”). 
to an autonomous driving system (Deegan, ¶[0041], “In some embodiments, one or more services of the internal computing system 510 are configured to send and receive communications to remote computing system 550 for such reasons as reporting data for training and evaluating machine learning algorithms, requesting assistance from remoting computing system or a human operator via remote computing system 550”). Regarding Claim 18, Goel in view of Hofman and further in view of Koseki and Deegan teaches: The apparatus of claim 10, wherein the one or more processors are further configured to perform a mitigation action in response to recognizing the attack that reports the detected attack (Koseki, ¶[0095], “The determination unit 140 transmits a determination flag and a detection result to the output unit 150. The output unit 150 accepts the determination flag and the detection result from the determination unit 140.” ¶[0097], “In step S160, the output unit 150 outputs a processing result 192. For example, the output unit 150 displays the processing result 192 onto a display.”) to a remote system (Deegan, ¶[0041], “In some embodiments, one or more services of the internal computing system 510 are configured to send and receive communications to remote computing system 550 for such reasons as reporting data for training and evaluating machine learning algorithms, requesting assistance from remoting computing system or a human operator via remote computing system 550”). Conclusion Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. 
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to EDGAR W XIE whose telephone number is (703)756-4777. The examiner can normally be reached Monday - Friday, 8:00am - 5:00pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, JEFFREY PWU, can be reached at (571)272-6798. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/EDGAR W XIE/
Examiner, Art Unit 2433

/WASIKA NIPA/
Primary Examiner, Art Unit 2433

Prosecution Timeline

Dec 04, 2023 — Application Filed
Aug 19, 2025 — Non-Final Rejection (§103, §DP)
Oct 17, 2025 — Interview Requested
Oct 27, 2025 — Applicant Interview (Telephonic)
Oct 27, 2025 — Examiner Interview Summary
Nov 18, 2025 — Response Filed
Feb 06, 2026 — Examiner Interview (Telephonic)
Feb 20, 2026 — Final Rejection (§103, §DP)
Apr 02, 2026 — Examiner Interview Summary
Apr 02, 2026 — Applicant Interview (Telephonic)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602475
AGGREGATING INPUT/OUTPUT OPERATION FEATURES EXTRACTED FROM STORAGE DEVICES TO FORM A MACHINE LEARNING VECTOR TO CHECK FOR MALWARE
2y 5m to grant · Granted Apr 14, 2026
Patent 12579267
Methods and Systems for Analyzing Environment-Sensitive Malware with Coverage-Guided Fuzzing
2y 5m to grant · Granted Mar 17, 2026
Patent 12579281
Dynamic Prioritization of Vulnerability Risk Assessment Findings
2y 5m to grant · Granted Mar 17, 2026
Patent 12566844
SYSTEM AND METHOD FOR COLLABORATIVE SMART EVIDENCE GATHERING AND INVESTIGATION FOR INCIDENT RESPONSE, ATTACK SURFACE MANAGEMENT, AND FORENSICS IN A COMPUTING ENVIRONMENT
2y 5m to grant · Granted Mar 03, 2026
Patent 12513001
BLOCKCHAIN VERIFICATION OF DIGITAL CONTENT ATTRIBUTIONS
2y 5m to grant · Granted Dec 30, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 82%
With Interview: 99% (+37.5% lift)
Median Time to Grant: 2y 6m
PTA Risk: Moderate
Based on 17 resolved cases by this examiner. Grant probability derived from career allow rate.
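The arithmetic behind these projections can be sketched in a few lines of Python. The 14-grants-out-of-17-resolved figure comes from the page; everything else here is assumed for illustration: the interview split (9 interview cases, all granted, versus 8 non-interview cases with 5 grants) is a hypothetical reconstruction chosen only so that the numbers reproduce the reported +37.5% lift, and the lift itself is computed as the difference in allow rate between the two subsets, which is a guess at the tool's methodology, not a documented formula.

```python
from dataclasses import dataclass

@dataclass
class ResolvedCase:
    granted: bool
    had_interview: bool

# 14 grants out of 17 resolved cases, per the page. The interview split
# below is hypothetical: 9 interview cases (all granted) plus
# 8 non-interview cases (5 granted) happen to reproduce a +37.5% lift.
cases = (
    [ResolvedCase(granted=True, had_interview=True)] * 9
    + [ResolvedCase(granted=True, had_interview=False)] * 5
    + [ResolvedCase(granted=False, had_interview=False)] * 3
)

def allow_rate(subset):
    """Allow rate: grants divided by resolved cases."""
    return sum(c.granted for c in subset) / len(subset)

def interview_lift(subset):
    """Assumed definition: allow rate among cases with an interview
    minus allow rate among cases without one."""
    with_iv = [c for c in subset if c.had_interview]
    without_iv = [c for c in subset if not c.had_interview]
    return allow_rate(with_iv) - allow_rate(without_iv)

print(f"Career allow rate: {allow_rate(cases):.0%}")      # 82% (14/17)
print(f"Interview lift:    {interview_lift(cases):+.1%}") # +37.5% under these assumptions
```

Note that under this reconstruction the interview subset allows at 100%, while the page reports 99% "With Interview," so the tool presumably applies some smoothing or a different subset definition; treat this as a plausibility check on the headline numbers, not a reimplementation.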
