Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Status
Claims 1-12 are pending.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 01/07/2025 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Claim Objections
Claim 1 is objected to because of the following informalities: “patchs”, “aquired”, and “agaist” are all spelled incorrectly. Appropriate correction is required.
Claim 8 is objected to because of the following informalities: limitation b) is missing a semicolon. Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 8-9 and 11-12 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Claim 8 recites the limitation "wherein protection is provided to YOLO object detectors by" in lines 1-2. There is insufficient antecedent basis for this limitation in the claim. Claim 6 does not recite protecting YOLO object detectors. For the sake of examination, claim 8 will be interpreted as intending to be dependent on claim 5.
Claim 9 recites the limitation "wherein the Isolation Forest (iForest) algorithm is used" in line 1. There is insufficient antecedent basis for this limitation in the claim. Claim 1 does not recite an Isolation Forest algorithm. For the sake of examination, claim 9 will be interpreted as intending to be dependent on claim 4.
Claims 11-12 are rejected for being dependent on claim 9.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1-5, 8-9, and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Xiang et al. (Xiang, C., & Mittal, P. (2021, November). Detectorguard: Provably securing object detectors against localized patch hiding attacks. In Proceedings of the 2021 ACM SIGSAC conference on computer and communications security (pp. 3177-3196), as provided in the IDS received 01/07/2025, hereinafter “Xiang”) in view of Xu et al. (Xu, D., Wang, Y., Meng, Y., & Zhang, Z. (2017, December). An improved data anomaly detection method based on isolation forest. In 2017 10th international symposium on computational intelligence and design (ISCID) (Vol. 2, pp. 287-291). IEEE., hereinafter “Xu”).
Regarding claim 1, Xiang discloses a method for real-time detection and mitigation of attacks on object detectors being fed by input images acquired by one or more imagers (Xiang Page 3177: “In this paper, we propose DetectorGuard as the first general framework for building provably robust object detectors against localized patch hiding attacks”), comprising:
mapping normal attributes of the outputs of an ML-model associated with said object detectors (Xiang Page 3182: “Objectness Explainer takes as inputs the predicted bounding boxes of Base Detector and the generated objectness map of Objectness Predictor, and tries to use each predicted bounding box to explain/match the high activation in the objectness map”);
creating an anomaly detection model being capable of identifying adversarial attacks in the form of adversarial patchs (Xiang Page 3182: “Objectness Explainer takes as inputs the predicted bounding boxes of Base Detector and the generated objectness map of Objectness Predictor, and tries to use each predicted bounding box to explain/match the high activation in the objectness map”), based solely on the outputs of said object detectors and without accessing the object detectors model or any original frames acquired by said one or more imagers (Xiang Page 3182: “Objectness Explainer takes as inputs the predicted bounding boxes of Base Detector and the generated objectness map of Objectness Predictor, and tries to use each predicted bounding box to explain/match the high activation in the objectness map”; only receives output from object detector in Fig. 1);
calculating the anomaly score for each object being detected by said ML-model object detectors (Xiang Page 3183: “If DetCluster(ôm) returns None, it means that no large cluster is found, or all objectness predicted by Objectness Predictor is explained by the bounding boxes predicted by Base Detector; ObjExplainer(·) then returns False (i.e., no attack detected). We note that this clustering operation further mitigates Clean Error 2 when the robust classifier predicts background as objects at only a few scattered locations. On the other hand, receiving a non-empty cluster set indicates that there are clusters of unexplained objectness activations in ôm (i.e., Base Detector misses an object but Objectness Predictor predicts high objectness). Objectness Explainer regards this as a sign of patch hiding attacks and returns True”);
comparing the anomaly scores of the detected objects to a preset threshold (Xiang Page 3183: “If DetCluster(ôm) returns None, it means that no large cluster is found, or all objectness predicted by Objectness Predictor is explained by the bounding boxes predicted by Base Detector; ObjExplainer(·) then returns False (i.e., no attack detected). We note that this clustering operation further mitigates Clean Error 2 when the robust classifier predicts background as objects at only a few scattered locations. On the other hand, receiving a non-empty cluster set indicates that there are clusters of unexplained objectness activations in ôm (i.e., Base Detector misses an object but Objectness Predictor predicts high objectness). Objectness Explainer regards this as a sign of patch hiding attacks and returns True”); and
protecting said object detectors agaist said attacks by identifying and mitigating the effects of the adversarial patch attacks using the comparison results (Xiang Page 3182: “In this case, our defense will find unexplained objectness and send out an attack alert”).
Xiang does not explicitly disclose an AI-based method, comprising:
mapping normal attributes of the outputs of an ML-model associated with said object detectors, using unsupervised learning.
However, Xu teaches an AI-based method (Xu Page 287: “Based on the idea of selective integration, the precision and the difference value are taken as the criterion, and the simulated annealing algorithm is used to select the isolation tree with high abnormality detection and differentity to optimize the forest. At the same time, the excess detection precision is small and the difference is small Isolation tree improves the forest construction process of isolated forests, which improves the efficiency of the algorithm and improves the efficiency of the algorithm”), comprising:
mapping normal attributes of the outputs of an ML-model associated with said object detectors, using unsupervised learning (the isolation tree algorithm of Xu uses unsupervised learning).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the AI-based unsupervised learning model of Xu into the method of Xiang because AI models can improve themselves through learning, leading to more accurate results, and because the model of Xu has improved efficiency and accuracy (Xu Page 287). This motivation for the combination of Xiang and Xu is supported by KSR exemplary rationale (G), some teaching, suggestion, or motivation in the prior art that would have led one of ordinary skill to modify the prior art reference or to combine prior art reference teachings to arrive at the claimed invention, and exemplary rationale (D), applying a known technique to a known device (method, or product) ready for improvement to yield predictable results.
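For illustration only (not part of the record), the claim 1 flow mapped above — scoring each detected object using only the detector's outputs and comparing the scores to a preset threshold — can be sketched as follows. The `toy_score` function is a hypothetical stand-in for a trained anomaly model, not a method taught by Xiang or Xu.

```python
# Minimal sketch of the claim 1 flow: score each detected object using only
# the detector's outputs, then compare to a preset threshold. The scoring
# rule below is a hypothetical stand-in for a trained anomaly model.

def flag_adversarial(detections, anomaly_score, threshold=0.6):
    """Return the detections whose anomaly score exceeds the threshold.

    detections    -- list of dicts holding only the OD's outputs (box, scores)
    anomaly_score -- callable mapping one detection to a float in [0, 1]
    """
    flagged = []
    for det in detections:
        if anomaly_score(det) > threshold:
            flagged.append(det)  # candidate adversarial-patch effect
    return flagged

# Toy scoring rule (illustrative only): low objectness paired with a large
# bounding box is treated as anomalous.
def toy_score(det):
    w = det["xmax"] - det["xmin"]
    h = det["ymax"] - det["ymin"]
    return (1.0 - det["objectness"]) * min(1.0, (w * h) / 10000.0)

dets = [
    {"xmin": 0, "ymin": 0, "xmax": 50, "ymax": 40, "objectness": 0.95},
    {"xmin": 10, "ymin": 10, "xmax": 210, "ymax": 160, "objectness": 0.20},
]
print(len(flag_adversarial(dets, toy_score)))  # the low-objectness box is flagged
```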
Regarding claim 2, Xiang discloses the method, wherein the normal attributes of the OD’s outputs are objects’ bounding boxes and confidence vectors (Xiang Fig. 1: input to objectness explainer is base detector bounding boxes and objectness predictor confidence vectors).
Regarding claim 3, Xiang discloses the method, wherein detection is performed, based only on the output of the ML-model being the detected bounding boxes and confidence vectors (Xiang Fig. 1: input to objectness explainer is base detector bounding boxes and objectness predictor confidence vectors).
Regarding claim 4, Xiang does not explicitly disclose the method, wherein the ML-model of the protected AI-based object detector is the Isolation Forest algorithm.
However, Xu teaches the method, wherein the ML-model of the protected AI-based object detector is the Isolation Forest algorithm (Xu Page 287: “Based on the idea of selective integration, the precision and the difference value are taken as the criterion, and the simulated annealing algorithm is used to select the isolation tree with high abnormality detection and differentity to optimize the forest. At the same time, the excess detection precision is small and the difference is small Isolation tree improves the forest construction process of isolated forests, which improves the efficiency of the algorithm and improves the efficiency of the algorithm”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the Isolation Forest algorithm of Xu into the method of Xiang because the Isolation Forest algorithm of Xu has improved efficiency and accuracy (Xu Page 287). This motivation for the combination of Xiang and Xu is supported by KSR exemplary rationale (G), some teaching, suggestion, or motivation in the prior art that would have led one of ordinary skill to modify the prior art reference or to combine prior art reference teachings to arrive at the claimed invention, and exemplary rationale (D), applying a known technique to a known device (method, or product) ready for improvement to yield predictable results.
Regarding claim 5, Xiang discloses the method, wherein protection is provided to the YOLO object detectors (Xiang Page 3179: “In our evaluation, we instantiate the Base Detector with YOLOv4 [2, 53], Faster R-CNN [45], and a hypothetical object detector that is perfect in the clean setting”).
Regarding claim 8, Xiang discloses the method, wherein protection is provided to YOLO object detectors by:
determining candidate’s bounding box (Xiang Fig. 1: objectness explainer has bounding boxes from base detector);
determining an objectness score (Xiang Fig. 1: objectness explainer has objectness score);
determining classes scores (Xiang Fig. 2: robust classification); and
for each object’s bounding box, assuming correlation between the location of the object within the frame (Xiang Pages 3182-3183: “In Line 25-31 of Algorithm 1, we use each predicted bounding box to match/explain the objectness predicted at the same location”), being relative to the imager (Xiang Pages 3182-3183: “In Line 25-31 of Algorithm 1, we use each predicted bounding box to match/explain the objectness predicted at the same location”), the size of the bounding box of the object (Xiang Page 3179: “Each bounding box b is represented as a tuple (𝑥min, 𝑦min, 𝑥max, 𝑦max, 𝑙), where 𝑥min, 𝑦min, 𝑥max, 𝑦max together illustrate the coordinates of the bounding box”), and the objectness and class scores (Xiang Page 3182: “A match happens when Base Detector and Objectness Predictor both predict a bounding box or high objectness at a specific location. In this simplest case, the objectness is well explained by the bounding box; our defense will consider the detection as correct and output the accurate bounding box and the class label predicted by Base Detector”).
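For illustration only (not part of the record), the YOLO-style outputs enumerated in claim 8 — candidate bounding box, objectness score, and per-class scores — can be sketched as a parse of one prediction vector. The vector layout below follows the conventional YOLO ordering and is an assumption for illustration, not taken from the cited references.

```python
def parse_yolo_prediction(vec):
    """Split one YOLO-style prediction vector into box, objectness, and
    class scores. Assumes the conventional layout
    [x_center, y_center, width, height, objectness, class_0, ..., class_{N-1}].
    """
    x, y, w, h, objectness = vec[:5]
    class_scores = vec[5:]
    return {"box": (x, y, w, h),
            "objectness": objectness,
            "class_scores": class_scores}

# Toy prediction with three class scores.
pred = [0.5, 0.5, 0.2, 0.3, 0.9, 0.1, 0.8, 0.1]
out = parse_yolo_prediction(pred)
print(out["objectness"], len(out["class_scores"]))  # 0.9 3
```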
Regarding claim 9, Xiang does not explicitly disclose the method, wherein the Isolation Forest (iForest) algorithm is used for anomaly detection by:
learning the patterns of the outputs of object detectors being related to benign objects in different locations in the frame;
inferring if a new object is benign or adversarial by:
b.1) randomly selecting features; and
b.2) constructing decision trees to isolate data points, where the height of the tree represents the anomaly score, and the final score is obtained by subtracting the average height of isolation trees in the ensemble from the data point’s isolation tree height.
However, Xu teaches the method, wherein the Isolation Forest (iForest) algorithm is used for anomaly detection by:
learning the patterns of the outputs of object detectors being related to benign objects in different locations in the frame (Xu Page 287: “Because they have strong sensitivity to segregation, the anomaly data is closer to the root node of the tree, and the normal data is far from the root node, so that the anomaly data can be detected with a small number of characteristic conditions”);
inferring if a new object is benign or adversarial by:
b.1) randomly selecting features (Xu Page 288: “To build iTree, randomly select an attribute A and a split value p from the data set D={d1,d2,⋯,dn}, and then divide each data object di by the value of its attribute A (called di(A)). If di(A)<p, then it is left in the left subtree and vice versa. In this way, the left and right subtrees are constructed iteratively until one of the following conditions is satisfied: a. there is only one data or several identical data in D; b. the tree reaches its maximum height”); and
b.2) constructing decision trees to isolate data points, where the height of the tree represents the anomaly score (Xu Page 288: “To build iTree, randomly select an attribute A and a split value p from the data set D={d1,d2,⋯,dn}, and then divide each data object di by the value of its attribute A (called di(A)). If di(A)<p, then it is left in the left subtree and vice versa. In this way, the left and right subtrees are constructed iteratively until one of the following conditions is satisfied: a. there is only one data or several identical data in D; b. the tree reaches its maximum height”), and the final score is obtained by subtracting the average height of isolation trees in the ensemble from the data point’s isolation tree height (Xu Page 288: “The anomaly score s of an instance d is defined as: s(d,n) = 2^(−E(h(d))/c(n)), where E(h(d)) is the average of h(d) from a collection of isolation trees”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the Isolation Forest algorithm of Xu into the method of Xiang because the Isolation Forest algorithm of Xu has improved efficiency and accuracy (Xu Page 287). This motivation for the combination of Xiang and Xu is supported by KSR exemplary rationale (G), some teaching, suggestion, or motivation in the prior art that would have led one of ordinary skill to modify the prior art reference or to combine prior art reference teachings to arrive at the claimed invention, and exemplary rationale (D), applying a known technique to a known device (method, or product) ready for improvement to yield predictable results.
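For illustration only (not part of the record), the anomaly score quoted from Xu, s(d,n) = 2^(−E(h(d))/c(n)), can be written out directly. The normalization constant c(n) below is the standard one from the iForest literature (average path length of an unsuccessful binary search over n points), an assumption since the quoted passage does not reproduce it.

```python
import math

EULER_GAMMA = 0.5772156649  # Euler-Mascheroni constant

def c(n):
    # Standard iForest normalization: average path length of an
    # unsuccessful BST search over n points, c(n) = 2H(n-1) - 2(n-1)/n,
    # with the harmonic number approximated as H(i) ~ ln(i) + gamma.
    if n <= 1:
        return 0.0
    return 2.0 * (math.log(n - 1) + EULER_GAMMA) - 2.0 * (n - 1) / n

def anomaly_score(avg_path_length, n):
    # s(d, n) = 2 ** (-E(h(d)) / c(n)); scores near 1 indicate anomalies,
    # scores around 0.5 or below indicate normal points.
    return 2.0 ** (-avg_path_length / c(n))

# A point isolated after a short average path scores higher (more anomalous)
# than a point that takes many splits to isolate.
print(anomaly_score(1.0, 256) > anomaly_score(10.0, 256))  # True
```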
Regarding claim 12, Xiang discloses the method, wherein anomaly detection is performed using Frame-wise detection or Sequence-based detection (Xiang Page 3188: “In this paper, we focus on object detection in the single-frame setting”).
Claim(s) 6 is rejected under 35 U.S.C. 103 as being unpatentable over the Xiang and Xu combination in view of Du et al. (Du, Y., Zhao, Z., Song, Y., Zhao, Y., Su, F., Gong, T., & Meng, H. (2023). Strongsort: Make deepsort great again. IEEE Transactions on Multimedia, 25, 8725-8737., hereinafter “Du”).
Regarding claim 6, the Xiang and Xu combination does not explicitly disclose the method, wherein protection is provided to the StrongSORT object-tracking algorithm.
However, Du teaches the method, wherein protection is provided to the StrongSORT object-tracking algorithm (Du Page 8726: “We propose StrongSORT, which equips DeepSORT with advanced modules (i.e., detector and embedding model) and some inference tricks. It can serve as a strong and fair baseline [for] other MOT methods”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the StrongSORT object-tracking algorithm of Du into the method of Xiang and Xu because it is a simple substitution of one object-tracking algorithm for another and because StrongSORT is a strong multi-object tracking method (Du Page 8726). This motivation for the combination of Xiang, Xu, and Du is supported by KSR exemplary rationale (G), some teaching, suggestion, or motivation in the prior art that would have led one of ordinary skill to modify the prior art reference or to combine prior art reference teachings to arrive at the claimed invention, exemplary rationale (D), applying a known technique to a known device (method, or product) ready for improvement to yield predictable results, and exemplary rationale (B), simple substitution of one known element for another to obtain predictable results.
Claim(s) 7 is rejected under 35 U.S.C. 103 as being unpatentable over the Xiang and Xu combination in view of Siddiqui et al. (Siddiqui, A. J., & Boukerche, A. (2021). A novel lightweight defense method against adversarial patches-based attacks on automated vehicle make and model recognition systems. Journal of Network and Systems Management, 29(4), 41., hereinafter “Siddiqui”).
Regarding claim 7, the Xiang and Xu combination does not explicitly disclose the method, wherein the imagers are selected from the group of:
cameras of traffic systems;
surveillance cameras in junctions and intersections.
However, Siddiqui teaches the method, wherein the imagers are selected from the group of:
- cameras of traffic systems;
- surveillance cameras in junctions and intersections (Siddiqui Page 2: “Possible adopters of this technology may include smart cities (for automated surveillance), security agencies and traffic analysts”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the method of Xiang and Xu in a setting as suggested by Siddiqui because it would prevent automated traffic systems and surveillance cameras from being susceptible to patch attacks. This motivation for the combination of Xiang, Xu, and Siddiqui is supported by KSR exemplary rationale (D), applying a known technique to a known device (method, or product) ready for improvement to yield predictable results.
Claim(s) 10 is rejected under 35 U.S.C. 103 as being unpatentable over the Xiang and Xu combination in view of Zhang et al. (Zhang, Y., Zhang, Y., Qi, J., Bin, K., Wen, H., Tong, X., & Zhong, P. (2022). Adversarial patch attack on multi-scale object detection for UAV remote sensing images. Remote Sensing, 14(21), 5298., hereinafter “Zhang”).
Regarding claim 10, Xiang discloses the method, wherein detection of attacked objects in a frame is performed by extracting the following features of benign objects that belong to a protected class:
width – the width of the object’s bounding box (Xiang Page 3179: “Each bounding box b is represented as a tuple (𝑥min, 𝑦min, 𝑥max, 𝑦max,𝑙), where 𝑥min, 𝑦min, 𝑥max, 𝑦max together illustrate the coordinates of the bounding box”, xmin to xmax is the width);
height – the height of the object’s bounding box (Xiang Page 3179: “Each bounding box b is represented as a tuple (𝑥min, 𝑦min, 𝑥max, 𝑦max,𝑙), where 𝑥min, 𝑦min, 𝑥max, 𝑦max together illustrate the coordinates of the bounding box”, ymin to ymax is the height);
objectness – the OD’s confidence that the object inside the bounding box is an object (Xiang Fig. 1: objectness explainer extracts objectness);
Nc – the object’s confidence scores for each possible object class (Xiang Page 3179: “Each bounding box b is represented as a tuple (𝑥min, 𝑦min, 𝑥max, 𝑦max,𝑙), where 𝑥min, 𝑦min, 𝑥max, 𝑦max together illustrate the coordinates of the bounding box, and 𝑙 ∈ L = {0, 1, · · · , 𝑁 − 1} denotes the predicted object label”).
The Xiang and Xu combination does not explicitly disclose the method, wherein detection of attacked objects in a frame is performed by extracting the following features of benign objects that belong to a protected class:
x center – the center of the object’s bounding box on the horizontal axis;
y center – the center of the object’s bounding box on the vertical axis.
However, Zhang teaches the method, wherein detection of attacked objects in a frame is performed by extracting the following features of benign objects that belong to a protected class:
- x center – the center of the object’s bounding box on the horizontal axis (Zhang Page 6: “In this work, we focus on the digital attack and physical attack against two detectors, Yolo-V3 and Yolo-V5, which are widely used in object detection. Given an input image, x ∈ R^(N×H×W) and the target object detector f(·). The outputs f(x) are a set of candidate bounding boxes Bb(x) = . . . (xbi, ybi) is the center of the i-th box”);
- y center – the center of the object’s bounding box on the vertical axis (Zhang Page 6: “In this work, we focus on the digital attack and physical attack against two detectors, Yolo-V3 and Yolo-V5, which are widely used in object detection. Given an input image, x ∈ R^(N×H×W) and the target object detector f(·). The outputs f(x) are a set of candidate bounding boxes Bb(x) = . . . (xbi, ybi) is the center of the i-th box”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate extracting the x and y centers as taught by Zhang into the method of Xiang and Xu because it would improve the detection of attacked objects by providing more data and because the centers would be easily derivable from the xmin, xmax, ymin, and ymax identified in Xiang. This motivation for the combination of Xiang, Xu, and Zhang is supported by KSR exemplary rationale (D), applying a known technique to a known device (method, or product) ready for improvement to yield predictable results.
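For illustration only (not part of the record), the six geometric features discussed for claim 10 — width, height, and the x/y centers — follow directly from Xiang's corner-coordinate tuple (xmin, ymin, xmax, ymax):

```python
def box_features(xmin, ymin, xmax, ymax):
    """Derive the geometric features discussed above from a bounding box
    given as corner coordinates (as in Xiang's tuple representation)."""
    return {
        "width": xmax - xmin,                 # horizontal extent
        "height": ymax - ymin,                # vertical extent
        "x_center": (xmin + xmax) / 2.0,      # center on the horizontal axis
        "y_center": (ymin + ymax) / 2.0,      # center on the vertical axis
    }

print(box_features(10, 20, 110, 70))
# {'width': 100, 'height': 50, 'x_center': 60.0, 'y_center': 45.0}
```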
Allowable Subject Matter
Claim 11 would be allowable if rewritten to overcome the rejection(s) under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, set forth in this Office action and to include all of the limitations of the base claim and any intervening claims.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to AIDAN KEUP whose telephone number is (703)756-4578. The examiner can normally be reached Monday - Friday 8:00-4:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Emily Terrell can be reached at (571) 270-3717. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/AIDAN KEUP/ Examiner, Art Unit 2666 /Molly Wilburn/Primary Examiner, Art Unit 2666