DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
This Office Action is in response to the communication filed on 14 Dec 2022.
Claims 1-20 are being considered on the merits.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 14 Dec 2022 has been considered. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, initialed and dated copies of Applicant's IDS form 1499 are attached to the instant Office action.
Drawings
The drawings filed on 14 Dec 2022 are accepted.
Claim Rejections - 35 USC § 103
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-10, 12-13, 17, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Stoycheva, Mihaela ("Uncertainty Estimation in Deep Neural Object Detectors for Autonomous Driving," Degree Project in Computer Science and Engineering, KTH Royal Institute of Technology, School of Electrical Engineering and Computer Science, Sweden, 2020; hereinafter, "Stoycheva") in view of Suprem et al. (arXiv:2009.05440v1 [cs.CV] 9 Sep 2020; hereinafter, "Suprem") and further in view of Patton et al. (US 2019/0171428 A1; hereinafter, "Patton").
Claim 1, Stoycheva teaches:
measuring an intersection-over-union (IoU) (Stoycheva pg. 57 teaches an IoU score for each anchor box) conditioned expected calibration error (ECE) IoU-ECE by calculating an ECE (Stoycheva pg. 52 teaches ECE as a metric to evaluate calibration) under a white-box setting with detections from the dataset prior to non-maximum suppression (pre-NMS detections) that are conditioned on a specific IoU threshold; (Stoycheva pg. 53 teaches iteratively performing NMS by comparing similarity scores to a threshold).
upon a determination of the IoU-ECE being greater than a preset first threshold (Stoycheva pg 29 teaches a single i.e. same IoU threshold for bounding and object detection), performing a white-box (Stoycheva pg 59 teaches a validation dataset to optimize calibration) temperature scaling (WB-TS) calibration on the pre-NMS detections of the dataset to extract a temperature T; and (Stoycheva pg. 47 teaches use of temperature scaling to improve calibration scores)
identifying that the data drift has occurred upon a determination that temperature T exceeds a preset second threshold. (Stoycheva pg. 49 teaches using a temperature scaling with a softmax threshold that results in identification of out-of-distribution values indicative of data drift).
Stoycheva does not explicitly disclose but Suprem teaches:
identify a data drift in a trained object detection deep neural network (DNN) by: (Suprem sec. 2.2 and fig. 5 teaches and illustrates a deep neural network detecting concept drift.)
receiving a dataset based on real world use, (Suprem, sec. 2.1 teaches use of a real world dataset consisting of dashboard camera videos in different models)
wherein the dataset includes scores associated with each class in an image, (Suprem sec. 5.2 teaches assigning a confidence score regarding the probability of an object in a bounding box.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Suprem into Stoycheva. Stoycheva teaches uncertainty estimation in deep convolutional neural networks; Suprem teaches an unsupervised algorithm for detecting drift by comparing distributions of data with previous data. One of ordinary skill would have been motivated to combine the teachings of Suprem into Stoycheva in order to implement automated drift detection for higher throughput, better accuracy, and a smaller memory footprint. (Suprem, abstract).
Stoycheva, as modified does not explicitly disclose but Patton teaches:
A system, comprising a computer including a processor and a memory, the memory storing instructions executable by the processor programmed to: (Patton para. 0029 teaches a system of CPU, GPUs, microprocessor and the like. Patton para 0059 teaches memory containing computer-executable instructions).
including a background (BG) class; (Patton, para. 0038 teaches labelers to label raw data or respective features included geographic region, modalities, and metadata, each of which can be considered background i.e. non-object classes).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Patton into Stoycheva. Patton teaches a method for model management within a testing platform. One of ordinary skill would have been motivated to combine the teachings of Patton into Stoycheva in order to generate, validate, and deploy models with less cost and time. (Patton para. 0004).
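For clarity of the record, the two-stage drift test recited in claim 1 can be sketched in code as follows. This is a hypothetical illustration by the examiner, not Applicant's or any cited reference's actual implementation; the `extract_T` callable stands in for the WB-TS calibration step, which per the claim is performed only after the first threshold is exceeded.

```python
def drift_detected(iou_ece_value, extract_T, ece_threshold, T_threshold):
    """Two-stage gate of claim 1: WB-TS calibration is performed only upon a
    determination that the IoU-ECE exceeds the first threshold; drift is
    identified only when the extracted temperature T then exceeds the
    second threshold."""
    if iou_ece_value <= ece_threshold:
        return False  # calibration acceptable; WB-TS is not performed
    T = extract_T()   # WB-TS calibration on pre-NMS detections extracts T
    return T > T_threshold
```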
Claim 2, Stoycheva as modified teaches:
The system of claim 1, further comprising instructions to use (Patton para. 0059 teaches instructions to execute computer components) the extracted temperature T to calibrate incoming data upon identifying the data drift. (Stoycheva pg. 47 teaches use of temperature scaling to improve calibration scores)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Patton into Stoycheva, as modified, as set forth above with respect to claim 1.
Claim 3, Stoycheva as modified teaches:
The system of claim 2, wherein incoming data is calibrated by uniformly scaling logit vectors associated with the pre-NMS detections of the object detection DNN with the temperature T prior to a Sigmoid/Softmax layer. (Stoycheva pg. 49 teaches using a temperature scaling prior to applying softmax, where such scaling results in different softmax value assignations, and pages 58 and 59 teach detecting boxes and applying an NMS algorithm).
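The logit scaling recited in claim 3 may be illustrated by the following minimal sketch (examiner's hypothetical, assuming a softmax layer; the function names are illustrative only). Dividing the logits by T > 1 before the softmax softens overconfident scores without changing the predicted class.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def calibrate(logits, T):
    """Uniformly scale pre-NMS logit vectors by the temperature T
    prior to the softmax layer."""
    return softmax(np.asarray(logits, dtype=float) / T)
```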
Claim 4, Stoycheva as modified teaches:
The system of claim 1, further comprising instructions to perform additional learning on the object detection DNN upon identifying the data drift. (Suprem sec. 3 and fig. 3 illustrate an additional inference by a specialized model after drift is detected).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Suprem into Stoycheva as set forth above with respect to claim 1.
Claim 5, Stoycheva as modified teaches:
The system of claim 1, wherein the IoU-ECE is

IoU-ECE = Σ_{m=1}^{M} (|B_m| / n) · |acc(B_m) − conf(B_m)|

where n is the number of IoU-conditioned samples, M is a number of interval bins (M = 15), and B_m is a set of indices of samples whose prediction scores fall in an interval I_m = ((m − 1)/M, m/M]. (Stoycheva algorithm 2.51 on page 52 teaches this algorithm).
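The IoU-ECE summation recited in claim 5 follows the standard binned expected calibration error. A minimal sketch is given below (examiner's illustration, assuming a binary correctness label for each IoU-conditioned sample; not Applicant's actual implementation):

```python
import numpy as np

def iou_ece(confidences, correct, n_bins=15):
    """ECE = sum over bins m of (|B_m|/n) * |acc(B_m) - conf(B_m)|,
    where B_m holds the samples whose scores fall in ((m-1)/M, m/M]."""
    conf = np.asarray(confidences, dtype=float)
    corr = np.asarray(correct, dtype=bool)
    n, ece = len(conf), 0.0
    for m in range(1, n_bins + 1):
        in_bin = (conf > (m - 1) / n_bins) & (conf <= m / n_bins)
        if in_bin.any():
            acc_m = corr[in_bin].mean()    # accuracy over bin B_m
            conf_m = conf[in_bin].mean()   # mean confidence over B_m
            ece += (in_bin.sum() / n) * abs(acc_m - conf_m)
    return ece
```

A perfectly calibrated set of scores (e.g., 80% accuracy at confidence 0.8) yields an ECE of zero; fully confident but half-wrong scores yield 0.5.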
Claim 6, Stoycheva as modified teaches:
The system of claim 1, wherein the specific IoU threshold is set to be the same as an IoU threshold used for training the object detection DNN. (Stoycheva pg 29 teaches a single i.e. same IoU threshold for bounding and object detection).
Claim 7, Stoycheva as modified teaches:
determine background ground truth boxes in the dataset by comparing ground truth boxes with detection boxes generated by the object detection DNN using an intersection over union (IoU) threshold; (Stoycheva pg 29 teaches comparison of ground truth boxes and object detection using IoU threshold).
correct for class imbalance between the ground truth boxes and the background ground truth boxes in a ground truth class by updating the ground truth class to include a number of background ground truth boxes based on a number of ground truth boxes in the ground truth class; and (Stoycheva pg. 29 teaches use of a number of bounding boxes for both true positives indicating objects and true negatives indicating background, and use of interval N values for precision of plotting, i.e., to correct class imbalance; Stoycheva pg. 31 teaches background loss of a box).
determine a single scalar parameter of the temperature T for all classes by optimizing for a negative log likelihood (NLL) loss. (Stoycheva pg. 29 teaches a single temperature scaling to improve calibration scores, which is optimizing T; pg. 49 teaches a network trained to minimize a negative log likelihood, i.e., a loss function).
Stoycheva does not explicitly disclose but Patton teaches:
The system of claim 1, wherein the instructions for performing the WB-TS calibration on the pre-NMS detections of the dataset to extract the temperature T include instructions to: (Patton para 0059 teaches memory containing computer-executable instructions).
retrieve the dataset, wherein the dataset includes scores associated with each object class in an image, including a background (BG) class; (Patton, para. 0038 teaches labelers to label raw data or respective features included geographic region, modalities, and metadata, each of which can be considered background i.e. non-object classes).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Patton into Stoycheva, as modified, as set forth above with respect to claim 1.
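The extraction of a single scalar T by optimizing NLL, as recited in claim 7, may be sketched as a simple grid search (examiner's hypothetical; an actual implementation would more likely use gradient-based optimization, and the grid bounds here are illustrative assumptions):

```python
import numpy as np

def nll(logits, labels, T):
    """Negative log likelihood of the true labels under temperature T."""
    z = np.asarray(logits, dtype=float) / T
    z = z - z.max(axis=1, keepdims=True)          # numerical stability
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

def fit_temperature(logits, labels):
    """Single scalar T for all classes, chosen to minimize NLL."""
    grid = np.linspace(0.5, 10.0, 191)            # 0.05-wide steps
    losses = [nll(logits, labels, T) for T in grid]
    return float(grid[int(np.argmin(losses))])
```

An overconfident model (logit margin 4 but only 70% accuracy) is assigned T well above 1, while a well-calibrated model is assigned T near 1.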
Claim 8, Stoycheva as modified teaches:
The system of claim 1, wherein the preset first threshold is in a range of 2 to 4 times an IoU-ECE value calculated from a held out validation dataset and the preset second threshold is in a range of 2 to 4 times a temperature T extracted from the held out validation dataset. (Stoycheva pg. 82 teaches thresholds of 0.5 and 0.95, where such thresholds fall within a range of 2 to 4 times a calculated value; the calculated value includes several variables, including accuracies for every bin, where accuracies as shown in fig. 2.5 include values between 0.0 and 1.0; and Stoycheva at pg. 29 teaches different temperature values at different confidence values, which may include values between 0 and 1).
Claim 9, Stoycheva as modified teaches:
The system of claim 3, further including instructions to: after the Sigmoid/Softmax layer, perform non-maximum suppression on calibrated confidence scores with corresponding bounding box predictions to obtain final detections; and (Stoycheva pg. 59, algorithm 1 teaches non-maximum suppression algorithm on a set of bounding boxes to obtain a resulting set of bounding boxes).
Stoycheva does not explicitly disclose but Patton teaches:
actuate a vehicle component based upon an object detection determination of the object detection DNN. (Patton para. 0024 teaches event detection and notifications generated by the model, para. 0038 teaches object labeling, and para. 0027 teaches a car detecting a car crash, such that Patton generally teaches generating notifications for car crashes).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Patton into Stoycheva, as modified, as set forth above with respect to claim 1.
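The post-Softmax non-maximum suppression step addressed in claim 9 (and in Stoycheva's algorithm 1) may be illustrated as follows. This is the examiner's sketch of generic greedy NMS over calibrated confidence scores; the [x1, y1, x2, y2] box format is an assumption for illustration.

```python
import numpy as np

def box_area(b):
    return (b[..., 2] - b[..., 0]) * (b[..., 3] - b[..., 1])

def iou_one_to_many(box, boxes):
    """IoU between one box and an array of boxes ([x1, y1, x2, y2])."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    return inter / (box_area(box) + box_area(boxes) - inter)

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy NMS: repeatedly keep the highest-scoring box and suppress
    remaining boxes that overlap it above iou_thresh."""
    boxes = np.asarray(boxes, dtype=float)
    order = np.argsort(np.asarray(scores))[::-1]  # high score first
    keep = []
    while order.size:
        i = order[0]
        keep.append(int(i))
        rest = order[1:]
        order = rest[iou_one_to_many(boxes[i], boxes[rest]) <= iou_thresh]
    return keep
```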
Claim 10, Stoycheva as modified teaches:
The system of claim 7, wherein the instructions to correct for class imbalance include instructions to: determine an average number of pre-NMS detection boxes in non-BG classes as k; and (Stoycheva pg. 58-59 and algorithm 1 teach an empty set of boundary boxes (pre-NMS detection boxes) and measuring an IoU, where the highest score of each box, i.e., a "top" score, is added to the output set of bounding boxes)
extract a top k pre-NMS detection boxes in the BG class using corresponding model scores. (Stoycheva pg. 58-59 and algorithm 1 teach an empty set of boundary boxes (pre-NMS detection boxes) and measuring an IoU, where the highest score of each box, i.e., a "top" score, is added to the output set of bounding boxes)
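The top-k balancing of background detections recited in claim 10 may be sketched as follows (examiner's hypothetical; the class/score layout and the choice of rounding k to the nearest integer are illustrative assumptions):

```python
import numpy as np

def top_k_background(det_classes, det_scores, bg_class=0):
    """Set k to the average pre-NMS box count over non-BG classes, then
    return the indices of the k highest-scoring background boxes."""
    det_classes = np.asarray(det_classes)
    det_scores = np.asarray(det_scores, dtype=float)
    non_bg = det_classes[det_classes != bg_class]
    counts = [np.sum(non_bg == c) for c in np.unique(non_bg)]
    k = int(round(float(np.mean(counts)))) if counts else 0
    bg_idx = np.flatnonzero(det_classes == bg_class)
    top = bg_idx[np.argsort(det_scores[bg_idx])[::-1][:k]]
    return sorted(top.tolist())
```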
Claim 12, Stoycheva as modified teaches:
The method of claim 11, further comprising using (Patton para. 59 teaches instructions to execute computer components) the extracted temperature T to calibrate incoming data upon identifying the data drift. (Stoycheva pg. 47 teaches use of temperature scaling to improve calibration scores)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Patton into Stoycheva, as modified, as set forth above with respect to claim 1.
Claim 13, Stoycheva as modified teaches:
The method of claim 12, wherein incoming data is calibrated by uniformly scaling logit vectors associated with the pre-NMS detections of the object detection DNN with the temperature T prior to a Sigmoid/Softmax layer. (Stoycheva pg. 49 teaches using a temperature scaling prior to applying softmax where such scaling results in different softmax value assignations and page 58 and 59 teach detecting boxes and applying an NMS algorithm).
Claim 17, Stoycheva as modified teaches:
determining background ground truth boxes in the dataset by comparing ground truth boxes with detection boxes generated by the object detection DNN using an intersection over union (IoU) threshold; (Stoycheva pg. 29 teaches comparison of ground truth boxes and object detection using an IoU threshold).
correcting for class imbalance between the ground truth boxes and the background ground truth boxes in a ground truth class by updating the ground truth class to include a number of background ground truth boxes based on a number of ground truth boxes in the ground truth class; and (Stoycheva pg. 29 teaches use of a number of bounding boxes for both true positives indicating objects and true negatives indicating background, and use of interval N values for precision of plotting, i.e., to correct class imbalance; Stoycheva pg. 31 teaches background loss of a box).
determining a single scalar parameter of the temperature T for all classes by optimizing for a negative log likelihood (NLL) loss. (Stoycheva pg. 29 teaches a single temperature scaling to improve calibration scores, which is optimizing T; pg. 49 teaches a network trained to minimize a negative log likelihood, i.e., a loss function).
Stoycheva does not explicitly disclose, but Patton teaches:
The method of claim 11, wherein performing the WB-TS calibration on the pre-NMS detections of the dataset to extract the temperature T includes: retrieving the dataset, wherein the dataset includes scores associated with each object class in an image, including a background (BG) class; (Patton, para. 0038 teaches labelers to label raw data or respective features included geographic region, modalities, and metadata, each of which can be considered background i.e. non-object classes).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Patton into Stoycheva, as modified, as set forth above with respect to claim 1.
Claim 19, Stoycheva as modified teaches:
The method of claim 13, further including: after the Sigmoid/Softmax layer, performing non-maximum suppression on calibrated confidence scores with corresponding bounding box predictions to obtain final detections; and (Stoycheva pg. 59, algorithm 1 teaches non-maximum suppression algorithm on a set of bounding boxes to obtain a resulting set of bounding boxes).
Stoycheva does not explicitly disclose but Patton teaches:
actuating a vehicle component based upon an object detection determination of the object detection DNN. (Patton para. 0024 teaches event detection and notifications generated by the model, para. 0038 teaches object labeling, and para. 0027 teaches a car detecting a car crash, such that Patton generally teaches generating notifications for car crashes).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Patton into Stoycheva, as modified, as set forth above with respect to claim 1.
Claim 20, Stoycheva as modified teaches:
The method of claim 17, wherein correcting for class imbalance includes: determining an average number of pre-NMS detection boxes in non-BG classes as k; and (Stoycheva pg. 58-59 and algorithm 1 teach an empty set of boundary boxes (pre-NMS detection boxes) and measuring an IoU, where the highest score of each box, i.e., a "top" score, is added to the output set of bounding boxes)
extracting a top k pre-NMS detection boxes in the BG class using corresponding model scores. (Stoycheva pg. 58-59 and algorithm 1 teach an empty set of boundary boxes (pre-NMS detection boxes) and measuring an IoU, where the highest score of each box, i.e., a "top" score, is added to the output set of bounding boxes)
Claims 11, 16, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Stoycheva, in view of Suprem.
Claim 11, Stoycheva as modified teaches:
measuring an intersection-over-union (IoU) (Stoycheva pg. 57 teaches an IoU score for each anchor box) conditioned expected calibration error (ECE) IoU-ECE by calculating an ECE (Stoycheva pg. 52 teaches ECE as a metric to evaluate calibration) under a white-box setting with detections from the dataset prior to non-maximum suppression (pre-NMS detections) that are conditioned on a specific IoU threshold; (Stoycheva pg. 53 teaches iteratively performing NMS by comparing similarity scores to a threshold).
upon a determination of the IoU-ECE being greater than a preset first threshold (Stoycheva pg 29 teaches a single i.e. same IoU threshold for bounding and object detection), performing a white-box (Stoycheva pg 59 teaches a validation dataset to optimize calibration) temperature scaling (WB-TS) calibration on the pre-NMS detections of the dataset to extract a temperature T; and (Stoycheva pg. 47 teaches use of temperature scaling to improve calibration scores)
identifying that the data drift has occurred upon a determination that temperature T exceeds a preset second threshold. (Stoycheva pg. 49 teaches using a temperature scaling with a softmax threshold that results in identification of out-of-distribution values indicative of data drift).
Stoycheva does not specifically disclose but Suprem teaches:
A method to identify a data drift in a trained object detection deep neural network (DNN) by: (Suprem sec. 2.2 and fig. 5 teaches and illustrates a deep neural network detecting concept drift.)
receiving a dataset based on real world use, wherein the dataset includes scores associated with each class in an image, including a background (BG) class; (Suprem, sec. 2.1 teaches use of a real world dataset consisting of dashboard camera videos in different models)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Suprem into Stoycheva, as set forth above with respect to claim 1.
Claim 14, Stoycheva as modified teaches:
The method of claim 11, further comprising performing additional learning on the object detection DNN upon identifying the data drift. (Suprem sec. 3 and fig. 3 illustrate an additional inference by a specialized model after drift is detected).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Suprem into Stoycheva, as set forth above with respect to claim 1.
Claim 15:
The method of claim 11, wherein the IoU-ECE is

IoU-ECE = Σ_{m=1}^{M} (|B_m| / n) · |acc(B_m) − conf(B_m)|

where n is the number of IoU-conditioned samples, M is a number of interval bins (M = 15), and B_m is a set of indices of samples whose prediction scores fall in an interval I_m = ((m − 1)/M, m/M]. (Stoycheva algorithm 2.51 on page 52 teaches this algorithm).
Claim 16, Stoycheva as modified teaches:
The method of claim 11, wherein the specific IoU threshold is set to be the same as an IoU threshold used for training the object detection DNN. (Stoycheva pg 29 teaches a single i.e. same IoU threshold for bounding and object detection).
Claim 18, Stoycheva as modified teaches:
The method of claim 11, wherein the preset first threshold is in a range of 2 to 4 times an IoU-ECE value calculated from a held out validation dataset and the preset second threshold is in a range of 2 to 4 times a temperature T extracted from the held out validation dataset. (Stoycheva pg. 82 teaches thresholds of 0.5 and 0.95, where such thresholds fall within a range of 2 to 4 times a calculated value; the calculated value includes several variables, including accuracies for every bin, where accuracies as shown in fig. 2.5 include values between 0.0 and 1.0; and Stoycheva at pg. 29 teaches different temperature values at different confidence values, which may include values between 0 and 1).
Prior Art
Gawlikowski et al. (arXiv:2107.03342v3 [cs.LG] 18 Jan 2022; hereinafter, "Gawlikowski") teaches a comprehensive overview of uncertainty estimation in neural networks, reviews recent advances in the field, highlights current challenges, and identifies potential research opportunities.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Sally T. Ley whose telephone number is (571)272-3406. The examiner can normally be reached Monday - Thursday, 10:00am - 6:00pm ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Viker Lamardo can be reached at (571) 270-5871. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/STL/Examiner, Art Unit 2147
/ERIC NILSSON/Primary Examiner, Art Unit 2151