Prosecution Insights
Last updated: April 19, 2026
Application No. 18/919,558

APPARATUS FOR CONTROLLING VEHICLE AND METHOD THEREOF

Non-Final OA (§103, §112)
Filed: Oct 18, 2024
Examiner: GLENN III, FRANK T
Art Unit: 3662
Tech Center: 3600 (Transportation & Electronic Commerce)
Assignee: Kia Corporation
OA Round: 1 (Non-Final)
Grant Probability: 55% (Moderate)
OA Rounds: 1-2
To Grant: 3y 3m
With Interview: 60%

Examiner Intelligence

Career Allow Rate: 55% (grants 55% of resolved cases: 81 granted / 148 resolved; +2.7% vs TC avg)
Interview Lift: +5.1% for resolved cases with interview (moderate lift)
Avg Prosecution: 3y 3m typical timeline (29 currently pending)
Total Applications: 177 across all art units (career history)
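The interview lift above is presumably the percentage-point difference between the examiner's allowance rate with and without an interview; a minimal sketch of that arithmetic, assuming a simple difference (the report's exact methodology is not stated):

```python
# Sketch of the interview-lift arithmetic. Assumption: the lift is the simple
# percentage-point difference between the allowance rate with an interview and
# the allowance rate without one; the report's exact methodology is not stated.

def interview_lift(rate_with: float, rate_without: float) -> float:
    """Return the lift in percentage points."""
    return round(rate_with - rate_without, 1)

lift = interview_lift(60.0, 55.0)  # rounded figures as shown above
```

With the rounded figures shown (60% with an interview vs the 55% career rate) this yields 5.0 points; the reported +5.1% presumably comes from the unrounded underlying rates.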

Statute-Specific Performance

§101: 8.8% (-31.2% vs TC avg)
§103: 46.9% (+6.9% vs TC avg)
§102: 13.1% (-26.9% vs TC avg)
§112: 28.2% (-11.8% vs TC avg)

Based on career data from 148 resolved cases; Tech Center averages are estimates.

Office Action (§103, §112)

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Claim Objections

Claims 1 and 15 are objected to because of the following informalities:
- In claim 1, "determine, based on whether the first virtual box being associated with the second virtual box," should be "determine, based on whether the first virtual box is associated with the second virtual box,"
- In claim 1, "output, based on whether the second classification information being updated," should be "output, based on whether the second classification information is updated,"
- In claim 15, "determining, based on whether the first virtual box being associated with the second virtual box," should be "determining, based on whether the first virtual box is associated with the second virtual box,"
- In claim 15, "outputting, based on whether the second classification information being updated," should be "outputting, based on whether the second classification information is updated,"

Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C.
112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Regarding claim 1, the claim recites "initialize second classification information among first classification information and the second classification information." However, this limitation renders the initialization of second classification indefinite, as it is unclear whether the initialized second classification information is initialized purely from the second classification information (of a group including the first classification information and the second classification information), or if the initialized second classification is a separate category which includes information from both the first classification information and the second classification information. For the purposes of this examination, the initialization of second classification is being interpreted as initializing the second classification information from the second classification information (of a group including the first classification information and the second classification information).

Claims 2-14 are dependent upon claim 1 and therefore inherit the above-described deficiencies. Accordingly, claims 2-14 are rejected under similar reasoning as claim 1 above.
Regarding claim 15, the claim recites "initializing second classification information among first classification information and the second classification information." However, this limitation renders the initialization of second classification indefinite, as it is unclear whether the initialized second classification information is initialized purely from the second classification information (of a group including the first classification information and the second classification information), or if the initialized second classification is a separate category which includes information from both the first classification information and the second classification information. For the purposes of this examination, the initialization of second classification is being interpreted as initializing the second classification information from the second classification information (of a group including the first classification information and the second classification information).

Claims 16-20 are dependent upon claim 15 and therefore inherit the above-described deficiencies. Accordingly, claims 16-20 are rejected under similar reasoning as claim 15 above.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary.
Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-8 and 10-20 are rejected under 35 U.S.C. 103 as being unpatentable over Jiang et al. (US 2022/0222480 A1), hereinafter Jiang, in view of Shen et al. (US 2022/0222477 A1), hereinafter Shen.
Regarding claim 1, Jiang teaches an apparatus for controlling autonomous driving of a vehicle, the apparatus comprising: a sensor; Jiang teaches ([0070]): "In at least one embodiment, an image is captured from one or more image capturing systems of an autonomous vehicle and is processed by a system of said autonomous vehicle comprising one or more object detection neural networks." a memory configured to store a neural network model; Jiang teaches ([0124]): "In at least one embodiment, inference and/or training logic 615 may include, without limitation, code and/or data storage 601 to store forward and/or output weight and/or input/output data, and/or other parameters to configure neurons or layers of a neural network trained and/or used for inferencing in aspects of one or more embodiments... In at least one embodiment, any portion of code and/or data storage 601 may be included with other on-chip or off-chip data storage, including a processor's L1, L2, or L3 cache or system memory." and a processor configured to: Jiang teaches ([0125]): "In at least one embodiment, any portion of code and/or data storage 601 may be internal or external to one or more processors or other hardware logic devices or circuits." obtain, based on a cluster of points representing an external object detected by the sensor, a first virtual box; Jiang teaches ([0070]): "In at least one embodiment, referring to Algorithm 1, a system for bounding box determination (e.g., via a strong neighbor confidence and coordinate determination 110) receives or otherwise obtains a bounding box proposals coordinates B (e.g., a bounding box proposals coordinates 102), a bounding box proposals confidences S (e.g., a bounding box proposals confidences 104), a neighbor threshold Nt (e.g., a neighbor threshold 106), and a fusion threshold Ft (e.g., a fusion threshold 108) from one or more systems. 
In at least one embodiment, a bounding box proposals coordinates B and a bounding box proposals confidences S are referred to as candidate bounding box information and are output from one or more object detection neural networks from one or more images, and indicate locations (e.g., via coordinates) and confidences of bounding box proposals of objects depicted in said one or more images. In at least one embodiment, an image is captured from one or more image capturing systems of an autonomous vehicle and is processed by a system of said autonomous vehicle comprising one or more object detection neural networks." Paragraph [0163] suggests that the camera may be a digital camera such as a clear pixel camera. One of ordinary skill in the art would therefore recognize an image captured by such a digital camera as comprising a cluster of points (i.e., a cluster of pixels). obtain, based on inputting the cluster of points into the neural network model, a second virtual box; Jiang teaches ([0070]): "In at least one embodiment, referring to Algorithm 1, a system for bounding box determination (e.g., via a strong neighbor confidence and coordinate determination 110) receives or otherwise obtains a bounding box proposals coordinates B (e.g., a bounding box proposals coordinates 102), a bounding box proposals confidences S (e.g., a bounding box proposals confidences 104), a neighbor threshold Nt (e.g., a neighbor threshold 106), and a fusion threshold Ft (e.g., a fusion threshold 108) from one or more systems. In at least one embodiment, a bounding box proposals coordinates B and a bounding box proposals confidences S are referred to as candidate bounding box information and are output from one or more object detection neural networks from one or more images, and indicate locations (e.g., via coordinates) and confidences of bounding box proposals of objects depicted in said one or more images. 
In at least one embodiment, an image is captured from one or more image capturing systems of an autonomous vehicle and is processed by a system of said autonomous vehicle comprising one or more object detection neural networks." initialize second classification information among first classification information and the second classification information, Jiang teaches ([0070]): "In at least one embodiment, referring to Algorithm 1, a system for bounding box determination (e.g., via a strong neighbor confidence and coordinate determination 110) receives or otherwise obtains a bounding box proposals coordinates B (e.g., a bounding box proposals coordinates 102), a bounding box proposals confidences S (e.g., a bounding box proposals confidences 104), a neighbor threshold Nt (e.g., a neighbor threshold 106), and a fusion threshold Ft (e.g., a fusion threshold 108) from one or more systems. In at least one embodiment, a bounding box proposals coordinates B and a bounding box proposals confidences S are referred to as candidate bounding box information and are output from one or more object detection neural networks from one or more images, and indicate locations (e.g., via coordinates) and confidences of bounding box proposals of objects depicted in said one or more images. In at least one embodiment, an image is captured from one or more image capturing systems of an autonomous vehicle and is processed by a system of said autonomous vehicle comprising one or more object detection neural networks." Jiang further teaches ([0069]): "In at least one embodiment, a strong neighbor confidence and coordinate determination 110 is a collection of one or more hardware and/or software computing resources with instructions that, when executed, processes a plurality of bounding boxes in connection with a maximum confidence bounding box to determine a count, coordinates, and confidences of strong neighbor bounding boxes." 
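For orientation, the strong-neighbor determination Jiang describes in [0061] and [0069]-[0070] can be sketched roughly as follows; this is an illustrative reading only, all names are hypothetical, and Jiang's Algorithm 1 may differ in detail:

```python
# Illustrative sketch of the "strong neighbor" step that Jiang's Algorithm 1
# appears to describe ([0061], [0069]-[0070]): given proposal coordinates B,
# confidences S, and a neighbor threshold Nt, find the proposals that strongly
# overlap the maximum-confidence proposal. Hypothetical names throughout.

def iou(a, b):
    """Intersection over Union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def strong_neighbors(B, S, Nt):
    """Return (count, coordinates, confidences) of strong-neighbor proposals."""
    m = max(range(len(S)), key=lambda i: S[i])  # maximum-confidence proposal
    idx = [i for i in range(len(B)) if i != m and iou(B[i], B[m]) >= Nt]
    return len(idx), [B[i] for i in idx], [S[i] for i in idx]
```

Per Jiang's description, the resulting count, coordinates, and confidences would then feed the confidence-and-coordinate tuning step that produces the final boxes.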
The Examiner has interpreted bounding box proposals confidences S as first classification information, and strong neighbor confidence as second classification information. wherein the first classification information and the second classification information are included in the first virtual box; Jiang teaches ([0061]): "In at least one embodiment, a system for bounding box determination comprises a strong neighbor confidence and coordinate determination 110 and a main proposal confidence and coordinate tuning 118. In at least one embodiment, a strong neighbor confidence and coordinate determination 110 receives, calculates, or otherwise obtains candidate bounding box information comprising a bounding box proposals coordinates 102, a bounding box proposals confidences 104, a neighbor threshold 106, and a fusion threshold 108, and determines a strong neighbor count 112, a strong neighbor coordinates 114, and a strong neighbor confidences 116, which are utilized by a main proposal confidence and coordinate tuning 118 to determine bounding box information comprising a final coordinates 120 and a final confidences 122." Therefore, the Examiner has interpreted bounding box proposal confidence and strong neighbor confidence as being included in the first virtual box and the second virtual box. output, based on whether the second classification information being updated, the first virtual box by assigning at least one of the first classification information or the second classification information to the first virtual box; Jiang teaches ([0101]): " In at least one embodiment, an image 202C is an image with an updated maximum confidence bounding box proposal for an object visualized. In at least one embodiment, referring to FIG. 2, an image 202C depicts a maximum confidence bounding box (e.g., a maximum confidence bounding box associated with an image 202A and/or an image 202B) with an updated confidence value and updated coordinates for an object of image 202C. 
In at least one embodiment, referring to FIG. 2, an updated maximum confidence bounding box of an image 202C has a greater confidence value than a confidence value of a maximum confidence bounding box associated with an image 202A and/or an image 202B (e.g., a bounding box with a confidence value of 0.95), and indicates borders that completely encapsulate an object (e.g., a car object) of image 202C, as opposed to said maximum confidence bounding box associated with image 202A and/or image 202B, which has borders that partially encapsulate an object (e.g., a car object) of image 202A and/or image 202B." generate a signal indicating at least one of the first classification information or the second classification information assigned to the first virtual box; Jiang teaches ([0198]): "In at least one embodiment, a DLA may be used to run any type of network to enhance control and driving safety, including for example and without limitation, a neural network that outputs a measure of confidence for each object detection. In at least one embodiment, confidence may be represented or interpreted as a probability, or as providing a relative “weight” of each detection compared to other detections. In at least one embodiment, a confidence measure enables a system to make further decisions regarding which detections should be considered as true positive detections rather than false positive detections. In at least one embodiment, a system may set a threshold value for confidence and consider only detections exceeding threshold value as true positive detections. In an embodiment in which an automatic emergency braking (“AEB”) system is used, false positive detections would cause vehicle to automatically perform emergency braking, which is obviously undesirable. In at least one embodiment, highly confident detections may be considered as triggers for AEB. 
" The Examiner has interpreted the determination that a detection is a true positive detection as a signal indicating at least one of the first classification information or the second classification information assigned to the first virtual box. and control, based on the signal, autonomous driving of the vehicle. Jiang teaches ([0198]): "In at least one embodiment, a DLA may be used to run any type of network to enhance control and driving safety, including for example and without limitation, a neural network that outputs a measure of confidence for each object detection. In at least one embodiment, confidence may be represented or interpreted as a probability, or as providing a relative “weight” of each detection compared to other detections. In at least one embodiment, a confidence measure enables a system to make further decisions regarding which detections should be considered as true positive detections rather than false positive detections. In at least one embodiment, a system may set a threshold value for confidence and consider only detections exceeding threshold value as true positive detections. In an embodiment in which an automatic emergency braking (“AEB”) system is used, false positive detections would cause vehicle to automatically perform emergency braking, which is obviously undesirable. In at least one embodiment, highly confident detections may be considered as triggers for AEB. " Thus, highly confident detections may trigger automatic emergency braking (i.e., autonomous driving of the vehicle). However, while Jiang does teach updating the second classification information by using the second virtual box (see at least [0101]), Jiang does not outright teach determining, based on whether the first virtual box being associated with the second virtual box, whether to update the second classification information by using the second virtual box. 
Shen teaches non-maximum suppression for removing redundant bounding boxes corresponding to one or more objects within one or more digital images, comprising: determine, based on whether the first virtual box being associated with the second virtual box, whether to update the second classification information by using the second virtual box; Shen teaches ([0080]): "For example, in at least one embodiment, first parallel suppression sub-process, which is initiated with respect to a first candidate point and performed by second circuit 404, calculates an IoU value of first candidate point and a neighboring point. In at least one embodiment, first parallel suppression sub-process, which is performed by second circuit 404, determines whether an IoU value satisfies (e.g., is greater than) an IoU threshold and whether a confidence score of a candidate point satisfies a criterion pertaining to (e.g., is less than) a confidence score of a neighboring point. In at least one embodiment, first parallel suppression sub-process, performed by second circuit 404, identifies a candidate point as corresponding to a redundant bounding box to be removed in response to an IoU value satisfying (e.g., being greater than) an IoU threshold and a confidence score satisfying (e.g., being less than) a confidence score of a neighboring point. In at least one embodiment, first parallel suppression sub-process, performed by second circuit 404, can repeat calculations of IoU values and compare IoU values and confidence scores for each candidate point in a first set of candidate points that are within an area surrounding a respective candidate point." 
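One plausible reading of the suppression criterion Shen describes in [0080] (mark a candidate redundant when its IoU with a neighbor exceeds a threshold and its confidence is lower than that neighbor's) can be sketched as follows; names and structure here are hypothetical, not Shen's parallel implementation:

```python
# Sketch of Shen's suppression criterion ([0080]), as one possible reading:
# a candidate box is redundant when its IoU with a neighboring box exceeds a
# threshold AND its confidence is lower than that neighbor's. Hypothetical
# names; Shen's actual sub-process runs these checks in parallel circuits.

def iou(a, b):
    """Intersection over Union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def suppress(detections, iou_threshold=0.5):
    """Keep each (box, confidence) pair not made redundant by a
    higher-confidence overlapping neighbor."""
    kept = []
    for box, conf in detections:
        redundant = any(iou(box, other) > iou_threshold and conf < other_conf
                        for other, other_conf in detections
                        if (other, other_conf) != (box, conf))
        if not redundant:
            kept.append((box, conf))
    return kept
```

This is the standard non-maximum-suppression decision rule the quoted paragraph outlines, applied serially for readability.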
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Jiang to incorporate the teachings of Shen to provide determining, based on whether the first virtual box being associated with the second virtual box, whether to update the second classification information by using the second virtual box. Jiang and Shen are each directed towards similar pursuits in the field of bounding box determination for imaging systems of autonomous vehicles. Accordingly, one of ordinary skill in the art would find it advantageous to incorporate the teachings of Shen, as doing so allows for the determination of redundant bounding boxes which should be removed based on whether the first virtual box is associated with the second virtual box, as recognized by Shen (see at least [0080]). In the same paragraph, Shen provides the further benefit of allowing for repeat calculations of IoU (i.e., Intersection over Union) values and confidence scores.

Regarding claim 2, Jiang and Shen teach the aforementioned limitations of claim 1. Jiang further teaches: the first classification information comprises at least one of: first classes indicating types corresponding to the external object, Jiang teaches ([0066]): "In at least one embodiment, a bounding box proposals confidences 104, denoted by S, is a database or data structure (e.g., an array or list) that comprises confidence values of bounding box proposals of a bounding box proposals coordinates 102... In at least one embodiment, for example, a neural network (e.g., an object detection neural network) outputs a proposal for a bounding box for an object of an image with a confidence value of 0.95, which indicates that said neural network has determined that said bounding box proposal comprises said object of said image with a probability of 95% or 0.95."
wherein the external object is determined through the first virtual box, or first reliabilities respectively corresponding to the first classes, Jiang teaches ([0066]): " In at least one embodiment, a bounding box proposals confidences 104, denoted by S, is a database or data structure (e.g., an array or list) that comprises confidence values of bounding box proposals of a bounding box proposals coordinates 102... In at least one embodiment, for example, a neural network (e.g., an object detection neural network) outputs a proposal for a bounding box for an object of an image with a confidence value of 0.95, which indicates that said neural network has determined that said bounding box proposal comprises said object of said image with a probability of 95% or 0.95." and wherein the second classification information comprises at least one of: second classes indicating types corresponding to the external object, Jiang teaches ([0069]): "In at least one embodiment, a strong neighbor confidence and coordinate determination 110 is a collection of one or more hardware and/or software computing resources with instructions that, when executed, processes a plurality of bounding boxes in connection with a maximum confidence bounding box to determine a count, coordinates, and confidences of strong neighbor bounding boxes." wherein the external object is determined through the second virtual box, or second reliabilities respectively corresponding to the second classes. Jiang teaches ([0101]): "In at least one embodiment, an image 202C is an image with an updated maximum confidence bounding box proposal for an object visualized. In at least one embodiment, referring to FIG. 2, an image 202C depicts a maximum confidence bounding box (e.g., a maximum confidence bounding box associated with an image 202A and/or an image 202B) with an updated confidence value and updated coordinates for an object of image 202C." 
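As a minimal data-structure sketch of the claim 2 mapping above (a virtual box carrying first and second classification information, each comprising classes and per-class reliabilities), with all names hypothetical since neither Jiang nor Shen defines such a structure explicitly:

```python
# Minimal sketch of claim 2 as mapped here: a virtual (bounding) box carries
# first and second classification information, each comprising candidate
# classes and per-class reliabilities. Hypothetical names; neither Jiang nor
# Shen defines this structure explicitly.

from dataclasses import dataclass

@dataclass
class ClassificationInfo:
    classes: list            # types corresponding to the external object
    reliabilities: list      # per-class confidence, e.g. 0.95 per Jiang [0066]

@dataclass
class VirtualBox:
    coords: tuple                    # (x1, y1, x2, y2)
    first_info: ClassificationInfo   # e.g. proposal confidences (Jiang's S)
    second_info: ClassificationInfo  # e.g. strong-neighbor confidences

box = VirtualBox((0.0, 0.0, 10.0, 10.0),
                 ClassificationInfo(["car"], [0.95]),
                 ClassificationInfo(["car"], [0.97]))
```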
Regarding claim 3, Jiang and Shen teach the aforementioned limitations of claim 1. Jiang further teaches: and output, based on updating the second classification information, the first virtual box by assigning the updated second classification information to the first virtual box. Jiang teaches ([0101]): " In at least one embodiment, an image 202C is an image with an updated maximum confidence bounding box proposal for an object visualized. In at least one embodiment, referring to FIG. 2, an image 202C depicts a maximum confidence bounding box (e.g., a maximum confidence bounding box associated with an image 202A and/or an image 202B) with an updated confidence value and updated coordinates for an object of image 202C. In at least one embodiment, referring to FIG. 2, an updated maximum confidence bounding box of an image 202C has a greater confidence value than a confidence value of a maximum confidence bounding box associated with an image 202A and/or an image 202B (e.g., a bounding box with a confidence value of 0.95), and indicates borders that completely encapsulate an object (e.g., a car object) of image 202C, as opposed to said maximum confidence bounding box associated with image 202A and/or image 202B, which has borders that partially encapsulate an object (e.g., a car object) of image 202A and/or image 202B." However, Jiang does not outright teach that the processor is configured to: update, based on determining that the second classification information is to be updated, the second classification information. 
Shen further teaches: the processor is configured to: update, based on determining that the second classification information is to be updated, the second classification information; Shen teaches ([0080]): "For example, in at least one embodiment, first parallel suppression sub-process, which is initiated with respect to a first candidate point and performed by second circuit 404, calculates an IoU value of first candidate point and a neighboring point. In at least one embodiment, first parallel suppression sub-process, which is performed by second circuit 404, determines whether an IoU value satisfies (e.g., is greater than) an IoU threshold and whether a confidence score of a candidate point satisfies a criterion pertaining to (e.g., is less than) a confidence score of a neighboring point. In at least one embodiment, first parallel suppression sub-process, performed by second circuit 404, identifies a candidate point as corresponding to a redundant bounding box to be removed in response to an IoU value satisfying (e.g., being greater than) an IoU threshold and a confidence score satisfying (e.g., being less than) a confidence score of a neighboring point. In at least one embodiment, first parallel suppression sub-process, performed by second circuit 404, can repeat calculations of IoU values and compare IoU values and confidence scores for each candidate point in a first set of candidate points that are within an area surrounding a respective candidate point." It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Jiang and Shen to further incorporate the teachings of Shen to provide that the processor is configured to: update, based on determining that the second classification information is to be updated, the second classification information. 
Jiang and Shen are each directed towards similar pursuits in the field of bounding box determination for imaging systems of autonomous vehicles. Accordingly, one of ordinary skill in the art would find it advantageous to incorporate the teachings of Shen, as doing so allows for the determination of redundant bounding boxes which should be removed based on whether the first virtual box is associated with the second virtual box, as recognized by Shen (see at least [0080]). In the same paragraph, Shen provides the further benefit of allowing for repeat calculations of IoU (i.e., Intersection over Union) values and confidence scores.

Regarding claim 4, Jiang and Shen teach the aforementioned limitations of claim 1. However, Jiang does not outright teach that the processor is configured to: output, based on determining that the second classification information is not to be updated by using the second virtual box, the first virtual box by assigning the first classification information to the first virtual box.

Shen further teaches: the processor is configured to: output, based on determining that the second classification information is not to be updated by using the second virtual box, the first virtual box by assigning the first classification information to the first virtual box. Shen teaches ([0080]): "For example, in at least one embodiment, first parallel suppression sub-process, which is initiated with respect to a first candidate point and performed by second circuit 404, calculates an IoU value of first candidate point and a neighboring point. In at least one embodiment, first parallel suppression sub-process, which is performed by second circuit 404, determines whether an IoU value satisfies (e.g., is greater than) an IoU threshold and whether a confidence score of a candidate point satisfies a criterion pertaining to (e.g., is less than) a confidence score of a neighboring point.
In at least one embodiment, first parallel suppression sub-process, performed by second circuit 404, identifies a candidate point as corresponding to a redundant bounding box to be removed in response to an IoU value satisfying (e.g., being greater than) an IoU threshold and a confidence score satisfying (e.g., being less than) a confidence score of a neighboring point. In at least one embodiment, first parallel suppression sub-process, performed by second circuit 404, can repeat calculations of IoU values and compare IoU values and confidence scores for each candidate point in a first set of candidate points that are within an area surrounding a respective candidate point." One of ordinary skill in the art would recognize that if the second classification information is not to be updated (e.g., the confidence score is not satisfied), the first virtual box is assigned the first classification information, as the box is not found to be redundant. It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Jiang and Shen to further incorporate the teachings of Shen to provide that the processor is configured to: output, based on determining that the second classification information is not to be updated by using the second virtual box, the first virtual box by assigning the first classification information to the first virtual box. Jiang and Shen are each directed towards similar pursuits in the field of bounding box determination for imaging systems of autonomous vehicles. Accordingly, one of ordinary skill in the art would find it advantageous to incorporate the teachings of Shen, as doing so allows for the determination of redundant bounding boxes which should be removed based on whether the first virtual box is associated with the second virtual box, as recognized by Shen (see at least [0080]). 
In the same paragraph, Shen provides the further benefit of allowing for repeat calculations of IoU (i.e., Intersection over Union) values and confidence scores.

Regarding claim 5, Jiang and Shen teach the aforementioned limitations of claim 1. However, Jiang does not outright teach that the processor is configured to: determine whether the first virtual box is associated with the second virtual box, based on at least one of: an indication of whether the first virtual box is fused with the second virtual box, an indication of whether a box reliability of the second virtual box satisfies a threshold value, or an indication of whether the first virtual box overlaps the second virtual box and whether an overlap ratio between the first virtual box and the second virtual box satisfies a reference ratio.

Shen further teaches: the processor is configured to: determine whether the first virtual box is associated with the second virtual box, based on at least one of: an indication of whether the first virtual box is fused with the second virtual box, an indication of whether a box reliability of the second virtual box satisfies a threshold value, or an indication of whether the first virtual box overlaps the second virtual box and whether an overlap ratio between the first virtual box and the second virtual box satisfies a reference ratio. Shen teaches ([0080]): "For example, in at least one embodiment, first parallel suppression sub-process, which is initiated with respect to a first candidate point and performed by second circuit 404, calculates an IoU value of first candidate point and a neighboring point. In at least one embodiment, first parallel suppression sub-process, which is performed by second circuit 404, determines whether an IoU value satisfies (e.g., is greater than) an IoU threshold and whether a confidence score of a candidate point satisfies a criterion pertaining to (e.g., is less than) a confidence score of a neighboring point.
In at least one embodiment, first parallel suppression sub-process, performed by second circuit 404, identifies a candidate point as corresponding to a redundant bounding box to be removed in response to an IoU value satisfying (e.g., being greater than) an IoU threshold and a confidence score satisfying (e.g., being less than) a confidence score of a neighboring point. In at least one embodiment, first parallel suppression sub-process, performed by second circuit 404, can repeat calculations of IoU values and compare IoU values and confidence scores for each candidate point in a first set of candidate points that are within an area surrounding a respective candidate point." It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Jiang and Shen to further incorporate the teachings of Shen to provide that the processor is configured to: determine whether the first virtual box is associated with the second virtual box, based on at least one of: an indication of whether the first virtual box is fused with the second virtual box, an indication of whether a box reliability of the second virtual box satisfies a threshold value, or an indication of whether the first virtual box overlaps the second virtual box and whether an overlap ratio between the first virtual box and the second virtual box satisfies a reference ratio. Jiang and Shen are each directed towards similar pursuits in the field of bounding box determination for imaging systems of autonomous vehicles. Accordingly, one of ordinary skill in the art would find it advantageous to incorporate the teachings of Shen, as doing so allows for the determination of redundant bounding boxes which should be removed based on whether the first virtual box is associated with the second virtual box, as recognized by Shen (see at least [0080]). 
In the same paragraph, Shen provides the further benefit of allowing for repeat calculations of IoU (i.e., Intersection over Union) values and confidence scores. Regarding claim 6, Jiang and Shen teach the aforementioned limitations of claim 5. However, Jiang does not outright teach that the processor is configured to: determine that the first virtual box is associated with the second virtual box, based on at least one of: the first virtual box being fused with the second virtual box, the box reliability of the second virtual box satisfying the threshold value, or the first virtual box at least partially overlapping the second virtual box and the overlap ratio between the first virtual box and the second virtual box satisfying the reference ratio. Shen further teaches: the processor is configured to: determine that the first virtual box is associated with the second virtual box, based on at least one of: the first virtual box being fused with the second virtual box, the box reliability of the second virtual box satisfying the threshold value, or the first virtual box at least partially overlapping the second virtual box and the overlap ratio between the first virtual box and the second virtual box satisfying the reference ratio. Shen teaches ([0080]): "For example, in at least one embodiment, first parallel suppression sub-process, which is initiated with respect to a first candidate point and performed by second circuit 404, calculates an IoU value of first candidate point and a neighboring point. In at least one embodiment, first parallel suppression sub-process, which is performed by second circuit 404, determines whether an IoU value satisfies (e.g., is greater than) an IoU threshold and whether a confidence score of a candidate point satisfies a criterion pertaining to (e.g., is less than) a confidence score of a neighboring point. 
In at least one embodiment, first parallel suppression sub-process, performed by second circuit 404, identifies a candidate point as corresponding to a redundant bounding box to be removed in response to an IoU value satisfying (e.g., being greater than) an IoU threshold and a confidence score satisfying (e.g., being less than) a confidence score of a neighboring point. In at least one embodiment, first parallel suppression sub-process, performed by second circuit 404, can repeat calculations of IoU values and compare IoU values and confidence scores for each candidate point in a first set of candidate points that are within an area surrounding a respective candidate point." It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Jiang and Shen to further incorporate the teachings of Shen to provide that the processor is configured to: determine that the first virtual box is associated with the second virtual box, based on at least one of: the first virtual box being fused with the second virtual box, the box reliability of the second virtual box satisfying the threshold value, or the first virtual box at least partially overlapping the second virtual box and the overlap ratio between the first virtual box and the second virtual box satisfying the reference ratio. Jiang and Shen are each directed towards similar pursuits in the field of bounding box determination for imaging systems of autonomous vehicles. Accordingly, one of ordinary skill in the art would find it advantageous to incorporate the teachings of Shen, as doing so allows for the determination of redundant bounding boxes which should be removed based on whether the first virtual box is associated with the second virtual box, as recognized by Shen (see at least [0080]). 
In the same paragraph, Shen provides the further benefit of allowing for repeat calculations of IoU (i.e., Intersection over Union) values and confidence scores. Regarding claim 7, Jiang and Shen teach the aforementioned limitations of claim 5. However, Jiang does not outright teach that the processor is configured to: determine that the first virtual box is not associated with the second virtual box, based on at least one of: the first virtual box not being fused with the second virtual box, the box reliability of the second virtual box not satisfying the threshold value, the first virtual box not overlapping the second virtual box, or the overlap ratio between the first virtual box and the second virtual box not satisfying the reference ratio; and output, based on the first virtual box not being associated with the second virtual box, the first virtual box by assigning the first classification information to the first virtual box. Shen further teaches: the processor is configured to: determine that the first virtual box is not associated with the second virtual box, based on at least one of: the first virtual box not being fused with the second virtual box, the box reliability of the second virtual box not satisfying the threshold value, the first virtual box not overlapping the second virtual box, or the overlap ratio between the first virtual box and the second virtual box not satisfying the reference ratio; Shen teaches ([0080]): "For example, in at least one embodiment, first parallel suppression sub-process, which is initiated with respect to a first candidate point and performed by second circuit 404, calculates an IoU value of first candidate point and a neighboring point. 
In at least one embodiment, first parallel suppression sub-process, which is performed by second circuit 404, determines whether an IoU value satisfies (e.g., is greater than) an IoU threshold and whether a confidence score of a candidate point satisfies a criterion pertaining to (e.g., is less than) a confidence score of a neighboring point. In at least one embodiment, first parallel suppression sub-process, performed by second circuit 404, identifies a candidate point as corresponding to a redundant bounding box to be removed in response to an IoU value satisfying (e.g., being greater than) an IoU threshold and a confidence score satisfying (e.g., being less than) a confidence score of a neighboring point. In at least one embodiment, first parallel suppression sub-process, performed by second circuit 404, can repeat calculations of IoU values and compare IoU values and confidence scores for each candidate point in a first set of candidate points that are within an area surrounding a respective candidate point." The Examiner has interpreted the determination that a bounding box is not redundant and is not to be removed as a determination that the first virtual box is not associated with the second virtual box.

and output, based on the first virtual box not being associated with the second virtual box, the first virtual box by assigning the first classification information to the first virtual box.

Shen teaches ([0080]): "For example, in at least one embodiment, first parallel suppression sub-process, which is initiated with respect to a first candidate point and performed by second circuit 404, calculates an IoU value of first candidate point and a neighboring point. 
In at least one embodiment, first parallel suppression sub-process, which is performed by second circuit 404, determines whether an IoU value satisfies (e.g., is greater than) an IoU threshold and whether a confidence score of a candidate point satisfies a criterion pertaining to (e.g., is less than) a confidence score of a neighboring point. In at least one embodiment, first parallel suppression sub-process, performed by second circuit 404, identifies a candidate point as corresponding to a redundant bounding box to be removed in response to an IoU value satisfying (e.g., being greater than) an IoU threshold and a confidence score satisfying (e.g., being less than) a confidence score of a neighboring point. In at least one embodiment, first parallel suppression sub-process, performed by second circuit 404, can repeat calculations of IoU values and compare IoU values and confidence scores for each candidate point in a first set of candidate points that are within an area surrounding a respective candidate point." One of ordinary skill in the art would recognize that if the first virtual box is not associated with the second virtual box (i.e., the box is not redundant), the first virtual box is assigned the first classification information, as the box is not found to be redundant. 
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Jiang and Shen to further incorporate the teachings of Shen to provide that the processor is configured to: determine that the first virtual box is not associated with the second virtual box, based on at least one of: the first virtual box not being fused with the second virtual box, the box reliability of the second virtual box not satisfying the threshold value, the first virtual box not overlapping the second virtual box, or the overlap ratio between the first virtual box and the second virtual box not satisfying the reference ratio; and output, based on the first virtual box not being associated with the second virtual box, the first virtual box by assigning the first classification information to the first virtual box. Jiang and Shen are each directed towards similar pursuits in the field of bounding box determination for imaging systems of autonomous vehicles. Accordingly, one of ordinary skill in the art would find it advantageous to incorporate the teachings of Shen, as doing so allows for the determination of redundant bounding boxes which should be removed based on whether the first virtual box is associated with the second virtual box, as recognized by Shen (see at least [0080]). In the same paragraph, Shen provides the further benefit of allowing for repeat calculations of IoU (i.e., Intersection over Union) values and confidence scores. Regarding claim 8, Jiang and Shen teach the aforementioned limitations of claim 5. However, Jiang does not outright teach that the processor is configured to: determine, based on dividing a width and a length of the first virtual box at an interval, at least one of: whether the first virtual box overlaps the second virtual box, or the overlap ratio between the first virtual box and the second virtual box. 
Shen further teaches: the processor is configured to: determine, based on dividing a width and a length of the first virtual box at an interval, at least one of: whether the first virtual box overlaps the second virtual box, or the overlap ratio between the first virtual box and the second virtual box. Shen teaches ([0090]): "In at least one embodiment, a degree of overlap between two bounding boxes can be used for clustering bounding boxes. In at least one embodiment, a degree of overlap is determined using an IoU value between two boxes by computing an area of overlap (also referred to as intersection) divided by an area of union, such as illustrated by equation 600 in FIG. 6. In at least one embodiment, an IoU value is produced by equation 600 as area of overlap 602 of two bounding boxes divided by an area of union 604 of these two bounding boxes." FIG. 6, included below, demonstrates that the overlap ratio between the first virtual box and the second virtual box is determined based on dividing a width and length of the first virtual box at an interval (i.e., the interval of overlap with the other box) with the area of union.

[FIG. 6 of Shen (greyscale image): equation 600, computing an IoU value as area of overlap 602 divided by area of union 604]

It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Jiang and Shen to further incorporate the teachings of Shen to provide that the processor is configured to: determine, based on dividing a width and a length of the first virtual box at an interval, at least one of: whether the first virtual box overlaps the second virtual box, or the overlap ratio between the first virtual box and the second virtual box. Jiang and Shen are each directed towards similar pursuits in the field of bounding box determination for imaging systems of autonomous vehicles. 
Accordingly, one of ordinary skill in the art would find it advantageous to incorporate the teachings of Shen, as doing so allows for the determination of redundant bounding boxes which should be removed based on whether the first virtual box is associated with the second virtual box, as recognized by Shen (see at least [0080]). In the same paragraph, Shen provides the further benefit of allowing for repeat calculations of IoU (i.e., Intersection over Union) values and confidence scores. Regarding claim 10, Jiang and Shen teach the aforementioned limitations of claim 1. Jiang further teaches: the processor is configured to: change, based on a number of times that the first virtual box is associated with the second virtual box, in a plurality of frames including a frame including the cluster of points, at least one of a first information reliability of the first classification information or a second information reliability of the second classification information. Jiang teaches ([0101]): "In at least one embodiment, an image 202C is an image with an updated maximum confidence bounding box proposal for an object visualized. In at least one embodiment, referring to FIG. 2, an image 202C depicts a maximum confidence bounding box (e.g., a maximum confidence bounding box associated with an image 202A and/or an image 202B) with an updated confidence value and updated coordinates for an object of image 202C. In at least one embodiment, referring to FIG. 
2, an updated maximum confidence bounding box of an image 202C has a greater confidence value than a confidence value of a maximum confidence bounding box associated with an image 202A and/or an image 202B (e.g., a bounding box with a confidence value of 0.95), and indicates borders that completely encapsulate an object (e.g., a car object) of image 202C, as opposed to said maximum confidence bounding box associated with image 202A and/or image 202B, which has borders that partially encapsulate an object (e.g., a car object) of image 202A and/or image 202B." Jiang further teaches ([0070]): "In at least one embodiment, referring to Algorithm 1, a system for bounding box determination (e.g., via a strong neighbor confidence and coordinate determination 110) receives or otherwise obtains a bounding box proposals coordinates B (e.g., a bounding box proposals coordinates 102), a bounding box proposals confidences S (e.g., a bounding box proposals confidences 104), a neighbor threshold Nt (e.g., a neighbor threshold 106), and a fusion threshold Ft (e.g., a fusion threshold 108) from one or more systems. In at least one embodiment, a bounding box proposals coordinates B and a bounding box proposals confidences S are referred to as candidate bounding box information and are output from one or more object detection neural networks from one or more images, and indicate locations (e.g., via coordinates) and confidences of bounding box proposals of objects depicted in said one or more images. In at least one embodiment, an image is captured from one or more image capturing systems of an autonomous vehicle and is processed by a system of said autonomous vehicle comprising one or more object detection neural networks." Regarding claim 11, Jiang and Shen teach the aforementioned limitations of claim 2. 
Jiang further teaches: the processor is configured to: adjust, based on a plurality of frames including a frame including the cluster of points, at least one of the first reliabilities or the second reliabilities by performing normalization on at least one of the first reliabilities or the second reliabilities. Jiang teaches ([0101]): " In at least one embodiment, an image 202C is an image with an updated maximum confidence bounding box proposal for an object visualized. In at least one embodiment, referring to FIG. 2, an image 202C depicts a maximum confidence bounding box (e.g., a maximum confidence bounding box associated with an image 202A and/or an image 202B) with an updated confidence value and updated coordinates for an object of image 202C. In at least one embodiment, referring to FIG. 2, an updated maximum confidence bounding box of an image 202C has a greater confidence value than a confidence value of a maximum confidence bounding box associated with an image 202A and/or an image 202B (e.g., a bounding box with a confidence value of 0.95), and indicates borders that completely encapsulate an object (e.g., a car object) of image 202C, as opposed to said maximum confidence bounding box associated with image 202A and/or image 202B, which has borders that partially encapsulate an object (e.g., a car object) of image 202A and/or image 202B." Jiang further teaches ([0110]): "In at least one embodiment, a maximum confidence bounding box corresponds to any suitable bounding box of bounding box proposals, including bounding boxes with a first maximum confidence value, second maximum confidence value, first minimum confidence value, second minimum confidence value, average confidence value, median confidence value, and/or variations thereof." Regarding claim 12, Jiang and Shen teach the aforementioned limitations of claim 1. 
Jiang further teaches: the processor is configured to: determine a weight based on a time point at which a frame is obtained, in a plurality of frames including the frame, wherein the frame comprises the cluster of points, Jiang teaches ([0070]): "In at least one embodiment, referring to Algorithm 1, a system for bounding box determination (e.g., via a strong neighbor confidence and coordinate determination 110) receives or otherwise obtains a bounding box proposals coordinates B (e.g., a bounding box proposals coordinates 102), a bounding box proposals confidences S (e.g., a bounding box proposals confidences 104), a neighbor threshold Nt (e.g., a neighbor threshold 106), and a fusion threshold Ft (e.g., a fusion threshold 108) from one or more systems. In at least one embodiment, a bounding box proposals coordinates B and a bounding box proposals confidences S are referred to as candidate bounding box information and are output from one or more object detection neural networks from one or more images, and indicate locations (e.g., via coordinates) and confidences of bounding box proposals of objects depicted in said one or more images. In at least one embodiment, an image is captured from one or more image capturing systems of an autonomous vehicle and is processed by a system of said autonomous vehicle comprising one or more object detection neural networks." Jiang further teaches ([0198]): "In at least one embodiment, confidence may be represented or interpreted as a probability, or as providing a relative “weight” of each detection compared to other detections. In at least one embodiment, a confidence measure enables a system to make further decisions regarding which detections should be considered as true positive detections rather than false positive detections. In at least one embodiment, a system may set a threshold value for confidence and consider only detections exceeding threshold value as true positive detections. 
In an embodiment in which an automatic emergency braking (“AEB”) system is used, false positive detections would cause vehicle to automatically perform emergency braking, which is obviously undesirable. In at least one embodiment, highly confident detections may be considered as triggers for AEB." and wherein the weight is to be applied to at least one of the first classification information or the second classification information. Jiang teaches ([0198]): "In at least one embodiment, confidence may be represented or interpreted as a probability, or as providing a relative “weight” of each detection compared to other detections. In at least one embodiment, a confidence measure enables a system to make further decisions regarding which detections should be considered as true positive detections rather than false positive detections. In at least one embodiment, a system may set a threshold value for confidence and consider only detections exceeding threshold value as true positive detections. In an embodiment in which an automatic emergency braking (“AEB”) system is used, false positive detections would cause vehicle to automatically perform emergency braking, which is obviously undesirable. In at least one embodiment, highly confident detections may be considered as triggers for AEB." Regarding claim 13, Jiang and Shen teach the aforementioned limitations of claim 1. However, Jiang does not outright teach that the processor is configured to: change, based on a size of the first virtual box, at least one of a first information reliability of the first classification information or a second information reliability of the second classification information. Shen further teaches: the processor is configured to: change, based on a size of the first virtual box, at least one of a first information reliability of the first classification information or a second information reliability of the second classification information. 
Shen teaches ([0090]): "In at least one embodiment, a degree of overlap between two bounding boxes can be used for clustering bounding boxes. In at least one embodiment, a degree of overlap is determined using an IoU value between two boxes by computing an area of overlap (also referred to as intersection) divided by an area of union, such as illustrated by equation 600 in FIG. 6. In at least one embodiment, an IoU value is produced by equation 600 as area of overlap 602 of two bounding boxes divided by an area of union 604 of these two bounding boxes." Shen further teaches ([0108]): "In at least one embodiment, at block 804, processing logic calculates an IoU value of a respective candidate point and a neighboring point in an identified set and determines whether an IoU value satisfies (e.g., is greater than) an IoU threshold and a confidence score of a candidate point satisfies a criterion pertaining to (e.g., is less than) a confidence score of a neighboring point. In at least one embodiment, at block 804, processing logic identifies a candidate point as a redundant bounding box to be removed responsive to an IoU value satisfying an IoU threshold and a confidence score of a candidate point satisfying a criterion pertaining to a confidence score of a neighboring point." It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Jiang and Shen to further incorporate the teachings of Shen to provide that the processor is configured to: change, based on a size of the first virtual box, at least one of a first information reliability of the first classification information or a second information reliability of the second classification information. Jiang and Shen are each directed towards similar pursuits in the field of bounding box determination for imaging systems of autonomous vehicles. 
Accordingly, one of ordinary skill in the art would find it advantageous to incorporate the teachings of Shen, as doing so allows for the determination of redundant bounding boxes which should be removed based on whether the first virtual box is associated with the second virtual box, as recognized by Shen (see at least [0080]). In the same paragraph, Shen provides the further benefit of allowing for repeat calculations of IoU (i.e., Intersection over Union) values and confidence scores. Regarding claim 14, Jiang and Shen teach the aforementioned limitations of claim 1. However, Jiang does not outright teach that the processor is configured to: obtain, from the cluster of points, at least one of a width, a length, a height, a histogram, or a density of the cluster of points; and obtain, based on at least one of the width, the length, the height, the histogram, or the density, the first virtual box. Shen further teaches: the processor is configured to: obtain, from the cluster of points, at least one of a width, a length, a height, a histogram, or a density of the cluster of points; Shen teaches ([0098]): "In at least one embodiment, candidate point 702 can be a center point of reduced search space 700. In at least one embodiment, reduced search space 700 can be defined as a rectangle, a square, a circle, or other shapes. In at least one embodiment, candidate point 702 and neighboring point 704 can be represented with x and y coordinates and a corresponding bounding box can be represented with x and y coordinates and width w and height h." and obtain, based on at least one of the width, the length, the height, the histogram, or the density, the first virtual box. Shen teaches ([0098]): "In at least one embodiment, candidate point 702 can be a center point of reduced search space 700. In at least one embodiment, reduced search space 700 can be defined as a rectangle, a square, a circle, or other shapes. 
In at least one embodiment, candidate point 702 and neighboring point 704 can be represented with x and y coordinates and a corresponding bounding box can be represented with x and y coordinates and width w and height h." It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Jiang and Shen to further incorporate the teachings of Shen to provide that the processor is configured to: obtain, from the cluster of points, at least one of a width, a length, a height, a histogram, or a density of the cluster of points; and obtain, based on at least one of the width, the length, the height, the histogram, or the density, the first virtual box. Jiang and Shen are each directed towards similar pursuits in the field of bounding box determination for imaging systems of autonomous vehicles. Accordingly, one of ordinary skill in the art would find it advantageous to incorporate the teachings of Shen, as doing so allows for the determination of redundant bounding boxes which should be removed based on whether the first virtual box is associated with the second virtual box, as recognized by Shen (see at least [0080]). In the same paragraph, Shen provides the further benefit of allowing for repeat calculations of IoU (i.e., Intersection over Union) values and confidence scores. 
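For reference, the suppression logic described in Shen's paragraph [0080], together with the IoU computation of equation 600 ([0090]), can be sketched as follows. This is an illustrative sketch only, not code from either cited reference: the (x1, y1, x2, y2) box representation, the function names, and the 0.5 threshold are assumptions chosen for illustration.

```python
def iou(a, b):
    """Intersection over Union of two axis-aligned boxes given as (x1, y1, x2, y2).

    Mirrors equation 600 of Shen: area of overlap divided by area of union.
    """
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))  # width of the overlap
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))  # height of the overlap
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def suppress_redundant(boxes, scores, iou_threshold=0.5):
    """Return indices of boxes kept after the [0080]-style suppression test.

    A box is treated as redundant when its IoU with some neighboring box
    exceeds the IoU threshold AND its confidence score is lower than that
    neighbor's score; all other boxes are kept.
    """
    keep = []
    for i, (box, score) in enumerate(zip(boxes, scores)):
        redundant = any(
            iou(box, boxes[j]) > iou_threshold and score < scores[j]
            for j in range(len(boxes)) if j != i
        )
        if not redundant:
            keep.append(i)
    return keep

# Hypothetical example: two heavily overlapping detections and one separate one.
boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (20, 20, 30, 30)]
scores = [0.9, 0.6, 0.8]
print(suppress_redundant(boxes, scores))  # [0, 2] — the lower-confidence overlap is removed
```

As the claim-7 discussion above notes, a box that fails this redundancy test (no sufficiently overlapping, higher-confidence neighbor) is retained with its original classification rather than updated.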
Regarding claim 15, Jiang teaches a method performed by an apparatus of a vehicle for controlling autonomous driving of the vehicle, the method comprising: obtaining, based on a cluster of points representing an external object detected by a sensor, a first virtual box; Jiang teaches ([0070]): "In at least one embodiment, referring to Algorithm 1, a system for bounding box determination (e.g., via a strong neighbor confidence and coordinate determination 110) receives or otherwise obtains a bounding box proposals coordinates B (e.g., a bounding box proposals coordinates 102), a bounding box proposals confidences S (e.g., a bounding box proposals confidences 104), a neighbor threshold Nt (e.g., a neighbor threshold 106), and a fusion threshold Ft (e.g., a fusion threshold 108) from one or more systems. In at least one embodiment, a bounding box proposals coordinates B and a bounding box proposals confidences S are referred to as candidate bounding box information and are output from one or more object detection neural networks from one or more images, and indicate locations (e.g., via coordinates) and confidences of bounding box proposals of objects depicted in said one or more images. In at least one embodiment, an image is captured from one or more image capturing systems of an autonomous vehicle and is processed by a system of said autonomous vehicle comprising one or more object detection neural networks." Paragraph [0163] suggests that the camera may be a digital camera such as a clear pixel camera. One of ordinary skill in the art would therefore recognize an image captured by such a digital camera as comprising a cluster of points (i.e., a cluster of pixels). 
obtaining, based on inputting the cluster of points into a neural network model, a second virtual box; Jiang teaches ([0070]): "In at least one embodiment, referring to Algorithm 1, a system for bounding box determination (e.g., via a strong neighbor confidence and coordinate determination 110) receives or otherwise obtains a bounding box proposals coordinates B (e.g., a bounding box proposals coordinates 102), a bounding box proposals confidences S (e.g., a bounding box proposals confidences 104), a neighbor threshold Nt (e.g., a neighbor threshold 106), and a fusion threshold Ft (e.g., a fusion threshold 108) from one or more systems. In at least one embodiment, a bounding box proposals coordinates B and a bounding box proposals confidences S are referred to as candidate bounding box information and are output from one or more object detection neural networks from one or more images, and indicate locations (e.g., via coordinates) and confidences of bounding box proposals of objects depicted in said one or more images. In at least one embodiment, an image is captured from one or more image capturing systems of an autonomous vehicle and is processed by a system of said autonomous vehicle comprising one or more object detection neural networks." initializing second classification information among first classification information and the second classification information, Jiang teaches ([0070]): "In at least one embodiment, referring to Algorithm 1, a system for bounding box determination (e.g., via a strong neighbor confidence and coordinate determination 110) receives or otherwise obtains a bounding box proposals coordinates B (e.g., a bounding box proposals coordinates 102), a bounding box proposals confidences S (e.g., a bounding box proposals confidences 104), a neighbor threshold Nt (e.g., a neighbor threshold 106), and a fusion threshold Ft (e.g., a fusion threshold 108) from one or more systems. 
In at least one embodiment, a bounding box proposals coordinates B and a bounding box proposals confidences S are referred to as candidate bounding box information and are output from one or more object detection neural networks from one or more images, and indicate locations (e.g., via coordinates) and confidences of bounding box proposals of objects depicted in said one or more images. In at least one embodiment, an image is captured from one or more image capturing systems of an autonomous vehicle and is processed by a system of said autonomous vehicle comprising one or more object detection neural networks." Jiang further teaches ([0069]): "In at least one embodiment, a strong neighbor confidence and coordinate determination 110 is a collection of one or more hardware and/or software computing resources with instructions that, when executed, processes a plurality of bounding boxes in connection with a maximum confidence bounding box to determine a count, coordinates, and confidences of strong neighbor bounding boxes." The Examiner has interpreted bounding box proposals confidences S as first classification information, and strong neighbor confidence as second classification information. wherein the first classification information and the second classification information are included in the first virtual box; Jiang teaches ([0061]): "In at least one embodiment, a system for bounding box determination comprises a strong neighbor confidence and coordinate determination 110 and a main proposal confidence and coordinate tuning 118. 
In at least one embodiment, a strong neighbor confidence and coordinate determination 110 receives, calculates, or otherwise obtains candidate bounding box information comprising a bounding box proposals coordinates 102, a bounding box proposals confidences 104, a neighbor threshold 106, and a fusion threshold 108, and determines a strong neighbor count 112, a strong neighbor coordinates 114, and a strong neighbor confidences 116, which are utilized by a main proposal confidence and coordinate tuning 118 to determine bounding box information comprising a final coordinates 120 and a final confidences 122." Therefore, the Examiner has interpreted bounding box proposal confidence and strong neighbor confidence as being included in the first virtual box and the second virtual box. outputting, based on whether the second classification information being updated, the first virtual box by assigning at least one of the first classification information or the second classification information to the first virtual box; Jiang teaches ([0101]): " In at least one embodiment, an image 202C is an image with an updated maximum confidence bounding box proposal for an object visualized. In at least one embodiment, referring to FIG. 2, an image 202C depicts a maximum confidence bounding box (e.g., a maximum confidence bounding box associated with an image 202A and/or an image 202B) with an updated confidence value and updated coordinates for an object of image 202C. In at least one embodiment, referring to FIG. 
2, an updated maximum confidence bounding box of an image 202C has a greater confidence value than a confidence value of a maximum confidence bounding box associated with an image 202A and/or an image 202B (e.g., a bounding box with a confidence value of 0.95), and indicates borders that completely encapsulate an object (e.g., a car object) of image 202C, as opposed to said maximum confidence bounding box associated with image 202A and/or image 202B, which has borders that partially encapsulate an object (e.g., a car object) of image 202A and/or image 202B." generating a signal indicating at least one of the first classification information or the second classification information assigned to the first virtual box; Jiang teaches ([0198]): "In at least one embodiment, a DLA may be used to run any type of network to enhance control and driving safety, including for example and without limitation, a neural network that outputs a measure of confidence for each object detection. In at least one embodiment, confidence may be represented or interpreted as a probability, or as providing a relative “weight” of each detection compared to other detections. In at least one embodiment, a confidence measure enables a system to make further decisions regarding which detections should be considered as true positive detections rather than false positive detections. In at least one embodiment, a system may set a threshold value for confidence and consider only detections exceeding threshold value as true positive detections. In an embodiment in which an automatic emergency braking (“AEB”) system is used, false positive detections would cause vehicle to automatically perform emergency braking, which is obviously undesirable. In at least one embodiment, highly confident detections may be considered as triggers for AEB. 
" The Examiner has interpreted the determination that a detection is a true positive detection as a signal indicating at least one of the first classification information or the second classification information assigned to the first virtual box. and controlling, based on the signal, autonomous driving of the vehicle. Jiang teaches ([0198]): "In at least one embodiment, a DLA may be used to run any type of network to enhance control and driving safety, including for example and without limitation, a neural network that outputs a measure of confidence for each object detection. In at least one embodiment, confidence may be represented or interpreted as a probability, or as providing a relative “weight” of each detection compared to other detections. In at least one embodiment, a confidence measure enables a system to make further decisions regarding which detections should be considered as true positive detections rather than false positive detections. In at least one embodiment, a system may set a threshold value for confidence and consider only detections exceeding threshold value as true positive detections. In an embodiment in which an automatic emergency braking (“AEB”) system is used, false positive detections would cause vehicle to automatically perform emergency braking, which is obviously undesirable. In at least one embodiment, highly confident detections may be considered as triggers for AEB. " Thus, highly confident detections may trigger automatic emergency braking (i.e., autonomous driving of the vehicle). However, while Jiang does teach updating the second classification information by using the second virtual box (see at least [0101]), Jiang does not outright teach determining, based on whether the first virtual box being associated with the second virtual box, whether to update the second classification information by using the second virtual box. 
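The confidence gating Jiang describes in [0198], treating only detections above a confidence threshold as true positives and letting only those trigger automatic emergency braking, can be sketched as follows. The threshold value and all names are illustrative assumptions; Jiang specifies neither.

```python
CONFIDENCE_THRESHOLD = 0.9  # illustrative value; Jiang does not specify one

def true_positive_detections(detections, threshold=CONFIDENCE_THRESHOLD):
    """Keep only detections whose confidence exceeds the threshold,
    mirroring the true-positive gating described in Jiang [0198]."""
    return [d for d in detections if d["confidence"] > threshold]

def should_trigger_aeb(detections):
    """Trigger automatic emergency braking only on highly confident
    detections, so false positives do not cause spurious braking."""
    return len(true_positive_detections(detections)) > 0

detections = [
    {"label": "car", "confidence": 0.95},
    {"label": "car", "confidence": 0.40},  # gated out as a likely false positive
]
print(should_trigger_aeb(detections))  # True
```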
Shen teaches non-maximum suppression for removing redundant bounding boxes corresponding to one or more objects within one or more digital images, comprising: determining, based on whether the first virtual box being associated with the second virtual box, whether to update the second classification information by using the second virtual box; Shen teaches ([0080]): "For example, in at least one embodiment, first parallel suppression sub-process, which is initiated with respect to a first candidate point and performed by second circuit 404, calculates an IoU value of first candidate point and a neighboring point. In at least one embodiment, first parallel suppression sub-process, which is performed by second circuit 404, determines whether an IoU value satisfies (e.g., is greater than) an IoU threshold and whether a confidence score of a candidate point satisfies a criterion pertaining to (e.g., is less than) a confidence score of a neighboring point. In at least one embodiment, first parallel suppression sub-process, performed by second circuit 404, identifies a candidate point as corresponding to a redundant bounding box to be removed in response to an IoU value satisfying (e.g., being greater than) an IoU threshold and a confidence score satisfying (e.g., being less than) a confidence score of a neighboring point. In at least one embodiment, first parallel suppression sub-process, performed by second circuit 404, can repeat calculations of IoU values and compare IoU values and confidence scores for each candidate point in a first set of candidate points that are within an area surrounding a respective candidate point." 
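The suppression test Shen describes in [0080], remove a box when its IoU with a neighbor exceeds a threshold and its confidence is the lower of the two, is classic non-maximum suppression. A minimal sketch, with thresholds and names that are illustrative rather than Shen's:

```python
def iou(a, b):
    """Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def suppress(candidates, iou_threshold=0.5):
    """Drop a candidate when it overlaps a neighbor above the IoU
    threshold AND has the lower confidence, per Shen [0080]'s test."""
    keep = []
    for i, (box, score) in enumerate(candidates):
        redundant = any(
            iou(box, other) > iou_threshold and score < other_score
            for j, (other, other_score) in enumerate(candidates) if j != i
        )
        if not redundant:
            keep.append((box, score))
    return keep

cands = [((0, 0, 10, 10), 0.95), ((1, 1, 10, 10), 0.60), ((20, 20, 30, 30), 0.80)]
print(suppress(cands))  # the 0.60 box is suppressed; the other two survive
```

Note the two-part criterion: high overlap alone does not suppress a box; it must also lose the confidence comparison, which is why the 0.95 box survives its 0.81-IoU neighbor.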
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Jiang to incorporate the teachings of Shen to provide determining, based on whether the first virtual box being associated with the second virtual box, whether to update the second classification information by using the second virtual box. Jiang and Shen are each directed towards similar pursuits in the field of bounding box determination for imaging systems of autonomous vehicles. Accordingly, one of ordinary skill in the art would find it advantageous to incorporate the teachings of Shen, as doing so allows for the determination of redundant bounding boxes which should be removed based on whether the first virtual box is associated with the second virtual box, as recognized by Shen (see at least [0080]). In the same paragraph, Shen provides the further benefit of allowing for repeat calculations of IoU (i.e., Intersection over Union) values and confidence scores.

Regarding claim 16, Jiang and Shen teach the aforementioned limitations of claim 15. Jiang further teaches: the first classification information comprises at least one of: first classes indicating types corresponding to the external object, Jiang teaches ([0066]): "In at least one embodiment, a bounding box proposals confidences 104, denoted by S, is a database or data structure (e.g., an array or list) that comprises confidence values of bounding box proposals of a bounding box proposals coordinates 102... In at least one embodiment, for example, a neural network (e.g., an object detection neural network) outputs a proposal for a bounding box for an object of an image with a confidence value of 0.95, which indicates that said neural network has determined that said bounding box proposal comprises said object of said image with a probability of 95% or 0.95."
wherein the external object is determined through the first virtual box, or first reliabilities respectively corresponding to the first classes, Jiang teaches ([0066]): " In at least one embodiment, a bounding box proposals confidences 104, denoted by S, is a database or data structure (e.g., an array or list) that comprises confidence values of bounding box proposals of a bounding box proposals coordinates 102... In at least one embodiment, for example, a neural network (e.g., an object detection neural network) outputs a proposal for a bounding box for an object of an image with a confidence value of 0.95, which indicates that said neural network has determined that said bounding box proposal comprises said object of said image with a probability of 95% or 0.95." and wherein the second classification information comprises at least one of: second classes indicating types corresponding to the external object, Jiang teaches ([0069]): "In at least one embodiment, a strong neighbor confidence and coordinate determination 110 is a collection of one or more hardware and/or software computing resources with instructions that, when executed, processes a plurality of bounding boxes in connection with a maximum confidence bounding box to determine a count, coordinates, and confidences of strong neighbor bounding boxes." wherein the external object is determined through the second virtual box, or second reliabilities respectively corresponding to the second classes. Jiang teaches ([0101]): "In at least one embodiment, an image 202C is an image with an updated maximum confidence bounding box proposal for an object visualized. In at least one embodiment, referring to FIG. 2, an image 202C depicts a maximum confidence bounding box (e.g., a maximum confidence bounding box associated with an image 202A and/or an image 202B) with an updated confidence value and updated coordinates for an object of image 202C." 
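Claim 16's classification structure, classes paired with per-class reliabilities from each box source, can be illustrated as follows. The values and the dictionary layout are hypothetical, not drawn from the claims or Jiang.

```python
# Hypothetical classification payloads for one detected external object.
first_classification = {            # from the rule-based (cluster-fit) box
    "classes": ["car", "truck"],    # first classes: candidate object types
    "reliabilities": [0.72, 0.18],  # first reliabilities, one per class
}
second_classification = {           # from the neural-network box
    "classes": ["car", "truck"],
    "reliabilities": [0.95, 0.03],
}

def best_class(info):
    """Return the (class, reliability) pair with the highest reliability."""
    pairs = zip(info["classes"], info["reliabilities"])
    return max(pairs, key=lambda p: p[1])

print(best_class(first_classification))   # ('car', 0.72)
print(best_class(second_classification))  # ('car', 0.95)
```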
Regarding claim 17, Jiang and Shen teach the aforementioned limitations of claim 15. Jiang further teaches: and performing one of: outputting, based on updating the second classification information, the first virtual box by assigning the updated second classification information to the first virtual box; or outputting the first virtual box by assigning the first classification information to the first virtual box. Jiang teaches ([0101]): " In at least one embodiment, an image 202C is an image with an updated maximum confidence bounding box proposal for an object visualized. In at least one embodiment, referring to FIG. 2, an image 202C depicts a maximum confidence bounding box (e.g., a maximum confidence bounding box associated with an image 202A and/or an image 202B) with an updated confidence value and updated coordinates for an object of image 202C. In at least one embodiment, referring to FIG. 2, an updated maximum confidence bounding box of an image 202C has a greater confidence value than a confidence value of a maximum confidence bounding box associated with an image 202A and/or an image 202B (e.g., a bounding box with a confidence value of 0.95), and indicates borders that completely encapsulate an object (e.g., a car object) of image 202C, as opposed to said maximum confidence bounding box associated with image 202A and/or image 202B, which has borders that partially encapsulate an object (e.g., a car object) of image 202A and/or image 202B." However, Jiang does not outright teach updating, based on determining whether the second classification information is to be updated, the second classification information. 
Shen further teaches: updating, based on determining whether the second classification information is to be updated, the second classification information; Shen teaches ([0080]): "For example, in at least one embodiment, first parallel suppression sub-process, which is initiated with respect to a first candidate point and performed by second circuit 404, calculates an IoU value of first candidate point and a neighboring point. In at least one embodiment, first parallel suppression sub-process, which is performed by second circuit 404, determines whether an IoU value satisfies (e.g., is greater than) an IoU threshold and whether a confidence score of a candidate point satisfies a criterion pertaining to (e.g., is less than) a confidence score of a neighboring point. In at least one embodiment, first parallel suppression sub-process, performed by second circuit 404, identifies a candidate point as corresponding to a redundant bounding box to be removed in response to an IoU value satisfying (e.g., being greater than) an IoU threshold and a confidence score satisfying (e.g., being less than) a confidence score of a neighboring point. In at least one embodiment, first parallel suppression sub-process, performed by second circuit 404, can repeat calculations of IoU values and compare IoU values and confidence scores for each candidate point in a first set of candidate points that are within an area surrounding a respective candidate point." It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Jiang and Shen to further incorporate the teachings of Shen to provide updating, based on determining whether the second classification information is to be updated, the second classification information. Jiang and Shen are each directed towards similar pursuits in the field of bounding box determination for imaging systems of autonomous vehicles. 
Accordingly, one of ordinary skill in the art would find it advantageous to incorporate the teachings of Shen, as doing so allows for the determination of redundant bounding boxes which should be removed based on whether the first virtual box is associated with the second virtual box, as recognized by Shen (see at least [0080]). In the same paragraph, Shen provides the further benefit of allowing for repeat calculations of IoU (i.e., Intersection over Union) values and confidence scores.

Regarding claim 18, Jiang and Shen teach the aforementioned limitations of claim 15. However, Jiang does not outright teach that the outputting the first virtual box by assigning the first classification information to the first virtual box is based on determining that the second classification information is not to be updated by using the second virtual box. Shen further teaches: the outputting the first virtual box by assigning the first classification information to the first virtual box is based on determining that the second classification information is not to be updated by using the second virtual box. Shen teaches ([0080]): "For example, in at least one embodiment, first parallel suppression sub-process, which is initiated with respect to a first candidate point and performed by second circuit 404, calculates an IoU value of first candidate point and a neighboring point. In at least one embodiment, first parallel suppression sub-process, which is performed by second circuit 404, determines whether an IoU value satisfies (e.g., is greater than) an IoU threshold and whether a confidence score of a candidate point satisfies a criterion pertaining to (e.g., is less than) a confidence score of a neighboring point.
In at least one embodiment, first parallel suppression sub-process, performed by second circuit 404, identifies a candidate point as corresponding to a redundant bounding box to be removed in response to an IoU value satisfying (e.g., being greater than) an IoU threshold and a confidence score satisfying (e.g., being less than) a confidence score of a neighboring point. In at least one embodiment, first parallel suppression sub-process, performed by second circuit 404, can repeat calculations of IoU values and compare IoU values and confidence scores for each candidate point in a first set of candidate points that are within an area surrounding a respective candidate point." One of ordinary skill in the art would recognize that if the second classification information is not to be updated (e.g., the confidence score is not satisfied), the first virtual box is assigned the first classification information, as the box is not found to be redundant. It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Jiang and Shen to further incorporate the teachings of Shen to provide that the outputting the first virtual box by assigning the first classification information to the first virtual box is based on determining that the second classification information is not to be updated by using the second virtual box. Jiang and Shen are each directed towards similar pursuits in the field of bounding box determination for imaging systems of autonomous vehicles. Accordingly, one of ordinary skill in the art would find it advantageous to incorporate the teachings of Shen, as doing so allows for the determination of redundant bounding boxes which should be removed based on whether the first virtual box is associated with the second virtual box, as recognized by Shen (see at least [0080]). 
In the same paragraph, Shen provides the further benefit of allowing for repeat calculations of IoU (i.e., Intersection over Union) values and confidence scores.

Regarding claim 19, Jiang and Shen teach the aforementioned limitations of claim 15. However, Jiang does not outright teach determining whether the first virtual box is associated with the second virtual box, based on at least one of: an indication of whether the first virtual box is fused with the second virtual box, an indication of whether a box reliability of the second virtual box satisfies a threshold value, or an indication of whether the first virtual box overlaps the second virtual box and whether an overlap ratio between the first virtual box and the second virtual box satisfies a reference ratio. Shen further teaches: determining whether the first virtual box is associated with the second virtual box, based on at least one of: an indication of whether the first virtual box is fused with the second virtual box, an indication of whether a box reliability of the second virtual box satisfies a threshold value, or an indication of whether the first virtual box overlaps the second virtual box and whether an overlap ratio between the first virtual box and the second virtual box satisfies a reference ratio. Shen teaches ([0080]): "For example, in at least one embodiment, first parallel suppression sub-process, which is initiated with respect to a first candidate point and performed by second circuit 404, calculates an IoU value of first candidate point and a neighboring point. In at least one embodiment, first parallel suppression sub-process, which is performed by second circuit 404, determines whether an IoU value satisfies (e.g., is greater than) an IoU threshold and whether a confidence score of a candidate point satisfies a criterion pertaining to (e.g., is less than) a confidence score of a neighboring point.
In at least one embodiment, first parallel suppression sub-process, performed by second circuit 404, identifies a candidate point as corresponding to a redundant bounding box to be removed in response to an IoU value satisfying (e.g., being greater than) an IoU threshold and a confidence score satisfying (e.g., being less than) a confidence score of a neighboring point. In at least one embodiment, first parallel suppression sub-process, performed by second circuit 404, can repeat calculations of IoU values and compare IoU values and confidence scores for each candidate point in a first set of candidate points that are within an area surrounding a respective candidate point." It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Jiang and Shen to further incorporate the teachings of Shen to provide determining whether the first virtual box is associated with the second virtual box, based on at least one of: an indication of whether the first virtual box is fused with the second virtual box, an indication of whether a box reliability of the second virtual box satisfies a threshold value, or an indication of whether the first virtual box overlaps the second virtual box and whether an overlap ratio between the first virtual box and the second virtual box satisfies a reference ratio. Jiang and Shen are each directed towards similar pursuits in the field of bounding box determination for imaging systems of autonomous vehicles. Accordingly, one of ordinary skill in the art would find it advantageous to incorporate the teachings of Shen, as doing so allows for the determination of redundant bounding boxes which should be removed based on whether the first virtual box is associated with the second virtual box, as recognized by Shen (see at least [0080]). 
In the same paragraph, Shen provides the further benefit of allowing for repeat calculations of IoU (i.e., Intersection over Union) values and confidence scores.

Regarding claim 20, Jiang and Shen teach the aforementioned limitations of claim 15. However, Jiang does not outright teach obtaining, from the cluster of points, at least one of a width, a length, a height, a histogram, or a density of the cluster of points; and obtaining, based on at least one of the width, the length, the height, the histogram, or the density, the first virtual box. Shen further teaches: obtaining, from the cluster of points, at least one of a width, a length, a height, a histogram, or a density of the cluster of points; Shen teaches ([0098]): "In at least one embodiment, candidate point 702 can be a center point of reduced search space 700. In at least one embodiment, reduced search space 700 can be defined as a rectangle, a square, a circle, or other shapes. In at least one embodiment, candidate point 702 and neighboring point 704 can be represented with x and y coordinates and a corresponding bounding box can be represented with x and y coordinates and width w and height h." and obtaining, based on at least one of the width, the length, the height, the histogram, or the density, the first virtual box. Shen teaches ([0098]): "In at least one embodiment, candidate point 702 can be a center point of reduced search space 700. In at least one embodiment, reduced search space 700 can be defined as a rectangle, a square, a circle, or other shapes. In at least one embodiment, candidate point 702 and neighboring point 704 can be represented with x and y coordinates and a corresponding bounding box can be represented with x and y coordinates and width w and height h. 
" It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Jiang and Shen to further incorporate the teachings of Shen to provide obtaining, from the cluster of points, at least one of a width, a length, a height, a histogram, or a density of the cluster of points; and obtaining, based on at least one of the width, the length, the height, the histogram, or the density, the first virtual box. Jiang and Shen are each directed towards similar pursuits in the field of bounding box determination for imaging systems of autonomous vehicles. Accordingly, one of ordinary skill in the art would find it advantageous to incorporate the teachings of Shen, as doing so allows for the determination of redundant bounding boxes which should be removed based on whether the first virtual box is associated with the second virtual box, as recognized by Shen (see at least [0080]). In the same paragraph, Shen provides the further benefit of allowing for repeat calculations of IoU (i.e., Intersection over Union) values and confidence scores.

Claim(s) 9 is/are rejected under 35 U.S.C. 103 as being unpatentable over Jiang and Shen in view of Horowitz et al. (US 2023/0192418 A1), hereinafter Horowitz.

Regarding claim 9, Jiang and Shen teach the aforementioned limitations of claim 1. Jiang further teaches: and change, based on a number of times that the second classification information assigned to the first virtual box is changed, in the plurality of frames, a second information reliability of the second classification information. Jiang teaches ([0101]): "In at least one embodiment, an image 202C is an image with an updated maximum confidence bounding box proposal for an object visualized. In at least one embodiment, referring to FIG.
2, an image 202C depicts a maximum confidence bounding box (e.g., a maximum confidence bounding box associated with an image 202A and/or an image 202B) with an updated confidence value and updated coordinates for an object of image 202C. In at least one embodiment, referring to FIG. 2, an updated maximum confidence bounding box of an image 202C has a greater confidence value than a confidence value of a maximum confidence bounding box associated with an image 202A and/or an image 202B (e.g., a bounding box with a confidence value of 0.95), and indicates borders that completely encapsulate an object (e.g., a car object) of image 202C, as opposed to said maximum confidence bounding box associated with image 202A and/or image 202B, which has borders that partially encapsulate an object (e.g., a car object) of image 202A and/or image 202B." However, Jiang does not outright teach that the processor is configured to: change, based on a number of times that the first classification information assigned to the first virtual box is changed, in a plurality of frames including a frame including the cluster of points, a first information reliability of the first classification information. 
Horowitz teaches an object recognition device utilizing bounding boxes, comprising: the processor is configured to: change, based on a number of times that the first classification information assigned to the first virtual box is changed, in a plurality of frames including a frame including the cluster of points, a first information reliability of the first classification information; Horowitz teaches ([0123]): "Given that the vision sensor of an object recognition device may only “see” portions of an object at a time (due to the entirety of the object not always being in full view of the vision sensor), in some embodiments, object tracking logic 708 is configured to maintain a dynamically variable bounding polygon (e.g., such as a four-sided box) estimate around the object as part of the object recognition. In various embodiments, a “dynamically variable bounding polygon” around an object is a bounding polygon that approximates the shape of the object and in which different portions of the bounding polygon are associated with respective confidence values depending on the sensed data that has been collected on the object so far. As mentioned above, object tracking logic 708 is configured to apply one or more machine learning models to visual sensor signals (e.g., images) to identify object regions (e.g., masks, bounding polygons, etc.) that define the shape and location of the objects. Object tracking logic 708 is configured to assign for each portion of a bounding polygon (e.g., box) of an object a confidence value that is associated with that boundary polygon’s portion’s inference probability (i.e., a variance value related to confidence in the estimate). 
For example, the portion of the bounding polygon that is outside the field of view of the vision sensor is assigned a higher variance estimate than the portion of the bounding polygon that is inside the field of view of the vision sensor, thereby ensuring that as the object’s trajectory changes over time and that additional visual sensor data is collected on the object, the bounding polygon for the object as determined by object tracking logic 708 becomes more accurate and converges quickly." It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Jiang and Shen to incorporate the teachings of Horowitz to provide that the processor is configured to: change, based on a number of times that the first classification information assigned to the first virtual box is changed, in a plurality of frames including a frame including the cluster of points, a first information reliability of the first classification information. Jiang, Shen, and Horowitz are each directed towards similar pursuits in the field of object detection using bounding boxes. Accordingly, one of ordinary skill in the art would find it advantageous to incorporate the teachings of Horowitz, as doing so improves object recognition for objects which may not always be in full view of the vision sensor through the use of variance estimation, as recognized by Horowitz (see at least [0123]). In the same paragraph, Horowitz indicates that such object tracking logic advantageously allows for the bounding box for the object to become more accurate and converge quickly.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Sajjadi Mohammadabadi et al. (US 2020/0293796 A1) teaches intersection detection and classification in autonomous machine applications, including the use of bounding box(es) and determining whether bounding boxes overlap (see at least FIG. 3B).
Tariq (US 2021/0166049 A1) teaches bounding box embedding for object identifying, including providing an image to a machine learning model/neural network to determine a bounding box that surrounds the object, thereby identifying the object (see at least [0015] and [0028]). Ko et al. (US 2021/0383134 A1) teaches an advanced driver assist system and method of detecting objects, including determining class scores of candidate bounding boxes, and selecting an adjusted candidate bounding box whose adjusted score is greatest as the final bounding box (see at least [0008]). Any inquiry concerning this communication or earlier communications from the examiner should be directed to FRANK T GLENN III whose telephone number is (571)272-5078. The examiner can normally be reached M-F 7:30AM - 4:30PM EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jelani Smith can be reached at 571-270-3969. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). 
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/F.T.G./
Examiner, Art Unit 3662

/DALE W HILGENDORF/
Primary Examiner, Art Unit 3662

Prosecution Timeline

Oct 18, 2024
Application Filed
Feb 03, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12601772
ENERGY CONSUMPTION DECOMPOSITION METHOD OF ELECTRIC VEHICLE, ANALYSIS METHOD, SYSTEM, DEVICE AND MEDIUM
2y 5m to grant Granted Apr 14, 2026
Patent 12564533
ELECTRIC ASSISTIVE DEVICE
2y 5m to grant Granted Mar 03, 2026
Patent 12559918
DEVICE FOR DETERMINING THE ACTUAL STATE AND/OR THE REMAINING SERVICE LIFE OF A CONSTRUCTION, MATERIALS-HANDLING AND/OR CONVEYOR MACHINE
2y 5m to grant Granted Feb 24, 2026
Patent 12545404
CONTROL DEVICE, UNMANNED AERIAL VEHICLE, AND CONTROL METHOD
2y 5m to grant Granted Feb 10, 2026
Patent 12541008
Light Detection and Ranging (LIDAR) Device having Multiple Receivers
2y 5m to grant Granted Feb 03, 2026
Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
55%
Grant Probability
60%
With Interview (+5.1%)
3y 3m
Median Time to Grant
Low
PTA Risk
Based on 148 resolved cases by this examiner. Grant probability derived from career allow rate.
