Prosecution Insights
Last updated: April 19, 2026
Application No. 18/637,963

Method and System for Estimating an Object Size Using Range Sensor Detections

Status: Non-Final Office Action (§103)
Filed: Apr 17, 2024
Examiner: MASHELE, BONGANI JABULANI
Art Unit: 3645
Tech Center: 3600 (Transportation & Electronic Commerce)
Assignee: Aptiv Technologies AG
OA Round: 1 (Non-Final)
Grant Probability: 89% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 9m
With Interview: 93%

Examiner Intelligence

Career Allow Rate: 89% (40 granted / 45 resolved); +36.9% vs TC avg; above average
Interview Lift: +4.1% (minimal lift, based on resolved cases with interview)
Typical Timeline: 2y 9m average prosecution; 29 applications currently pending
Career History: 74 total applications across all art units
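
The headline figures above fit together with simple arithmetic. Below is a minimal sketch, assuming the displayed grant probability is taken directly from the career allow rate and the with-interview figure simply adds the interview lift (both are assumptions about how the dashboard derives its numbers, not facts from the source):

```python
# Sketch only: assumes the dashboard derives its headline numbers this way.
granted, resolved = 40, 45                    # "40 granted / 45 resolved"
allow_rate = granted / resolved               # 0.888... -> shown as 89%
interview_lift = 0.041                        # "+4.1% Interview Lift"
with_interview = allow_rate + interview_lift  # ~0.930 -> shown as 93%

print(f"Career allow rate: {allow_rate:.1%}")     # 88.9%
print(f"With interview:    {with_interview:.1%}") # 93.0%
```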

Statute-Specific Performance

§101: 6.1% (-33.9% vs TC avg)
§103: 53.9% (+13.9% vs TC avg)
§102: 29.4% (-10.6% vs TC avg)
§112: 10.6% (-29.4% vs TC avg)
Deltas are relative to a Tech Center average estimate; based on career data from 45 resolved cases.
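
The per-statute deltas appear to be plain differences between the examiner's rejection mix and the Tech Center average estimate. A small sketch under that assumption follows; the TC averages are back-computed from the displayed deltas, not taken from the source:

```python
# Assumption: delta = examiner share - TC average estimate (all values in percent).
examiner_share = {"§101": 6.1, "§103": 53.9, "§102": 29.4, "§112": 10.6}
delta_vs_tc = {"§101": -33.9, "§103": 13.9, "§102": -10.6, "§112": -29.4}

for statute, share in examiner_share.items():
    implied_tc_avg = share - delta_vs_tc[statute]  # e.g. §103: 53.9 - 13.9 = 40.0
    print(f"{statute}: examiner {share:.1f}% | implied TC avg {implied_tc_avg:.1f}%")
```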

Office Action (§103 Non-Final)

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

This action is in reply to the application filed on 04/17/2024. Claims 1-15 are currently pending and have been examined.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 04/17/2024 have been considered by the examiner and initialed copies of the IDS are hereby attached.

Claim Objections

Claim 11 is objected to because of the following informalities: Claim 11 recites the limitation “in response to the the detected object being at least partly occluded”. This should read “in response to the detected object being at least partly occluded”. Appropriate correction is required.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 4, 5, and 13-15 are rejected under 35 U.S.C. 103 as being unpatentable over Unnikrishnan (US11461915B2) in view of Schindler (DE1020005833A1).

Regarding claim 1, Unnikrishnan discloses: A computer-implemented method for estimating an object size using range sensor detections detected by a range sensor mounted on a host vehicle (Figure 1, Para: “The present disclosure generally relates to determining the size (and in some cases position) of objects, and more specifically to techniques and systems for determining the size and/or position of objects using camera map, radar information, and/or other information.”), the computer-implemented method comprising: for an object detected by the range sensor detections, over time: determining, based on the range sensor detections, pseudo measurements of dimensions of a bounding box enclosing the detected object (Para 93: “The radar-based size estimation engine 212 can obtain radar measurements 211 from one or more radars 210 and can use the radar measurements 211 to estimate a size of a target object. A radar of the one or more radars 210 can include a device or system with a radio frequency (RF) source that sends RF signals (e.g., pulses of high-frequency electromagnetic waves), which can be reflected off of a target object back to the source of the RF signals. The reflected RF signals can be received by a RF receiver of the radar device or system. The reflected RF signals can be used to determine a size of the target object. The one or more radars 210 can include multiple radars positioned at different locations on the tracking object. For instance, using an autonomous tracking vehicle as an illustrative example, the tracking vehicle can have radars located at one or more the front, the corners, the sides, and/or the back of the vehicle. Reflected RF signals received by all of the sensors on the tracking object can be evaluated and used by the radar-based size estimation engine 212 to estimate the size (e.g., the length and/or other dimension) of the target object from which the signals were reflected.”); determining estimates for the dimensions of the bounding box based on respective previous pseudo measurements and current pseudo measurements (Para 109: “For instance, again using autonomous vehicles for illustrative purposes, the final estimate of the length (or other estimated dimension) of a target vehicle can obtained by a sequential Bayesian estimation framework, which can be interpreted as a degenerate Kalman filtering framework in which the state, representing the length of the object (e.g., vehicle), is modeled as static and does not change over time. For example, because the length of the object (e.g., vehicle) is fixed, there are no dynamics associated with the state, no state transitions, no state evolution, etc. The length X can be assumed to be a Gaussian random variable with a prior distribution with mean equal to the standard length (or other estimated dimension) of vehicles in the class of the tracked vehicle (e.g., as determined by the class likelihood estimation engine 206), and a variance given by the typical variance of length for the class of the tracked vehicle. The length estimate (or other estimated dimension) can be sequentially refined using Bayesian estimation as new measurements Y.sub.i of length are received from any combination of one or more of the map-based size estimation engine 208, the radar-based size estimation engine 212, and/or the radar image-based size estimation described above.”).

Unnikrishnan does not teach “wherein the estimates are determined based on a confidence measure for the range sensor detections”. However, Schindler in the analogous art teaches: wherein the estimates are determined based on a confidence measure for the range sensor detections (Description: “In the method for detecting and classifying obstacles on the basis of data acquired by means of at least one camera and by means of at least one lidar, a first step is checked according to the invention as to whether a potential obstacle is detected by more than one sensor and the detections can be associated with errors in the sensors are. If the result of the check in the first step is positive, the height of the obstacle is determined in a second step to classify the obstacle by determining a distance to the obstacle using the at least one lidar, determining a number of vertically stacked pixels of an associated camera detection of the obstacle and using it the distance to the obstacle, the number of pixels and a focal length of the camera, the height is measured.”); and determining the object size based on the determined estimates (Para 115: “While length is used as an example of a dimension of a target object that can be estimated by the size estimation engine 214, the same approach can be also used to filter the width and height estimates of a target object (e.g., a target vehicle) obtained from the map-based size estimation engine 208. In some cases, for certain objects (such as vehicles), the heights and widths of those objects do not vary by a large amount between different models of the same class of object (e.g., there is a small variance in width and sometimes height for different models of the same vehicle type). In such cases, the size estimation engine 214 can predict the width and/or the height of a target object (e.g., a target vehicle or other object) as a constant based on the most likely class identified by the class likelihood estimation engine 206.”).

It would have been obvious to someone in the art prior to the effective filing date of the claimed invention to modify Unnikrishnan with Schindler to incorporate the feature of: wherein the estimates are determined based on a confidence measure for the range sensor detections. Unnikrishnan and Schindler are considered analogous art as they both disclose the use of sensor technology to detect objects. However, Unnikrishnan fails to disclose the feature of using a confidence measure for the sensor detections when computing dimension estimates. This feature is disclosed by Schindler. It would have been obvious to someone in the art prior to the effective filing date of the claimed invention to modify Unnikrishnan with Schindler to incorporate the feature of: wherein the estimates are determined based on a confidence measure for the range sensor detections, as such a feature would increase the reliability and efficiency of the system.

Claims 14 and 15 recite limitations that are similar to those of claim 1; therefore, claims 14 and 15 are rejected under the same rationale.

Claims 2 and 3 are rejected under 35 U.S.C. 103 as being unpatentable over Unnikrishnan (US11461915B2) in view of Schindler (DE1020005833A1) and further in view of Gulati (US20220291327A1).

Regarding claim 2, the combination of Unnikrishnan and Schindler discloses all the limitations of claim 1. Unnikrishnan does not teach “the confidence measure for the range sensor detections is a measurement uncertainty of each pseudo measurement, and the estimates are determined based on the measurement uncertainty of each pseudo measurement”. However, Gulati in the analogous art teaches: the confidence measure for the range sensor detections is a measurement uncertainty of each pseudo measurement, and the estimates are determined based on the measurement uncertainty of each pseudo measurement (Para 0106: “UEs 415 may perform cooperative radar sensing with knowledge dissemination, as described with reference to FIGS. 2, 3A, and 3B. Each UE 415 may sense one or more objects, and may determine one or more radar measurement parameter values for the detected objects within a corresponding sensing coverage 425. For instance, radar measurement parameters for sensed radar targets may include position, velocity, orientation, or the like. Radar measurement parameters for sensed radar targets may further include estimated radar cross sections, signal strengths, as well as uncertainty levels (e.g., a confidence level) for each radar target. In some cases, UEs 415 may determine the radar measurement parameter values based on identifying a reference frame associated with each measured radar target.”).

It would have been obvious to someone in the art prior to the effective filing date of the claimed invention to modify Unnikrishnan with Gulati to incorporate the feature of: the confidence measure for the range sensor detections is a measurement uncertainty of each pseudo measurement, and the estimates are determined based on the measurement uncertainty of each pseudo measurement. Unnikrishnan and Gulati are considered analogous art as they both disclose the use of radar technology to detect objects. However, Unnikrishnan fails to disclose the feature of computing estimates based on measurement uncertainty. This feature is disclosed by Gulati. It would have been obvious to someone in the art prior to the effective filing date of the claimed invention to modify Unnikrishnan with Gulati to incorporate the feature of: the confidence measure for the range sensor detections is a measurement uncertainty of each pseudo measurement, and the estimates are determined based on the measurement uncertainty of each pseudo measurement, as such a feature would increase the reliability and efficiency of the system.

Regarding claim 3, the combination of Unnikrishnan, Schindler and Gulati discloses all the limitations of claim 2. Gulati further teaches: wherein the measurement uncertainty of each pseudo measurement is calculated based on an object distance of the detected object to the host vehicle and an orientation of the detected object towards the host vehicle (Para 0019: “In some examples of the method, apparatuses, and non-transitory computer-readable medium described herein, the one or more radar measurement parameters include at least one of a velocity measurement, a velocity uncertainty level, a position measurement, a position uncertainty level, an orientation measurement, an orientation uncertainty level, a radar cross-section measurement, a radar cross-section uncertainty level, a signal strength measurement, a signal strength uncertainty level, or any combination thereof.”). The reason to combine Unnikrishnan with Gulati is the same as the one given in claim 2 above.

Regarding claim 4, the combination of Unnikrishnan and Schindler discloses all the limitations of claim 1. Unnikrishnan further teaches: wherein the estimates are determined by applying a Kalman filter (Para 109: “For instance, again using autonomous vehicles for illustrative purposes, the final estimate of the length (or other estimated dimension) of a target vehicle can obtained by a sequential Bayesian estimation framework, which can be interpreted as a degenerate Kalman filtering framework in which the state, representing the length of the object (e.g., vehicle), is modeled as static and does not change over time. For example, because the length of the object (e.g., vehicle) is fixed, there are no dynamics associated with the state, no state transitions, no state evolution, etc. The length X can be assumed to be a Gaussian random variable with a prior distribution with mean equal to the standard length (or other estimated dimension) of vehicles in the class of the tracked vehicle (e.g., as determined by the class likelihood estimation engine 206), and a variance given by the typical variance of length for the class of the tracked vehicle. The length estimate (or other estimated dimension) can be sequentially refined using Bayesian estimation as new measurements Y.sub.i of length are received from any combination of one or more of the map-based size estimation engine 208, the radar-based size estimation engine 212, and/or the radar image-based size estimation described above.”).

Regarding claim 5, the combination of Unnikrishnan and Schindler discloses all the limitations of claim 1. Unnikrishnan further teaches: wherein the range sensor detections include at least one of: LiDAR detections provided by a LiDAR sensor; or radar detections provided by a radar sensor (Para 18: “In some aspects, the method, apparatuses, and computer-readable medium described above further comprise: obtaining a plurality of radar measurement points, the plurality of radar measurement points being based on radar signals reflected by the first object; and determining an additional estimated size of the first object based on the plurality of radar measurements. In some aspects, the plurality of radar measurement points are obtained using a plurality of radars included on a second object, the second object including a camera used to capture the image.”).

Regarding claim 13, the combination of Unnikrishnan and Schindler discloses all the limitations of claim 1. Unnikrishnan further teaches: further comprising at least one of: determining a movement instruction for the host vehicle based on the determined object size (Para 29: “While some autonomous vehicles may be able to determine a classification or category of another vehicle (e.g., based on object detection and classification), the three-dimensional (3D) sizes of vehicles can have large variance even within the same classification or category. For example, a vehicle category of “truck” can include many different shapes and sizes of trucks, including small trucks, medium-sized trucks, and large trucks. Indeed, some trucks, such as semi-trailer trucks and moving trucks, are multiple times larger than small trucks. Accurately estimating the 3D size, including the length, of other vehicles on the road is an important feature of an autonomous driving system of an autonomous vehicle to be able make accurate motion planning and trajectory planning decisions.”); using the determined object size in a path planning or parking aid sub-system of the host vehicle; or using the determined object size for a driving function, wherein the driving function is one of: blind spot warning, lane change assist, automatic emergency braking, or evasive steering (Para 28: “FIG. 1 is an image 100 illustrating an environment including numerous vehicles driving on a road. The vehicles include a tracking vehicle 102, a target vehicle 104, a target vehicle 106, and a target vehicle 108. The tracking vehicle 102 is an autonomous vehicle operating at a particular autonomy level. The tracking vehicle 102 can track the target vehicles 104, 106, and 108 in order to navigate the environment. For example, the tracking vehicle 102 can determine the position and size of the target vehicle 104 to determine when to slow down, speed up, change lanes, and/or perform some other function. While the vehicle 102 is referred to as a tracking vehicle 102 and the vehicles 104, 106, and 108 are referred to as target vehicles with respect to FIG. 1, the vehicles 104, 106, and 108 can also be referred to as tracking vehicles if and when they are tracking other vehicles, in which the other vehicles become target vehicles.”).

Allowable Subject Matter

Claims 6-12 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Regarding claim 6, the combination of Unnikrishnan and Schindler discloses all the limitations of claim 1. Unnikrishnan further teaches: the dimensions of the bounding box include a width and a length (Para 115: “While length is used as an example of a dimension of a target object that can be estimated by the size estimation engine 214, the same approach can be also used to filter the width and height estimates of a target object (e.g., a target vehicle) obtained from the map-based size estimation engine 208. In some cases, for certain objects (such as vehicles), the heights and widths of those objects do not vary by a large amount between different models of the same class of object (e.g., there is a small variance in width and sometimes height for different models of the same vehicle type). In such cases, the size estimation engine 214 can predict the width and/or the height of a target object (e.g., a target vehicle or other object) as a constant based on the most likely class identified by the class likelihood estimation engine 206.”), the width is defined by distances measured from a reference point relative to a longitudinal axis of the detected object, and the length is defined by distances measured from the reference point relative to a lateral axis of the detected object. In reference to dependent/independent claim 6, the prior art made of record, individually or in any combination, fails to teach, render obvious, or fairly suggest to one of ordinary skill in the art at the time of filing the combination of the claimed features of claim 6. Specifically, the prior art made of record fails to disclose the limitation: “the width is defined by distances measured from a reference point relative to a longitudinal axis of the detected object, and the length is defined by distances measured from the reference point relative to a lateral axis of the detected object.”

Dependent claims 7, 8 and 12 are also allowable due to their dependency on allowable claim 6.

Regarding claim 9, the combination of Unnikrishnan and Schindler discloses all the limitations of claim 1. Unnikrishnan does not teach: prior to the determining of the pseudo measurements: for each pseudo measurement based on a range sensor detection outside the current bounding box weighting the range sensor detection higher compared to a range sensor detection inside the current bounding box. In reference to dependent/independent claim 9, the prior art made of record, individually or in any combination, fails to teach, render obvious, or fairly suggest to one of ordinary skill in the art at the time of filing the combination of the claimed features of claim 9. Specifically, the prior art made of record fails to disclose the limitation: “prior to the determining of the pseudo measurements: for each pseudo measurement based on a range sensor detection outside the current bounding box weighting the range sensor detection higher compared to a range sensor detection inside the current bounding box.”

Dependent claim 10 is also allowable due to its dependency on allowable claim 9.

Regarding claim 11, the combination of Unnikrishnan and Schindler discloses all the limitations of claim 1. Unnikrishnan does not teach: prior to the determining of the pseudo measurements: in response to the the detected object being at least partly occluded, stopping the determining of estimates until the detected object is no longer occluded to freeze the determining of estimates. In reference to dependent/independent claim 11, the prior art made of record, individually or in any combination, fails to teach, render obvious, or fairly suggest to one of ordinary skill in the art at the time of filing the combination of the claimed features of claim 11. Specifically, the prior art made of record fails to disclose the limitation: “prior to the determining of the pseudo measurements: in response to the the detected object being at least partly occluded, stopping the determining of estimates until the detected object is no longer occluded to freeze the determining of estimates.”

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Bongani J. Mashele whose telephone number is (703)756-5861. The examiner can normally be reached Monday-Friday, 8:00AM-5:00PM (CT).

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Robert W. Hodge, can be reached at 571-272-2097. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/BONGANI JABULANI MASHELE/
Examiner, Art Unit 3645

/ROBERT W HODGE/
Supervisory Patent Examiner, Art Unit 3645
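
The §103 rejection of claims 1 and 4 leans on the sequential Bayesian ("degenerate Kalman") size estimation quoted from Unnikrishnan, with the claimed confidence measure mapped to per-measurement uncertainty via Schindler and Gulati. As a rough illustration of that kind of update only, not the application's or the references' actual implementation, here is a static-state Kalman update where a per-measurement variance stands in for the confidence measure; all values, names, and the "truck" prior are hypothetical:

```python
import math

def update_length_estimate(mean: float, var: float,
                           measurement: float, meas_var: float) -> tuple[float, float]:
    """One sequential Bayesian update for a static (non-evolving) scalar state,
    i.e. the degenerate Kalman filtering described in the cited Unnikrishnan
    passage: the state (object length) has no dynamics, so only the measurement
    update runs. `meas_var` stands in for the per-detection confidence measure:
    a larger variance means a less confident measurement and a smaller correction."""
    gain = var / (var + meas_var)              # Kalman gain for a static scalar state
    new_mean = mean + gain * (measurement - mean)
    new_var = (1.0 - gain) * var
    return new_mean, new_var

# Illustrative numbers only: a class prior of 8 m length with a 2 m standard
# deviation, refined by three radar pseudo measurements whose variances could
# reflect distance, orientation, or occlusion of the detected object.
mean, var = 8.0, 2.0 ** 2
for y, r in [(10.5, 1.0), (11.2, 0.5), (10.9, 0.25)]:
    mean, var = update_length_estimate(mean, var, y, r)
print(f"estimated length ≈ {mean:.2f} m (σ ≈ {math.sqrt(var):.2f} m)")
```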

Prosecution Timeline

Apr 17, 2024: Application Filed
Jan 24, 2026: Non-Final Rejection under §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12601625: LOOP-POWERED FIELD DEVICE WITH IMPROVED LOOP CURRENT CONTROL (granted Apr 14, 2026; 2y 5m to grant)
Patent 12596191: SYNTHETIC APERTURE RADAR USING ALTERNATING BEAMS AND ASSOCIATED METHODS (granted Apr 07, 2026; 2y 5m to grant)
Patent 12591040: DETERMINING ANTENNA PHASE CENTER USING BASEBAND DATA (granted Mar 31, 2026; 2y 5m to grant)
Patent 12579796: SYSTEMS AND METHODS FOR PROCESSING SELECTED PORTIONS OF RADAR DATA (granted Mar 17, 2026; 2y 5m to grant)
Patent 12566255: OBJECT DETECTION ALARM SYSTEM (granted Mar 03, 2026; 2y 5m to grant)
Study what changed in these cases to get past this examiner. Based on the 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 89%
With Interview: 93% (+4.1%)
Median Time to Grant: 2y 9m
PTA Risk: Low
Based on 45 resolved cases by this examiner. Grant probability is derived from the career allow rate.
