Prosecution Insights
Last updated: April 19, 2026
Application No. 18/581,075

METHOD FOR DETERMINING A CONTOUR OF AN OBJECT

Non-Final OA (§103, §112)
Filed: Feb 19, 2024
Examiner: HENSON, BRANDON JAMES
Art Unit: 3648
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Aptiv Technologies AG
OA Round: 1 (Non-Final)
Grant Probability: 69% (Favorable)
OA Rounds: 1-2
To Grant: 3y 3m
With Interview: 96%

Examiner Intelligence

Career Allow Rate: 69% (38 granted / 55 resolved; +17.1% vs TC avg), above average
Interview Lift: +27.2% in resolved cases with interview (strong)
Avg Prosecution: 3y 3m (typical timeline)
Total Applications: 116 across all art units (61 currently pending)

Statute-Specific Performance

§101: 3.4% (-36.6% vs TC avg)
§103: 53.1% (+13.1% vs TC avg)
§102: 21.6% (-18.4% vs TC avg)
§112: 21.1% (-18.9% vs TC avg)
Tech Center averages are estimates, based on career data from 55 resolved cases.
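The per-statute deltas above are all consistent with a single Tech Center baseline: subtracting each delta from the examiner's rate recovers the same average. A minimal sketch, assuming the "vs TC avg" figures are plain percentage-point differences:

```python
# Recover the implied Tech Center average for each statute from the
# examiner's rate and its delta vs the TC average (both in percent).
stats = {  # statute: (examiner rate %, delta vs TC avg in points)
    "101": (3.4, -36.6),
    "103": (53.1, +13.1),
    "102": (21.6, -18.4),
    "112": (21.1, -18.9),
}

tc_avg = {s: round(rate - delta, 1) for s, (rate, delta) in stats.items()}
print(tc_avg)  # {'101': 40.0, '103': 40.0, '102': 40.0, '112': 40.0}
```

Every statute points back to the same ~40.0% baseline, suggesting the dashboard measures all four deltas against one overall TC average rather than per-statute averages.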

Office Action

DETAILED ACTION

Status of Claims

Claims 1-12, 15-22 are currently pending and have been examined in this application. This NON-FINAL communication is the first action on the merits. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Applicant's claim for the benefit of a prior-filed application filed in EP 24151404.1 on 01/11/2024 under 35 U.S.C. 119(e) or under 35 U.S.C. 120, 121, 365(c), or 386(c) is acknowledged.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 9-10 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Claims 9-10 recite "determined by shifting the respective primary area in a direction perpendicular (normal vector) to the respective segment". It is unclear how a shift can be in a direction perpendicular to the respective segment while also depending on the distribution of the sensor detections. A shift in the perpendicular direction lacks clarity, as it would seem that a shift in this direction would also be dependent on a relative displacement of the sensor detections. It is further unclear how a dot product between an auxiliary vector and a normal vector is used to calculate a shift in the primary area.
The examiner has interpreted the claims as depending on the distribution of the sensor detections only when considering any shift of the primary area as indicated in the second limitation of claim 9.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Ebrahimi (US 20220187841) in view of Van Beek (US 20220161815).

Regarding Claims 1, 16, Ebrahimi teaches the following limitations:

A computer implemented method for determining a contour of an object with respect to a sensor, (Ebrahimi - [0947] In some embodiments, the processor may use shape descriptors for objects. In embodiments, shape descriptors are immune to rotation, translation, and scaling. In embodiments, shape descriptors may be region based descriptors or boundary based descriptors. In some embodiments, the processor may use curvature Fourier descriptors wherein the image contour is extracted by sampling coordinates along the contour, [1458] The functionality described herein may be provided by one or more processors of one or more computers executing specialized code stored on a tangible, non-transitory, machine readable medium.)

(Claim 16) A computer system configured to: receive a plurality of sensor detections from a sensor configured to determine a respective position of the sensor detections; (Ebrahimi - [0947], [0899] In embodiments, the processor may stitch data collected at a first and a second time point or a same time point by a same or a different sensor type; stitch data captured by a first camera and a second camera with overlapping or non-overlapping fields of view; stitch data captured by a first LIDAR and a second LIDAR; and stitch data captured by a LIDAR and a camera. FIG. 76A illustrates stitching data 7600 captured at times t1, t2, t3, . . . , tn to obtain combined data 7601. FIG. 76B illustrates two overlapping sensor fields of view 7602 and 7603 of vehicle 7604 and two non-overlapping sensor fields of view 7605 and 7606 of vehicle 7607. Data captured within the overlapping sensor fields of view 7602 and 7603 may be stitched together to combine the data.)

the sensor being configured to provide a plurality of sensor detections, (Ebrahimi - [0006] a plurality of radial distances from the primary sensor to objects within a maximum range of the primary sensor [0792] a plurality of sensors (e.g., tactile sensor, obstacle sensor, temperature sensor, imaging sensor, light detection and ranging (LIDAR) sensor, camera, depth sensor, time-of-flight (TOF) sensor,)

each sensor detection including a respective position at the object, (Ebrahimi - [1164] When a depth image is taken and considered independently, for each pixel (i,j) in the image, there is a depth value D. When SLAM is used to combine the images and depth sensing into a reconstruction of the spatial model, then for each pixel (i,j), there is a corresponding physical point which may be described by an (x,y,z) coordinate in the grid space frame of reference.)

the method comprising: determining an initial contour of the object including a plurality of segments, (Ebrahimi - [Fig. 193 A-D], [1054] Pixels along the edge of a binary region (i.e., border) may be identified by morphological operations and difference images. Marking the pixels along the contour may have some useful applications, however, an ordered sequence of border pixel coordinates for describing the contour of a region may also be determined. In some embodiments, an image may include only one outer contour and any number of inner contours. For example, FIG. 192 illustrates an image of a vehicle including an outer contour and multiple inner contours. In some embodiments, the processor may perform sequential region labeling, followed by contour tracing… the processor may determine a length of a contour using chain codes and differential chain codes. In some embodiments, a chain code algorithm may begin by traversing a contour from a given starting point x.sub.s and may encode the relative position between adjacent contour points using a directional code for either 4-connected or 8-connected neighborhoods. In some embodiments, the processor may determine the length of the resulting path as the sum of the individual segments, which may be used as an approximation of the actual length of the contour.)

each segment being related to a respective initial subset of the sensor detections, (Ebrahimi - [0851] FIG. 29A illustrates a map 2900 and an edge detector 2901 received as input and an output 2902 comprising a subset of the map defined by edges. FIG. 29B illustrates an image of a person 2903 and an edge detector 2904 received as input and an output 2902 comprising a subset of the image defined by edges)

associating a respective surrounding set of the sensor detections with each segment of the initial contour, (Ebrahimi - [0851], [1054])

providing a respective weight to each of the sensor detections of the respective surrounding set, (Ebrahimi - [0839] In embodiments, every logistic regression may be connected to other logistic regressions with a weight. In embodiments, every connection between node j in layer k and node m in layer n may have a weight denoted by w.sup.kn. In embodiments, the weight may determine the amount of influence the output from a logistic regression has on the next connected logistic regression and ultimately on the final logistic regression in the final output layer. [0842] In some embodiments, the processor may flatten images (i.e., two dimensional arrays) into image vectors. In some embodiments, the processor may provide an image vector to a logistic regression. FIG. 24 illustrates an example of flattening a two dimensional image array 2400 into an image vector 2401 to obtain a stream of pixels. In some embodiments, the elements of the image vector may be provided to the network of nodes that perform logistic regression at each different network layer. For example, FIG. 25 illustrates the values of elements of vector array 2500 provided as inputs A, B, C, D, . . . into the first layer of the network 2501 of nodes that perform logistic regression. The first layer of the network 2501 may output updated values for A, B, C, D, . . . which may then be fed to the second layer of the network 2502 of nodes that perform logistic regression. The same process continues, until A, B, C, D, . . . are fed into the last layer of the network 2503 of nodes that perform the final logistic regression and provide the final result 2504. Ebrahimi does not explicitly teach "weight to each of the sensor detections".)

each weight depending from a relative position of the sensor detection with respect to the associated segment, and (Ebrahimi - [0839], [0842], [1016] Or a feature at a center of the image may have more weight compared to features detected in less central areas.)

refining each segment of the initial contour by using the weights of the sensor detections of the respective surrounding set associated with the respective segment in order to determine a final contour including the refined segments for the object. (Ebrahimi - [0839], [0842], [1054])

Ebrahimi does not explicitly teach the following limitations; however Van Beek, in the same field of endeavor, teaches:

weight to each of the sensor detections (Van Beek – [0421] in some embodiments, sensor fusion improvement is achieved by adapting weights for each sensor based on the context. The SNR (and consequently the overall variance) may be improved by adaptively weighting data from the sensors differently based on the context.)

Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the weights of Ebrahimi with the weights for each sensor of Van Beek in order to improve sensor fusion (Van Beek – [0421]).

Regarding Claims 2, 17, Ebrahimi further teaches:

wherein: the initial contour further includes a plurality of vertices, the segments of the initial contour extend between a respective pair of the vertices, (Ebrahimi - [1054]) one of the sensor detections is selected as a first vertex, (Ebrahimi - [1054]) the further vertices following the first vertex are iteratively determined by: selecting the respective initial subset of the sensor detections with respect to the respective preceding vertex, and (Ebrahimi - [1054]) estimating a position of the next vertex by utilizing the respective initial subset being selected with respect to the preceding vertex. (Ebrahimi - [1054])

Regarding Claims 3, 18, Ebrahimi further teaches:

wherein: selecting the respective initial subset of the sensor detections for the preceding vertex includes selecting sensor detections being located within a predefined area around the preceding vertex.
(Ebrahimi - [1054])

Regarding Claims 4, 19, Ebrahimi further teaches:

wherein: the predefined area is provided as a rectangular bounding box which is centered at the respective vertex and which is shifted from a respective one of the vertices to the subsequent vertex when the plurality of vertices is determined iteratively. (Ebrahimi - [Fig. 193 A-D], [1054])

Regarding Claims 5, 20, Ebrahimi further teaches:

wherein: the sensor detections selected for the initial subset for the preceding vertex are excluded from the initial subset for the subsequent vertex. (Ebrahimi - [Fig. 193 A-D], [1054])

Regarding Claims 6, 21, Ebrahimi further teaches:

wherein: the sensor detection selected as the first vertex is identified by: determining at least two sorted lists for the positions of the sensor detections, each sorted list referring to a respective coordinate of the positions, (Ebrahimi - [0842], [1054], [1164]) selecting the sorted list having the greatest difference between a first element and a last element of the list, and selecting the first element of the selected sorted list as the first vertex. (Ebrahimi - [0842], [1029] For example, the processor may use a best bin first method to search for neighboring feature space partitions by starting at the closest distance.)

Regarding Claim 7, Ebrahimi further teaches:

wherein: estimating the position of the next vertex by utilizing the respective initial subset includes determining a segment vector extending from the preceding vertex to the next vertex by: (Ebrahimi – [0851], [1054], [1171] For example, images may pass through a low pass filter to smoothen the images and reduce noise. In embodiments, feature extraction may be performed using methods such as Harris or Canny edge detection. Further processing may then be applied, such as morphological operations, inflation and deflation of objects, contrast manipulation, increase and decrease in lighting, grey scale, geometric mean filtering, and forming a binary image.) calculating a geometric mean over the sensor detections of the respective initial subset, (Ebrahimi – [0851], [1054], [1171]) wherein the geometric mean provides a direction of the segment vector, and determining a most distant sensor detection with respect to the preceding vertex within the respective initial subset, (Ebrahimi – [0006], [0842], [1054], [1171]) wherein a distance between the most distant sensor detection and the preceding vertex defines the absolute value of the segment vector. (Ebrahimi – [0006], [0842], [1054], [1171])

Regarding Claim 8, Ebrahimi further teaches:

wherein: a respective primary area is arranged symmetrically to the respective segment, (Ebrahimi – [1054]) the respective surrounding set of sensor detections includes regular sensor detections which are located within the primary area, (Ebrahimi – [1054]) a respective modified area is determined by modifying the respective primary area according to a distribution of the sensor detections with respect to the sensor, (Ebrahimi – [1054]) the respective surrounding set of sensor detections further includes special sensor detections which are located within the modified area and outside of the primary area, (Ebrahimi – [1054]) the regular sensor detections are provided with a normal weight, and the special sensor detections are provided with an increased weight being larger than the normal weight. (Ebrahimi – [0839], [0842], [1016], [1054])

Regarding Claim 9, Ebrahimi further teaches:

the respective modified area is determined by shifting the respective primary area in a direction perpendicular to the respective segment, and (Ebrahimi – [1054], [1053] In some embodiments, the processor may determine positions u at which intensity change along the horizontal and vertical axes occurs. In some embodiments, the processor may then determine the direction of the maximum intensity change to determine the angle of the edge normal. In some embodiments, the processor may use the angle of the edge normal to derive the local edge strength. In other embodiments, the processor may use the difference between the eigenvalues, λ.sub.1−λ.sub.2, to quantify edge strength. [1066] In some embodiments, the processor may generate a velocity map based on multiple images taken from multiple cameras at multiple time stamps, wherein objects do not move with the same speed in the velocity map. Speed of movement is different for different objects depending on how the objects are positioned in relation to the cameras.) (See 112(b) section.) the direction and the amount of the shifting depends on the distribution of the sensor detections with respect to the sensor. (Ebrahimi – [1054], [1066])

Regarding Claim 10, Ebrahimi further teaches:

wherein: a respective segment vector is determined as a difference between position vectors of a pair of vertices being associated with the respective segment, (Ebrahimi – [0006], [1054]) a respective normal vector being perpendicular to the respective segment vector is determined for each segment, (Ebrahimi – [1053], [1054]) an auxiliary vector is determined as a difference of a position vector of a reference close to the sensor position and a position vector of a center of the segment being closest to the sensor, (Ebrahimi – [0006], [1054]) for all segments, an amount of the shift of the primary area is given by a dot product of the auxiliary vector and the normal vector of the segment being closest to the sensor and a direction of the shift is given by the respective normal vector of the segment. (Ebrahimi – [1054], [1066]) (See 112(b) section.)
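The claim 9-10 geometry questioned in the §112 rejection can be sketched directly: the primary area is shifted along the segment's unit normal by an amount given by a dot product of an auxiliary vector with that normal. The 2D coordinates and the choice of reference point below are hypothetical, purely for illustration.

```python
import math

def shift_of_primary_area(v0, v1, sensor_ref):
    """Return (shift_amount, unit_normal) for the segment from v0 to v1,
    following the claim 10 construction."""
    seg = (v1[0] - v0[0], v1[1] - v0[1])          # segment vector: vertex difference
    length = math.hypot(*seg)
    normal = (-seg[1] / length, seg[0] / length)  # unit normal, perpendicular to segment
    center = ((v0[0] + v1[0]) / 2, (v0[1] + v1[1]) / 2)
    # auxiliary vector: reference near the sensor minus the segment center
    aux = (sensor_ref[0] - center[0], sensor_ref[1] - center[1])
    amount = aux[0] * normal[0] + aux[1] * normal[1]  # dot(aux, normal)
    return amount, normal

# Horizontal segment at y=2, sensor reference at the origin:
amount, normal = shift_of_primary_area((0.0, 2.0), (4.0, 2.0), (0.0, 0.0))
print(amount, normal)  # -2.0 (0.0, 1.0)
```

Note that in this construction the shift depends on the detections only through the choice of reference point and segment, which is essentially the clarity gap the examiner identifies.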
Regarding Claim 11, Ebrahimi further teaches:

wherein: for each sensor detection of the respective surrounding set, it is determined whether the sensor detection is located at the initial contour, on an inner side of the initial contour facing the sensor or on an outer side of the initial contour being averted from the sensor, (Ebrahimi – [1054], [1164]) the sensor detections located at the initial contour are provided with a normal weight, (Ebrahimi – [0839], [0842], [1016]) the sensor detections located on the inner side are provided with an increased weight which is greater than the normal weight, and (Ebrahimi – [0839], [0842], [1016]) the sensor detections located on the outer side are provided with a decreased weight which is smaller than the normal weight. (Ebrahimi – [0839], [0842], [1016])

Regarding Claim 12, Ebrahimi further teaches:

wherein: refining each segment of the initial contour includes using a regression procedure for adapting the respective segment to the weighted sensor detections of the respective surrounding subset of sensor detections associated with the respective segment. (Ebrahimi – [0842])

Regarding Claim 15, Ebrahimi further teaches:

A non-transitory computer readable medium comprising instructions for carrying out the computer implemented method of claim 1. (Ebrahimi – [1458])

Regarding Claim 22, Ebrahimi further teaches:

A vehicle comprising the computer system recited by claim 16. (Ebrahimi – [1458], [0004] Robotic devices are increasingly used within commercial and consumer environments. Some examples include robotic lawn mowers, robotic surface cleaners, autonomous vehicles,)

Conclusion

The prior art made of record and not relied upon, which is considered pertinent to applicant's disclosure or directed to the state of the art, is listed on the enclosed PTO-892.
The following is a brief description of relevant prior art that was cited but not applied: Wu (US 20200191943) describes methods, apparatus and systems for wireless object tracking, including using a regression function to fit sampled data.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to BRANDON JAMES HENSON, whose telephone number is (703) 756-1841. The examiner can normally be reached Monday-Friday, 9:00 am - 5:00 pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Robert Hodge, can be reached at 571-272-2097. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/BRANDON JAMES HENSON/
Examiner, Art Unit 3645

/ROBERT W HODGE/
Supervisory Patent Examiner, Art Unit 3645
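For orientation, the weight-then-regress refinement recited in claims 11-12 (mapped above to Ebrahimi's weights and regression) amounts to a weighted least-squares fit of each segment to its surrounding detections. The specific weight values and the straight-line model below are illustrative assumptions, not the application's disclosed parameters.

```python
def weighted_line_fit(points, weights):
    """Weighted least-squares fit of y = a*x + b to 2D points."""
    sw = sum(weights)
    mx = sum(w * x for (x, _), w in zip(points, weights)) / sw
    my = sum(w * y for (_, y), w in zip(points, weights)) / sw
    cov = sum(w * (x - mx) * (y - my) for (x, y), w in zip(points, weights))
    var = sum(w * (x - mx) ** 2 for (x, _), w in zip(points, weights))
    a = cov / var
    return a, my - a * mx

def side_weight(side):
    # Claim 11 scheme: normal weight on the contour, increased on the inner
    # (sensor-facing) side, decreased on the outer side. Values are made up.
    return {"contour": 1.0, "inner": 2.0, "outer": 0.5}[side]

# Hypothetical detections near one segment, labeled by side of the contour:
detections = [((0.0, 0.0), "contour"), ((1.0, 0.1), "inner"),
              ((2.0, 0.0), "contour"), ((3.0, -0.4), "outer")]
pts = [p for p, _ in detections]
ws = [side_weight(s) for _, s in detections]
a, b = weighted_line_fit(pts, ws)
print(round(a, 3), round(b, 3))
```

The down-weighted outlier on the outer side pulls the refined segment less than it would in an unweighted fit, which is the point of the claimed scheme.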

Prosecution Timeline

Feb 19, 2024: Application Filed
Dec 20, 2025: Non-Final Rejection (§103, §112)
Mar 23, 2026: Applicant Interview (Telephonic)
Mar 23, 2026: Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12601830: METHOD AND APPARATUS FOR OBTAINING LOCATION INFORMATION USING RANGING BLOCK AND RANGING ROUNDS (2y 5m to grant; granted Apr 14, 2026)
Patent 12584996: HARDWARE GENERATION OF 3D DMA CONFIGURATIONS (2y 5m to grant; granted Mar 24, 2026)
Patent 12566242: RADIO FREQUENCY APPARATUS AND METHOD FOR ASSEMBLING RADIO FREQUENCY APPARATUS (2y 5m to grant; granted Mar 03, 2026)
Patent 12566258: SYSTEM AND METHOD OF FULLY POLARIMETRIC PULSED RADAR (2y 5m to grant; granted Mar 03, 2026)
Patent 12560700: METHOD AND DEVICE FOR DETERMINING AT LEAST ONE ARTICULATION ANGLE OF A VEHICLE COMBINATION (2y 5m to grant; granted Feb 24, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 69%
With Interview: 96% (+27.2%)
Median Time to Grant: 3y 3m
PTA Risk: Low
Based on 55 resolved cases by this examiner; grant probability derived from career allow rate.
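The headline figures can be reproduced from the career data shown above: 38 of 55 resolved cases granted, plus the +27.2-point interview lift. A quick check (the simple additive combination and whole-percent rounding are assumptions about how the tool derives its numbers):

```python
# Reproduce the dashboard's grant-probability figures from the examiner card.
granted, resolved = 38, 55       # career outcomes (granted / resolved)
interview_lift = 27.2            # percentage-point lift with an interview

allow_rate = 100 * granted / resolved         # ~69.09%
with_interview = allow_rate + interview_lift  # ~96.29%

print(round(allow_rate))       # 69
print(round(with_interview))   # 96
```

Both rounded values match the dashboard's 69% and 96%, supporting the stated derivation from the career allow rate.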
