Prosecution Insights
Last updated: April 19, 2026
Application No. 18/225,262

FEATURE DETECTION AND LOCALIZATION FOR AUTONOMOUS SYSTEMS AND APPLICATIONS

Non-Final OA — §102, §103
Filed: Jul 24, 2023
Examiner: YANG, QIAN
Art Unit: 2677
Tech Center: 2600 — Communications
Assignee: Nvidia Corporation
OA Round: 3 (Non-Final)

Grant Probability: 74% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 7m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 74% — above average (709 granted / 963 resolved; +11.6% vs TC avg)
Interview Lift: +31.3% — strong, measured over resolved cases with an interview
Typical Timeline: 2y 7m average prosecution; 34 applications currently pending
Career History: 997 total applications across all art units
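The headline allow rate follows directly from the counts the panel cites; a one-line check (illustrative only):

```python
# Counts shown in the Examiner Intelligence panel above
granted, resolved = 709, 963
allow_rate = granted / resolved
print(f"{allow_rate:.1%}")   # 73.6%, displayed as 74%
```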

Statute-Specific Performance

§101: 15.3% (-24.7% vs TC avg)
§103: 48.3% (+8.3% vs TC avg)
§102: 21.2% (-18.8% vs TC avg)
§112: 11.1% (-28.9% vs TC avg)
Tech Center averages are estimates. Based on career data from 963 resolved cases.

Office Action

§102 §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

Applicant's amendment filed on January 9, 2026 has been entered. Claims 1, 4–5, 9, 12–13, 16 and 18 have been amended. Claims 6 and 14 have been canceled. Claims 21 and 22 have been added. Claims 1–5, 7–13 and 15–22 are still pending in this application, with claims 1, 9 and 18 being independent.

Response to Arguments

Regarding the rejections under 35 USC § 102/103, the Applicant alleges:

"[T]he Office further has not shown that Yang teaches or suggests determining a '3D location' associated with an object based at least on such a 'planar surface' and 'second vertices.' Consequently, the Office has not shown that Yang teaches or suggest 'determining, based at least on the one or more points, a planar surface that is located within the 3D shape associated with the traffic sign; based at least on the determining the planar surface, projecting first vertices associated with the bounding shape from the image to determine second vertices associated with the planar surface; [and] determining, based at least on the planar surface and the second vertices, a 3D location associated with the traffic sign,' as amended claim 1 recites."

Examiner's response: The Examiner respectfully disagrees. Yang teaches: determining, based at least on the one or more points, a planar surface that is located within the 3D shape associated with the traffic sign (Fig. 12, [0138], determine minimum depth planes 1230 by utilizing the closest lidar points); based at least on the determining the planar surface, projecting first vertices associated with the bounding shape from the image (Fig. 12, [0138], projecting vertices of rectangular in minimum depth planes 1230) to determine second vertices associated with the planar surface (Fig. 12, [0138], to determine vertices of rectangular in maximum depth planes 1240); and determining, based at least on the planar surface and the second vertices, a 3D location associated with the traffic sign ([0138], all points within frustum 1250 in 3D map are determined based on plane 1230 and vertices of rectangular in maximum depth planes 1240; [0132–0133], talked about sign 3D location). Therefore, claim 1 is still read on by Yang.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1–5, 7–13, 15–20 and 22 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Yang et al. (US Patent Application Publication 2018/0189578, IDS), hereinafter referred to as Yang.

Regarding claim 1, Yang discloses a method (abstract) comprising: receiving image data obtained using one or more image sensors of a machine (Fig. 9, 910, [0112], receiving image data obtained using camera) and depth data obtained using one or more depth sensors of the machine within an environment (Fig. 9, 930, [0114], receiving 930 a depth map including the traffic sign captured by a detection and ranging sensor); determining, based at least on the image data representative of an image, a bounding shape associated with a traffic sign depicted in the image ([0116], bounding box); determining, based at least on the bounding shape, at least a first plane and a second plane corresponding to a three-dimensional (3D) shape associated with the traffic sign within the environment ([0138], determine planes 1230 and 1240); determining, based at least on the 3D shape, one or more points from a point cloud being located between the first plane and the second plane of the 3D shape, that the one or more points correspond to the traffic sign ([0138–0139], determine points within frustum 1250 (between two planes)); determining, based at least on the one or more points, a planar surface that is located within the 3D shape associated with the traffic sign (Fig. 12, [0138], determine minimum depth planes 1230 by utilizing the closest lidar points); based at least on the determining the planar surface, projecting first vertices associated with the bounding shape from the image (Fig. 12, [0138], projecting vertices of rectangular in minimum depth planes 1230) to determine second vertices associated with the planar surface (Fig. 12, [0138], to determine vertices of rectangular in maximum depth planes 1240); determining, based at least on the planar surface and the second vertices, a 3D location associated with the traffic sign ([0138], all points within frustum 1250 in 3D map are determined based on plane 1230 and vertices of rectangular in maximum depth planes 1240; [0132–0133], talked about sign 3D location); and updating a map to indicate at least the 3D location associated with the traffic sign ([0132–0133], optimizing sign 3D location in HD map system; (update/optimize) the accuracy of the sign position relative to the OMap (disclosed in [0093–0095])).
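The frustum test the Examiner relies on (keep point-cloud points that lie between the minimum and maximum depth planes and project into the 2D bounding box) can be sketched as follows. This is an illustrative reconstruction only: the function name, the pinhole-projection step, and the camera-frame convention are our assumptions, not the implementation of Yang or of the application.

```python
import numpy as np

def points_in_frustum(points_cam, bbox, K, d_min, d_max):
    """Select lidar points (N x 3, camera frame) lying between the minimum
    and maximum depth planes whose image projection falls inside the 2D
    bounding box (u0, v0, u1, v1). Hypothetical helper for illustration."""
    u0, v0, u1, v1 = bbox
    z = points_cam[:, 2]
    in_depth = (z >= d_min) & (z <= d_max)      # between the two depth planes
    proj = (K @ points_cam.T).T                 # pinhole projection with intrinsics K
    z_safe = np.where(z == 0, 1e-9, z)          # avoid division by zero
    u, v = proj[:, 0] / z_safe, proj[:, 1] / z_safe
    in_box = (u >= u0) & (u <= u1) & (v >= v0) & (v <= v1)
    return points_cam[in_depth & in_box]
```

A point passes only if both tests hold, which is the conjunction the Examiner maps to "points within frustum 1250 (between two planes)."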
Regarding claim 2 (depends on claim 1), Yang discloses the method further comprising: determining, based at least on the bounding shape at least the first plane and the second plane, a frustum associated with the traffic sign, the frustum including the 3D shape ([0138–0139]), wherein the determining that the one or more points correspond to the traffic sign is based at least on the one or more points being located within the frustum ([0138–0139]).

Regarding claim 3 (depends on claim 1), Yang discloses the method wherein the determining the 3D shape associated with the traffic sign comprises: the determining the first plane is further based at least on the bounding shape and a set minimum distance ([0138–0139]); the determining the second plane is further based at least on the bounding shape and a set maximum distance ([0138–0139]); and determining the 3D shape based at least on the first plane and the second plane ([0138–0140]).

Regarding claim 4 (depends on claim 1), Yang discloses the method wherein the determining the 3D shape associated with the traffic sign comprises: determining, based at least on projecting one or more of the first vertices associated with the bounding shape, one or more 3D vertices ([0135–0140]); and determining the 3D shape based at least on the one or more 3D vertices, the first plane, and the second plane ([0138–0139]), wherein the determining that the one or more points correspond to the traffic sign is based at least on the one or more points being located within the 3D shape ([0135–0140]).

Regarding claim 5 (depends on claim 1), Yang discloses the method wherein the determining the planar surface that is located within the 3D shape associated with the traffic sign comprises determining the planar surface by fitting the one or more points to be located on the planar surface ([0140], fitting subset points).
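Claim 5's "fitting the one or more points to be located on the planar surface" is, generically, a least-squares plane fit. A minimal sketch follows; the SVD approach is our assumption for illustration, not necessarily the method Yang [0140] uses.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through an (N, 3) point subset: returns the
    centroid (a point on the plane) and a unit normal. The normal is the
    direction of least variance, i.e. the last right singular vector of
    the centered points."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[-1]
```

For lidar returns scattered over a sign face, this yields a plane through the cluster's centroid oriented along the face.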
Regarding claim 7 (depends on claim 1), Yang discloses the method wherein the determining the one or more points from the point cloud that correspond to the traffic sign comprises determining, based at least on 3D locations associated with points from the point cloud, that the one or more points are located between the first plane and the second plane of the 3D shape ([0138–0139], determine points within frustum 1250 (between two planes)).

Regarding claim 8 (depends on claim 1), Yang discloses the method further comprising: determining one or more intensity values associated with the one or more points ([0182–0184]), wherein the determining the one or more points from the point cloud that correspond to the traffic sign is further based at least on the one or more intensity values ([0182–0184]).

Regarding claim 9, Yang discloses a system (Fig. 44) comprising: one or more processors (Fig. 44, #4402) to: determine, based at least on image data obtained using a machine within an environment, a bounding shape associated with a traffic sign depicted in an image represented by the image data ([0116], bounding box); determine, based at least on the bounding shape, a three-dimensional (3D) shape associated with the traffic sign, the 3D shape defined using at least a first plane and a second plane located within the environment ([0138], determine planes 1230 and 1240); determine, based at least on depth data obtained using the machine, that one or more points from a point cloud associated with the depth data are located within the 3D shape (Fig. 9, 930, [0114], receiving 930 a depth map including the traffic sign captured by a detection and ranging sensor); determine, based at least on the one or more points being located within the 3D shape, a planar surface that is located within the 3D shape associated with the traffic sign (Fig. 12, [0138], determine minimum depth planes 1230); based at least on the determining the planar surface, project first vertices associated with the bounding shape from the image (Fig. 12, [0138], projecting vertices of rectangular in minimum depth planes 1230) to determine second vertices associated with the planar surface (Fig. 12, [0138], to determine vertices of rectangular in maximum depth planes 1240); determine, based at least on the planar surface and the second vertices, a 3D location associated with the traffic sign ([0138], all points within frustum 1250 in 3D map are determined based on plane 1230 and vertices of rectangular in maximum depth planes 1240; [0132–0133], talked about sign 3D location); and cause a performance of one or more operations based at least on the 3D location associated with the traffic sign ([0132–0133], optimizing sign 3D location in HD map system; (update/optimize) the accuracy of the sign position relative to the OMap (disclosed in [0093–0095])).

Regarding claims 10–13 and 15–16, they correspond to claims 2–5 and 7–8, respectively; thus, they are interpreted and rejected for the same reasons set forth for claims 2–5 and 7–8.
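The projection step recited in claims 1 and 9 (image-space bounding-shape vertices to "second vertices" on the planar surface) amounts to ray-plane intersection. A hedged sketch, assuming a pinhole camera at the origin; the names and conventions are ours, not taken from the application or Yang:

```python
import numpy as np

def project_vertices_to_plane(pixels, K, plane_point, plane_normal):
    """Back-project 2D vertices onto a 3D plane by intersecting each
    camera ray with the plane (camera assumed at the origin)."""
    K_inv = np.linalg.inv(K)
    verts = []
    for u, v in pixels:
        ray = K_inv @ np.array([u, v, 1.0])                       # ray direction through pixel
        t = (plane_point @ plane_normal) / (ray @ plane_normal)   # ray-plane intersection
        verts.append(t * ray)                                     # 3D vertex on the plane
    return np.array(verts)
```

Applied to the four corners of the bounding box with the fitted sign plane, this yields a 3D quadrilateral whose extent gives the sign's 3D location.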
Regarding claim 17 (depends on claim 9), Yang discloses the system wherein the system is comprised in at least one of: a control system for an autonomous or semi-autonomous machine ([0066], [0073], [0074], [0083]); a perception system for an autonomous or semi-autonomous machine; a system for performing simulation operations; a system for performing digital twin operations; a system for performing light transport simulation; a system for performing collaborative content creation for 3D assets; a system for performing deep learning operations; a system implemented using an edge device; a system implemented using a robot; a system implementing one or more large language models (LLMs); a system for performing conversational AI operations; a system for generating synthetic data; a system incorporating one or more virtual machines (VMs); a system implemented at least partially in a data center; or a system implemented at least partially using cloud computing resources ([0059]).

Regarding claims 18–20, they correspond to claims 1, 3 and 17, respectively; thus, they are interpreted and rejected for the same reasons set forth for claims 1, 3 and 17.

Regarding claim 22 (depends on claim 1), Yang discloses the method wherein the 3D location includes at least a portion of the planar surface that is defined using the second vertices (Fig. 12, [0138], using vertices in 1240 can define planar surface of 1240).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim 21 is rejected under 35 U.S.C. 103 as being unpatentable over Yang in view of Agarwal et al. (US Patent Application Publication 2024/0096111), hereinafter referred to as Agarwal.

Regarding claim 21 (depends on claim 1), Yang fails to explicitly disclose the method wherein the determining the planar surface that is located within the 3D shape associated with the traffic sign comprises determining the planar surface as being located at substantially a center of the one or more points. However, in a similar field of endeavor, Agarwal discloses an image processing method (Fig. 5). In addition, Agarwal discloses the method comprising determining the planar surface as being located at substantially a center of the one or more points ([0055], center of bounding box). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Yang to determine the planar surface as being located at substantially a center of the one or more points. The motivation for doing so is that the middle plane of the object can be defined so that the location can be more accurate.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to QIAN YANG, whose telephone number is (571) 270-7239.
The examiner can normally be reached Monday-Thursday, 8am-6pm.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Andrew Bee, can be reached at 571-270-5183. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/QIAN YANG/
Primary Examiner, Art Unit 2677

Prosecution Timeline

Jul 24, 2023
Application Filed
Sep 15, 2025
Non-Final Rejection — §102, §103
Nov 20, 2025
Applicant Interview (Telephonic)
Nov 20, 2025
Examiner Interview Summary
Nov 21, 2025
Response Filed
Dec 10, 2025
Final Rejection — §102, §103
Jan 09, 2026
Request for Continued Examination
Jan 23, 2026
Response after Non-Final Action
Mar 23, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598273
Camera Platform Incorporating Schedule and Stature
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12586560
ELECTRONIC APPARATUS, TERMINAL APPARATUS AND CONTROLLING METHOD THEREOF
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12586239
SMART IMAGE PROCESSING METHOD AND DEVICE USING SAME
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12579432
METHODS AND APPARATUS FOR AUTOMATED SPECIMEN CHARACTERIZATION USING DIAGNOSTIC ANALYSIS SYSTEM WITH CONTINUOUS PERFORMANCE BASED TRAINING
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12579686
Mixed Depth Object Detection
Granted Mar 17, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 74%
With Interview (+31.3%): 99%
Median Time to Grant: 2y 7m
PTA Risk: High
Based on 963 resolved cases by this examiner. Grant probability derived from career allow rate.
