Prosecution Insights
Last updated: April 19, 2026
Application No. 18/238,394

THREE-DIMENSIONAL TARGET DETECTION METHOD AND VEHICLE

Final Rejection — §101, §102, §103
Filed: Aug 25, 2023
Examiner: HELCO, NICHOLAS JOHN
Art Unit: 2667
Tech Center: 2600 — Communications
Assignee: Xiaomi EV Technology Co., Ltd.
OA Round: 2 (Final)
Grant Probability: 72% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 1m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 72% (26 granted / 36 resolved) — above average, +10.2% vs TC avg
Interview Lift: +44.4% among resolved cases with interview
Typical Timeline: 3y 1m avg prosecution, 24 currently pending
Career History: 60 total applications across all art units
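The headline figures above are simple ratios of the counts shown. A minimal sketch of the arithmetic, noting that the Tech Center average is implied by the reported delta rather than stated on the page:

```python
granted, resolved = 26, 36

allow_rate = granted / resolved              # career allow rate
print(f"allow rate: {allow_rate:.1%}")       # 72.2%, displayed as 72%

# The page reports +10.2% vs the Tech Center average, so the implied
# TC average is the career rate minus that delta (about 62%).
tc_delta = 0.102
implied_tc_avg = allow_rate - tc_delta
print(f"implied TC avg: {implied_tc_avg:.1%}")
```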

Statute-Specific Performance

§101: 19.6% (-20.4% vs TC avg)
§103: 47.1% (+7.1% vs TC avg)
§102: 16.8% (-23.2% vs TC avg)
§112: 11.0% (-29.0% vs TC avg)
TC averages are estimates. Based on career data from 36 resolved cases.
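The per-statute deltas are mutually consistent with a single Tech Center baseline. A quick consistency check (derived from the figures above, not additional page data) recovers that implied baseline from each rate/delta pair:

```python
# (overcome rate, delta vs TC average) per statute, as reported above
stats = {
    "101": (0.196, -0.204),
    "103": (0.471, +0.071),
    "102": (0.168, -0.232),
    "112": (0.110, -0.290),
}

for statute, (rate, delta) in stats.items():
    implied_tc_avg = rate - delta
    print(statute, f"{implied_tc_avg:.1%}")  # 40.0% for every statute
```

Every statute backs out the same ~40% Tech Center average, which suggests the page compares all four rates against one estimated baseline.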

Office Action

§101 §102 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Notice to Applicants

This action is in response to the amendments and remarks filed on 01/05/2026. Claims 1-15 are pending.

Corrective Actions by Applicant

Claims 1, 7-8, and 14-15 have been amended.

Response to Arguments

The examiner has fully considered Applicant’s presented arguments.

On page 9 of the remarks, Applicant argues that the amendments to claims 1, 7-8, and 14-15 overcome the objections to claims 1, 7-8, and 14-15. This is persuasive. The objections to claims 1, 7-8, and 14-15 have been withdrawn.

On page 9 of the remarks, Applicant argues that the amendments to independent claims 1, 8, and 15 overcome all 35 U.S.C. 101 rejections. This is persuasive. All 35 U.S.C. 101 rejections have been withdrawn.

For clarity, the relevant limitations of the present claim 1 are reproduced below with annotation added, to reference while responding to the remaining arguments below:

(a) inputting by the processor the surrounding image of the vehicle into a preset target detection model, and acquiring by the processor three-dimensional detection framework information of a target vehicle output by the target detection model;

(b) obtaining by the processor auxiliary detection information of the target vehicle by performing at least one of a grounding line detection, an in-garage location detection or an occlusion rate detection on the surrounding image of the vehicle; and

(c) obtaining by the processor corrected three-dimensional detection framework information of the target vehicle by correcting the three-dimensional detection framework information of the target vehicle according to the auxiliary detection information of the target vehicle.

On pages 10-11 of the remarks, Applicant argues that Zhao fails to disclose limitation “a” above. The examiner respectfully disagrees. 
The generation of the 2D and 3D bounding boxes are not unrelated processes; Figure 3A of Zhao requires them to be performed sequentially, one immediately after the other, before any downstream processing occurs. Furthermore, the claimed term “target detection model” is broad, as the broadest reasonable interpretation can include any set or subset of structure/actions that perform image processing for target detection. The rejection interprets all of the actions 301-303 of Figure 3A to be performed by an instance of a target detection model; the fact that step 302 is performed by an “image processing network” and step 303 is performed by “the vehicle equipped with the image processing apparatus” does not disavow an interpretation of the set of these image processing actions as those performed by a “target detection model.”

On pages 11-12 of the remarks, Applicant argues that Zhao fails to disclose limitation “b” above. The examiner respectfully disagrees. Although the side line/auxiliary information in Zhao is extracted from the same information defining the first result/three-dimensional detection framework information, the claim does not disavow how/when the auxiliary information, such as the grounding line, is obtained, other than by performing the detection “on the surrounding image of the vehicle”, which Zhao does perform.

On pages 12-13 of the remarks, Applicant argues that Zhao fails to disclose limitation “c” above. The examiner respectfully disagrees. Although the vertices discussed by Zhao are extracted from/within the 3D outer bounding box/three-dimensional detection framework information, the claims do not disavow the structure/location of the auxiliary information; in other words, the auxiliary information, such as the grounding line, could still be within or obtained from the three-dimensional detection framework information while remaining within the scope of the current claim language. 
However, if Applicant were to incorporate claim language relating to this argument, tentatively such as “wherein the auxiliary detection information is other information different from the three-dimensional detection framework information”, or tentatively such as “wherein the auxiliary detection information is not obtained from the three-dimensional detection framework information”, such limitations would distinguish over Zhao regarding limitation “c”.

Claim Rejections – 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-2, 8-9, and 15 are rejected under 35 U.S.C. 102(a)(1) and 102(a)(2) as being anticipated by Zhao et al. (U.S. Publ. US-2023/0047094-A1). 
Regarding claim 1, Zhao discloses a three-dimensional target detection method (see figures 3A-3C), performed by a vehicle comprising a processor (see figure 17, processor 1603), comprising: acquiring by the processor a surrounding image of the vehicle (see figure 3A, step 301 and paragraphs 0117-0119); inputting by the processor the surrounding image of the vehicle into a preset target detection model, and acquiring by the processor three-dimensional detection framework information of a target vehicle output by the target detection model (see figure 3A, step 302 and paragraph 0120, where a 2D bounding box of a target vehicle is first generated; see figure 3A, step 303 and paragraph 0129, where a 3D outer bounding box of the target vehicle is generated based on the 2D bounding box); obtaining by the processor auxiliary detection information of the target vehicle by performing at least one of a grounding line detection, an in-garage location detection or an occlusion rate detection on the surrounding image of the vehicle (see figure 4, side line A5 and paragraphs 0122-0123, where, during the 2D bounding box generation, a side line representing an intersection line between the vehicle and the ground plane is generated from the image; the examiner regards this "side line" as a grounding line); and obtaining by the processor corrected three-dimensional detection framework information of the target vehicle by correcting the three-dimensional detection framework information of the target vehicle according to the auxiliary detection information of the target vehicle (see figure 3B, step 307 and paragraphs 0159 and 0162-0165, where "first points" are selected from the side line of the target vehicle and are used to determine the orientation of the target vehicle relative to the present vehicle; then see figure 3C, step 309 and paragraphs 0170-0173, where a "first vertex" of the 3D outer bounding box is obtained from the first points, and then the first vertex is used to 
correct points of the 3D outer bounding box), wherein the corrected three-dimensional detection framework information of the target vehicle is used to control the vehicle (see paragraphs 0155-0156, 0210, and 0316, where the position and predicted behavior of the target vehicle influences control of the present vehicle).

Regarding claim 2, Zhao discloses wherein the auxiliary detection information of the target vehicle comprises grounding line information of the target vehicle (see figure 4, side line A5 and paragraphs 0122-0123, where, during the 2D bounding box generation, a side line representing an intersection line between the vehicle and the ground plane is generated from the image; the examiner regards this "side line" as a grounding line), and the obtaining the corrected three-dimensional detection framework information comprises: acquiring detection framework location information in the three-dimensional detection framework information of the target vehicle (see figure 3A, step 302 and paragraph 0120, where a 2D bounding box of a target vehicle is first generated; see figure 3A, step 303 and paragraph 0129, where a 3D outer bounding box of the target vehicle is generated based on the 2D bounding box); and obtaining the corrected three-dimensional detection framework information of the target vehicle by correcting the detection framework location information in the three-dimensional detection framework information of the target vehicle according to the grounding line information of the target vehicle (see figure 3B, step 307 and paragraphs 0159 and 0162-0165, where "first points" are selected from the side line of the target vehicle and are used to determine the orientation of the target vehicle relative to the present vehicle; then see figure 3C, step 309 and paragraphs 0170-0173, where a "first vertex" of the 3D outer bounding box is obtained from the first points, and then the first vertex is used to correct points of the 3D outer bounding box). 
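The claim 1 pipeline that the examiner maps onto Zhao (surrounding image → target detection model → 3D framework; auxiliary grounding-line detection on the same image; correction of the framework using the auxiliary info) can be sketched as a toy program. Every name below is hypothetical, and the "correction" is deliberately simplified to a vertical snap of the framework onto the detected grounding line; it is not the applicant's or Zhao's actual method:

```python
# Illustrative toy of the claimed pipeline; all names are hypothetical
# and the geometry is simplified (image y grows downward).

def target_detection_model(image):
    """Stand-in for the 'preset target detection model': returns the
    three-dimensional detection framework as eight (x, y) image-space vertices."""
    return [(x, y) for y in (40.0, 90.0) for x in (10.0, 60.0, 15.0, 65.0)]

def grounding_line_detection(image):
    """Stand-in auxiliary detection: image y where the target meets the ground."""
    return 100.0

def correct_framework(framework, grounding_y):
    """Correct the framework using the auxiliary info: shift it vertically
    so its lowest vertices sit on the detected grounding line."""
    lowest = max(y for _, y in framework)   # lowest edge (largest y)
    dy = grounding_y - lowest
    return [(x, y + dy) for x, y in framework]

surrounding_image = object()  # placeholder for the acquired surrounding image
framework = target_detection_model(surrounding_image)
corrected = correct_framework(framework, grounding_line_detection(surrounding_image))
print(max(y for _, y in corrected))  # lower edge now rests on the grounding line
```

The sketch also illustrates the examiner's broadest-reasonable-interpretation point: nothing in the claim language fixes where the grounding line comes from, so long as the detection is performed on the surrounding image.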
Regarding claim 8, Zhao discloses a vehicle (see figure 17), comprising: a processor (see figure 17, processor 1603) and a memory storing instructions that, when executed by the processor, cause the processor to (see figure 17, memory 1604 and instructions 115). The remainder of claim 8 recites steps identical to those of claim 1. Therefore, Zhao anticipates claim 8 as applied to claim 1 above.

Regarding claim 9, Zhao anticipates claim 9 as applied to claim 2 above.

Regarding claim 15, Zhao discloses a non-transitory computer-readable storage medium storing instructions that, when executed by a processor of a vehicle, cause the processor to (see paragraph 0311). The remainder of claim 15 recites steps identical to those of claim 1. Therefore, Zhao anticipates claim 15 as applied to claim 1 above.

Claim Rejections – 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 4-5, 7, 11-12, and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Zhao et al. (U.S. Publ. US-2023/0047094-A1) in view of Hayakawa (U.S. Publ. US-20170253236-A1). 
Regarding claim 4, Zhao discloses and the obtaining the corrected three-dimensional detection framework information comprises: acquiring detection framework direction information (see paragraph 0116, where an orientation angle of the target vehicle can be estimated from the 3D bounding box) and detection framework location information in the three-dimensional detection framework information of the target vehicle (see figure 3A, step 302 and paragraph 0120, where a 2D bounding box of a target vehicle is first generated; see figure 3A, step 303 and paragraph 0129, where a 3D outer bounding box of the target vehicle is generated based on the 2D bounding box).

Zhao fails to disclose wherein the auxiliary detection information of the target vehicle comprises in-garage location information of the target vehicle, and obtaining the corrected three-dimensional detection framework information of the target vehicle by correcting the detection framework direction information and the detection framework location information in the three-dimensional detection framework information of the target vehicle according to the in-garage location information of the target vehicle. 
Pertaining to the same field of endeavor, Hayakawa discloses wherein the auxiliary detection information of the target vehicle comprises in-garage location information of the target vehicle (see figure 6, step S2, which is detailed in figure 7; figure 8 and paragraphs 0074-0075, where points "Ps" on target vehicles C1 & C2 are used to estimate surfaces X1, X2, Y1, and Y2, which define the target vehicles' parking positions in a parking lot/garage), and obtaining the corrected three-dimensional detection framework information of the target vehicle by correcting the detection framework direction information and the detection framework location information in the three-dimensional detection framework information of the target vehicle according to the in-garage location information of the target vehicle (see figure 6, step S7, figures 15A-15B and paragraph 0096, where a target parking spot location, as well as the target vehicle locations, are corrected using the surfaces to produce corrected surfaces X1N, X2N, Y1N, and Y2N).

Zhao and Hayakawa are considered analogous art, as they are both directed to vehicle detection models. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have integrated the teachings of Hayakawa into Zhao because doing so enables detection of target vehicles in garages for the purpose of automated parking (see Hayakawa paragraph 0042).

Regarding claim 5, Zhao fails to disclose the limitations of claim 5. 
Pertaining to the same field of endeavor, Hayakawa discloses wherein the obtaining the corrected three-dimensional detection framework information of the target vehicle by correcting the detection framework direction information and the detection framework location information in the three-dimensional detection framework information of the target vehicle according to the in-garage location information of the target vehicle comprises: determining in-garage location direction information according to the in-garage location information of the target vehicle (see figure 8, where the angles of surfaces X1, X2, Y1, and Y2 provide the predicted direction of the target vehicles C1 and C2); determining direction offset information between the in-garage location direction information and the detection framework direction information (see figures 15A-15B, where figure 15A shows the target vehicle predictions before the offset is applied, and figure 15B shows them after the offset is applied) in response to determining that the in-garage location direction information is inconsistent with the detection framework direction information (see paragraphs 0076-0077 and 0096, where low accuracy or high error of the initial predictions are identified); and obtaining the corrected three-dimensional detection framework information of the target vehicle by correcting the detection framework direction information and the detection framework location information in the three-dimensional detection framework information of the target vehicle according to the direction offset information (see figure 6, step S7, figures 15A-15B and paragraph 0096, where a target parking spot location, as well as the target vehicle locations, are corrected using the surfaces to produce corrected surfaces X1N, X2N, Y1N, and Y2N). Zhao and Hayakawa are considered analogous art, as they are both directed to vehicle detection models. 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have integrated the teachings of Hayakawa into Zhao because doing so enables detection of target vehicles in garages for the purpose of automated parking (see Hayakawa paragraph 0042).

Regarding claim 7, Zhao discloses in response to determining that the target vehicle is in a moving state, the auxiliary detection information of the target vehicle comprises at least one of grounding line information or an occlusion rate (see paragraphs 0305-0306, where the relevant objects/target vehicles that are the subject of the above grounding line process have detected velocities indicating a moving state); Zhao fails to disclose wherein a scene corresponding to the surrounding image of the vehicle is a parking lot scene; and in response to determining that the target vehicle is in a stationary state, the auxiliary detection information of the target vehicle comprises at least one of in-garage location information or an occlusion rate.

Pertaining to the same field of endeavor, Hayakawa discloses wherein a scene corresponding to the surrounding image of the vehicle is a parking lot scene (see paragraph 0042); and in response to determining that the target vehicle is in a stationary state, the auxiliary detection information of the target vehicle comprises at least one of in-garage location information or an occlusion rate (see figure 6, step S2, which is detailed in figure 7; figure 8, and paragraphs 0074-0075, where points "Ps" on target vehicles C1 & C2 are used to estimate surfaces X1, X2, Y1, and Y2, which define the target vehicles' stationary parking positions in a parking lot/garage).

Zhao and Hayakawa are considered analogous art, as they are both directed to vehicle detection models. 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have integrated the teachings of Hayakawa into Zhao because doing so enables detection of target vehicles in garages for the purpose of automated parking (see Hayakawa paragraph 0042).

Regarding claim 11, Zhao in view of Hayakawa discloses claim 11 as applied to claim 4 above. Regarding claim 12, Zhao in view of Hayakawa discloses claim 12 as applied to claim 5 above. Regarding claim 14, Zhao in view of Hayakawa discloses claim 14 as applied to claim 7 above.

Claims 6 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Zhao et al. (U.S. Publ. US-2023/0047094-A1) in view of Hashimoto et al. (U.S. Publ. US-2020/0097756-A1).

Regarding claim 6, Zhao fails to disclose the limitations of claim 6. Pertaining to the same field of endeavor, Hashimoto discloses wherein the auxiliary detection information of the target vehicle comprises an occlusion rate of the target vehicle (see paragraph 0024, where an occlusion ratio is calculated by taking the ratio of an occluded object region to an entire object region; here, a higher occlusion ratio represents stronger occlusion instead of a lower occlusion rate from the present invention), and the obtaining the corrected three-dimensional detection framework information comprises: acquiring confidence information in the three-dimensional detection framework information of the target vehicle (see paragraph 0005, where a confidence value representing the likelihood of a correct object detection is generated; object presence is concluded if this confidence is higher than a second confidence threshold); and obtaining the corrected three-dimensional detection framework information of the target vehicle by lowering the confidence information in the three-dimensional detection framework information of the target vehicle in response to determining that the occlusion rate of the target vehicle is less than 
a preset occlusion rate threshold (see paragraphs 0012 and 0070, where if the occlusion ratio is high enough, the second confidence threshold is lowered).

Zhao and Hashimoto are considered analogous art, as they are both directed to vehicle detection models. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have integrated the teachings of Hashimoto into Zhao because doing so enables more accurate detection of highly occluded objects (see Hashimoto paragraph 0070).

Regarding claim 13, Zhao in view of Hashimoto discloses claim 13 as applied to claim 6 above.

Allowable Subject Matter

Claims 3 and 10 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. As indicated in the previous office action, as claims 3 and 10 are now eligible under 35 U.S.C. 101, they would now be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Regarding claim 3, Zhao discloses wherein the obtaining the corrected three-dimensional detection framework information comprises: determining a detection framework region of the target vehicle according to the detection framework location information of the target vehicle (see figure 3A, step 302 and paragraph 0120, where a 2D bounding box of a target vehicle is first generated; see figure 3A, step 303 and paragraph 0129, where a 3D outer bounding box of the target vehicle is generated based on the 2D bounding box). 
However, neither Zhao nor the rest of the cited art disclose or reasonably suggest in response to determining that there is a first grounding point in the grounding line information, wherein the first grounding point is not located in the detection framework region, determining location offset information of the detection framework region according to location information of the first grounding point; and obtaining the corrected detection framework location information of the target vehicle by correcting the detection framework location information in the three-dimensional detection framework information of the target vehicle according to the location offset information.

Yang et al. (“Ground Plane Matters: Picking Up Ground Plane Prior in Monocular 3D Object Detection”, 3 November 2022) discloses a process of using vehicle-ground contact points to aid in initial 3D bounding box prediction (see page 2, figure 2, as well as page 7, “2) Dynamic Back Projection of Contact Points with Ground Plane Equation”), but does not consider if these contact points fall outside of an initial 3D bounding box. A 3D bounding box refinement/correction process is detailed (see page 8, "4) 3D Bounding Box Refinement"), but does not involve the contact points, much less identifying if any fall outside of the bounding box and correcting according to the respective offset.

Regarding claim 10, similar reasons for allowability apply to claim 10 as claim 3 above.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. 
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Contact Information

Any inquiry concerning this communication or earlier communications from the examiner should be directed to NICHOLAS JOHN HELCO whose telephone number is (703) 756-5539. The examiner can normally be reached on Monday-Friday from 9:00 AM to 5:00 PM.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Matthew Bella, can be reached at telephone number 571-272-7778. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published applications may be obtained from Patent Center. Status information for unpublished applications is available through Patent Center for authorized users only. Should you have questions about access to Patent Center, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) Form at https://www.uspto.gov/patents/uspto-automated-interview-request-air-form.

/NICHOLAS JOHN HELCO/
Examiner, Art Unit 2667

/MATTHEW C BELLA/
Supervisory Patent Examiner, Art Unit 2667

Prosecution Timeline

Aug 25, 2023
Application Filed
Oct 01, 2025
Non-Final Rejection — §101, §102, §103
Jan 05, 2026
Response Filed
Mar 03, 2026
Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602867
METHOD FOR AUTONOMOUSLY SCANNING AND CONSTRUCTING A REPRESENTATION OF A STAND OF TREES
2y 5m to grant — Granted Apr 14, 2026
Patent 12597092
Systems and Methods for Altering Images
2y 5m to grant — Granted Apr 07, 2026
Patent 12586370
VEHICLE IMAGE ANALYSIS SYSTEM FOR A PERIPHERAL CAMERA
2y 5m to grant — Granted Mar 24, 2026
Patent 12573018
DEFECT ANALYSIS DEVICE, DEFECT ANALYSIS METHOD, NON-TRANSITORY COMPUTER-READABLE MEDIUM, AND LEARNING DEVICE
2y 5m to grant — Granted Mar 10, 2026
Patent 12561754
METHOD AND SYSTEM FOR PROCESSING IMAGE BASED ON WEIGHTED MULTIPLE KERNELS
2y 5m to grant — Granted Feb 24, 2026
Based on the examiner's 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 72%
With Interview: 99% (+44.4%)
Median Time to Grant: 3y 1m
PTA Risk: Moderate

Based on 36 resolved cases by this examiner. Grant probability derived from career allow rate.
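The page does not state how the 99% with-interview figure relates to the +44.4% interview lift. Assuming the lift is relative (with-interview rate = no-interview rate × 1.444), the implied no-interview rate can be backed out; this is an assumption about the metric, not data from the page:

```python
with_interview = 0.99
relative_lift = 0.444

# If the reported lift is relative, the implied rate for resolved cases
# without an interview is the with-interview rate divided by (1 + lift).
implied_without = with_interview / (1 + relative_lift)
print(f"implied no-interview rate: {implied_without:.1%}")  # about 68.6%
```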
