Prosecution Insights
Last updated: April 19, 2026
Application No. 18/036,011

OBJECT DETECTION AND IDENTIFICATION SYSTEM AND METHOD FOR MANNED AND UNMANNED VEHICLES

Final Rejection §103
Filed: May 09, 2023
Examiner: ZHAO, LEI
Art Unit: 2668
Tech Center: 2600 — Communications
Assignee: Brightway Vision Ltd.
OA Round: 2 (Final)

Grant Probability: 74% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 1m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 74%, above average (41 granted / 55 resolved; +12.5% vs TC avg)
Interview Lift: +30.9%, strong (resolved cases with interview)
Typical Timeline: 3y 1m average prosecution; 29 currently pending
Career History: 84 total applications across all art units
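The headline figures above are simple ratios over the examiner's resolved cases. As a quick sketch of the derivation, using only the counts shown on the card (the display's rounding convention is an assumption, not stated on the page):

```python
# Counts taken from the examiner card above.
granted, resolved = 41, 55

# Career allow rate: granted divided by resolved cases.
# The card displays this as "74%"; the exact ratio is slightly higher.
allow_rate = granted / resolved
print(f"{allow_rate:.1%}")  # 74.5%
```

The "99% with interview" figure is reported separately by the page and is not reproduced here, since how the +30.9% lift is combined with the base rate is not specified.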

Statute-Specific Performance

§101: 4.8% (-35.2% vs TC avg)
§103: 64.4% (+24.4% vs TC avg)
§102: 26.2% (-13.8% vs TC avg)
§112: 4.6% (-35.4% vs TC avg)
Tech Center averages are estimates. Based on career data from 55 resolved cases.

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments filed December 5, 2025 with respect to claims 1-4, 6-7, 9, 11, 13, 17, and 33-42 have been considered but are moot in view of the new grounds of rejection.

Claim Objections

Claims 1, 3-4, 17, 33, 35-36, and 42 are objected to because of the following informalities: as recited, the phrase "an object (that) is located in the scene that casts a shadow" is ambiguous, since it can cover both objects that cast a shadow in the scene and objects that do not. For the record, the examiner recommends that claims 1, 3-4, 17, 33, 35-36, and 42 be rewritten as follows, and this interpretation will apply until clarification is made of record or applicant accepts this proposal and amends accordingly.

1. A system employed by a platform for detecting an obstacle to the platform in a scene, the system comprising: a plurality of illuminators arranged at different locations of the platform; at least one imager; a processor; and a memory configured to store data and software code executable by the processor to perform the following: illuminating the scene from at least two different directions by the plurality of illuminators; acquiring, by the at least one imager, a plurality of images of the illuminated scene; wherein a height above ground of the at least one imager is different from a height above ground of the plurality of illuminators; comparing at least one image of the scene illuminated from a first direction with at least one image of the scene illuminated from a second direction which is different from the first direction; determining, based on the comparing, whether an object that casts a shadow is located in the scene.

3. The system of claim 1, wherein the determining whether an object that casts a shadow is located in the scene is performed with respect to the at least one selected depth-of-field.

4. The system of claim 1, wherein the determining whether an object that casts a shadow is located in the scene is performed for a plurality of two different depth-of-fields of the scene.

17. A method for detecting an obstacle to a platform in a scene, the method comprising: actively illuminating a scene from at least two different directions by a plurality of illuminators of the platform, wherein the plurality of illuminators are arranged at different locations of the platform; acquiring, by at least one imager of the platform, a plurality of images of the illuminated scene; wherein a height above ground of the at least one imager is different from a height above ground of the plurality of illuminators; comparing at least one image of the scene illuminated from a first direction with at least one image of the scene illuminated from a second direction which is different from the first direction; determining, based on the comparing, whether an object that casts a shadow is located in the scene.

33. A system employed by a platform for detecting an obstacle to the platform in a scene, the system comprising: a processor; and a memory configured to store data and software code portions executable by the processor to perform the following: acquiring, by a plurality of imagers, a plurality of images of a scene comprising at least one region of interest (ROI) from at least two different directions, wherein at least one of the plurality of images is acquired while the ROI is actively illuminated by at least one illuminator; wherein a height above ground of the plurality of imagers is different from a height above ground of the at least one illuminator; comparing at least one first image of the actively illuminated scene acquired from a first direction with at least one second image of the actively illuminated scene acquired from at least one second direction which is different from the first direction; determining, based on the comparing, whether an object that casts a shadow is located in the scene.

35. The system of claim 33, wherein the determining whether an object that casts a shadow is located in the scene is performed with respect to

36. The system of claim 33, wherein the determining whether an object that casts a shadow is located in the scene is performed for

42. The system of claim 33, further configured for: acquiring, by a plurality of imagers, a plurality of images of a scene comprising at least one region of interest (ROI) from at least two different directions, wherein at least one of the plurality of images is acquired while the ROI is actively illuminated by at least one illuminator; wherein a height above ground of the plurality of imagers is different from a height above ground of the at least one illuminator; comparing at least one first image of the actively illuminated scene acquired from a first direction with at least one second image of the actively illuminated scene acquired from at least one second direction which is different from the first direction; determining, based on the comparing, whether an object that casts a shadow is located in the scene of the at least one ROI; determining one or more characteristics of the at least one

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 3, 6-7, 17, 33, 35, 37-38 and 42 are rejected under 35 U.S.C. 103 as being unpatentable over O'Cualain (US Patent Publication No. US 2015/0291097 A1), hereinafter O'Cualain, in view of Agrawal (US Patent No. US 7,983,487 B2), hereinafter Agrawal.
Regarding claim 33, O'Cualain teaches a system employed by a platform for detecting an obstacle to the platform in a scene, the system comprising: a processor (The camera system 2 includes a camera 4, a lighting control (not shown) and an image processing device 5, which can for example be integrated in the camera 4. [0040]); and a memory configured to store data and software code portions executable by the processor (the claimed memory is considered inherent in the computer system disclosed by O'Cualain because it is a necessary component in conventional computer systems) to perform the following: acquiring, by a plurality of imagers (In one embodiment, not shown, the camera system 2 can include several cameras 4. [0043]), a plurality of images of a scene comprising at least one region of interest (ROI) from at least two different directions (In one embodiment, not shown, the camera system 2 can include several cameras 4. Moreover, the camera system 2 can also be constructed as a stereo camera system. [0043]), wherein at least one of the plurality of images is acquired while the ROI is actively illuminated by at least one illuminator (A first image of the environmental region is provided by a camera of the camera system with illumination of the environmental region using a light source of the motor vehicle such that a shadow of the object caused by the illumination is depicted in the first image. Abstract); comparing at least one first image of the actively illuminated scene acquired from a first direction (According to steps S2a to S2n, images 12 are captured with illuminated environmental region 6, wherein, as explained above, an image 12 is provided for each illumination scenario. The steps S2a to S2n thus include the capture of images 12 with the environmental region 6 illuminated by the different light sources 10. Each light source 10 is attached at a different location of the motor vehicle 1. Thus, each light source 10 generates a different shadow 13 of the object 7 (see FIG. 4), i.e. the shadow 13 is cast into different directions and thus onto different ground areas. [0054]) with at least one second image of the actively illuminated scene acquired from at least one second direction which is different from the first direction (id., [0054]); determining, based on the comparing (Finally, in step S7, an edge image 18 with edges is generated from all of the step edges with negative transition 17, which partially or completely depicts a contour of the object 7. The completeness of the contour of the object 7 depends on the positions of the light sources 10. Based on the edges, the object 7 is finally detected. [0057]), whether an object that casts a shadow is located in the scene (FIG. 4 shows the image 8 of the camera 4 with the extracted vertical edges 19 of objects 7 presenting an obstacle, thus having a certain height above the ground. Objects 7, which are flat and do not have any height, such as a flat object 20 in FIG. 4, are not detected, because here, a shadow 13 is not cast. [0059]).
O'Cualain does not teach the following limitations as further recited, but Agrawal teaches wherein a height above ground of the plurality of imagers (The set (multiple) images 111 are acquired 110 by one or more cameras 105. Column 3, line 48) is different from a height above ground of the at least one illuminator (as shown in FIG. 1, the height above ground of camera 105 is different from the height above ground of illuminator 104). It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified O'Cualain to incorporate the teachings of Agrawal, using imagers with a height above ground that is different from a height above ground of the at least one illuminator, in order for a shadow to be cast for the extraction of the horizontal edges.

Regarding claim 35, O'Cualain in the combination teaches the system of claim 33, wherein the determining whether an object that casts a shadow is located in the scene is performed with respect to (FIG. 4 shows the image 8 of the camera 4 with the extracted vertical edges 19 of objects 7 presenting an obstacle, thus having a certain height above the ground. Objects 7, which are flat and do not have any height, such as a flat object 20 in FIG. 4, are not detected, because here, a shadow 13 is not cast. [0059]).

Regarding claim 37, O'Cualain in the combination teaches the system of claim 33, wherein the system is further configured to determine a direction and/or size of a shadow in the scene (A first image of the environmental region is provided by a camera of the camera system with illumination of the environmental region using a light source of the motor vehicle such that a shadow of the object caused by the illumination is depicted in the first image. Abstract).
Regarding claim 38, O'Cualain in the combination teaches the system of claim 33, further configured to perform (FIG. 4 shows the image 8 of the camera 4 with the extracted vertical edges 19 of objects 7 presenting an obstacle, thus having a certain height above the ground. [0059]): determining a distance between one of the plurality of imagers and an object in the scene (In particular, the camera system 2 is used with calibration data of the camera 4. This calibration data is composed of an internal and an external orientation. Due to the calibration data, a distance 22 from the object 7 to the camera 4 and/or to the motor vehicle 1 can each be calculated. [0062]); determining a distance between the object in the scene and the moving platform (id., [0062]); increasing contrast of an object located in the scene (id., [0059]); or any combination of the aforesaid.

Regarding claim 42, O'Cualain in the combination teaches the system of claim 33, further configured for: acquiring, by a plurality of imagers (In one embodiment, not shown, the camera system 2 can include several cameras 4. [0043]), a plurality of images of a scene comprising at least one region of interest (ROI) from at least two different directions (In one embodiment, not shown, the camera system 2 can include several cameras 4. Moreover, the camera system 2 can also be constructed as a stereo camera system. [0043]), wherein at least one of the plurality of images is acquired while the ROI is actively illuminated by at least one illuminator (A first image of the environmental region is provided by a camera of the camera system with illumination of the environmental region using a light source of the motor vehicle such that a shadow of the object caused by the illumination is depicted in the first image. Abstract); comparing at least one first image of the actively illuminated scene acquired from a first direction (According to steps S2a to S2n, images 12 are captured with illuminated environmental region 6, wherein, as explained above, an image 12 is provided for each illumination scenario. The steps S2a to S2n thus include the capture of images 12 with the environmental region 6 illuminated by the different light sources 10. Each light source 10 is attached at a different location of the motor vehicle 1. Thus, each light source 10 generates a different shadow 13 of the object 7 (see FIG. 4), i.e. the shadow 13 is cast into different directions and thus onto different ground areas. [0054]) with at least one second image of the actively illuminated scene acquired from at least one second direction which is different from the first direction (id., [0054]); determining, based on the comparing (Finally, in step S7, an edge image 18 with edges is generated from all of the step edges with negative transition 17, which partially or completely depicts a contour of the object 7. The completeness of the contour of the object 7 depends on the positions of the light sources 10. Based on the edges, the object 7 is finally detected. [0057]), whether an object that casts a shadow is located in the scene of the at least one ROI (FIG. 4 shows the image 8 of the camera 4 with the extracted vertical edges 19 of objects 7 presenting an obstacle, thus having a certain height above the ground. Objects 7, which are flat and do not have any height, such as a flat object 20 in FIG. 4, are not detected, because here, a shadow 13 is not cast. [0059]); determining one (FIG. 4 shows the image 8 of the camera 4 with the extracted vertical edges 19 of objects 7 presenting an obstacle, thus having a certain height above the ground. [0059]); and providing an output descriptive of [[the]] one or more characteristics of the at least one (In a further development, it is provided that a warning signal is output by the image processing device depending on the distance and/or depending on the classification of the object, by means of which a driver of the motor vehicle is warned of the object. [0021]). Agrawal in the combination teaches wherein a height above ground of the plurality of imagers (The set (multiple) images 111 are acquired 110 by one or more cameras 105. Column 3, line 48) is different from a height above ground of the at least one illuminator (as shown in FIG. 1, the height above ground of camera 105 is different from the height above ground of illuminator 104).
Regarding claim 1, O'Cualain in the combination teaches a system employed by a platform for detecting an obstacle to the platform in a scene, the system comprising: a plurality of illuminators arranged at different locations of the platform (The steps S2a to S2n thus include the capture of images 12 with the environmental region 6 illuminated by the different light sources 10. [0054]); at least one imager (In one embodiment, not shown, the camera system 2 can include several cameras 4. [0043]); a processor (The camera system 2 includes a camera 4, a lighting control (not shown) and an image processing device 5, which can for example be integrated in the camera 4. [0040]); and a memory configured to store data and software code executable by the processor (the claimed memory is considered inherent in the computer system disclosed by O'Cualain because it is a necessary component in conventional computer systems) to perform the following: illuminating the scene from at least two different directions by the plurality of illuminators ([0054]); acquiring, by the at least one imager, a plurality of images of the illuminated scene (According to steps S2a to S2n, images 12 are captured with illuminated environmental region 6, wherein, as explained above, an image 12 is provided for each illumination scenario. [0054]); comparing at least one image of the scene illuminated from a first direction (The steps S2a to S2n thus include the capture of images 12 with the environmental region 6 illuminated by the different light sources 10. Each light source 10 is attached at a different location of the motor vehicle 1. Thus, each light source 10 generates a different shadow 13 of the object 7 (see FIG. 4), i.e. the shadow 13 is cast into different directions and thus onto different ground areas. [0054]) with at least one image of the scene illuminated from a second direction which is different from the first direction (id., [0054]); determining, based on the comparing (Finally, in step S7, an edge image 18 with edges is generated from all of the step edges with negative transition 17, which partially or completely depicts a contour of the object 7. The completeness of the contour of the object 7 depends on the positions of the light sources 10. Based on the edges, the object 7 is finally detected. [0057]), whether an object is located in the scene that casts a shadow (FIG. 4 shows the image 8 of the camera 4 with the extracted vertical edges 19 of objects 7 presenting an obstacle, thus having a certain height above the ground. Objects 7, which are flat and do not have any height, such as a flat object 20 in FIG. 4, are not detected, because here, a shadow 13 is not cast. [0059]). Agrawal in the combination teaches wherein a height above ground of the at least one imager is different from a height above ground of the plurality of illuminators (as shown in FIG. 1, the height above ground of camera 105 is different from the height above ground of illuminator 104).

Method claim 17 is drawn to the method of using the corresponding apparatus claimed in claim 1. Therefore, method claim 17 corresponds to apparatus claim 1 and is rejected for the same reasons of obviousness as used above. Apparatus claims 3 and 6-7 are drawn to the apparatus as claimed in claims 35 and 38-39. Therefore, apparatus claims 3 and 6-7 correspond to apparatus claims 35 and 38-39, and are rejected for the same reasons of obviousness as used above.

Claims 2 and 34 are unamended and are rejected based on the revised combination of O'Cualain in view of Agrawal, as applied to claim 33 above, and further in view of Knox (Canadian Patent Pub. No. CA 2993208 A1), hereinafter Knox. The ground of rejection established in the last Office Action is fully incorporated herein.

Claims 4, 13 and 36 are rejected under 35 U.S.C. 103 as being unpatentable over O'Cualain in view of Agrawal, further in view of Sonn (PCT Patent Publication No. WO 2017/149370 A1), hereinafter Sonn. Regarding claim 36, O'Cualain and Agrawal teach all of the elements of the claimed invention as stated in claim 33 except for the following limitations as further recited. However, Sonn teaches wherein the determining whether an object that casts a shadow is located in the scene is performed for (FIG. 6 is a schematic illustration of the gated imaging of objects in a scene by adapting depths of fields (DOFs), according to some embodiments. [0070]).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified O'Cualain and Agrawal to incorporate the teachings of Sonn, determining shadow-related characteristics for a plurality of two different depth-of-fields of the scene, in order to identify multiple objects in the scene.

Claim 13 is unamended and is rejected based on the revised combination of O'Cualain in view of Agrawal, as applied to claim 33 above, and further in view of Sonn. The ground of rejection established in the last Office Action is fully incorporated herein. Apparatus claim 4 is drawn to the apparatus as claimed in claim 36. Therefore, apparatus claim 4 corresponds to apparatus claim 36, and is rejected for the same reasons of obviousness as used above.

Claim 40 is unamended and is rejected based on the revised combination of O'Cualain in view of Agrawal, as applied to claim 33 above, and further in view of Edmonds (PCT Patent Publication No. WO 2021/230934 A1), hereinafter Edmonds. The ground of rejection established in the last Office Action is fully incorporated herein.

Claims 11 and 39 are rejected under 35 U.S.C. 103 as being unpatentable over O'Cualain in view of Agrawal, and further in view of Yao (Chinese Patent Publication No. CN 107606578 B), hereinafter Yao. Claim 11 is unamended and is rejected based on the revised combination of O'Cualain in view of Agrawal, as applied to claim 33 above, and further in view of Yao. The ground of rejection established in the last Office Action is fully incorporated herein.
Regarding claim 39, O'Cualain in the combination teaches the system of claim 33, further configured to classify[[ing]] an object in the scene as one of the following: "obstacle" or "non-obstacle" (The proposed method provides an edge image with edges of objects having a certain height above the ground and thus presenting actual obstacles to the motor vehicle. In comparison, for example in case of ground markings, no shadow is generated. [0012]). Yao in the combination further teaches wherein actively illuminating the scene comprises: simultaneously emitting light from at least one first and the at least one second illuminator of the plurality of illuminators (a light source unit composed of multiple light sources, where light sources in the same light source group have the same light color and spectrum (i.e., one first illuminator), simultaneously illuminating with light sources of a different light source group having the same light color and a different spectrum (i.e., one second illuminator). Page 3, 5th paragraph), wherein light emitted from the at least one first illuminator has different characteristics than light emitted from the at least one second illuminator (id., Page 3, 5th paragraph); and differentiating between the plurality of acquired images based on the characteristics of the light emitted by the at least one first and the at least one second illuminator (Due to the combination of a plurality of spectral characteristics, the light irradiated onto the object generates an obvious colour effect, so as to greatly improve the resolution of the target object. Abstract).
Allowable Subject Matter

Claims 9 and 41 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. The following is a statement of reasons for the indication of allowable subject matter: the closest prior art of record teaches the system of claim 33. However, none of it, alone or in any combination, teaches a system further configured to illuminate the ROI with at least one illuminator having an illuminator axis and at least one imager having an optical axis on a platform such that the illuminator axis and the optical axis substantially coincide, with the result that shadows due to illumination by the at least one illuminator cannot be imaged by the at least one imager, to identify scene regions which are associated with false-positive shadows, as specified in claim 41.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to LEI ZHAO, whose telephone number is (703) 756-1922. The examiner can normally be reached Monday - Friday, 8:00 am - 5:00 pm.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, VU LE, can be reached at (571) 272-7332. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/LEI ZHAO/ Examiner, Art Unit 2668
/VU LE/ Supervisory Patent Examiner, Art Unit 2668
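The prior-art technique at the heart of the §103 rejection compares images of the same scene lit from different directions, so that a raised object reveals itself by casting its shadow onto different ground areas under each light, while flat markings cast no shadow at all. The following is a hypothetical toy sketch of that comparison step; the function name, threshold, and synthetic frames are invented for illustration and are not taken from O'Cualain, Agrawal, or the claims:

```python
import numpy as np

def shadow_casting_object_present(img_a, img_b, threshold=0.25):
    """Toy shadow-based obstacle check (illustrative only).

    img_a and img_b are grayscale frames (floats in [0, 1]) of the same
    scene, each illuminated from a different direction. A raised object
    casts its shadow onto a different ground area under each light, so a
    pixel that is dark in one frame but lit in the other signals a cast
    shadow. Flat markings look identical under both lights and produce
    no such difference.
    """
    diff = np.abs(img_a.astype(float) - img_b.astype(float))
    shadow_mask = diff > threshold  # dark under only one illumination
    return bool(shadow_mask.any()), shadow_mask

# Synthetic frames: a uniform ground plane, then the same plane with an
# object whose shadow falls on pixels 2-3 under light A and on pixels
# 6-7 under light B.
ground = np.full((1, 10), 0.8)
lit_a = ground.copy(); lit_a[0, 2:4] = 0.1  # shadow cast one way
lit_b = ground.copy(); lit_b[0, 6:8] = 0.1  # shadow cast the other way

print(shadow_casting_object_present(ground, ground)[0])  # False: nothing raised
print(shadow_casting_object_present(lit_a, lit_b)[0])    # True: object detected
```

O'Cualain's actual method goes further, extracting step edges with negative transitions from each differently lit image and assembling them into an edge image of the object's contour; the simple thresholded difference above only stands in for that comparison step.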

Prosecution Timeline

May 09, 2023
Application Filed
Aug 04, 2025
Non-Final Rejection — §103
Dec 05, 2025
Response Filed
Feb 14, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597143: SEMANTIC SEGMENTATION OF SPARSE MULTI-DIMENSIONAL TENSORS
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12560719: AUTONOMOUS VEHICLE ENVIRONMENTAL PERCEPTION SOFTWARE ARCHITECTURE
Granted Feb 24, 2026 (2y 5m to grant)
Patent 12536704: LIGHT FIELD IMAGE PROCESSING METHOD, LIGHT FIELD IMAGE ENCODER AND DECODER, AND STORAGE MEDIUM
Granted Jan 27, 2026 (2y 5m to grant)
Patent 12511765: MITIGATION OF REGISTRATION DATA OVERSAMPLING
Granted Dec 30, 2025 (2y 5m to grant)
Patent 12511907: CROWDING DEGREE ESTIMATION METHOD, ELECTRONIC DEVICE AND STORAGE MEDIUM
Granted Dec 30, 2025 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 74%
With Interview: 99% (+30.9%)
Median Time to Grant: 3y 1m
PTA Risk: Moderate
Based on 55 resolved cases by this examiner. Grant probability derived from career allow rate.
