Prosecution Insights
Last updated: April 19, 2026
Application No. 17/677,144

LIDAR SYSTEM FOR DYNAMICALLY SELECTING FIELD-OF-VIEWS TO SCAN WITH DIFFERENT RESOLUTIONS

Non-Final OA (§103, §112)
Filed: Feb 22, 2022
Examiner: NGUYEN, RACHEL NICOLE
Art Unit: 3645
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: BEIJING VOYAGER TECHNOLOGY CO., LTD.
OA Round: 3 (Non-Final)

Grant Probability: 21% (At Risk)
Expected OA Rounds: 3-4
Time to Grant: 4y 1m
Grant Probability With Interview: 84%

Examiner Intelligence

Grants only 21% of cases.

Career Allow Rate: 21% (6 granted / 28 resolved; -30.6% vs TC avg)
Interview Lift: +62.5% (resolved cases with interview)
Avg Prosecution: 4y 1m (typical timeline; 49 currently pending)
Total Applications: 77 (career history, across all art units)
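The headline figures above are simple arithmetic on the examiner's case counts. As a sketch (an assumption about the panel's math, not the vendor's actual code):

```python
# Sketch of the panel's arithmetic (assumed, not the tool's exact formula):
# career allow rate = granted cases / resolved cases.
granted, resolved = 6, 28   # counts shown in the panel above
allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")  # 21.4%, rounded to 21% above
```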

Statute-Specific Performance

§101: 1.5% (-38.5% vs TC avg)
§103: 58.5% (+18.5% vs TC avg)
§102: 24.7% (-15.3% vs TC avg)
§112: 13.7% (-26.3% vs TC avg)

Based on career data from 28 resolved cases; "vs TC avg" compares against a Tech Center average estimate.
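A quick consistency check of the figures above (a hedged sketch, assuming each "vs TC avg" delta is simply the examiner's per-statute rate minus the Tech Center average estimate):

```python
# Assumption: delta = examiner rate - Tech Center average estimate.
# Per-statute rates and deltas copied from the chart above (percent).
rates = {
    "101": (1.5, -38.5),
    "103": (58.5, +18.5),
    "102": (24.7, -15.3),
    "112": (13.7, -26.3),
}
for statute, (examiner_pct, delta_pct) in rates.items():
    tc_avg = examiner_pct - delta_pct  # implied Tech Center average
    print(f"Section {statute}: implied TC avg = {tc_avg:.1f}%")
```

Under that assumption every statute implies the same 40.0% baseline, consistent with the chart drawing a single Tech Center average line.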

Office Action

Rejections: §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

The following addresses applicant’s remarks/amendments dated 12 January 2026. The amendment is sufficient to overcome the claim objections. Claims 1, 4-9, 12, 15-18, and 20-23 were amended. Claims 2, 13, and 19 were previously cancelled. No new claims were added. Therefore, claims 1, 3-12, 14-18, and 20-23 are currently pending in the current application and are addressed below.

Response to Arguments

Applicant’s arguments, see page 13 of the Remarks, filed 12 January 2026, with respect to the rejections of claims 1, 12, and 18 under 35 U.S.C. 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground of rejection is made in view of Pelz et al., US 20200041618 A1, in view of Slobodyanyuk et al., US 20180067195 A1.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 5-6 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Claim 5 recites the limitation "the object acceleration feature map" in lines 2 and 5 of the claim. Because the plurality of feature maps in claim 1 includes at least one of an object acceleration feature map or an object velocity feature map, there is insufficient antecedent basis for "the object acceleration feature map" in the claim.

Claim 6 recites the limitation "the object velocity feature map" in lines 2 and 5 of the claim. Because the plurality of feature maps in claim 1 includes at least one of an object acceleration feature map or an object velocity feature map, there is insufficient antecedent basis for "the object velocity feature map" in the claim.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 4, 6-8, 12, 15-18, 20, and 22 are rejected under 35 U.S.C. 103 as being unpatentable over Pelz et al., US 20200041618 A1 ("Pelz") in view of Slobodyanyuk et al., US 20180067195 A1 ("Slobodyanyuk").
Regarding claim 1, Pelz discloses a light detection and ranging (LiDAR) system, comprising: a first transmitter subsystem (Fig. 1, pixelated light source 100, Paragraph [0032]); […]; a controller coupled to the first transmitter subsystem to identify a first field-of-view (FOV) to be scanned (Fig. 4, controller 106, low-resolution scan 300, Paragraph [0044]) and a second FOV within the first FOV, wherein the second FOV is associated with an area-of-interest (Fig. 4, controller 106, high-resolution scan 304, Paragraph [0046]), wherein to identify the second FOV, the controller is further configured to: obtain a set of object data associated with a third FOV scanned during a third optical sensing procedure performed prior to a first optical sensing procedure and a second optical sensing procedure (Fig. 4, controller 106, Blocks 300 to 302 are repeated until object is detected; Paragraph [0044]-[0045]); identify an object based on the set of object data (Fig. 4, controller 106, Block 302, Paragraph [0045]); […]; cause the first transmitter subsystem to scan the first FOV using a first resolution during the first optical sensing procedure (Fig. 4, low-resolution scan 300, Paragraph [0044]); and cause the second transmitter subsystem to scan the second FOV using a second resolution during the second optical sensing procedure, the second resolution being finer than the first resolution (Fig. 4, controller 106, high-resolution scan 304, Paragraph [0046]-[0047]); at least one photodetector configured to detect light returned from the first FOV scanned during the first optical sensing procedure and light returned from the second FOV scanned during the second optical sensing procedure (Fig. 1, photosensor 110, Paragraph [0036]); and […].
Pelz does not teach: a second transmitter subsystem; generate a plurality of feature maps using a convolutional neural network based on the object data, each feature in the plurality of feature maps corresponding to a different object criteria of one or more object criteria, wherein the plurality of feature maps includes an object type feature map and at least one of an object acceleration feature map or an object velocity feature map; and in response to determining that an object criteria is met by the object based on the object type feature map of the plurality of feature maps, the object acceleration feature map of the plurality of feature maps, or the object velocity feature maps of the plurality of feature maps, identify the area-of-interest based on the object; a signal processor coupled to the at least one photodetector and configured to: generate point cloud data based on the light returned from the first FOV and the second FOV and detected by the at least one photodetector.

However, Slobodyanyuk teaches a LIDAR system with four laser emitter components that are grouped into two groups (Fig. 1, laser emitter components 106a-b and 108a-b, Paragraph [0024]). One group of laser emitters emits a wide-angle FOV and the other group emits a narrow-angle FOV (Paragraph [0025]-[0027]). Slobodyanyuk also teaches a processor that determines objects detected in the FOVs. The processor can detect the objects using a convolutional neural network to detect characteristics of the object, such as type of object and velocity of the object (Fig. 3, processor 310, Paragraph [0043]). The processor also determines time-of-flight information to map objects in the environment (Paragraph [0044]; see also Paragraph [0074]). Lastly, Slobodyanyuk teaches selecting laser emitters for a FOV range based on object characteristics, such as density of objects or object type or speed (Fig. 8, steps 802 and 808, Paragraph [0070]-[0072]).
It would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Pelz’s LIDAR system by adding a second transmitter and a processor which maps and determines objects detected in the FOV based on object type and object speed, which is disclosed by Slobodyanyuk. One of ordinary skill in the art would have been motivated to make these modifications in order to “achieve better performance” and construct a map of the environment which may assist with autonomous navigation, as suggested by Slobodyanyuk (Paragraph [0038], [0074]).

Regarding claim 4, Pelz, as modified in view of Slobodyanyuk, discloses the LiDAR system of claim 1, wherein to identify the second FOV, the controller is further configured to: determine whether the object criteria is met by the object based on the object type feature map of the plurality of feature maps (Pelz, Fig. 2, in-range object 200, analysis unit 114, Paragraph [0041], Fig. 4, Block 302, Paragraph [0045]; Slobodyanyuk, Fig. 3, processor 310, Paragraph [0043]); in response to determining that the object criteria is met by the object based on the object type feature map of the plurality of feature maps, determine positioning information of the object (Pelz, Fig. 2, in-range object 200, analysis unit 114, Paragraph [0041], Fig. 4, Block B, second subset of pixels 104, Paragraph [0047]); and identify the second FOV based on the positioning information of the object (Pelz, Fig. 4, Block B, second subset of pixels 104, Paragraph [0047]).
Regarding claim 6, Pelz, as modified in view of Slobodyanyuk, discloses the LiDAR system of claim 4, wherein to determine whether the object criteria is met by the object based on the object velocity feature map of the plurality of feature maps, the controller is configured to: determine whether a velocity of the object meets a velocity threshold condition based on the object velocity feature map of the plurality of feature maps (Slobodyanyuk, Fig. 8, step 808, Paragraph [0071]).

It would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Pelz’s LIDAR system by adding a second transmitter and a processor which maps and determines objects detected in the FOV based on object type and object speed, which is disclosed by Slobodyanyuk. One of ordinary skill in the art would have been motivated to make these modifications in order to “achieve better performance” and construct a map of the environment which may assist with autonomous navigation, as suggested by Slobodyanyuk (Paragraph [0038], [0074]).

Regarding claim 7, Pelz, as modified in view of Slobodyanyuk, discloses the LiDAR system of claim 4, wherein the controller is further configured to: determine whether a movement of the object meets a movement condition based on a feature map of the plurality of feature maps corresponding to movement type (Pelz, Fig. 4, Block 306, track object / adapt high-resolution pixel set, Paragraph [0048]; Slobodyanyuk, Fig. 3, processor 310, Paragraph [0043]).

Regarding claim 8, Pelz, as modified in view of Slobodyanyuk, discloses the LiDAR system of claim 4, wherein to determine whether the object criteria is met by the object based on the object type feature map of the plurality of feature maps, the controller is further configured to: determine whether the object is a pedestrian based on the object type feature map of the plurality of feature maps (Slobodyanyuk, Fig. 3, processor 310, Paragraph [0043], [0056]).

It would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Pelz’s LIDAR system by adding a second transmitter and a processor which maps and determines objects detected in the FOV based on object type and object speed, which is disclosed by Slobodyanyuk. One of ordinary skill in the art would have been motivated to make these modifications in order to “achieve better performance” and construct a map of the environment which may assist with autonomous navigation, as suggested by Slobodyanyuk (Paragraph [0038], [0074]).

Claims 12 and 15-17 contain claim limitations corresponding to claims 1, 4, 7, and 8 and are rejected for the same reasons. Claims 18, 20, and 22 are method claims corresponding to apparatus claims 1, 4, and 7. They are rejected for the same reasons.

Claims 3, 10, and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Pelz, as modified in view of Slobodyanyuk, in further view of Zhang et al., US 20220244359 A1 ("Zhang").

Regarding claim 3, Pelz, as modified in view of Slobodyanyuk, discloses the LiDAR system of claim 1. Pelz, as modified in view of Slobodyanyuk, does not teach: wherein the controller is further configured to: cause the second transmitter subsystem to scan the third FOV using the second resolution during the third optical sensing procedure, wherein the second FOV and the third FOV are associated with different areas-of-interest of a far-field environment. However, Zhang discloses a FOV with multiple regions of interest within the field of view. Zhang’s LIDAR system may scan the entirety of the FOV while also scanning the regions of interest at an increased resolution (Fig. 12, FOV 1200, ROIs 1210-1214, Paragraph [0065]).
It would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of detecting an object, disclosed by Pelz in view of Slobodyanyuk, by additionally scanning smaller regions of interest in conjunction with the low-resolution scan, which is disclosed by Zhang. One of ordinary skill in the art could have applied Zhang’s known scanning technique to Pelz and Slobodyanyuk’s method of object detection, and the results would have been predictable.

Regarding claim 10, Pelz, as modified in view of Slobodyanyuk, discloses the LiDAR system of claim 1. Pelz, as modified in view of Slobodyanyuk, does not teach: to generate the point cloud data, the signal processor is further configured to: generate the point cloud data corresponding to the second FOV using a signal generated based on the light returned from the second FOV during the second optical sensing procedure; and generate the point cloud data corresponding to a remaining area of the first FOV using the signal returned from the first FOV during the first optical sensing procedure.

However, Zhang teaches a LiDAR system with two light sources that transmits light to different scanning areas and produces a point map from the scanned patterns (Fig. 8, light outputs 706 and 714, scan area for source 706, scan area for source 714, Fig. 9, point map where two channels overlap vertically, Paragraph [0060]-[0061]).

It would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified the LIDAR system disclosed by Pelz and Slobodyanyuk by adding a second light source to scan the high-resolution FOV and a process that generates point maps of the scanned FOVs, which is disclosed by Zhang.
One of ordinary skill in the art would have been motivated to add a second light source in order to “increase the points density of a points map without sacrificing the maximum unambiguous detection range of the system”, as suggested by Zhang (Paragraph [0060]). One of ordinary skill in the art would have been motivated to construct point maps in order “to provide sensory input to assist in semi-autonomous or fully autonomous vehicle control”, as suggested by Zhang (Paragraph [0003]).

Claim 14 contains claim limitations corresponding to claim 3 and is rejected for the same reasons.

Claims 5, 16, and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Pelz, as modified in view of Slobodyanyuk, in further view of Lee et al., US 20200278532 A1 ("Lee").

Regarding claim 5, Pelz, as modified in view of Slobodyanyuk, discloses the LiDAR system of claim 4. Pelz, as modified in view of Slobodyanyuk, does not teach: wherein to determine whether the object criteria is met by the object based on the object acceleration feature map of the plurality of feature maps, the controller is configured to: determine whether an acceleration of the object meets an acceleration threshold condition based on the object acceleration feature map of the plurality of feature maps.

However, Lee teaches a receive block that processes return light to determine the velocity and acceleration of the object. Within the receive block, a threshold control block sets the object detection threshold (Fig. 7, Receive block 750, processor 720, threshold control 780, Paragraph [0053], Paragraph [0057]).

It would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have combined the analysis unit and classifier that determined object detection, disclosed by Pelz in view of Slobodyanyuk, with the processor and threshold control circuitry to determine the object’s acceleration, which is disclosed by Lee.
One of ordinary skill in the art could have combined these two analysis and processing units, and the results would have been predictable.

Claim 16 contains claim limitations corresponding to claims 5 and 6 and is rejected for the same reasons. Claim 21 is a method claim corresponding to apparatus claims 5 and 6 and is rejected for the same reasons.

Claims 9, 17, and 23 are rejected under 35 U.S.C. 103 as being unpatentable over Pelz, as modified in view of Slobodyanyuk, in further view of Ulutan et al., US 12236705 B1 ("Ulutan").

Regarding claim 9, Pelz, as modified in view of Slobodyanyuk, discloses the LiDAR system of claim 4. Pelz, as modified in view of Slobodyanyuk, does not teach: wherein to determine whether the object criteria is met by the object based on the object type feature map of the plurality of feature maps, the controller is further configured to: determine whether the object is a child based on the object type feature map of the plurality of feature maps.

However, Ulutan teaches using machine learning models to detect pedestrian attributes, such as if the pedestrian is a child, adult, construction worker, etc. (Col. 2, lines 43-62; see also Fig. 8, CNN backbone 804, feature vector 808, machine-learned model temporal head 810, outputs 812, Col. 17, lines 33-41).

It would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified the analysis unit and classifier that determined object detection, disclosed by Pelz in view of Slobodyanyuk, by adding Ulutan’s machine learning model that gives attributes to pedestrians. One of ordinary skill in the art would have been motivated to make this modification in order to “improve the operation of autonomous vehicles by accurately detecting attributes and/or gestures of pedestrians so that the vehicle may traverse an environment more safely”, as suggested by Ulutan (Col. 2, lines 63-67).
Claim 17 contains claim limitations corresponding to claim 9 and is rejected for the same reasons. Claim 23 is a method claim corresponding to apparatus claim 9 and is rejected for the same reasons.

Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Pelz, as modified in view of Slobodyanyuk, in further view of Droz, US 20190178974 A1 ("Droz").

Regarding claim 11, Pelz, as modified in view of Slobodyanyuk, discloses the LiDAR system of claim 1, wherein to generate the point cloud data, the signal processor is further configured to: receive, from the photodetector, a signal associated with the light returned from the first FOV during the first optical sensing procedure and the light returned from the second FOV during the second optical sensing procedure (Pelz, Fig. 1, photosensor 110, Paragraph [0036]); identify a first portion of a signal associated with the light returned from the first FOV during the first optical sensing procedure that corresponds to the second FOV (Pelz, Block 302 to Block 304, Paragraph [0045]-[0047]: detect object and then scan object area at higher resolution).

Pelz, as modified in view of Slobodyanyuk, does not teach: generate a concatenated signal by combining the first portion of the signal with a second portion of the signal associated with the light returned from the second FOV during the second optical sensing procedure; and generate the point cloud data corresponding to the second FOV using the concatenated signal.

However, Droz teaches scanning a FOV with a first spatial light pattern and a second spatial light pattern and then forming a point cloud based on the combined first and second reflected light signals (Fig. 5, method 500, step 510, Paragraph [0118]).
It would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified the point cloud generation, disclosed by Pelz in view of Slobodyanyuk, by forming a point cloud from the combination of the first and second FOV data, which is disclosed by Droz. One of ordinary skill in the art would have been motivated to make this modification in order to conserve power, data rate, and/or point cloud computation, as suggested by Droz (Paragraph [0081]).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to RACHEL N NGUYEN whose telephone number is (571) 270-5405. The examiner can normally be reached Monday - Friday, 8 am - 5:30 pm ET.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Yuqing Xiao, can be reached at (571) 270-3603. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/RACHEL NGUYEN/
Examiner, Art Unit 3645

/YUQING XIAO/
Supervisory Patent Examiner, Art Unit 3645

Prosecution Timeline

Feb 22, 2022: Application Filed
Jun 13, 2025: Non-Final Rejection — §103, §112
Sep 18, 2025: Response Filed
Nov 06, 2025: Final Rejection — §103, §112
Jan 12, 2026: Response after Non-Final Action
Feb 10, 2026: Request for Continued Examination
Mar 01, 2026: Response after Non-Final Action
Mar 12, 2026: Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12442900
OPTICAL COMPONENTS FOR IMAGING
Granted Oct 14, 2025 (2y 5m to grant)

Patent 12372354
Surveying Instrument
Granted Jul 29, 2025 (2y 5m to grant)
Study what changed to get past this examiner. Based on 2 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 21%
With Interview: 84% (+62.5% lift)
Median Time to Grant: 4y 1m
PTA Risk: High
Based on 28 resolved cases by this examiner. Grant probability derived from career allow rate.
