Prosecution Insights
Last updated: April 19, 2026
Application No. 17/856,313

TOF DEPTH SENSING MODULE AND IMAGE GENERATION METHOD

Status: Final Rejection (§103), OA Round 2
Filed: Jul 01, 2022
Examiner: BOEGHOLM, ISABELLE LIN
Art Unit: 3645
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Huawei Technologies Co., Ltd.
Grant Probability: 44% (Moderate)
Expected OA Rounds: 3-4
Median Time to Grant: 4y 3m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 44% (8 granted of 18 resolved cases; -7.6% vs TC avg)
Interview Lift: +62.5% (strong), measured across resolved cases with an interview
Avg Prosecution: 4y 3m (typical timeline); 33 applications currently pending
Total Applications: 51 across all art units (career history)
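
For readers who want to reproduce these figures, a minimal sketch of the arithmetic follows. The percentage-point reading of "interview lift" is an assumption inferred from the 99% with-interview figure; the dashboard does not define the metric.

# Minimal sketch of the examiner-stats arithmetic. Treating the interview
# lift as a percentage-point gap is an assumption, not a stated definition.
granted, resolved = 8, 18
allow_rate = granted / resolved            # ~0.444 -> "44% Career Allow Rate"

with_interview = 0.99                      # "99% With Interview"
lift = 0.625                               # "+62.5% Interview Lift"
without_interview = with_interview - lift  # implied ~36.5% baseline

print(f"Career allow rate: {allow_rate:.1%}")         # 44.4%
print(f"Without interview: {without_interview:.1%}")  # 36.5%
print(f"Interview lift:    +{lift:.1%}")              # +62.5%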

Statute-Specific Performance

§101: 2.2% (-37.8% vs TC avg)
§103: 48.3% (+8.3% vs TC avg)
§102: 24.6% (-15.4% vs TC avg)
§112: 20.8% (-19.2% vs TC avg)
Tech Center average estimate shown for comparison • Based on career data from 18 resolved cases
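
As a quick consistency check, and assuming the delta is simply the examiner's rate minus the Tech Center average (the dashboard does not define it), all four deltas imply the same 40.0% baseline:

# Consistency check on the "vs TC avg" deltas, assuming
# delta = examiner_rate - tc_average.
examiner = {"§101": 2.2, "§103": 48.3, "§102": 24.6, "§112": 20.8}  # percent
delta    = {"§101": -37.8, "§103": 8.3, "§102": -15.4, "§112": -19.2}

tc_average = {s: round(examiner[s] - delta[s], 1) for s in examiner}
print(tc_average)  # every statute implies the same 40.0% TC baseline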

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

This office action is responsive to the amendment filed 12/30/2025. As directed by the amendment, claims 1, 6, 7, 11, 12, 16, 17, and 20 are amended. Thus, claims 1-20 are currently pending in this application.

Information Disclosure Statement

The Information Disclosure Statement submitted on 12/03/2025 is in compliance with the provisions of 37 CFR 1.97 and 1.98 and has been considered.

Response to Amendment

The amendments filed 12/30/2025 have been fully considered. The amendment to claim 6 has overcome the objection, which is now withdrawn. The amendments made to claims 1, 7, 11, and 16 have overcome the claim rejections made under 35 U.S.C. 102(a)(2) and 103. The rejections of claims 1-20 are now withdrawn. However, in view of the amendments, new grounds of rejection are made under 35 U.S.C. 103.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-5 and 7-10 are rejected under 35 U.S.C. 103 as being unpatentable over Hillard (US 20200132848 A1) in view of Nash (US 20190339364 A1).

Regarding Claim 1: Hillard discloses a time of flight depth sensing module (Fig. 3B, lidar system 10) comprising: an array light source having N light emitting regions that do not overlap each other, wherein each light emitting region is used to generate a beam (Fig. 5B; emitter array 111, which is made up of lasers such that the light beams they emit define sub-pixels [0027]); a control unit configured to control M light emitting regions of the N light emitting regions to emit light at different moments, wherein M is less than or equal to N ([0021] the processing module 400 is coupled to the controller 112, which controls the emitters of the emitter array 111; [0055] different emitters emit light beams at different times, using TDMA; Fig. 5B, there are fewer emitters than sub-pixels); a collimation lens group configured to perform collimation processing on beams from the L light emitting regions or the M light emitting regions (Fig. 4, collimator 122 is between the emitter array and the beam splitter 121; [0026]); a beam splitter configured to perform beam splitting processing on beams obtained after collimation processing, to obtain an emergent beam (Fig. 4, beam splitter 121 is downstream from the collimator 122), wherein the beam splitter is configured to split each beam of light into a plurality of beams of light ([0025] beams incident on the beam splitter are split into a plurality of beams, and Fig. 6B shows how one input beam can be split into multiple output beams; the beam splitting illustrated by Fig. 6B corresponds to the embodiment illustrated by Fig. 5B); and a receiving unit configured to receive reflected beams of the target object, wherein the reflected beams of the target object are obtained by reflecting the emergent beam (Fig. 3B, receive module 300, which receives the light beams reflected back from the external object).

Hillard does not expressly disclose that the control unit is configured to operate in a first working mode when a distance to a target object is less than or equal to a preset distance to control L light emitting regions of the N light emitting regions to simultaneously emit light, wherein L is less than or equal to N, and to operate in a second working mode when the distance to the target object is greater than the preset distance to control M light emitting regions to emit light.

Nash teaches a lidar system (Fig. 3, device 300 having a TOF system) with an array light source having N light emitting regions that each generate a beam (Fig. 10, laser array 1002, with 10 light emitting regions pictured), a control unit that controls M light emitting regions of the N light emitting regions, where M is less than or equal to N (Fig. 10, single lasers 1004A, 1004B, and 1004C), and a beam splitter (Fig. 10, DOEs 1006). Nash further teaches that the control unit is configured to operate in a first working mode when a distance to a target object is less than or equal to a preset distance (Fig. 14, steps 1404 and 1410 check whether the object has moved to the second or first range, respectively, and if the object is in the first range, the object is measured using a first mode in step 1402; Fig. 13 shows the first range 1308 is closer than the second range 1310) to control L light emitting regions of the N light emitting regions to simultaneously emit light, wherein L is less than or equal to N ([0061] the emissions with different fields of transmission/different modes are time division multiplexed such that different times correspond to different fields of emission; Fig. 10, a number of light emitting regions correspond to a first, short-range measurement, which means that multiple light emitting regions emit light simultaneously), and to operate in a second working mode when the distance to the target object is greater than the preset distance to control M light emitting regions to emit light (Fig. 14, in steps 1404 and 1410 the controller determines whether the object is in the second range; if the object is in the second range, it is measured using the second mode in step 1408; Figs. 6, 8, and 10, in the second mode there is a different, smaller number of emitted beams).

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the depth sensing module disclosed by Hillard such that the device operates in a first or second mode based on the distance to the object, as taught by Nash. In the first mode, the coverage is the largest and the entire field of view is sensed, whereas in the second mode there is less coverage (see Nash, Fig. 8). This would be beneficial because the first mode provides the greatest resolution, which is important when objects are closer (Nash, [0063]). When the object is farther, the emitted light has more power so that the signal is strong enough to be detected upon return, and having less coverage in the second mode when the object is far would reduce power consumption by the TOF system (Nash, [0059]).
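
To make the disputed limitation concrete, the claimed mode selection reduces to a small piece of control logic. The sketch below is illustrative only; the function name, region counts, and preset distance are hypothetical, not taken from the claims or the references.

# Illustrative sketch of the claim 1 mode-selection logic. All names,
# region counts, and the preset distance are hypothetical.
def select_emission_plan(distance_m: float, preset_m: float,
                         n: int, l: int, m: int) -> dict:
    """First working mode (distance <= preset): L of the N regions emit
    simultaneously. Second working mode (distance > preset): M of the N
    regions emit at M different moments (time-division), per the claim."""
    assert l <= n and m <= n
    if distance_m <= preset_m:
        return {"mode": "first", "regions": list(range(l)), "simultaneous": True}
    return {"mode": "second", "regions": list(range(m)), "simultaneous": False}

# Example: N = 16 regions, preset distance 3 m
print(select_emission_plan(1.5, 3.0, n=16, l=8, m=4))   # first mode, 8 at once
print(select_emission_plan(10.0, 3.0, n=16, l=8, m=4))  # second mode, 4 in sequence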
Regarding Claim 7: Hillard discloses a time of flight depth sensing module (Fig. 3B, lidar system 10) comprising: an array light source having N light emitting regions that do not overlap each other, wherein each light emitting region is used to generate a beam (Fig. 5B; emitter array 111, which is made up of lasers such that the light beams they emit define sub-pixels [0027]); a control unit configured to control M light emitting regions of the N light emitting regions to emit light at different moments, wherein M is less than or equal to N ([0021] the processing module 400 is coupled to the controller 112, which controls the emitters of the emitter array 111; [0055] different emitters emit light beams at different times, using TDMA; Fig. 5B, there are fewer emitters than sub-pixels); a beam splitter configured to perform beam splitting processing on beams from the L light emitting regions or the M light emitting regions (Fig. 4, beam splitter 121 is downstream from the emitter array), wherein the beam splitter is configured to split each beam of light into a plurality of beams of light ([0025] beams incident on the beam splitter are split into a plurality of beams, and Fig. 6B shows how one input beam can be split into multiple output beams; the beam splitting illustrated by Fig. 6B corresponds to the embodiment illustrated by Fig. 5B); a collimation lens group configured to perform collimation processing on beams from the beam splitter to obtain an emergent beam ([0026] and Fig. 4, the lenses 122 between the beam splitter 121 and the beam director (not shown in Fig. 4); Hillard discloses that the lenses 122 can include any suitable arrangement of lenses, so both lens sets 122 can be collimating lenses); and a receiving unit configured to receive reflected beams of the target object, wherein the reflected beams of the target object are obtained by reflecting the emergent beam (Fig. 3B, receive module 300, which receives the light beams reflected back from the external object).

Hillard does not expressly disclose that the control unit is configured to operate in a first working mode when a distance to a target object is less than or equal to a preset distance to control L light emitting regions of the N light emitting regions to simultaneously emit light, wherein L is less than or equal to N, and to operate in a second working mode when the distance to the target object is greater than the preset distance to control M light emitting regions to emit light.

Nash teaches a lidar system (Fig. 3, device 300 having a TOF system) with an array light source having N light emitting regions that each generate a beam (Fig. 10, laser array 1002, with 10 light emitting regions pictured), a control unit that controls M light emitting regions of the N light emitting regions, where M is less than or equal to N (Fig. 10, single lasers 1004A, 1004B, and 1004C), and a beam splitter (Fig. 10, DOEs 1006). Nash further teaches that the control unit is configured to operate in a first working mode when a distance to a target object is less than or equal to a preset distance (Fig. 14, steps 1404 and 1410 check whether the object has moved to the second or first range, respectively, and if the object is in the first range, the object is measured using a first mode in step 1402; Fig. 13 shows the first range 1308 is closer than the second range 1310) to control L light emitting regions of the N light emitting regions to simultaneously emit light, wherein L is less than or equal to N ([0061] the emissions with different fields of transmission/different modes are time division multiplexed such that different times correspond to different fields of emission; Fig. 10, a number of light emitting regions correspond to a first, short-range measurement, which means that multiple light emitting regions emit light simultaneously), and to operate in a second working mode when the distance to the target object is greater than the preset distance to control M light emitting regions to emit light (Fig. 14, in steps 1404 and 1410 the controller determines whether the object is in the second range; if the object is in the second range, it is measured using the second mode in step 1408; Figs. 6, 8, and 10, in the second mode there is a different, smaller number of emitted beams).

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the depth sensing module disclosed by Hillard such that the device operates in a first or second mode based on the distance to the object, as taught by Nash. In the first mode, the coverage is the largest and the entire field of view is sensed, whereas in the second mode there is less coverage (see Nash, Fig. 8). This would be beneficial because the first mode provides the greatest resolution, which is important when objects are closer (Nash, [0063]). When the object is farther, the emitted light has more power so that the signal is strong enough to be detected upon return, and having less coverage in the second mode when the object is far would reduce power consumption by the TOF system (Nash, [0059]).

Regarding Claims 2 and 8: Hillard, in view of Nash, teaches the TOF depth sensing modules according to claims 1 and 7. Hillard further discloses wherein the receiving unit comprises a sensor (Figs. 3B and 5B, receive module 300 has optical transducers 311 that receive light) and a receiving lens group configured to converge the reflected beams to the sensor ([0046] the receive module optics 320 and receiver optics 313 couple light into the transducers 311 so that different beams are directed to their associated pixels of the receiver).

Regarding Claims 3 and 9: Hillard, in view of Nash, teaches the TOF depth sensing modules according to claims 1 and 7. Hillard further discloses wherein a beam receiving surface of the beam splitter is parallel to a beam emission surface of the array light source (Figs. 4 and 5B, the emitter array 111 is parallel to the beam splitter 121).

Regarding Claims 4 and 10: Hillard, in view of Nash, teaches the TOF depth sensing modules according to claims 1 and 7. Hillard further discloses wherein the beam splitter is any one of a cylindrical lens array, a microlens array, and a diffraction optical device ([0025] the beam splitter 121 preferably includes a diffractive beam splitter).

Regarding Claim 5: Hillard, in view of Nash, teaches the TOF depth sensing module according to claim 1. This particular combination does not expressly teach that the array light source comprises a vertical cavity surface emitting laser. However, Nash further teaches the use of VCSEL lasers in a transmitter array ([0067] "the array of light emitters may be an array of VCSELs"). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the system taught by Hillard and Nash by replacing the lasers in the laser array disclosed by Hillard with the VCSEL array further taught by Nash. This would be a simple substitution of one type of laser for another. See MPEP 2141.III, KSR Rationale (B).

Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Hillard (US 20200132848 A1) in view of Nash (US 20190339364 A1), further in view of Donovan (US 20170307736 A1).

Hillard, in view of Nash, teaches the TOF depth sensing module according to claim 1. Hillard and Nash do not expressly teach that a light emitting area of the array light source is less than or equal to 5 x 5 mm2, that an area of a beam incident on an end face of the beam splitter is less than 5 x 5 mm2, and that a clear aperture of the collimation lens group is less than or equal to 5 mm. However, Donovan discloses a light emitting area of the array light source less than or equal to 5 x 5 mm2 ([0064] the system can use fewer lasers to achieve arrays with a 4 x 4 mm footprint); an area of a beam incident on an end face of the beam splitter less than 5 x 5 mm2 ([0113] and Fig. 14, wavelength multiplexer 1406 is illustrated as the same size as the VCSEL array 1402, and each of the individual beams 1410 and 1412 illuminates an area smaller than the size of the VCSEL array and the multiplexer; this means the area of the beam must be smaller than 5 x 5 mm2, since the array light source has a footprint of only 4 x 4 mm2); and a clear aperture of the collimation lens group less than or equal to 5 mm (Fig. 14, light 1410 and 1412 that emerges from lens 1408 is collimated, and the diameter of the lens is the same as the height of the array light source; since the array light source has a footprint of 4 x 4 mm2, the diameter of the lens is 4 mm). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the transmitter disclosed by Hillard, in the system taught by Hillard and Nash, by ensuring that the emitter array is smaller than 5 x 5 mm2 and that the lens diameters are also smaller than 5 mm, as taught by Donovan. This is beneficial because smaller emitter arrays can lower the cost of the lidar system (Donovan, [0064]).
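
A one-glance check of the claim 6 limits against the dimensions the examiner reads out of Donovan; the variable names are ours, and the 4 x 4 mm figures come from the cited paragraphs:

# Check of the claim 6 dimensional limits against the Donovan readings.
emitter_area_mm2 = 4 * 4      # Donovan [0064]: 4 x 4 mm array footprint
beam_area_mm2 = 4 * 4         # [0113]: upper bound, each beam is smaller
                              # than the 4 x 4 mm array face
clear_aperture_mm = 4.0       # Fig. 14: lens diameter matches array height

print(emitter_area_mm2 <= 5 * 5      # light emitting area <= 5 x 5 mm^2
      and beam_area_mm2 < 5 * 5      # incident beam area < 5 x 5 mm^2
      and clear_aperture_mm <= 5.0)  # clear aperture <= 5 mm -> True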
Claims 11-13, 15-18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Hillard (US 20200132848 A1), in view of Nash (US 20190339364 A1), further in view of Bagwell (US 11640170 B1).

Regarding Claim 11: Hillard discloses an image generation method comprising: operating, by using a control unit, to control M light emitting regions of N light emitting regions of the array light source to respectively emit light at M different moments, wherein M is less than or equal to N (Fig. 3B and [0021], the processing module 400 is coupled to the controller 112, which controls the emitters of the emitter array 111; [0055] different emitters can emit light at different times using TDMA); performing, by using a collimation lens group, collimation processing on beams that are respectively generated by the L light emitting regions or the M light emitting regions at the M different moments, to obtain beams obtained after collimation processing is performed (Fig. 4, collimation lens group 122 between the emitter array and the splitter); performing, by using a beam splitter, beam splitting processing on the beams obtained after collimation processing is performed, to obtain an emergent beam, wherein the beam splitter is configured to split each received beam of light into a plurality of beams of light ([0025] beams incident on the beam splitter are split into a plurality of beams, and Fig. 6B shows how one input beam can be split into multiple output beams; the beam splitting illustrated by Fig. 6B corresponds to the embodiment illustrated by Fig. 5B; Fig. 4, splitter 121 is downstream from the collimating lenses 122); receiving reflected beams of the target object by using a receiving unit, wherein the reflected beam of the target object is obtained by reflecting the emergent beam (Fig. 5B, the receive optics direct the beams to the receiver array; Fig. 3B, receiver 300); and obtaining TOFs corresponding to the beams that are respectively emitted by the L light emitting regions or the M light emitting regions at the M different moments ([0059] the data associated with different sub-pixels is differentiated by using multiplexing techniques like TDMA, and times of flight for each received beam are obtained).

Hillard does not disclose: operating, by using the control unit, in a first working mode when a distance to a target object is less than or equal to a preset distance to control L light emitting regions of the N light emitting regions to simultaneously emit light, wherein L is less than or equal to N, and operating, by using the control unit, in a second working mode when the distance to the target object is greater than the preset distance to control M light emitting regions to emit light. Hillard also does not expressly disclose generating L depth maps corresponding to the beams that are respectively emitted by the L light emitting regions, or M depth maps based on the TOFs corresponding to the beams that are respectively emitted by the M light emitting regions at the M different moments, and obtaining a final depth map of the target object based on the L depth maps or the M depth maps.

Nash teaches that the control unit is configured to operate in a first working mode when a distance to a target object is less than or equal to a preset distance (Fig. 14, steps 1404 and 1410 check whether the object has moved to the second or first range, respectively, and if the object is in the first range, the object is measured using a first mode in step 1402; Fig. 13 shows the first range 1308 is closer than the second range 1310) to control L light emitting regions of the N light emitting regions to simultaneously emit light, wherein L is less than or equal to N ([0061] the emissions with different fields of transmission/different modes are time division multiplexed such that different times correspond to different fields of emission; Fig. 10, a number of light emitting regions correspond to a first, short-range measurement, which means that multiple light emitting regions emit light simultaneously), and to operate in a second working mode when the distance to the target object is greater than the preset distance to control M light emitting regions to emit light (Fig. 14, in steps 1404 and 1410 the controller determines whether the object is in the second range; if the object is in the second range, it is measured using the second mode in step 1408; Figs. 6, 8, and 10, in the second mode there is a different, smaller number of emitted beams).

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the depth sensing module disclosed by Hillard such that the device operates in a first or second mode based on the distance to the object, as taught by Nash. In the first mode, the coverage is the largest and the entire field of view is sensed, whereas in the second mode there is less coverage (see Nash, Fig. 8). This would be beneficial because the first mode provides the greatest resolution, which is important when objects are closer (Nash, [0063]). When the object is farther, the emitted light has more power so that the signal is strong enough to be detected upon return, and having less coverage in the second mode when the object is far would reduce power consumption by the TOF system (Nash, [0059]).

However, this combination of Hillard and Nash still does not expressly teach: generating L depth maps corresponding to the beams that are respectively emitted by the L light emitting regions, or M depth maps based on the TOFs corresponding to the beams that are respectively emitted by the M light emitting regions at the M different moments, and obtaining a final depth map of the target object based on the L depth maps or the M depth maps.

Bagwell teaches the generation of multiple different depth maps corresponding to different, discrete portions of an environment (Col. 24, ln. 48-60, the map representing the entire environment is made up of multiple map tiles that correspond to discrete portions of the environment and are stored as map tiles) and a complete depth map of the entire environment based on the discrete depth maps (Col. 24, ln. 48-60, the map includes a 3D mesh of the entire environment and can be stored as map tiles). It would have been obvious to a person having ordinary skill in the art before the effective filing date to modify the image generation method taught by Hillard and Nash such that M or L map tiles are generated for the M or L emission times, which together can be used in a 3D mesh to map the entire environment, as taught by Bagwell. This would be beneficial because it would be applying a known technique for generating maps representing the environment surrounding a lidar device to a lidar system ready for improvement, to yield predictable results (MPEP 2141.III, KSR Rationale D).
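
The depth-map assembly that the examiner maps onto Bagwell's tiles boils down to two steps: convert each region's TOFs to depths (d = c·t/2) and stitch the non-overlapping tiles together. A minimal sketch follows, with illustrative array shapes and a row-band tile layout that the references do not dictate:

# Sketch of the claim 11 depth-map assembly: per-region TOFs become depth
# tiles, and the non-overlapping tiles are stitched into the final map.
# Tile layout and shapes are illustrative assumptions.
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def tof_to_depth(tof_s: np.ndarray) -> np.ndarray:
    """Round-trip time of flight to one-way depth: d = c * t / 2."""
    return C * tof_s / 2.0

def assemble_depth_map(tof_tiles: list) -> np.ndarray:
    """One depth tile per emitting region; stack the non-overlapping
    tiles (here, one row band per region) into the final depth map."""
    return np.vstack([tof_to_depth(t) for t in tof_tiles])

# Example: M = 4 regions, each yielding a 2 x 8 tile of TOFs (~6.67 ns ≈ 1 m)
tiles = [np.full((2, 8), 6.67e-9) for _ in range(4)]
final_map = assemble_depth_map(tiles)  # shape (8, 8), values ≈ 1.0 m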
Regarding Claim 16: Hillard discloses an image generation method comprising: operating, by using a control unit, to control M light emitting regions of N light emitting regions of the array light source to respectively emit light at M different moments, wherein M is less than or equal to N (Fig. 3B and [0021], the processing module 400 is coupled to the controller 112, which controls the emitters of the emitter array 111; [0055] different emitters can emit light at different times using TDMA); performing, by using a beam splitter, beam splitting processing on beams that are respectively generated by the L light emitting regions or the M light emitting regions at the M different moments, wherein the beam splitter is configured to split each received beam of light into a plurality of beams of light ([0025] beams incident on the beam splitter are split into a plurality of beams, and Fig. 6B shows how one input beam can be split into multiple output beams; the beam splitting illustrated by Fig. 6B corresponds to the embodiment illustrated by Fig. 5B); performing collimation processing on beams from the beam splitter by using a collimation lens group, to obtain an emergent beam ([0026] and Fig. 4, the lenses 122 between the beam splitter 121 and the beam director (not shown in Fig. 4); Hillard discloses that the lenses 122 can include any suitable arrangement of lenses, so both lens sets 122 can be collimating lenses); receiving reflected beams of the target object by using a receiving unit, wherein the reflected beam of the target object is obtained by reflecting the emergent beam (Fig. 5B, the receive optics direct the beams to the receiver array; Fig. 3B, receiver 300); and obtaining TOFs corresponding to the beams that are respectively emitted by the L light emitting regions or the M light emitting regions at the M different moments ([0059] the data associated with different sub-pixels is differentiated by using multiplexing techniques like TDMA, and times of flight for each received beam are obtained).

Hillard does not disclose: operating, by using the control unit, in a first working mode when a distance to a target object is less than or equal to a preset distance to control L light emitting regions of the N light emitting regions to simultaneously emit light, wherein L is less than or equal to N, and operating, by using the control unit, in a second working mode when the distance to the target object is greater than the preset distance to control M light emitting regions to emit light. Hillard also does not expressly disclose generating L depth maps corresponding to the beams that are respectively emitted by the L light emitting regions, or M depth maps based on the TOFs corresponding to the beams that are respectively emitted by the M light emitting regions at the M different moments, and obtaining a final depth map of the target object based on the L depth maps or the M depth maps.

Nash teaches that the control unit is configured to operate in a first working mode when a distance to a target object is less than or equal to a preset distance (Fig. 14, steps 1404 and 1410 check whether the object has moved to the second or first range, respectively, and if the object is in the first range, the object is measured using a first mode in step 1402; Fig. 13 shows the first range 1308 is closer than the second range 1310) to control L light emitting regions of the N light emitting regions to simultaneously emit light, wherein L is less than or equal to N ([0061] the emissions with different fields of transmission/different modes are time division multiplexed such that different times correspond to different fields of emission; Fig. 10, a number of light emitting regions correspond to a first, short-range measurement, which means that multiple light emitting regions emit light simultaneously), and to operate in a second working mode when the distance to the target object is greater than the preset distance to control M light emitting regions to emit light (Fig. 14, in steps 1404 and 1410 the controller determines whether the object is in the second range; if the object is in the second range, it is measured using the second mode in step 1408; Figs. 6, 8, and 10, in the second mode there is a different, smaller number of emitted beams).

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the depth sensing module disclosed by Hillard such that the device operates in a first or second mode based on the distance to the object, as taught by Nash. In the first mode, the coverage is the largest and the entire field of view is sensed, whereas in the second mode there is less coverage (see Nash, Fig. 8). This would be beneficial because the first mode provides the greatest resolution, which is important when objects are closer (Nash, [0063]). When the object is farther, the emitted light has more power so that the signal is strong enough to be detected upon return, and having less coverage in the second mode when the object is far would reduce power consumption by the TOF system (Nash, [0059]).

However, this combination of Hillard and Nash still does not expressly teach: generating L depth maps corresponding to the beams that are respectively emitted by the L light emitting regions, or M depth maps based on the TOFs corresponding to the beams that are respectively emitted by the M light emitting regions at the M different moments, and obtaining a final depth map of the target object based on the L depth maps or the M depth maps.

Bagwell teaches the generation of multiple different depth maps corresponding to different, discrete portions of an environment (Col. 24, ln. 48-60, the map representing the entire environment is made up of multiple map tiles that correspond to discrete portions of the environment and are stored as map tiles) and a complete depth map of the entire environment based on the discrete depth maps (Col. 24, ln. 48-60, the map includes a 3D mesh of the entire environment and can be stored as map tiles). It would have been obvious to a person having ordinary skill in the art before the effective filing date to modify the image generation method taught by Hillard and Nash such that M or L map tiles are generated for the M or L emission times, which together can be used in a 3D mesh to map the entire environment, as taught by Bagwell. This would be beneficial because it would be applying a known technique for generating maps representing the environment surrounding a lidar device to a lidar system ready for improvement, to yield predictable results (MPEP 2141.III, KSR Rationale D).

Regarding Claims 12 and 17: Hillard, as modified in view of Nash and Bagwell, teaches the image generation methods according to claims 11 and 16. The combination of Hillard, Nash, and Bagwell further teaches wherein the L depth maps are respectively depth maps corresponding to L region sets of the target object, or the M depth maps are respectively depth maps corresponding to M region sets of the target object, and there is no overlapping region between any two region sets in the M region sets (Hillard: Fig. 5B, each of the M light emitters is directed towards only one region of the environment in the far-field image; the emitter that is shaded in black only directs light towards its corresponding row of sub-pixels in the far-field image). Since the combination of Hillard, Nash, and Bagwell utilizes the map tiles disclosed by Bagwell such that each map tile corresponds to each emitter of the array, the map tiles are also non-overlapping.

Regarding Claims 13 and 18: Hillard, as modified in view of Nash and Bagwell, teaches the image generation methods according to claims 11 and 16. Hillard further discloses wherein the receiving unit comprises a receiving lens group and a sensor (Fig. 3B, receive module 300 has receiver optics 320 and 313, as well as optical transducers 311, which are illustrated as APDs), and the receiving reflected beams of a target object by using the receiving unit comprises: converging the reflected beams of the target object to the sensor by using the receiving lens group ([0046] and Fig. 5B, the receive optics module 320 and the receiver optics 313 couple light from the environment into the individual sensors 311 of the receiver array).

Regarding Claims 15 and 20: Hillard, as modified in view of Nash and Bagwell, teaches the image generation methods according to claims 11 and 16. Hillard further discloses wherein performing beam splitting processing comprises: performing, by using the beam splitter, one-dimensional or two-dimensional beam splitting processing on the beams generated after collimation processing is performed (Fig. 6B, one-dimensional beam splitting is performed, where the input beam is split into a row of output beams).

Claims 14 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Hillard (US 20200132848 A1), in view of Nash (US 20190339364 A1), further in view of Bagwell (US 11640170 B1), further in view of Kudla (US 20200386876 A1).

Hillard, as modified in view of Nash and Bagwell, teaches the image generation methods according to claims 13 and 18. They do not expressly teach wherein a resolution of the sensor is greater than or equal to P x Q, and a quantity of beams obtained after the beam splitter performs beam splitting on a beam from one light emitting region of the array light source is P x Q, wherein both P and Q are positive integers. However, Kudla teaches this limitation in Fig. 4A and paragraphs [0030]-[0033]. Kudla teaches that each light source is mapped to a group of pixels of the photodetector, illustrated in Fig. 4A, where lasers 1 through 8 are mapped to the receiving row, RL, of the detector array 15. Furthermore, each transmission angle for each of the light sources is mapped onto a group of pixels, where in Fig. 4A, for each receive line, each laser source is also mapped to its particular row of pixels. This means each point scanned in the environment is mapped to its individual pixel group, which corresponds to a particular transmitter and a particular angle. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to further modify the method taught by Hillard, Nash, and Bagwell by using the receiver and the mapping of transmission points to receiver pixels as taught by Kudla. Using a receiver that has more pixels, and thus a higher resolution, where the pixels are mapped directly to corresponding regions of the environment, would be using a known technique to improve similar lidar devices in the same way (MPEP 2141.III, KSR Rationale C).
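
The Kudla mapping amounts to giving every (laser, angle) pair its own pixel group, which is only possible when the sensor offers at least P x Q pixels for the P x Q sub-beams. A toy sketch, where the sizes and the row/column layout are ours rather than Kudla's:

# Toy sketch of Kudla-style transmitter-to-pixel mapping: each laser and
# each transmission angle resolves to a dedicated pixel group, so the
# sensor resolution must be at least P x Q. Sizes are illustrative.
P, Q = 8, 4                      # sub-beams per region after splitting
SENSOR_ROWS, SENSOR_COLS = 8, 4  # sensor resolution >= P x Q

def pixel_group(laser_idx: int, angle_idx: int) -> tuple:
    """Laser p at transmission angle q -> dedicated pixel (row, col)."""
    assert 0 <= laser_idx < P and 0 <= angle_idx < Q
    assert SENSOR_ROWS * SENSOR_COLS >= P * Q
    return laser_idx, angle_idx

print(pixel_group(3, 2))  # sub-beam (3, 2) lands on its own pixel (3, 2)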
Conclusion

Applicant's amendment necessitated the new grounds of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ISABELLE LIN BOEGHOLM, whose telephone number is (571) 270-0570. The examiner can normally be reached Monday-Thursday 7:30am-5pm and Fridays 8am-12pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Yuqing Xiao, can be reached at (571) 270-3603. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ISABELLE LIN BOEGHOLM/
Examiner, Art Unit 3645

/YUQING XIAO/
Supervisory Patent Examiner, Art Unit 3645

Prosecution Timeline

Jul 01, 2022: Application Filed
Jul 19, 2022: Response after Non-Final Action
Oct 27, 2025: Non-Final Rejection (§103)
Dec 30, 2025: Response Filed
Mar 02, 2026: Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591063: READING DEVICE AND LIDAR MEASURING DEVICE (granted Mar 31, 2026; 2y 5m to grant)
Patent 12546868: RANGING METHOD AND APPARATUS BASED ON DETECTION SIGNAL (granted Feb 10, 2026; 2y 5m to grant)
Patent 12449538: Ambiguity Mitigation for FMCW Lidar System (granted Oct 21, 2025; 2y 5m to grant)
Patent 12442899: MEMS ACTUATED VIBRATORY RISLEY PRISM FOR LIDAR (granted Oct 14, 2025; 2y 5m to grant)
Patent 12436287: 3-DIMENSIONAL IMAGING LIDAR SYSTEM (granted Oct 07, 2025; 2y 5m to grant)

Study what changed to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 44%
With Interview: 99% (+62.5%)
Median Time to Grant: 4y 3m
PTA Risk: Moderate

Based on 18 resolved cases by this examiner. Grant probability is derived from the career allow rate.
