Prosecution Insights
Last updated: April 19, 2026
Application No. 18/755,293

SYSTEMS, METHODS, AND MEDIA FOR CONCURRENT DEPTH AND MOTION ESTIMATION USING INDIRECT TIME OF FLIGHT IMAGING

Status: Non-Final Office Action (§103)
Filed: Jun 26, 2024
Examiner: XU, XIAOLAN
Art Unit: 2488
Tech Center: 2400 — Computer Networks
Assignee: Wisconsin Alumni Research Foundation
OA Round: 1 (Non-Final)

Grant Probability: 74% (Favorable)
Expected OA Rounds: 1-2
Median Time to Grant: 2y 11m
Grant Probability with Interview: 87%

Examiner Intelligence

Career Allow Rate: 74% (above average) — 247 granted / 334 resolved, +16.0% vs TC average
Interview Lift: +13.3% (moderate) — allowance lift among resolved cases that included an interview
Typical Timeline: 2y 11m average prosecution; 37 applications currently pending
Career History: 371 total applications across all art units

Statute-Specific Performance

§101: 6.3% (-33.7% vs TC avg)
§103: 49.7% (+9.7% vs TC avg)
§102: 20.0% (-20.0% vs TC avg)
§112: 13.4% (-26.6% vs TC avg)
Tech Center averages are estimates; based on career data from 334 resolved cases.

Office Action (Non-Final, §103)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Election/Restrictions

Claims 17-20 are withdrawn from further consideration pursuant to 37 CFR 1.142(b), as being drawn to nonelected Species II, there being no allowable generic or linking claim. Applicant timely traversed the restriction (election) requirement in the reply filed on 02/17/2026.

The species are independent or distinct because Species I claims a system and method for estimating depths of a dynamic scene, wherein the first intensity image is blurred based on motion in the scene; calculate a first model of the first intensity image based on the first plurality of intensity values; calculate a second model of the second intensity image based on the second plurality of intensity values; determine estimated lateral motion in the scene between the first period of time and the second period of time based on the first model and the second model. Species I corresponds to method 2 at paragraph [0166] of the instant application publication.

Species II claims a system for estimating depths of a dynamic scene using indirect time-of-flight (I-ToF), wherein: generate a first blurred intensity image using the first set of correlation images; generate a second blurred intensity image using the second set of correlation images; determine estimated lateral motion in the scene between the first period of time and the second period of time based on a distribution of intensity values in the first blurred image and a distribution of intensity values in the second blurred image. Species II corresponds to method 1 at paragraph [0165] of the instant application publication. In addition, these species are not obvious variants of each other based on the current record.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors.
In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1, 2, 8, 12, 14 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Kholodenko et al. (US 20160232684 A1) in view of SCHAALE et al. (US 20220075064 A1).

Regarding claim 1: Kholodenko discloses a system for estimating depths of a dynamic scene ([0002] a depth image may be generated directly using a depth imager such as a structured light (SL) camera or a time of flight (ToF) camera; such cameras may provide both depth information and intensity information, in the form of respective depth and amplitude images; [0004] a motion compensated first depth image is generated), the system comprising:

a light source ([0002] a structured light (SL) camera);

an image sensor comprising a plurality of pixels (figure 1, unit 104, image sensor; [0014] receives raw depth images from an image sensor 104);

a signal generator configured to output at least: a first signal corresponding to a modulation function ([0047] the time difference between emitting and receiving light may be measured, for example, by using a periodic light signal, such as a sinusoidal light signal or a triangle wave light signal); and

one or more processors configured to: cause the light source to emit modulated light toward the scene, with modulation based on the first signal ([0047] the time difference between emitting and receiving light may be measured, for example, by using a periodic light signal, such as a sinusoidal light signal or a triangle wave light signal);

cause the image sensor to generate, during a first period of time, a first set of correlation images comprising a first plurality of correlation images ([0047] measuring the phase shift between the emitted periodic light signal and the reflected periodic signal received back at the image sensor; [0018] obtain from the image sensor 104 multiple phase images; figure 5, [0036] a sequence of four phase images each having a different capture time, as illustrated in FIG. 5, left side), wherein each correlation image of the first plurality of correlation images comprises a plurality of pixel values (figure 5, [0036] a sequence of four phase images), and each pixel value of the plurality of pixel values is based on a correlation between modulated light received from a portion of the scene at that pixel and a demodulation function of a plurality of demodulation functions ([0047] measuring the phase shift between the emitted periodic light signal and the reflected periodic signal received back at the image sensor (the emitted periodic light signal and the reflected periodic signal are correlated in order to measure the phase shift); [0047] the time difference between emitting and receiving light may be measured, for example, by using a periodic light signal, such as a sinusoidal light signal or a triangle wave light signal (inherently, a demodulation function corresponding to the modulation function is applied at the receiving side));

generate a first intensity image based on the first set of correlation images ([0021] a motion compensated first amplitude image corresponding to the first depth image is generated in amplitude image computation module 116, also utilizing the given phase image and the adjusted other phase images of the first depth frame), wherein the first intensity image comprises a first plurality of intensity values ([0017] a given depth image generated by the depth imager 100 may comprise not only depth data but also intensity or amplitude data, with such data being arranged in the form of one or more rectangular arrays of pixels);

cause the image sensor to generate, during a second period of time, a second set of correlation images comprising a second plurality of correlation images ([0047] measuring the phase shift between the emitted periodic light signal and the reflected periodic signal received back at the image sensor; [0018] obtain from the image sensor 104 multiple phase images; figure 5, [0036] a sequence of four phase images each having a different capture time, as illustrated in FIG. 5, right side);

generate a second intensity image based on the second set of correlation images (figure 1, [0018] the image processor 102 of depth imager 100 illustratively comprises an amplitude image computation module 116; the image processor 102 is configured to obtain from the image sensor 104 multiple phase images for each of first and second depth frames in a sequence of depth frames; figure 5, right side; [0021] a motion compensated first amplitude image corresponding to the first depth image is generated in amplitude image computation module 116, also utilizing the given phase image and the adjusted other phase images of the first depth frame (inherently, the same applies to the second depth frame)), wherein the second intensity image comprises a second plurality of intensity values ([0017] a given depth image generated by the depth imager 100 may comprise not only depth data but also intensity or amplitude data, with such data being arranged in the form of one or more rectangular arrays of pixels);

determine estimated lateral motion in the scene between the first period of time and the second period of time (figure 6, [0062] movement of an exemplary point in an imaged scene between the phase images of the first and second depth frames is illustrated in FIG. 6; [0063]-[0067] a process of the type previously described in FIG. 2 but more particularly adapted to the scenario of FIGS. 5 and 6, Step 1, determine motion); and

determine a set of depth estimates for the scene based on the first plurality of correlation images and the estimated lateral motion in the scene (figure 2, step 204, [0043] a motion compensated first depth image is generated utilizing the given phase image and the adjusted other phase images of the first depth frame; figure 3, [0055] movement of an exemplary point in an imaged scene over multiple sequential capture times for respective phase images of a depth frame (lateral motion); figure 4, [0057] determining the movement of a point in an imaged scene, and adjusting pixel values to compensate for the movement, compensates for this motion by adjusting pixel values such that the one-pixel object always appears in the first pixel position; figure 6, [0062] movement of an exemplary point in an imaged scene between the phase images of the first and second depth frames is illustrated in FIG. 6; [0063]-[0067] a process of the type previously described in FIG. 2 but more particularly adapted to the scenario of FIGS. 5 and 6, Step 2, Step 4, estimate motion compensated depth based on transformed phase images), wherein the set of depth estimates comprises, for each of the plurality of pixels, a depth estimate for a corresponding portion of the scene during the first period of time ([0021] generation of the motion compensated first depth image in module 114 utilizing the given phase image and the adjusted other phase images of the first depth frame).

However, Kholodenko does not explicitly disclose: calculate a first model of the first intensity image based on the first plurality of intensity values; calculate a second model of the second intensity image based on the second plurality of intensity values; determine estimated lateral motion in the scene between the first period of time and the second period of time based on the first model and the second model.

SCHAALE discloses calculate a first model of the first intensity image based on the first plurality of intensity values (figure 5, [0043] obtains motion vectors from successive intensity images; the motion vectors are obtained and depicted in a manner known to the person skilled in the art, e.g. through a translation model, e.g. pixel-recursive algorithms, methods for determining the optical flux, etc.); calculate a second model of the second intensity image based on the second plurality of intensity values (figure 5, [0043] obtains motion vectors from successive intensity images; the motion vectors are obtained and depicted in a manner known to the person skilled in the art, e.g. through a translation model, e.g. pixel-recursive algorithms, methods for determining the optical flux, etc.); determine estimated lateral motion in the scene between the first period of time and the second period of time based on the first model and the second model (figure 5, [0043] obtains motion vectors from successive intensity images; the motion vectors are obtained and depicted in a manner known to the person skilled in the art, e.g. through a translation model, e.g. pixel-recursive algorithms, methods for determining the optical flux, etc.).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Kholodenko according to the invention of SCHAALE, to estimate motion from intensity images, in order to more efficiently estimate motion, especially large motion.
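For orientation, the claim 1 mapping above tracks the standard 4-tap indirect ToF recovery: four phase-shifted correlation images yield a wrapped phase, an amplitude (intensity) image, and a depth map. Below is a minimal sketch, assuming the conventional phase steps 0, π/2, π, 3π/2 and correlation samples proportional to cos(φ − ψₙ); the function and variable names are illustrative and are not taken from the application or the cited references.

```python
import numpy as np

def itof_depth_and_intensity(c0, c1, c2, c3, f_mod, c_light=3.0e8):
    """Standard 4-tap indirect ToF recovery (phase steps 0, pi/2, pi, 3pi/2).

    c0..c3 : 2-D correlation images, one per demodulation phase shift
    f_mod  : modulation frequency in Hz
    Returns (depth, amplitude) per pixel.
    """
    sin_part = c1 - c3                # proportional to A*sin(phi)
    cos_part = c0 - c2                # proportional to A*cos(phi)
    phase = np.arctan2(sin_part, cos_part) % (2 * np.pi)  # wrapped phase
    amplitude = 0.5 * np.hypot(sin_part, cos_part)        # intensity image
    depth = c_light * phase / (4 * np.pi * f_mod)         # z = c*phi/(4*pi*f)
    return depth, amplitude
```

At a 20 MHz modulation frequency this recovery is unambiguous only out to c/(2f) = 7.5 m, which is the wrap-around problem the multi-frequency art (Dehlinger, cited for claims 5-6 and 11 below) addresses.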
Regarding claim 2: Kholodenko discloses the system of claim 1, wherein the one or more processors are further configured to: generate a refined intensity image based on the first plurality of correlation images and the estimated lateral motion in the scene ([0063]-[0067] a process of the type previously described in FIG. 2 but more particularly adapted to the scenario of FIGS. 5 and 6, Step 2, Step 4, calculate the motion compensated amplitude values based on transformed phase images; [0068] Step 5, apply filtering to suppress noise), wherein a signal-to-noise ratio of the refined intensity image is higher than a signal-to-noise ratio of the intensity image ([0067] Step 4, calculate the amplitude values; [0068] Step 5, apply filtering to suppress noise).

Regarding claim 8: Kholodenko discloses the system of claim 1, wherein the modulation function is a unipolar sinusoidal modulation function ([0047]-[0048] using a periodic light signal, such as a sinusoidal light signal).

Regarding claim 12: Kholodenko discloses the system of claim 1, wherein the one or more processors are further configured to: identify a set of corresponding pixels in the first set of correlation images based on the estimated lateral motion ([0067] calculate the depth values for respective pixels of the motion compensated depth frame comprising the transformed phase images); and determine a depth estimate for a portion of the scene corresponding to the set of corresponding pixels based on pixel values of the set of corresponding pixels ([0067] calculate the depth values for respective pixels of the motion compensated depth frame comprising the transformed phase images).

Regarding claim 14: The same analysis has been stated in claim 1.

Regarding claim 15: The same analysis has been stated in claim 2.

Claims 3-4 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Kholodenko et al. (US 20160232684 A1) in view of SCHAALE et al. (US 20220075064 A1) as applied above in claim 1, and further in view of Wehr et al. (US 20230033470 A1).

Regarding claim 3: Kholodenko discloses the system of claim 1, wherein the one or more processors are further configured to: determine a second set of depth estimates for the scene based on the second plurality of correlation images and the estimated lateral motion in the scene ([0017] generate depth images using the motion compensation techniques; [0054] generation of consecutive depth images; [0096] providing motion compensation for only depth data associated with a given depth image or sequence of depth images (inherently, multiple depth images are generated using the same technique as disclosed in the following citations for the first depth image); figure 2, step 204, [0043] a motion compensated first depth image is generated utilizing the given phase image and the adjusted other phase images of the first depth frame; figure 3, [0055] movement of an exemplary point in an imaged scene over multiple sequential capture times for respective phase images of a depth frame (lateral motion); figure 4, [0057] determining the movement of a point in an imaged scene, and adjusting pixel values to compensate for the movement, compensates for this motion by adjusting pixel values such that the one-pixel object always appears in the first pixel position; figure 6, [0062] movement of an exemplary point in an imaged scene between the phase images of the first and second depth frames is illustrated in FIG. 6; [0063]-[0067] a process of the type previously described in FIG. 2 but more particularly adapted to the scenario of FIGS. 5 and 6, Step 2, Step 4, estimate motion compensated depth based on transformed phase images), wherein the second set of depth estimates comprises, for each of the plurality of pixels, a depth estimate for a corresponding portion of the scene during the second period of time ([0017] generate depth images using the motion compensation techniques; [0054] generation of consecutive depth images; [0096] providing motion compensation for only depth data associated with a given depth image or sequence of depth images (inherently, multiple depth images are generated using the same technique as disclosed in the following citations for the first depth image); [0021] generation of the motion compensated first depth image in module 114 utilizing the given phase image and the adjusted other phase images of the first depth frame).

Wehr discloses determine an estimate of axial motion for at least a portion of a scene based on a first set of depth estimates, a second set of depth estimates, and an estimated lateral motion in the scene ([0003] an enhanced two-dimensional (“2.5D”) depth flow space (e.g., x, y, and depth flow—indicating a change in depth values between LiDAR range images), calculating 3D motion vectors for pixels by converting the 2.5D information back to 3D space (e.g., based on known associations between the 2.5D image space locations and 3D world space locations); [0059] the refined 2D motion vector may be indicative of an optical and/or depth flow change; [0061] calculate or compute the one or more motion vectors in 2D or 2.5D space, and then convert the one or more motion vectors to 3D space to generate one or more 3D motion vectors; [0068] at least depth values corresponding to the first range image and the second range image may be compared to determine depth flow values—indicating changes in depth for particular (e.g., matched) pixels or points in the range images over time).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the inventions of Kholodenko, SCHAALE and Wehr, to also estimate axial motion.
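Once lateral motion and two depth maps are in hand, the axial-motion estimate Wehr is cited for reduces to differencing depths at motion-corresponding pixels. A rough sketch under that reading follows; nearest-pixel rounding stands in for whatever matching Wehr actually performs, and all names are illustrative.

```python
import numpy as np

def axial_motion(depth1, depth2, flow_x, flow_y):
    """Per-pixel axial (depth-direction) motion between two depth maps.

    depth1, depth2 : (H, W) depth estimates for periods 1 and 2
    flow_x, flow_y : (H, W) estimated lateral motion in pixels, period 1 -> 2
    Returns delta-depth per pixel; NaN where the match falls off-frame.
    """
    h, w = depth1.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Follow each pixel along its lateral-motion vector into the second map.
    x2 = np.rint(xs + flow_x).astype(int)
    y2 = np.rint(ys + flow_y).astype(int)
    valid = (x2 >= 0) & (x2 < w) & (y2 >= 0) & (y2 < h)
    dz = np.full(depth1.shape, np.nan)
    # Axial motion = depth change between corresponding pixels.
    dz[valid] = depth2[y2[valid], x2[valid]] - depth1[valid]
    return dz
```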
Regarding claim 4: Wehr discloses the system of claim 3, wherein the one or more processors are further configured to: identify, for each of the plurality of pixels represented in the first set of depth estimates, a corresponding pixel represented in the second set of depth estimates using the estimated lateral motion for the pixel represented in the first set of depth estimates (abstract, “2.5D” optical flow space (e.g., x, y, and depth flow)); and estimate, for each of the plurality of pixels represented in the first set of depth estimates, the axial motion for a portion of the scene corresponding to that pixel based on a difference between the depth estimate for the pixel represented in the first set of depth estimates and the depth estimate for the corresponding pixel represented in the second set of depth estimates ([0003] an enhanced two-dimensional (“2.5D”) depth flow space (e.g., x, y, and depth flow—indicating a change in depth values between LiDAR range images), calculating 3D motion vectors for pixels by converting the 2.5D information back to 3D space (e.g., based on known associations between the 2.5D image space locations and 3D world space locations); [0059] the refined 2D motion vector may be indicative of an optical and/or depth flow change; [0061] calculate or compute the one or more motion vectors in 2D or 2.5D space, and then convert the one or more motion vectors to 3D space to generate one or more 3D motion vectors; [0068] at least depth values corresponding to the first range image and the second range image may be compared to determine depth flow values—indicating changes in depth for particular (e.g., matched) pixels or points in the range images over time). The same motivation has been stated in claim 3.

Regarding claim 16: The same analysis has been stated in claim 3.

Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Kholodenko et al. (US 20160232684 A1) in view of SCHAALE et al. (US 20220075064 A1) as applied above in claim 1, and further in view of Borer (US 6381285 B1).

Regarding claim 9: Borer discloses the first model comprises a spatial gradient of the first intensity image, and the second model comprises a spatial gradient of the second intensity image (column 1, lines 40-45: estimate the image gradients; the spatial and temporal gradients of brightness; in principle these are easily calculated by applying straightforward linear horizontal, vertical and temporal filters to the image sequence), and wherein the one or more processors are further configured to: determine the estimated lateral motion in the scene based on correlations between the first model and the second model (column 2, lines 29-32: once the image gradients have been estimated, the constraint equation is used to calculate the corresponding motion vector; figures 2 and 3, column 5, lines 13-20 and 51-58, column 6, lines 12-23: generating motion vectors based on the spatial image gradient vectors). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the inventions of Kholodenko, SCHAALE and Borer, to perform gradient-based motion estimation to estimate motion.
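Borer's gradient-based method rests on the brightness-constancy constraint Ix·u + Iy·v + It = 0. The sketch below solves that constraint in least squares over a whole frame, a Lucas-Kanade-style simplification rather than Borer's particular filters; names are illustrative.

```python
import numpy as np

def gradient_motion_vector(img1, img2):
    """Single motion vector (u, v) from the brightness-constancy constraint
    Ix*u + Iy*v + It = 0, solved in least squares over the whole frame."""
    img1 = np.asarray(img1, dtype=float)
    img2 = np.asarray(img2, dtype=float)
    ix = np.gradient(img1, axis=1)   # horizontal spatial gradient
    iy = np.gradient(img1, axis=0)   # vertical spatial gradient
    it = img2 - img1                 # temporal gradient
    # Normal equations of the 2x2 least-squares system.
    a = np.array([[np.sum(ix * ix), np.sum(ix * iy)],
                  [np.sum(ix * iy), np.sum(iy * iy)]])
    b = -np.array([np.sum(ix * it), np.sum(iy * it)])
    u, v = np.linalg.solve(a, b)     # singular if the frame lacks texture
    return u, v
```

In practice this is run per window or per pixel neighborhood to get a dense motion field rather than one global vector.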
Claims 7 and 10-11 are rejected under 35 U.S.C. 103 as being unpatentable over Kholodenko et al. (US 20160232684 A1) in view of SCHAALE et al. (US 20220075064 A1) as applied above in claim 1, and further in view of Dehlinger et al. (US 20210231805 A1).

Regarding claim 7: Dehlinger discloses the plurality of demodulation functions comprises a plurality of versions of the modulation function, each having a different phase shift ([0020] the respective phase offsets may correspond to portions of the respective measurement frequencies; [0054] measuring (at one or more controllers/processors) the phase of optical signals at multiple (e.g., two) different measurement frequencies (e.g., with respect to emitter operation); [0057] component measurements at the respective phase offsets may be generated for each of a plurality of different measurement frequencies; [0065] for each of the measurement frequencies of the optical signals output by the emitter array 115, the control circuit 105 may perform a phase measurement that is based on multiple component measurements (referred to herein with reference to four phase vector component measurements, D0, D1, D2, D3) indicative of the different phases of the detection signals output from the detector array 110). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the inventions of Kholodenko, SCHAALE and Dehlinger, to use multi-frequency techniques, in order to resolve phase wrap-around (Dehlinger [0006] multi-frequency techniques may be used based on light emission at different modulation frequencies such that a matching reported range for the different modulation frequencies indicates the actual range).

Regarding claim 10: Dehlinger discloses the one or more processors are further configured to: generate a first set of burst correlation images based on a plurality of sets of correlation images generated using the plurality of demodulation functions, wherein the plurality of sets of correlation images includes the first set of correlation images ([0054] in indirect ToF systems, the emitters 115e emit optical signals as bursts of pulsed light (also referred to as pulses), with each burst having a respective repetition rate/frequency and pulse width, with burst duration (e.g., in terms of number or cycles of pulses per burst); [0057] the phases may be measured with a series of separate component measurements at the respective phase offsets, which correspond to “subframes” or sub-measurements of operation of the detector pixels; [0056] the detector acquisitions or subframes for the respective phase delays or phase offsets may include more emitter pulse cycles per burst (defining subframes with longer acquisition integration times)), wherein pixel values of a first burst correlation image in the first set of burst correlation images are based on pixel values of correlation images in the plurality of sets of correlation images generated using the same demodulation function and correlations between the correlation images in the plurality of sets of correlation images generated using the same demodulation function ([0054] each burst having a respective repetition rate/frequency and pulse width; [0056] the detector acquisitions or subframes for the respective phase delays or phase offsets may include more emitter pulse cycles per burst (defining subframes with longer acquisition integration times)); generate a second set of burst correlation images based on at least the second set of correlation images ([0054] in indirect ToF systems, the emitters 115e emit optical signals as bursts of pulsed light (also referred to as pulses), with each burst having a respective repetition rate/frequency and pulse width, with burst duration (e.g., in terms of number or cycles of pulses per burst); [0057] the phases may be measured with a series of separate component measurements at the respective phase offsets, which correspond to “subframes” or sub-measurements of operation of the detector pixels; [0056] the detector acquisitions or subframes for the respective phase delays or phase offsets may include more emitter pulse cycles per burst (defining subframes with longer acquisition integration times)); generate the first intensity image using the first set of burst correlation images ([0068] each subframe may represent the aggregated returns (e.g., the integrated intensity c(φ) of the detected charges), shifted in time by one-quarter of the period corresponding to the measurement frequency for each of the remaining 90, 180, and 270 degree subframes; [0088] an intensity image can be generated based on the component measurements from the detector pixels, e.g., from the phase-offset subframes; [0056] the detector acquisitions or subframes for the respective phase delays or phase offsets may include more emitter pulse cycles per burst (defining subframes with longer acquisition integration times) (a set of subframes is detected per burst)); and generate the second intensity image using the second set of burst correlation images ([0068] each subframe may represent the aggregated returns (e.g., the integrated intensity c(φ) of the detected charges), shifted in time by one-quarter of the period corresponding to the measurement frequency for each of the remaining 90, 180, and 270 degree subframes; [0088] an intensity image can be generated based on the component measurements from the detector pixels, e.g., from the phase-offset subframes; [0056] the detector acquisitions or subframes for the respective phase delays or phase offsets may include more emitter pulse cycles per burst (defining subframes with longer acquisition integration times) (a set of subframes is detected per burst)). The same motivation has been stated in claim 7.
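The burst-aggregation idea in claim 10 can be pictured as collapsing repeated per-phase subframes into one correlation image per demodulation phase. A toy sketch follows, with plain averaging standing in for the correlation-based weighting the claim recites; the array layout and names are assumptions, not anything from Dehlinger.

```python
import numpy as np

def burst_correlation_images(bursts):
    """Aggregate per-phase correlation images across repeated bursts.

    bursts : array (B, N, H, W) -- B bursts, each contributing N
             phase-shifted correlation images ("subframes").
    Returns (N, H, W): one burst correlation image per demodulation phase.
    Plain mean over bursts; a correlation-weighted combination could be
    substituted without changing the overall structure.
    """
    bursts = np.asarray(bursts, dtype=float)
    return bursts.mean(axis=0)
```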
Regarding claim 11: Dehlinger discloses the system of claim 10, wherein the first signal is a periodic signal with a first fundamental frequency f₁, and the plurality of sets of correlation images were generated based on the first signal ([0006] multi-frequency techniques may be used to resolve phase wrap around, based on light emission at different modulation frequencies; [0054] some iToF lidar systems operate by transmitting (from one or more emitters, e.g., defining an emitter unit), receiving (at one or more detectors, e.g., defining a detector pixel), and measuring (at one or more controllers/processors) the phase of optical signals at multiple (e.g., two) different measurement frequencies (e.g., with respect to emitter operation) and/or acquisition integration times (e.g., with respect to detector operation); [0057] component measurements at the respective phase offsets may be generated for each of a plurality of different measurement frequencies), and wherein the second set of burst correlation images are based on a second plurality of sets generated based on a second signal that is a periodic signal with a second fundamental frequency f₂ ≠ f₁ ([0006] multi-frequency techniques may be used to resolve phase wrap around, based on light emission at different modulation frequencies; [0054] some iToF lidar systems operate by transmitting (from one or more emitters, e.g., defining an emitter unit), receiving (at one or more detectors, e.g., defining a detector pixel), and measuring (at one or more controllers/processors) the phase of optical signals at multiple (e.g., two) different measurement frequencies (e.g., with respect to emitter operation) and/or acquisition integration times (e.g., with respect to detector operation); [0057] component measurements at the respective phase offsets may be generated for each of a plurality of different measurement frequencies). The same motivation has been stated in claim 7.

Claims 5-6 are rejected under 35 U.S.C. 103 as being unpatentable over Kholodenko et al. (US 20160232684 A1) in view of SCHAALE et al. (US 20220075064 A1) and Wehr et al. (US 20230033470 A1) as applied above in claim 3, and further in view of Dehlinger et al. (US 20210231805 A1).

Regarding claim 5: Dehlinger discloses cause the light source to emit modulated light toward the scene with modulation based on a second signal ([0006] multi-frequency techniques may be used to resolve phase wrap around, based on light emission at different modulation frequencies; [0054] some iToF lidar systems operate by transmitting (from one or more emitters, e.g., defining an emitter unit), receiving (at one or more detectors, e.g., defining a detector pixel), and measuring (at one or more controllers/processors) the phase of optical signals at multiple (e.g., two) different measurement frequencies (e.g., with respect to emitter operation) and/or acquisition integration times (e.g., with respect to detector operation); [0057] component measurements at the respective phase offsets may be generated for each of a plurality of different measurement frequencies), wherein the first signal is a periodic signal with a first fundamental frequency f₁, and the second signal is a periodic signal with a second fundamental frequency f₂ that is different than the first fundamental frequency ([0006] multi-frequency techniques may be used to resolve phase wrap around, based on light emission at different modulation frequencies; [0054] some iToF lidar systems operate by transmitting (from one or more emitters, e.g., defining an emitter unit), receiving (at one or more detectors, e.g., defining a detector pixel), and measuring (at one or more controllers/processors) the phase of optical signals at multiple (e.g., two) different measurement frequencies (e.g., with respect to emitter operation) and/or acquisition integration times (e.g., with respect to detector operation); [0057] component measurements at the respective phase offsets may be generated for each of a plurality of different measurement frequencies), and wherein each correlation image of the second plurality of correlation images comprises a second plurality of pixel values, and each pixel value of the second plurality of pixel values is based on a correlation between modulated light of the second fundamental frequency received from a portion of the scene at that pixel and a demodulation function of a second plurality of demodulation functions ([0054] in indirect ToF systems, the emitters 115e emit optical signals as bursts of pulsed light (also referred to as pulses), with each burst having a respective repetition rate/frequency and pulse width, with burst duration (e.g., in terms of number or cycles of pulses per burst); [0057] the phases may be measured with a series of separate component measurements at the respective phase offsets, which correspond to “subframes” or sub-measurements of operation of the detector pixels; [0056] the detector acquisitions or subframes for the respective phase delays or phase offsets may include more emitter pulse cycles per burst (defining subframes with longer acquisition integration times)). The same motivation has been stated in claim 7.
Regarding claim 6: Dehlinger discloses the system of claim 5, wherein a maximum unambiguous measurable depth range measurable using a modulation function with the first fundamental frequency f₁ is Z_max(f₁), and a maximum unambiguous measurable depth range measurable using a modulation function with the second fundamental frequency f₂ is Z_max(f₂), such that if the scene has a maximum depth Z_max′ > Z_max(f₁) > Z_max(f₂), depth estimates in an initial first set of depth estimates based on the first set of correlation images are ambiguous, and depth estimates in an initial second set of depth estimates based on the first set of correlation images are ambiguous ([0006] since the maximum phase is 2π, the unambiguous range UR = c/(2fₘ) for the frequency fₘ of operation; the unambiguous range may refer to the range beyond which the phase to distance mapping “wraps around” for an iToF system, such that targets therebeyond may be reported as having a shorter range than their real or actual range, where phase_reported = phase_real mod (2π)), and wherein the one or more processors are further configured to: decode the set of depth estimates and the second set of depth estimates using the initial first set of depth estimates and the initial second set of depth estimates, such that the set of depth estimates and the second set of depth estimates include unambiguous depth estimates ([0006] multi-frequency techniques may be used to resolve phase wrap around, based on light emission at different modulation frequencies such that a matching reported range for the different modulation frequencies indicates the actual range; [0021] in some embodiments, identifying the distance of the target may be based on a correspondence of respective distances indicated by detection signals corresponding to two or more of the respective measurement frequencies; in some embodiments, the correspondence may be indicated by a lookup table that correlates respective phase shift pairs to respective subranges of an unambiguous range for the respective measurement frequencies; [0058] that is, distance may be determined by analyzing respective signals at multiple (e.g., two) separate or distinct modulation or measurement frequencies and/or acquisition integration times, where each measurement frequency has a different unambiguous range, such that the true or actual location of the target may be indicated where the measurements at the different measurement frequencies agree or match). The same motivation has been stated in claim 7.
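The mechanism claim 6 relies on is standard two-frequency phase unwrapping: each frequency alone yields a wrapped, ambiguous depth, and the true range is where candidate depths from the two frequencies agree. A brute-force per-pixel sketch follows, assuming wrapped phases are already computed; constants and names are illustrative.

```python
import numpy as np

C_LIGHT = 3.0e8  # m/s

def unwrap_two_freq(phi1, f1, phi2, f2, z_max):
    """Resolve depth ambiguity from wrapped phases at two frequencies.

    phi1, phi2 : wrapped phases in [0, 2*pi) at frequencies f1, f2 (Hz)
    z_max      : largest scene depth to consider (m)
    Searches wrap counts (k1, k2); the pair whose candidate depths agree
    best identifies the actual range.
    """
    best, best_err = None, np.inf
    k1_max = int(z_max * 2 * f1 / C_LIGHT) + 1
    k2_max = int(z_max * 2 * f2 / C_LIGHT) + 1
    for k1 in range(k1_max + 1):
        z1 = C_LIGHT * (phi1 + 2 * np.pi * k1) / (4 * np.pi * f1)
        for k2 in range(k2_max + 1):
            z2 = C_LIGHT * (phi2 + 2 * np.pi * k2) / (4 * np.pi * f2)
            if abs(z1 - z2) < best_err:
                best, best_err = 0.5 * (z1 + z2), abs(z1 - z2)
    return best
```

In production systems the same matching is usually done with a lookup table over phase pairs, as Dehlinger's [0021] describes, rather than an explicit search.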
Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Kholodenko et al. (US 20160232684 A1) in view of SCHAALE et al. (US 20220075064 A1) as applied above in claims 1 and 12, and further in view of Payne et al. ("Improved measurement linearity and precision for AMCW time-of-flight range imaging cameras").

Regarding claim 13: Kholodenko in view of Payne discloses the system of claim 12, wherein the one or more processors are further configured to: generate the first intensity image based on the first set of correlation images according to the following expression: [equation reproduced as an image in the original action; not recoverable here] where I₁ is the first intensity image, I₁(p) is the intensity value of a pixel p in the first intensity image, C₁ is the first set of correlation images, C₁,ₙ(p) is the value for pixel p in the nth correlation image in C₁, N is a number of correlation images in C₁, and ψₙ is a phase shift of the demodulation function used to generate the nth correlation image, such that the first intensity image is blurred based on motion in the scene (Kholodenko [0049], second equation, with ψ₁ = 0, ψ₂ = π/2, ψ₃ = π, ψ₄ = 3π/2; Payne, page 4393, column 2, equation (1), and column 1: an indirect measurement is performed where the illumination source is amplitude modulated and the propagation delay is manifested as a phase shift of the modulation envelope of the reflected light; the correlation measurement is repeated N times with a phase step of 2πj/N rad introduced to the sensor or illumination modulation signal); and determine the set of depth estimates for the scene according to the following expression: [equation reproduced as an image in the original action; not recoverable here] where Z₁ is the set of depth estimates for the scene based on C₁, Z₁(p) is the depth estimate of pixel p in the first intensity image, C₁,ₙ(p′) is the value for a pixel p′ in the nth correlation image in C₁ in the set of corresponding pixels that includes C₁,₁(p), and f₁ is a fundamental frequency of the first signal (Kholodenko [0049], first equation, and [0050], equation, with ψ₁ = 0, ψ₂ = π/2, ψ₃ = π, ψ₄ = 3π/2; Payne, page 4393, column 2, equations (3) and (4), and column 1: an indirect measurement is performed where the illumination source is amplitude modulated and the propagation delay is manifested as a phase shift of the modulation envelope of the reflected light; the correlation measurement is repeated N times with a phase step of 2πj/N rad introduced to the sensor or illumination modulation signal). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the inventions of Kholodenko, SCHAALE and Payne, to derive intensity and depth according to corresponding definitions and phase steps.
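Since the claimed expressions survive only as omitted images, the standard N-tap AMCW relations implied by the cited definitions (in the style of Payne's equations (1) and (3)-(4)) are reproduced below for context. These are conventional forms, not necessarily the claimed ones; claim 13 in particular defines an intensity that remains motion-blurred, which a plain amplitude computation like this would also exhibit when the correlation images are misaligned.

```latex
\[
I_1(p) \;=\; \frac{2}{N}\left|\sum_{n=1}^{N} C_{1,n}(p)\, e^{\,j\psi_n}\right|,
\qquad
Z_1(p) \;=\; \frac{c}{4\pi f_1}\,
\operatorname{atan2}\!\left(\sum_{n=1}^{N} C_{1,n}(p)\sin\psi_n,\;
\sum_{n=1}^{N} C_{1,n}(p)\cos\psi_n\right),
\]
```

where c is the speed of light and, for the cited 4-tap case, ψₙ takes the values 0, π/2, π, 3π/2.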
Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to XIAOLAN XU, whose telephone number is (571) 270-7580. The examiner can normally be reached Mon. to Fri., 9am-5pm.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, SATH V. PERUNGAVOOR, can be reached at (571) 272-7455. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/XIAOLAN XU/
Primary Examiner, Art Unit 2488

Prosecution Timeline

Jun 26, 2024 — Application Filed
Mar 26, 2026 — Non-Final Rejection, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598315 — IMAGE ENCODING/DECODING METHOD AND DEVICE FOR DETERMINING SUB-LAYERS ON BASIS OF REQUIRED NUMBER OF SUB-LAYERS, AND BIT-STREAM TRANSMISSION METHOD
Granted Apr 07, 2026 · 2y 5m to grant

Patent 12586255 — CONFIGURABLE POSITIONS FOR AUXILIARY INFORMATION INPUT INTO A PICTURE DATA PROCESSING NEURAL NETWORK
Granted Mar 24, 2026 · 2y 5m to grant

Patent 12587652 — IMAGE CODING DEVICE AND METHOD
Granted Mar 24, 2026 · 2y 5m to grant

Patent 12581120 — Method and Apparatus for Signaling Tile and Slice Partition Information in Image and Video Coding
Granted Mar 17, 2026 · 2y 5m to grant

Patent 12581092 — TEMPORAL INITIALIZATION POINTS FOR CONTEXT-BASED ARITHMETIC CODING
Granted Mar 17, 2026 · 2y 5m to grant
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 74%
With Interview: 87% (+13.3%)
Median Time to Grant: 2y 11m
PTA Risk: Low
Based on 334 resolved cases by this examiner; grant probability is derived from the career allow rate.
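For transparency, the headline projections follow from simple arithmetic on the examiner's career counts. A sketch of that derivation; the TC average is back-solved from the stated +16.0% delta, and everything here is an assumption about how the dashboard computes its figures, not vendor code.

```python
# Sketch of the dashboard arithmetic (assumed, not the vendor's actual code).
granted, resolved = 247, 334                 # from "Examiner Intelligence"
allow_rate = granted / resolved              # 0.739..., shown as 74%
tc_average = allow_rate - 0.160              # implied by "+16.0% vs TC avg"
with_interview = allow_rate + 0.133          # "+13.3% interview lift"
print(f"base {allow_rate:.0%}, TC avg {tc_average:.0%}, "
      f"with interview {with_interview:.0%}")
```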
