Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
The Amendment filed August 11, 2025 has been entered. Claims 1-7, 10-17, and 19-20 remain pending in the application.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-7, 10-17, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Ortiz Egea (U.S. Patent No. 10,598,768 B2), hereinafter Ortiz Egea, in view of Hoberg et al. (DE 102015225192 A1), hereinafter Hoberg, and further in view of Ortiz Egea et al. (U.S. Patent Application Publication No. 2019/0349536 A1), hereinafter Ortiz Egea and Fenton.
Regarding claim 1, Ortiz Egea teaches an apparatus for optical sensing ([col. 2, line 13] a ToF system 100), comprising:
an illumination assembly, which is configured to direct a first array of beams of optical radiation toward different, respective areas in a target scene while temporally modulating the beams with a carrier wave having a carrier frequency ([col.2,line24] For example, the light source 110 may be an array of m×n light source elements; [col. 3,line52] the processor 106 temporally modulates the source light generated by the light source 110);
a detection assembly, which is configured to receive the optical radiation that is reflected from the target scene ([col.2,line 51] camera 120 includes a plurality of camera pixels including a camera pixel 120a, wherein each camera pixel is assumed to be able to observe at most one significant scattering event from the surface 152), and comprises:
a second array of sensing elements, which are configured to output respective signals in response to the optical radiation that is incident on the sensing elements during one or more detection intervals, which are synchronized with the carrier frequency ([col.2,line 64] the light sampling array 124, and the sampler 124 receives and converts the total light signal 114 into a sampled signal 126; [col.4, line 16] at the single temporal modulation frequency); and
objective optics, which are configured to form an image of the target scene on the second array ([col. 1, line 62] Time-of-flight (ToF) systems produce a depth image of an object or a scene; [col. 2, lines 57-58] FIG. 1 illustrates one such camera pixel including its imaging lens 116); and
processing circuitry, which is configured to drive the illumination assembly to apply a spatial modulation pattern to the first array of beams and to process the signals output by the sensing elements responsively to the spatial modulation pattern in order to generate a depth map of the target scene ([col. 3, line 52] the processor 106 temporally modulates the source light generated by the light source 110 using the spatial patterns 132),
wherein the spatial modulation pattern defines a spatial variation of the carrier wave, such that first beams illuminating respective first areas of the target scene are modulated at a first carrier frequency ([col. 2, lines 27-30] In one implementation, the processor 106 may communicate with the light source 110 to cause the illumination signal generated by the light source to be modulated by the spatial patterns 132, 134).
Ortiz Egea fails to teach second beams modulated at a second carrier frequency that is twice the first carrier frequency, or the apparatus wherein the detection intervals of the sensing elements have a sampling frequency that is equal to the first carrier frequency and a duty cycle that is not equal to 50%.
However, Hoberg teaches second beams illuminating respective second areas of the target scene modulated at a second carrier frequency that is twice the first carrier frequency (Fig. 8; [0053] In the example shown, the second frequency f2 is twice as high as the first frequency).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Ortiz Egea such that the second carrier frequency is twice the first carrier frequency, as taught by Hoberg, with a reasonable expectation of success. This would have the predictable result of providing a high-contrast second beam with clearly distinguishable frequencies for later analysis.
Ortiz Egea, as modified, still fails to teach the apparatus wherein the detection intervals of the sensing elements have a sampling frequency that is equal to the first carrier frequency and a duty cycle that is not equal to 50%.
However, Ortiz Egea and Fenton teach wherein the detection intervals of the sensing elements have a sampling frequency that is equal to the first carrier frequency and a duty cycle that is not equal to 50% ([0023] The time-of-flight controller machine 120 is configured to repeatedly (e.g., periodically) activate the time-of-flight illuminator 114 and synchronously address the differential sensors 106 of sensor array 104 to acquire IR images; [0055] In an example shown in FIG. 3, the ToF illuminator is activated on a periodic basis with a 33% duty cycle).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Ortiz Egea such that the detection intervals of the sensing elements have a sampling frequency that is equal to the first carrier frequency and a duty cycle that is not equal to 50%, as taught by Ortiz Egea and Fenton, with a reasonable expectation of success. This would have the predictable result of ensuring that the first and second signals do not interfere with each other upon reception at the sensing device.
Regarding claim 2, Ortiz Egea, as modified above, teaches the apparatus according to claim 1, wherein the processing circuitry is configured to:
use the spatial modulation pattern in estimating a contribution of multipath interference to the signals, and to subtract out the contribution in computing depth coordinates of points in the target scene ([Eq.8]; [col.5,line 48] The multipath mitigation module 128 may solve one or more of the above values for each pixel of the camera 120 using the sampled values of the image U-acquired at the camera 120 when no pattern is used to modulate the source light 110, and the image P-acquired at the camera 120 when one of the spatial patterns 132 or 134 is used to modulate the source light 110).
Regarding claim 3, Ortiz Egea, as modified above, teaches the apparatus according to claim 2, wherein the processing circuitry is configured to:
receive, with respect to each of the points, first and second signals output by the array of sensing elements in response, respectively, to first and second phases of the spatial modulation pattern ([col.4,line 21] the image acquired at the camera 120 when no pattern (or uniform pattern) is used to modulate the source light 110 can be represented by U as provided below in equation 3, while the image acquired at the camera 120 when one of the spatial patterns 132, 134 is used to modulate the source light 110 can be represented by P as provided below in equation 3; [Eq.3]),
to compute first and second phasors based on a relation of the first and second signals, respectively, to the carrier wave ([col.4,line 44] While the phases of the direct component and the global components for each of the U and P are represented by, respectively, ϕdu, ϕgu, ϕdp, and ϕgp ),
and to compute a difference between the first and second phasors in order to subtract out the contribution of the multipath interference ([Eq.8]; [col.5,line 48] The multipath mitigation module 128 may solve one or more of the above values for each pixel of the camera 120 using the sampled values of the image U-acquired at the camera 120 when no pattern is used to modulate the source light 110, and the image P-acquired at the camera 120 when one of the spatial patterns 132 or 134 is used to modulate the source light 110).
Regarding claim 4, Ortiz Egea, as modified above, teaches the apparatus according to claim 3, wherein the processing circuitry is configured to:
derive the first and second signals from different, respective first and second sensing elements in a vicinity of each of the points ([col.2,line 51] An implementation of the camera 120 includes a plurality of camera pixels including a camera pixel 120a; [col.3,line 5] the light source 110 and the camera 120 may have same number of pixels represented by a matrix with a resolution of m×n such that m is number of rows and n is the number of columns. Each pixel of the camera 120 may be represented by (i, j), with i=1, 2,... , m and j=1, 2, ... n)
wherein different, respective phases of the spatial modulation pattern on the target scene are imaged onto the first and second sensing elements ([col.3,line 34] The signal Pij received by the camera pixel 120a is a complex number that can be written in phasor notation as the multiplication of the amplitude A and the complex exponential of the associated angle ϕ).
Regarding claim 5, Ortiz Egea, as modified above, teaches the apparatus according to claim 3, wherein the processing circuitry is configured to:
derive the first and second signals from a respective sensing element in a vicinity of each of the points, due to different, first and second phases of the spatial modulation pattern on the target scene that are imaged onto the respective sensing element during respective first and second periods of operation of the illumination assembly ([col.2,line51] An implementation of the camera 120 includes a plurality of camera pixels including a camera pixel 120a; [col.3,line 5] the light source 110 and the camera 120 may have same number of pixels represented by a matrix with a resolution of m×n such that m is number of rows and n is the number of columns. Each pixel of the camera 120 may be represented by (i, j), with i=1, 2,... , m and j=1, 2, ... n; [col.3,line 34] The signal Pij received by the camera pixel 120a is a complex number that can be written in phasor notation as the multiplication of the amplitude A and the complex exponential of the associated angle ϕ).
Regarding claim 6, Ortiz Egea, as modified above, teaches the apparatus according to claim 1, wherein the spatial modulation pattern defines:
a binary amplitude variation such that during at least some periods of operation of the illumination assembly, first areas of the target scene are illuminated by the temporally-modulated beams, while second areas of the target scene, interleaved between the first areas, are not illuminated by the temporally-modulated beams ([col. 3,line 54] In one implementation, the processor 106 temporally modulates the source light generated by the light source 110 such that alternate source light images are modulated using one of the spatial patterns 132, 134 and alternate images are unmodulated).
Regarding claim 7, Ortiz Egea, as modified above, teaches the apparatus according to claim 6, wherein the processing circuitry is configured to:
drive the illumination assembly so that the first areas of the target scene are illuminated by the temporally-modulated beams while the second areas of the target scene are not illuminated by the temporally-modulated beams during first periods of the operation, and the second areas of the target scene are illuminated by the temporally-modulated beams while the first areas of the target scene are not illuminated by the temporally-modulated beams during second periods of the operation ([col.3,line 52] The ToF system 100, the processor 106 temporally modulates the source light generated by the light source 110 using the spatial patterns 132, 134. In one implementation, the processor 106 temporally modulates the source light generated by the light source 110 such that alternate source light images are modulated using one of the spatial patterns 132, 134 and alternate images are unmodulated. In one such implementation, at time interval t0, the light source 110 is unmodulated, at time source t1, the light source 110 is modulated by the spatial pattern 132, at time t2 light source 110 is unmodulated, at time t3 the light source 110 is modulated by the spatial pattern 132, etc. In an alternative implementation, the spatial pattern 134 may be used in a similar manner to modulate the source light generated by the light source 110. In one implementation, the processor 106 modulates the source light using one of the spatial patterns 132 and 134 in the manner described above at a single temporal modulation frequency.).
Regarding claim 10, Ortiz Egea, as modified above, teaches the apparatus according to claim 1, wherein:
the spatial modulation pattern defines multiple parallel stripes extending across the target scene, including at least a first set of the stripes and a second set of the stripes interleaved in alternation with the first set, having different, respective first and second modulation characteristics ([col.13, line 21] In one implementation of the physical hardware system the first spatial pattern is a uniform spatial pattern. In an alternative implementation of the physical hardware system, second spatial pattern is a non-uniform spatial pattern… the non-uniform spatial pattern is at least one of a dot-pattern, a vertical-line pattern, and a horizontal line pattern.).
Regarding claim 11, Ortiz Egea, as modified above, teaches the apparatus according to claim 1, wherein:
the spatial modulation pattern defines a grid including at least first and second interleaved sets of areas, having different, respective first and second modulation characteristics ([col.13,line 39] acquiring a first image represented by a first matrix in response to illuminating a target with a light source using a first spatial pattern, acquiring a second image represented by a second matrix in response to illuminating the target with the light source using a second spatial pattern, the second spatial pattern being different than the first spatial pattern).
Regarding claim 12, Ortiz Egea teaches a method for optical sensing, comprising:
directing a first array of beams of optical radiation toward different, respective areas in a target scene ([col.2,line 24] For example, the light source 110 may be an array of m×n light source elements) while temporally modulating the beams with a carrier wave having a carrier frequency ([col. 3,line 52] the processor 106 temporally modulates the source light generated by the light source 110);
forming an image of the target scene on a second array of sensing elements, which output respective signals in response to the optical radiation that is reflected from the target scene and is incident on the sensing elements during one or more detection intervals ([col.2,line 64] the light sampling array 124, and the sampler 124 receives and converts the total light signal 114 into a sampled signal 126), which are synchronized with the carrier frequency ([col.4, line 16] at the single temporal modulation frequency);
applying a spatial modulation pattern to the first array of beams ([col. 3, line 52] the processor 106 temporally modulates the source light generated by the light source 110 using the spatial patterns 132); and
processing the signals output by the sensing elements responsively to the spatial modulation pattern in order to generate a depth map of the target scene ([col. 1, line 62] Time-of-flight (ToF) systems produce a depth image of an object or a scene),
wherein the spatial modulation pattern defines a spatial variation of the carrier wave, such that first beams illuminating respective first areas of the target scene are modulated at a first carrier frequency ([col. 2, lines 27-30] In one implementation, the processor 106 may communicate with the light source 110 to cause the illumination signal generated by the light source to be modulated by the spatial patterns 132, 134).
Ortiz Egea fails to teach second beams modulated at a second carrier frequency that is twice the first carrier frequency, or the method wherein the detection intervals of the sensing elements have a sampling frequency that is equal to the first carrier frequency and a duty cycle that is not equal to 50%.
However, Hoberg teaches second beams illuminating respective second areas of the target scene modulated at a second carrier frequency that is twice the first carrier frequency (Fig. 8; [0053] In the example shown, the second frequency f2 is twice as high as the first frequency).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Ortiz Egea such that the second carrier frequency is twice the first carrier frequency, as taught by Hoberg, with a reasonable expectation of success. This would have the predictable result of providing a high-contrast second beam with clearly distinguishable frequencies for later analysis.
Ortiz Egea, as modified, fails to teach the method wherein the detection intervals of the sensing elements have a sampling frequency that is equal to the first carrier frequency and a duty cycle that is not equal to 50%.
However, Ortiz Egea and Fenton teach the method wherein the detection intervals of the sensing elements have a sampling frequency that is equal to the first carrier frequency and a duty cycle that is not equal to 50% ([0023] The time-of-flight controller machine 120 is configured to repeatedly (e.g., periodically) activate the time-of-flight illuminator 114 and synchronously address the differential sensors 106 of sensor array 104 to acquire IR images; [0055] In an example shown in FIG. 3, the ToF illuminator is activated on a periodic basis with a 33% duty cycle).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Ortiz Egea such that the detection intervals of the sensing elements have a sampling frequency that is equal to the first carrier frequency and a duty cycle that is not equal to 50%, as taught by Ortiz Egea and Fenton, with a reasonable expectation of success. This would have the predictable result of ensuring that the first and second signals do not interfere with each other upon reception at the sensing device.
Regarding claim 13, Ortiz Egea, as modified above, teaches the method according to claim 12, wherein processing the signals comprises:
estimating a contribution of multipath interference to the signals using the spatial modulation pattern, and subtracting out the contribution in computing depth coordinates of points in the target scene ([Eq.8]; [col.5,line 48] The multipath mitigation module 128 may solve one or more of the above values for each pixel of the camera 120 using the sampled values of the image U-acquired at the camera 120 when no pattern is used to modulate the source light 110, and the image P-acquired at the camera 120 when one of the spatial patterns 132 or 134 is used to modulate the source light 110).
Regarding claim 14, Ortiz Egea, as modified above, teaches the method according to claim 13, wherein:
processing the signals comprises receiving, with respect to each of the points, first and second signals output by the array of sensing elements in response, respectively, to first and second phases of the spatial modulation pattern, and wherein estimating the contribution comprises computing first and second phasors based on a relation of the first and second signals, respectively, to the carrier wave ([col.4,line 21] the image acquired at the camera 120 when no pattern (or uniform pattern) is used to modulate the source light 110 can be represented by U as provided below in equation 3, while the image acquired at the camera 120 when one of the spatial patterns 132, 134 is used to modulate the source light 110 can be represented by P as provided below in equation 3; [Eq.3]; [col.4,line 44] While the phases of the direct component and the global components for each of the U and P are represented by, respectively, ϕdu, ϕgu, ϕdp, and ϕgp), and
computing a difference between the first and second phasors in order to subtract out the contribution of the multipath interference ([Eq.8]; [col.5,line 48] The multipath mitigation module 128 may solve one or more of the above values for each pixel of the camera 120 using the sampled values of the image U-acquired at the camera 120 when no pattern is used to modulate the source light 110, and the image P-acquired at the camera 120 when one of the spatial patterns 132 or 134 is used to modulate the source light 110).
Regarding claim 15, Ortiz Egea, as modified above, teaches the method according to claim 14, wherein:
receiving the first and second signals comprises deriving the first and second signals from different, respective first and second sensing elements in a vicinity of each of the points, wherein different, respective phases of the spatial modulation pattern on the target scene are imaged onto the first and second sensing elements ([col.2,line 51] An implementation of the camera 120 includes a plurality of camera pixels including a camera pixel 120a; [col.3,line 5] the light source 110 and the camera 120 may have same number of pixels represented by a matrix with a resolution of m×n such that m is number of rows and n is the number of columns. Each pixel of the camera 120 may be represented by (i, j), with i=1, 2,... , m and j=1, 2, ... n; [col.3,line 34] The signal Pij received by the camera pixel 120a is a complex number that can be written in phasor notation as the multiplication of the amplitude A and the complex exponential of the associated angle ϕ).
Regarding claim 16, Ortiz Egea, as modified above, teaches the method according to claim 14, wherein:
receiving the first and second signals comprises deriving the first and second signals from a respective sensing element in a vicinity of each of the points, due to different, first and second phases of the spatial modulation pattern on the target scene that are imaged onto the respective sensing element during respective first and second periods of operation ([col.2,line 51] An implementation of the camera 120 includes a plurality of camera pixels including a camera pixel 120a; [col.3,line 5] the light source 110 and the camera 120 may have same number of pixels represented by a matrix with a resolution of m×n such that m is number of rows and n is the number of columns. Each pixel of the camera 120 may be represented by (i, j), with i=1, 2,... , m and j=1, 2, ... n; [col.3,line 34] The signal Pij received by the camera pixel 120a is a complex number that can be written in phasor notation as the multiplication of the amplitude A and the complex exponential of the associated angle ϕ).
Regarding claim 17, Ortiz Egea, as modified above, teaches the method according to claim 12, wherein:
the spatial modulation pattern defines a binary amplitude variation such that during at least some periods of operation, first areas of the target scene are illuminated by the temporally-modulated beams, while second areas of the target scene, interleaved between the first areas, are not illuminated by the temporally-modulated beams ([col. 3, line 54] In one implementation, the processor 106 temporally modulates the source light generated by the light source 110 such that alternate source light images are modulated using one of the spatial patterns 132, 134 and alternate images are unmodulated).
Regarding claim 18, Ortiz Egea, as modified above, teaches the method according to claim 12, wherein:
the spatial modulation pattern defines a spatial variation of the carrier wave, such that first beams illuminating respective first areas of the target scene are modulated at a first carrier frequency, while second beams illuminating respective second areas of the target scene are modulated at a second carrier frequency, different from the first carrier frequency ([col.13,line 30] Alternatively, the light source illuminates a target using N temporal modulation frequencies with the first spatial pattern and N temporal modulation frequency with the second spatial pattern).
Regarding claim 19, Ortiz Egea, as modified above, teaches the method according to claim 12, wherein:
the spatial modulation pattern defines multiple parallel stripes extending across the target scene, including at least a first set of the stripes and a second set of the stripes interleaved in alternation with the first set, having different, respective first and second modulation characteristics ([col.13, line 21] In one implementation of the physical hardware system the first spatial pattern is a uniform spatial pattern. In an alternative implementation of the physical hardware system, second spatial pattern is a non-uniform spatial pattern… the non-uniform spatial pattern is at least one of a dot-pattern, a vertical-line pattern, and a horizontal line pattern.).
Regarding claim 20, Ortiz Egea, as modified above, teaches the method according to claim 12, wherein:
the spatial modulation pattern defines a grid including at least first and second interleaved sets of areas, having different, respective first and second modulation characteristics ([col.13,line 39] acquiring a first image represented by a first matrix in response to illuminating a target with a light source using a first spatial pattern, acquiring a second image represented by a second matrix in response to illuminating the target with the light source using a second spatial pattern, the second spatial pattern being different than the first spatial pattern).
Response to Arguments
Applicant's arguments filed August 11, 2025 have been fully considered, but they are not persuasive.
Regarding Applicant's argument that the prior art of Hoberg does not teach the illumination of different parts of a scene with different carrier frequencies, as recited in the limitations of claim 1, the Examiner notes that the prior art must be considered in its entirety. The prior art of Ortiz Egea teaches the limitation of the two arrays that are programmed to scan the various areas of a scene, which, when taken in combination with Hoberg's teaching of a second frequency twice the first, would have led one of ordinary skill in the art to the configuration recited in the claim limitations, as set forth in the 103 rejection above.
In response to the argument that Ortiz Egea and Fenton do not teach a sensor having a duty cycle that is not equal to 50%, as recited in the limitations of claim 1, it is noted that the paragraphs cited in the rejection state that the illuminators and sensors are operated synchronously and sequentially, such that a person of ordinary skill in the art would understand the sensors to operate under the same duty cycle as the illuminator. As such, the rejection over the prior art of record is maintained in the current final rejection.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ROBERT WILLIAM VASQUEZ JR whose telephone number is (571)272-3745. The examiner can normally be reached Monday through Thursday and Flex Friday, 8:00-5:00 PST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, ROBERT HODGE can be reached at (571)272-2097. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ROBERT W VASQUEZ/Examiner, Art Unit 3645
/ROBERT W HODGE/Supervisory Patent Examiner, Art Unit 3645