Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Drawings
The drawings are objected to as failing to comply with 37 CFR 1.84(p)(5) because they include the following reference character(s) not mentioned in the description:
Figure 4B, Cathode 438 is not included in the description.
Figure 10, element 1002 is not included in the description.
Figure 12, element 1200 is not included in the description.
Figure 13, element 13 is not included in the description.
Corrected drawing sheets in compliance with 37 CFR 1.121(d), or amendment to the specification to add the reference character(s) in the description in compliance with 37 CFR 1.121(b) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.
The drawings are objected to because:
In Figure 13, element 606a should likely be changed to 1306a.
In Figure 14, element 1402f should likely read “Autonomous Vehicle Computer” instead of “Autonomous Vehicle Compute”.
In Figure 16, element 1600 should likely read “Autonomous Vehicle Computer” instead of “Autonomous Vehicle Compute”.
Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. The figure or figure number of an amended drawing should not be labeled as “amended.” If a drawing figure is to be canceled, the appropriate figure must be removed from the replacement sheet, and where necessary, the remaining figures must be renumbered and appropriate changes made to the brief description of the several views of the drawings for consistency. Additional replacement sheets may be necessary to show the renumbering of the remaining figures. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.
The drawings are objected to as failing to comply with 37 CFR 1.84(p)(4) because reference character “202” has been used to designate both the “Emission System” of Fig. 2A and the “Flash Lidar System” of Fig. 2B. Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.
Specification
The disclosure is objected to because of the following informalities:
In paragraph [0062] lines 10-11, "first 226 a, second 226 b, and third 226 c objects respectively." should likely read "first 226 a, second 226 b, and third 226 c detectors respectively."
In paragraph [0122], lines 5-6, “readout circuitry 610” should likely read “readout circuitry 632”.
In paragraph [0127], lines 1, 13, and 16, “Fig. 7A” should likely read “Fig. 7”.
The term “autonomous vehicle compute” should likely read “autonomous vehicle computer” in paragraphs [0178], [0183], [0185], [0187]-[0193], [0211], and [0217].
The term “reduce signal to noise ratio” should likely read “increase signal to noise ratio” in paragraph [0033] line 11, paragraph [0045] line 2, and paragraph [0292] line 3.
The term “reduce signal to noise ratio and/or reliability” should likely read “increase signal to noise ratio and/or reliability” in paragraph [0134] lines 9-10, paragraph [0135] lines 6-7, and paragraph [0137] line 11.
Appropriate correction is required.
The lengthy specification has not been checked to the extent necessary to determine the presence of all possible minor errors. Applicant’s cooperation is requested in correcting any errors of which applicant may become aware in the specification.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 7 and 19 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Where applicant acts as his or her own lexicographer to specifically define a term of a claim contrary to its ordinary meaning, the written description must clearly redefine the claim term and set forth the uncommon definition so as to put one reasonably skilled in the art on notice that the applicant intended to so redefine that claim term. Process Control Corp. v. HydReclaim Corp., 190 F.3d 1350, 1357, 52 USPQ2d 1029, 1033 (Fed. Cir. 1999). The term “reduce a signal-to-noise ratio” in claims 7 and 19 is used by the claims to mean “reduce the level of noise relative to the desired signal,” while the accepted meaning is “increase the level of noise relative to the desired signal.” The term is indefinite because the specification does not clearly redefine it.
For examination purposes, the claim language will be interpreted consistently with the other limitations of the same claim as describing a reduction of the noise relative to the signal, as would be expected with an increased signal-to-noise ratio.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1, 3-10, and 12-20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Pacala (United States Patent Application Publication 2022/0291387 A1), hereinafter Pacala.
Regarding claim 1, Pacala teaches a range finding system, comprising:
an optical system configured to receive light from an environment through a field of view of the range finding system;
a sensor configured to receive the light from the optical system and generate a plurality of sensor signals in response to the light, the sensor comprising a plurality of pixels, wherein a pixel of the plurality of pixels generates a sensor signal of the plurality of sensor signals;
a readout system configured to:
generate a plurality of return signals based on the plurality of sensor signals received from the sensor, wherein a return signal of the plurality of return signals is generated using the sensor signal, wherein the return signal is configured to indicate a reflection of an optical probe signal, wherein the optical probe signal is generated by a first light source of the range finding system,
generate a plurality of background signals based at least in part on the plurality of sensor signals received from the sensor, wherein a background signal of the plurality of background signals is generated based at least in part on the sensor signal, wherein the background signal is configured to indicate a magnitude of light generated by a second light source different from the first light source, and
generate a feedback signal based at least in part on the background signal,
a detection control system configured to use the feedback signal to dynamically adjust at least one of the optical system, sensor, or readout system.
Regarding claim 3, Pacala teaches the range finding system of claim 1, wherein the second light source comprises a light emitting system different from the range finding system, or sun light ([0157] The light ranging device can include a transmission circuit (e.g., 240 of FIG. 2), which can comprise a plurality of light sources that emit light pulses, and a detection circuit (e.g., 230 of FIG. 2), which can comprise an array of photosensors that detect reflected light pulses and output signals measured over time.).
Regarding claim 4, Pacala teaches the range finding system of claim 1, wherein the detection control system is configured to control the optical system by:
identifying one or more noisy pixels of the sensor that are associated with background signals larger than a threshold value to identify a portion of the field of view from which light is directed to the one or more noisy pixels ([0060] The optical receiver system can compare the detected light intensity against a detection threshold to identify the laser pulse reflection 320. The detection threshold can distinguish the background light 330 from light corresponding to the laser pulse reflection 320.); and
changing the field of view based on the identified portion of the field of view to reduce a level of background light received by at least a portion of the one or more noisy pixels ([0176] As a result of applying the filter, a strength of the signal to noise in a signal can be increased such that a depth value can be kept.).
Regarding claim 5, Pacala teaches the range finding system of claim 1, wherein the optical system comprises at least one reconfigurable spatial optical filter ([0175] A filter kernel can determine kernel weights or “sameness” for each pixel with respect to a center pixel) and wherein the detection control system is configured to control the optical system by:
identifying a noisy pixel of the sensor that is associated with a background signal having magnitudes larger than a threshold value ([0060] The optical receiver system detects background light 330 initially and after some time detects the laser pulse reflection 320. The optical receiver system can compare the detected light intensity against a detection threshold to identify the laser pulse reflection 320. The detection threshold can distinguish the background light 330 from light corresponding to the laser pulse reflection 320.);
identifying a direction along which at least a portion of light directed to the noisy pixel is received from the environment ([0133] In a scanning sensor system, uniformity of pixel spacing in the scanning direction can be achieved by controlling the shutter intervals relative to the rotation angle of the sensor system (e.g., as described below) and by limiting the intrapixel pointing error, or by having independent shutter control on each pixel to eliminate intrapixel error completely. In the non-scanning direction, it is desirable that the object-space pixels along a column are uniformly spaced and that columns in object space map to columns in image space.);
adjusting the reconfigurable spatial optical filter to reduce an amount of light received from the environment along the identified direction ([0176] For instance, if there are two pixels next to each other in the XY plane and both pixels have a peak at a similar distance (e.g., within a distance threshold or an accumulated value above a threshold), the peak can be identified as a real peak. In this manner, peak detection can use a variable (adjusted) threshold based on the signals at neighboring pixels. The threshold(s) and/or the underlying data can be changed (adjusted). The signal processor can make such an identification. The adjusting of a peak value or a detection threshold can be based on the aggregated information of the subset.).
Regarding claim 6, Pacala teaches the range finding system of claim 1, wherein the detection control system is configured to control the readout system by adjusting an event validation threshold level ([0176] The threshold(s) and/or the underlying data can be changed (adjusted). The signal processor can make such an identification. The adjusting of a peak value or a detection threshold can be based on the aggregated information of the subset.).
Regarding claim 7, Pacala teaches the range finding system of claim 1, wherein the detection control system is configured to reduce a signal-to-noise ratio of at least a portion of the plurality of sensor signals and/or the plurality of return signals ([0091] As described in more detail below, matched filters can be used to identify a pulse pattern, thereby effectively increasing the signal-to-noise ratio) by:
identifying pixels that generate background signals having magnitudes larger than a threshold level using the feedback signal ([0060] The optical receiver system detects background light 330 initially and after some time detects the laser pulse reflection 320. The optical receiver system can compare the detected light intensity against a detection threshold to identify the laser pulse reflection 320. The detection threshold can distinguish the background light 330 from light corresponding to the laser pulse reflection 320.); and
adjusting the readout system to reduce contribution of sensor signals generated by the identified pixels, in generation of the at least a portion of the plurality of sensor signals and/or the plurality of return signals ([0174] The filter kernels can be swept over a lidar frame. As examples, the application of filter kernels can provide range smoothing on neighboring range pixels and/or a time series of range values for a current lidar pixel, edge smoothing, or reduction in noise (e.g., using statistics).).
Regarding claim 8, Pacala teaches the range finding system of claim 7, wherein the detection control system is configured to turn off the identified pixels or change a bias voltage of the identified pixels ([0075] A suitable circuit senses the leading edge of the avalanche current, generates a standard output pulse synchronous with the avalanche build-up, quenches the avalanche by lowering the bias down below the breakdown voltage, and restore the photodiode to the operative level.).
Regarding claim 9, Pacala teaches a method implemented by at least one processor of a range finding system, the method comprising:
obtaining, by the at least one processor, a plurality of sensor signals from a sensor, wherein the plurality of sensor signals are responsive to light received at an optical system from an environment, wherein the sensor comprises a plurality of pixels, and wherein a pixel of the plurality of pixels generates a sensor signal of the plurality of sensor signals ([0041] The scanning LIDAR system 101 shown in FIG. 1A can employ a scanning architecture, where the orientation of the LIDAR light source 107 and/or detector circuitry 109 can be scanned around one or more fields of view 110; [0061] Pulses from the laser device reflect from objects in the scene at different times and the pixel array detects the pulses of radiation reflection.);
generating, by the at least one processor, a plurality of return signals based on the plurality of sensor signals using a readout system, wherein a return signal of the plurality of return signals is generated using the sensor signal, wherein the return signal is configured to indicate a reflection of an optical probe signal, wherein the optical probe signal is generated by a first light source of the range finding system ([0061] Pulses from the laser device reflect from objects in the scene at different times and the pixel array detects the pulses of radiation reflection.; [0064] Light ranging system 400 includes a light emitter array 402 and a light sensor array 404. The light emitter array 402 includes an array of light emitters, e.g., an array of VCSELs and the like, such as emitter 403 and emitter 409. Light sensor array 404 includes an array of photosensors, e.g., sensors 413 and 415. The photosensors can be pixelated light sensors that employ, for each pixel, a set of discrete photodetectors such as single photon avalanche diodes (SPADs) and the like. However, various embodiments can deploy any type of photon sensors.),
generating, by the at least one processor, a plurality of background signals based at least in part on the plurality of sensor signals using the readout system, wherein a background signal of the plurality of background signals is generated based at least in part on the sensor signal, and wherein the background signal is configured to indicate a magnitude of light generated by a second light source different from the first light source ([0060] The optical receiver system detects background light 330 initially and after some time detects the laser pulse reflection 320. The optical receiver system can compare the detected light intensity against a detection threshold to identify the laser pulse reflection 320. The detection threshold can distinguish the background light 330 from light corresponding to the laser pulse reflection 320.; [0146] Peak detector 1320 can also measure a signal value and a noise value, effectively providing some signal to noise measurement for a lidar pixel. A signal value can correspond to the number of photon counts at a peak in the histogram, and a noise value can correspond to a background level in time bins outside of a peak region.), and
generating, by the at least one processor, a feedback signal based at least in part on the background signal using the readout system ([0081] The time bins can be measured relative to a start signal, e.g., at start time 315 of FIG. 3. Thus, the counters for time bins right after the start signal may have low values corresponding to a background signal, e.g., background light 330.),
dynamically adjusting, by the at least one processor, at least one of the optical system, sensor, or the readout system, based on the feedback signal ([0175] A filtered value can be determined for a pixel from the weighted sum of the kernel applied to a center pixel, and the accumulated (aggregated) value (e.g., range, signal, noise, or color) can be used to determine whether the value is kept (e.g., sufficient confidence above a threshold), and passed to the user or a next stage in the pipeline.).
Regarding claim 10, Pacala teaches the method of claim 9, wherein the sensor comprises at least one reference pixel or reference subpixel and the background signal is generated at least partly by the at least one reference pixel or reference subpixel ([0175] A filter kernel can determine kernel weights or “sameness” for each pixel with respect to a center pixel, so as to provide a filtered value for the center pixel.).
Regarding claim 12, Pacala teaches the method of claim 9, wherein generating a plurality of background signals comprises generating, by the at least one processor, at least a portion of the plurality of background signals at least partially based on a portion of the light received from the optical system having wavelengths different from a wavelength of the optical probe signal ([0146] In various embodiments, the amount of light at the operating wavelength (e.g., of the emitted light source) can be used to estimate noise, or other wavelengths can be used. Thus, a lidar pixel can include a range (depth) value, a signal value, and a noise value.; [0183] As described above, lidar image processor 1410 and color image processor 1430 can transmit information to each other, including values for lidar and color images, so that a combined processing can be performed. For example, in some implementations, the color values in any of the color images (e.g., initial, filter, or processed) can be used to estimate noise, which can then be used in determining a depth value, an accuracy of a depth value, and/or whether or not to report a depth value in a final lidar image, e.g., as provided to a user. For example, when the level of ambient light is low, just measuring the noise in the wavelength of the light source might lead to inaccuracies, particularly when the background light is no uniform over time.).
Regarding claim 13, Pacala teaches the method of claim 9, wherein the second light source comprises a light emitting system different from the range finding system, or sun light ([0157] The light ranging device can include a transmission circuit (e.g., 240 of FIG. 2), which can comprise a plurality of light sources that emit light pulses, and a detection circuit (e.g., 230 of FIG. 2), which can comprise an array of photosensors that detect reflected light pulses and output signals measured over time.).
Regarding claim 14, Pacala teaches the method of claim 9, wherein dynamically adjusting the optical system comprises, by the at least one processor:
identifying one or more noisy pixels of the sensor that are associated with background signals larger than a threshold value to identify a portion of a field of view from which light is directed to the one or more noisy pixels ([0060] The optical receiver system can compare the detected light intensity against a detection threshold to identify the laser pulse reflection 320. The detection threshold can distinguish the background light 330 from light corresponding to the laser pulse reflection 320.); and
changing the field of view based on the identified portion of the field of view to reduce a level of background light received by at least a portion of the one or more noisy pixels ([0176] As a result of applying the filter, a strength of the signal to noise in a signal can be increased such that a depth value can be kept.).
Regarding claim 15, Pacala teaches the method of claim 9, wherein dynamically adjusting the readout system comprises, by the at least one processor, adjusting an event validation threshold level ([0176] The threshold(s) and/or the underlying data can be changed (adjusted). The signal processor can make such an identification. The adjusting of a peak value or a detection threshold can be based on the aggregated information of the subset.).
Regarding claim 16, Pacala teaches at least one non-transitory storage media storing machine-executable instructions ([0296] The software code may be stored as a series of instructions or commands on a computer readable medium for storage and/or transmission.) that, when executed by at least one processor, cause the at least one processor to:
obtain a plurality of sensor signals from a sensor, wherein the plurality of sensor signals are responsive to light received at an optical system from an environment, wherein the sensor comprises a plurality of pixels, and wherein a pixel of the plurality of pixels generates a sensor signal of the plurality of sensor signals ([0041] The scanning LIDAR system 101 shown in FIG. 1A can employ a scanning architecture, where the orientation of the LIDAR light source 107 and/or detector circuitry 109 can be scanned around one or more fields of view 110; [0061] Pulses from the laser device reflect from objects in the scene at different times and the pixel array detects the pulses of radiation reflection.);
generate a plurality of return signals based on the plurality of sensor signals using a readout system, wherein a return signal of the plurality of return signals is generated using the sensor signal, wherein the return signal is configured to indicate a reflection of an optical probe signal, wherein the optical probe signal is generated by a first light source of a range finding system ([0061] Pulses from the laser device reflect from objects in the scene at different times and the pixel array detects the pulses of radiation reflection.; [0064] Light ranging system 400 includes a light emitter array 402 and a light sensor array 404. The light emitter array 402 includes an array of light emitters, e.g., an array of VCSELs and the like, such as emitter 403 and emitter 409. Light sensor array 404 includes an array of photosensors, e.g., sensors 413 and 415. The photosensors can be pixelated light sensors that employ, for each pixel, a set of discrete photodetectors such as single photon avalanche diodes (SPADs) and the like. However, various embodiments can deploy any type of photon sensors.),
generate a plurality of background signals based at least in part on the plurality of sensor signals using the readout system, wherein a background signal of the plurality of background signals is generated based at least in part on the sensor signal, and wherein the background signal is configured to indicate a magnitude of light generated by a second light source different from the first light source ([0060] The optical receiver system detects background light 330 initially and after some time detects the laser pulse reflection 320. The optical receiver system can compare the detected light intensity against a detection threshold to identify the laser pulse reflection 320. The detection threshold can distinguish the background light 330 from light corresponding to the laser pulse reflection 320.; [0146] Peak detector 1320 can also measure a signal value and a noise value, effectively providing some signal to noise measurement for a lidar pixel. A signal value can correspond to the number of photon counts at a peak in the histogram, and a noise value can correspond to a background level in time bins outside of a peak region.), and
generate a feedback signal based at least in part on the background signal using the readout system ([0081] The time bins can be measured relative to a start signal, e.g., at start time 315 of FIG. 3. Thus, the counters for time bins right after the start signal may have low values corresponding to a background signal, e.g., background light 330.),
dynamically adjust at least one of the optical system, the sensor, or the readout system, based on the feedback signal ([0175] A filtered value can be determined for a pixel from the weighted sum of the kernel applied to a center pixel, and the accumulated (aggregated) value (e.g., range, signal, noise, or color) can be used to determine whether the value is kept (e.g., sufficient confidence above a threshold), and passed to the user or a next stage in the pipeline.).
Regarding claim 17, Pacala teaches the at least one non-transitory storage media of claim 16, wherein the machine-executable instructions cause the at least one processor to generate the plurality of background signals by generating at least a portion of the plurality of background signals at least partially based on a portion of the light received from the optical system having wavelengths different from a wavelength of the optical probe signal ([0146] In various embodiments, the amount of light at the operating wavelength (e.g., of the emitted light source) can be used to estimate noise, or other wavelengths can be used. Thus, a lidar pixel can include a range (depth) value, a signal value, and a noise value.; [0183] As described above, lidar image processor 1410 and color image processor 1430 can transmit information to each other, including values for lidar and color images, so that a combined processing can be performed. For example, in some implementations, the color values in any of the color images (e.g., initial, filter, or processed) can be used to estimate noise, which can then be used in determining a depth value, an accuracy of a depth value, and/or whether or not to report a depth value in a final lidar image, e.g., as provided to a user. For example, when the level of ambient light is low, just measuring the noise in the wavelength of the light source might lead to inaccuracies, particularly when the background light is no uniform over time.).
Regarding claim 18, Pacala teaches the at least one non-transitory storage media of claim 16, wherein the second light source comprises a light emitting system different from the range finding system, or sun light ([0157] The light ranging device can include a transmission circuit (e.g., 240 of FIG. 2), which can comprise a plurality of light sources that emit light pulses, and a detection circuit (e.g., 230 of FIG. 2), which can comprise an array of photosensors that detect reflected light pulses and output signals measured over time.).
Regarding claim 19, Pacala teaches the at least one non-transitory storage media of claim 16, wherein the machine-executable instructions cause the at least one processor to dynamically adjust at least one of the optical system, sensor, or readout system, to reduce a signal-to-noise ratio of at least a portion of the plurality of sensor signals and/or the plurality of return signals ([0176] The threshold(s) and/or the underlying data can be changed (adjusted). The signal processor can make such an identification. The adjusting of a peak value or a detection threshold can be based on the aggregated information of the subset.) by:
identifying pixels that generate background signals having magnitudes larger than a threshold level using the feedback signal ([0060] The optical receiver system can compare the detected light intensity against a detection threshold to identify the laser pulse reflection 320. The detection threshold can distinguish the background light 330 from light corresponding to the laser pulse reflection 320.); and
adjusting the readout system to reduce contribution of sensor signals generated by the identified pixels, in generation of the at least a portion of the plurality of sensor signals and/or the plurality of return signals ([0176] As a result of applying the filter, a strength of the signal to noise in a signal can be increased such that a depth value can be kept.).
Regarding claim 20, Pacala teaches the at least one non-transitory storage media of claim 19, wherein the machine-executable instructions cause the at least one processor to dynamically adjust the optical system by turning off the identified pixels or changing a bias voltage of the identified pixels ([0075] A suitable circuit senses the leading edge of the avalanche current, generates a standard output pulse synchronous with the avalanche build-up, quenches the avalanche by lowering the bias down below the breakdown voltage, and restore the photodiode to the operative level.).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 2 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Pacala in view of Meyers et al. (United States Patent Application Publication 20140340570 A1), hereinafter Meyers.
Regarding claim 2, Pacala teaches the range finding system of claim 1, wherein the sensor comprises at least one reference pixel or reference subpixel and the background signal is generated at least partly by the at least one reference pixel or reference subpixel ([0175] A filter kernel can determine kernel weights or “sameness” for each pixel with respect to a center pixel, so as to provide a filtered value for the center pixel.),
Pacala fails to teach the system wherein the at least one reference pixel or reference subpixel comprises an optical filter having a passband broader than an operating wavelength range of the range finding system.
However, Meyers teaches the system wherein the at least one reference pixel or reference subpixel comprises an optical filter having a passband broader than an operating wavelength range of the range finding system ([0229] Modern advanced infrared cameras may provide per-pixel co-located measurements of infrared wavelengths in, for instance, the mid-wave infrared (MWIR) and long-wave infrared (LWIR) wavelength bands. One band may be used to provide reference pixel values and pixels in the other band can be allocated or summed to provide bucket values. A further, local G.sup.(2) type calculation may take place wherein deviations from the ensemble mean of one wavelength band can be multiplied with deviations from the ensemble mean of the other wavelength band.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Pacala to include the broader-passband reference pixel as taught by Meyers, with a reasonable expectation of success. This would have the predictable result of ensuring that the reference pixel captures more spectral context within the image, thereby reducing the amount of noise present in the scan.
Regarding claim 11, Pacala teaches the method of claim 10,
Pacala fails to teach the method wherein the at least one reference pixel or reference subpixel comprises an optical filter having a passband broader than an operating wavelength range of the range finding system.
However, Meyers teaches the method wherein the at least one reference pixel or reference subpixel comprises an optical filter having a passband broader than an operating wavelength range of the range finding system ([0229] Modern advanced infrared cameras may provide per-pixel co-located measurements of infrared wavelengths in, for instance, the mid-wave infrared (MWIR) and long-wave infrared (LWIR) wavelength bands. One band may be used to provide reference pixel values and pixels in the other band can be allocated or summed to provide bucket values. A further, local G.sup.(2) type calculation may take place wherein deviations from the ensemble mean of one wavelength band can be multiplied with deviations from the ensemble mean of the other wavelength band.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Pacala to include the broader-passband reference pixel as taught by Meyers, with a reasonable expectation of success. This would have the predictable result of ensuring that the reference pixel captures more spectral context within the image, thereby reducing the amount of noise present in the scan.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ROBERT WILLIAM VASQUEZ JR whose telephone number is (571)272-3745. The examiner can normally be reached Monday through Thursday and on Flex Fridays, 8:00-5:00 PST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, ROBERT HODGE can be reached at (571)272-2097. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ROBERT W VASQUEZ/Examiner, Art Unit 3645
/ROBERT W HODGE/Supervisory Patent Examiner, Art Unit 3645