DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This communication is responsive to the correspondence filed on 2/3/26.
Claims 1-4, 9, 11, 13-17, 19, 22-23, 26, 34 and 36 are presented for examination.
IDS Considerations
The information disclosure statement (IDS) submitted on 9/6/24 is being considered by the examiner, as the submission is in compliance with the provisions of 37 CFR 1.97.
Election/Restriction
Applicant has elected Group I (claims 1-4, 9, 11, 13-17, 19, 22-23, 26, 34 and 36), without traverse, in response to the restriction requirement mailed on 8/4/25. Claims 27, 41, and 45, which are directed to the non-elected groups, are withdrawn from consideration.
CLAIM INTERPRETATION
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: the “night vision imaging module” limitation in claim 1.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
The examiner is invoking 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, for examining claim 1 because the claim recites the generic placeholder “module,” coupled with the functional language “wherein the night vision imaging module is configured to capture a first night vision image of a scene,” without reciting sufficient structure to achieve the recited function.
However, a review of the specification at paragraphs [0093] and [0096] and FIG. 1 shows corresponding structure.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-4, 9, 11, 13, 15-16, 22-23 and 26 are rejected under 35 U.S.C. 103 as being unpatentable over Van (U.S. Pub. No. 20200126378 A1), in view of Coleman (U.S. Pub. No. 20210112647 A1).
Regarding claims 1 and 22:
1. Van teaches an observation device comprising: (Van [0006] In an electronic video monitoring system for security and surveillance, a recording device can improve image quality, particularly in a night mode) a night vision imaging module; and image processing means; (Van [0007] In one aspect, the present invention can provide an enhancement to night video quality. A white (visible light) LED (Light Emitting Diode) or spotlight can be used in combination with IR/night vision to collect images using different light sources to blend both IR and color images to create an enhanced (visually improved) video image. By blending the images, a richer, more valuable experience in night mode can be created by enabling some of the blended image/video to contain color on closer objects.) wherein the night vision imaging module is configured to capture a first night vision image of a scene (Van [0007] In one aspect, the present invention can provide an enhancement to night video quality. A white (visible light) LED (Light Emitting Diode) or spotlight can be used in combination with IR/night vision to collect images using different light sources to blend both IR and color images to create an enhanced (visually improved) video image) and a second night vision image of substantially the same scene; (Van [0007] In one aspect, the present invention can provide an enhancement to night video quality. A white (visible light) LED (Light Emitting Diode) or spotlight can be used in combination with IR/night vision to collect images using different light sources to blend both IR and color images to create an enhanced (visually improved) video image.) wherein the image processing means is configured to generate a merged night-vision image from the first image and the second image; (Van [0007] In one aspect, the present invention can provide an enhancement to night video quality. A white (visible light) LED (Light Emitting Diode) or spotlight can be used in combination [merged] with IR/night vision to collect images using different light sources to blend both IR and color images to create an enhanced (visually improved) video image. By blending the images, a richer, more valuable experience in night mode can be created by enabling some of the blended image/video to contain color on closer objects.)
Van [0008] In many locations where security cameras are positioned, there may be some low level of ambient light. However, this level of ambient light may not be sufficient to enable the production of color video. By selectively activating white LED's, color features can be captured over a limited range. To minimize excessive current draw, the IR and white LED's can be selectively synchronized with frame captures, and a blending function similar to High Dynamic Range (HDR) blending can execute to blend the night vision (IR image) with the day mode image (color using light LED's).
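For illustration only (not part of the record): the HDR-style blend Van describes in [0008] and [0010], with IR frames supplying luminance and white-LED frames supplying chrominance, could be sketched as follows in Python. The function name and the OpenCV/NumPy interface are assumptions of this sketch, not taken from Van.

import cv2
import numpy as np

def blend_ir_color(ir_gray: np.ndarray, color_bgr: np.ndarray) -> np.ndarray:
    # ir_gray: 8-bit single-channel IR (night-vision) frame
    # color_bgr: 8-bit BGR frame captured under white-LED illumination
    ycrcb = cv2.cvtColor(color_bgr, cv2.COLOR_BGR2YCrCb)
    ycrcb[:, :, 0] = ir_gray  # IR frame supplies the luminance (Y) channel
    return cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)  # white LEDs supply chrominance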
Van does not explicitly teach wherein the first night-vision image is captured at a first illumination and the second night-vision image is captured at a second illumination such that the merged night-vision image exhibits a greater dynamic range than the first image or the second image alone.
However, Coleman teaches wherein the first night-vision image (Coleman [0044] light emitting system comprises a plurality of AVLEDs on drones such that the drones provide safe illumination and/or irradiation in a battlefield, blind enemy combatants by illuminating and/or irradiating them from one or more directions and optionally illuminating and/or irradiating them only such that in a night battle, the night vision of the group attacking the enemy combatants is substantially maintained) is captured at a first illumination and the second night-vision image is captured at a second illumination (Coleman [0109] a more accurate measurement of the reflective properties (such as spectral reflectance) of one or more surfaces in the environment may be obtained, particularly if the calibration is configured for measuring the reflective properties using two or more light sources emitting light from the AVLED with different spectral properties (such as red, green, and/or blue LEDs). In one embodiment, the spectral properties of one or more spectral filters for one or more sensors, or each photosensor in an array of photosensors (such as an imager for a camera) is evaluated at the factory such that the accuracy of the device is increased due to variations in color filter properties in manufacturing, for example. In one embodiment, an AVLED comprises a color CCD imager or color CMOS imager, and a color filter array with red, green, and blue color filters, wherein the AVLED (or system comprising the AVLED) estimates the color of the light from one or more angular bins, spatial zones, or surfaces from information derived from the color CCD imager or color CMOS imager. In one embodiment, the AVLED comprises an imager with a filter array positioned between the imager and the environment wherein the filter array comprises visible light filters (such as red, green, blue, or one or more tristimulus filters, for example) and one or more filters for non-visible light (such as bandpass filters that have an average transmittance above 80% for light with wavelengths between 800 nm and 1200 nm and an average transmittance less than 20% for light between 400 nm and 700 nm, for example).) such that the merged night-vision image (Coleman [0253] analyzing the images [first and second image] from the imager from the first AVLED and the imager from the second AVLED may deduce that the light from angular bin 55 from the first AVLED illuminates a wall opposite the desk such that the light reflected from the wall partially illuminates the first region and that light from angular bin 65 from the first AVLED illuminates the ceiling above the desk such that the light reflected from the ceiling illuminates the first region and the combined [merged image from combined illumination] indirect illumination from angular bin 55 and angular bin 65 on the first region may match the neighboring or target illuminance or luminance. Thus, in this case, light from the first AVLED could illuminate what would normally be a shadow region using indirect illumination The indirect illumination could also be supplemented by direct and/or indirect illumination from the second AVLED or other AVLED.) exhibits a greater dynamic range than the first image or the second image alone. (Coleman [0212] The AVLED could communicate with the device used with the imager and the AVLED could send more light to under-illuminated (and/or under-irradiated) areas.
In one embodiment, the portable device, such as a camera or smartphone comprises an AVLED illuminator and/or irradiator for images taken by the smartphone or camera. In this embodiment, the illumination and/or irradiation could be specifically tailored to provide a more uniform illumination and/or irradiation by sending less light into some angular bins (where there is more than sufficient illumination and/or irradiation) and more light into other angular bins where there may be shadows (or more illumination and/or irradiation needed). This can increase the visibility of the content for the image without requiring as much post-processing and can improve the dynamic range and/or detail contrast of the final image. The AVLED could also illuminate the background more than the foreground (such as in the case of some nighttime photos where this is needed).)
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify Van by further incorporating Coleman in video/camera technology. One would have been motivated to do so in order to capture the first night-vision image at a first illumination and the second night-vision image at a second illumination such that the merged night-vision image exhibits a greater dynamic range than the first image or the second image alone. This functionality would improve efficiency with predictable results.
Regarding claims 2-4:
4. Van teaches the observation device of claim 1. Van does not explicitly teach wherein the first night-vision image is captured at a low level of illumination and/or wide beam angle, and the second night-vision image is captured at a high level of illumination and/or narrow beam angle, optionally by controlling a single illuminator or switching between two different illuminators.
However, Coleman teaches wherein the first night-vision image is captured at a low level of illumination and/or wide beam angle, (Coleman [0178] In one embodiment, the AVLED emits different light from different colored light sources into a single angular bin (such as red, green, and blue light), the flux ratios and total flux output could be maximized to maintain the correct CIE 1976 Uniform Color Scale u′v′ color coordinates of the corresponding object, region, or spatial zone when illuminated with a white or other light source from one or more AVLEDs. For example, a white broadband light source from an AVLED could illuminate the object or region of interest directly (such as by only emitting light flux into the angular bin corresponding to the object (or by emitting 20, 30, or 40 times or more light flux than in adjacent angular bins) such that the reflectance spectrum and/or the color coordinate may be determined. In this example, any potential issues related to colored illumination that might occur with a substantially isotropic or wide angle sources (such as an LED bulb only providing wide angle diffuse lighting) illuminated a blue wall, for example such that the reference illumination is not standard or known is reduced or eliminated) and the second night-vision image is captured at a high level of illumination and/or narrow beam angle, optionally by controlling a single illuminator or switching between two different illuminators. (Coleman [0280] the user may point a laser or other light source with a narrow divergence (such as less than 10 degrees FWHM luminous intensity, for example) at and object/individual/animal, etc. and press or press and hold a button on the device or other remote or portable device to identify the object/individual/animal of interest to the processor for the AVLED with the imager imaging the object/individual or animal or a processor remote from the AVLED receiving information or images from the AVLED with the imager. [0289] The light properties of these light sources or light emitting devices may be measured at particular times and/or events, in on-state, in the off-state, or at varying light flux levels if dimmable (optionally light flux output controllable by the system comprising AVLED))
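As a minimal sketch (not from either reference) of how a single illuminator might be reconfigured between the two captures recited in claims 2-4; the camera.grab() and illuminator.apply() interfaces, and the specific power and beam-angle values, are hypothetical:

from dataclasses import dataclass

@dataclass
class IlluminatorSetting:
    power: float           # relative drive level, 0.0-1.0 (assumed scale)
    beam_angle_deg: float  # full beam divergence in degrees

def capture_pair(camera, illuminator):
    # First image: low illumination level and wide beam angle.
    illuminator.apply(IlluminatorSetting(power=0.2, beam_angle_deg=90.0))
    wide_frame = camera.grab()
    # Second image: high illumination level and narrow beam angle.
    illuminator.apply(IlluminatorSetting(power=0.9, beam_angle_deg=15.0))
    narrow_frame = camera.grab()
    return wide_frame, narrow_frame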
Regarding claim 9:
9. Van teaches the observation device of claim 1, wherein the image processing means is configured to generate the merged night vision image from the first image, the second image, and any additional images, using tone mapping and/or exposure fusion. (Van [0008] In many locations where security cameras are positioned, there may be some low level of ambient light. However, this level of ambient light may not be sufficient to enable the production of color video. By selectively activating white LED's, color features can be captured over a limited range. To minimize excessive current draw, the IR and white LED's can be selectively synchronized with frame captures, and a blending function similar to High Dynamic Range (HDR) blending can execute to blend the night vision (IR image) with the day mode image (color using light LED's). Moreover, modulation of the IR and white LED's could coincide with the frame capture rate to improve battery life in such cameras that are battery powered devices. [0010] Accordingly, by enhancing the control and timing of each different type of LED illumination (IR or white LED's, for example), images captured within each different type of illumination can be processed and blended to create an enhanced, more valuable video for our customers. The white LED's can provide a chrominance component of the video. The IR LED's can provide a luminance component of the video. The “blended” image could enable the ability to better identify potential intruders, such as skin tone [tone mapping], clothing and shoe color, and the like)
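One plausible reading of the "exposure fusion" recited in claim 9, sketched for illustration only with the Mertens fusion available in OpenCV; the frames are assumed to be same-size 8-bit images of substantially the same scene captured under different illuminations or exposures:

import cv2
import numpy as np

def fuse_exposures(frames):
    # frames: list of 8-bit BGR images of substantially the same scene
    fused = cv2.createMergeMertens().process(frames)  # float result, roughly [0, 1]
    return np.clip(fused * 255.0, 0, 255).astype(np.uint8)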
Regarding claim 11:
11. Van teaches the observation device of claim 1, wherein the first night-vision image of the scene is captured at a first exposure and the second night-vision image is captured at a second exposure, (Van [0008] In many locations where security cameras are positioned, there may be some low level of ambient light. However, this level of ambient light may not be sufficient to enable the production of color video. By selectively activating white LED's, color features can be captured over a limited range. To minimize excessive current draw, the IR and white LED's can be selectively synchronized with frame captures, and a blending function similar to High Dynamic Range (HDR) blending can execute to blend the night vision (IR image) with the day mode image (color using light LED's)) optionally the first and second images are captured at different shutter times and/or different aperture sizes. (This is an optional feature; a rejection of it is not required.)
Regarding claim 13:
13. Van teaches the observation device of claim 1. Van does not explicitly teach wherein merging the first image, the second image, and any additional images, includes identifying preferred or desired parameters or quantities of each image, generating a corresponding weight map for each image, and combining the first image, the second image, and any additional images using weighted blending based on the weight maps.
However, Coleman teaches wherein merging the first image, the second image, and any additional images, (Coleman [0253] analyzing the images [first and second image] from the imager from the first AVLED and the imager from the second AVLED may deduce that the light from angular bin 55 from the first AVLED illuminates a wall opposite the desk such that the light reflected from the wall partially illuminates the first region and that light from angular bin 65 from the first AVLED illuminates the ceiling above the desk such that the light reflected from the ceiling illuminates the first region and the combined [merged image from combined illumination] indirect illumination from angular bin 55 and angular bin 65 on the first region may match the neighboring or target illuminance or luminance. Thus, in this case, light from the first AVLED could illuminate what would normally be a shadow region using indirect illumination The indirect illumination could also be supplemented by direct and/or indirect illumination from the second AVLED or other AVLED.) includes identifying preferred or desired parameters or quantities of each image, (Coleman [0157] In one embodiment, the AVLED, system comprising one or more AVLEDs, portable device comprising an AVLED, or vehicle comprising an AVLED comprises one or more spatial zones corresponding to one or more areas of surfaces of an environment which can be illuminated and/or irradiated more or less (or not illuminated and/or irradiated at all if desired [preferred or desired parameters]) by one or more light sources from one or more angular bins from one or more AVLEDs.) generating a corresponding weight map for each image, (Coleman [0160] In one embodiment, one or more angular bins and/or corresponding spatial zones for an AVLED operate in a plurality of modes of illumination and/or irradiation wherein the mode priority and/or weighting factor may be set by the user (using a graphical interface on a portable device such as a cellular phone, for example), at the factory, remotely, or automatically determined by a processor on the illumination system comprising the AVLED. For example, in one embodiment a first set of angular bins of an AVLED is configured to operate in a high efficiency mode with a priority weighting of 80 and shadow reduction mode with a priority weighting of 20 and a second set of angular bins different from the first set of angular bins configured to operate in an illuminance specification mode of 500 lux with a priority weighting of 80 and a glare reduction mode with a priority weighting of 20. In one embodiment the priority weighting for each mode is on a relative scale, such as 1 to 100, where a weighting of the highest value, such as 100, means that the mode requirement or specification is optimized or met before other modes are considered, and scales less than the maximum value, such as less than 100, are relative weightings relative to the other modes (such as priority weighting of 90 will be prioritized or weighted twice as much as a mode with a priority weighting of 45).)
and combining the first image, the second image, and any additional images (Coleman [0253] analyzing the images [first and second image] from the imager from the first AVLED and the imager from the second AVLED may deduce that the light from angular bin 55 from the first AVLED illuminates a wall opposite the desk such that the light reflected from the wall partially illuminates the first region and that light from angular bin 65 from the first AVLED illuminates the ceiling above the desk such that the light reflected from the ceiling illuminates the first region and the combined [merged image from combined illumination] indirect illumination from angular bin 55 and angular bin 65 on the first region may match the neighboring or target illuminance or luminance. Thus, in this case, light from the first AVLED could illuminate what would normally be a shadow region using indirect illumination The indirect illumination could also be supplemented by direct and/or indirect illumination from the second AVLED or other AVLED.) using weighted blending (Coleman [0058] an AVLED comprises a spatial array light source and an AROE wherein one or more light sources of the spatial array light source (such as the outer light sources) are imaged onto one or more surfaces of the room or environment such that they are blurry and may blend to one or more neighboring pixels to avoid spatial non-uniform light properties between spatial zones (such as dark or low luminance lines, rings, or grids between the images of the light source in the far field). In one embodiment, the modulation transfer function of the AROE for the frequencies of outer, neighboring spatial zones corresponding to neighboring outer angular bins is less than one selected from the group of 0.7, 0.6, 0.5, 0.4, 0.3, and 0.2. In one embodiment, the modulation transfer function of the AROE for the frequencies of outer, neighboring spatial zones corresponding to neighboring outer angular bins is greater than one selected from the group of 0.5, 0.6, 0.7, 0.8, and 0.9. In this embodiment, for example a higher MTF enables more accurate/defined illumination and/or irradiation.) based on the weight maps. (Coleman [0243] color or spectral properties of the light analyzed by one or more sensors or imagers on the one or more AVLEDS and/or remote devices (such as only analyzing pixels on a full color imaging sensor on an AVLED beneath a green color filter, averaging the red green and blue pixels of a full color imaging sensor, chromatically weighting the red, green, and/or blue filter pixel intensities for closest approximation of luminance values for white light with a specific color temperature (such as 2700K, 3000K, 3500K, 4000K, 4100K, 5000K, or 6500K, for example) or other colors, using a monochrome imaging sensor, or choosing the bit depth of the image sensor on one or more AVLEDs))
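A minimal sketch (illustration only, not Coleman's algorithm) of the claim 13 language: per-image weight maps followed by normalized weighted blending. The particular weight measure used here (local contrast times well-exposedness) is an assumption of this sketch:

import cv2
import numpy as np

def weighted_blend(images):
    grays = [cv2.cvtColor(im, cv2.COLOR_BGR2GRAY).astype(np.float32) / 255.0
             for im in images]
    # Weight map per image: local contrast (|Laplacian|) times
    # well-exposedness (a Gaussian centered on mid-gray).
    weights = [np.abs(cv2.Laplacian(g, cv2.CV_32F))
               * np.exp(-((g - 0.5) ** 2) / 0.08) + 1e-6 for g in grays]
    total = sum(weights)
    # Combine the images with per-pixel weights normalized to sum to one.
    blended = sum(w[..., None] * im.astype(np.float32)
                  for w, im in zip(weights, images)) / total[..., None]
    return np.clip(blended, 0, 255).astype(np.uint8)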
Regarding claim 15:
15. Van teaches the observation device of claim 1. Van does not explicitly teach wherein the image processing means is configured to generate the merged night vision image by selecting regions from the first image, the second image, and any additional images which contain greatest image detail, and combining the selected regions from the first image, the second image, and any additional images into a single image.
However, Coleman teaches wherein the image processing means is configured to generate the merged night vision image (Coleman [0253] analyzing the images [first and second image] from the imager from the first AVLED and the imager from the second AVLED may deduce that the light from angular bin 55 from the first AVLED illuminates a wall opposite the desk such that the light reflected from the wall partially illuminates the first region and that light from angular bin 65 from the first AVLED illuminates the ceiling above the desk such that the light reflected from the ceiling illuminates the first region and the combined [merged image from combined illumination] indirect illumination from angular bin 55 and angular bin 65 on the first region may match the neighboring or target illuminance or luminance. Thus, in this case, light from the first AVLED could illuminate what would normally be a shadow region using indirect illumination The indirect illumination could also be supplemented by direct and/or indirect illumination from the second AVLED or other AVLED.) by selecting regions from the first image, the second image, and any additional images which contain greatest image detail, and combining the selected regions from the first image, the second image, and any additional images (Coleman [0052] Similarly, in this example, the AROE may direct the light from the remaining 1020 micro-LEDs into 255 angular bins in a configuration with equal number of LEDs per angular bin. In another embodiment, the number of light sources per angular bin of an AVLED changes across the AROE. For example, in one embodiment, the central angular region of light output from an AVLED comprises more than one selected from the group: 2, 4, 6, 10, and 15 light sources per angular bin and the wider angular region comprises less than one selected from the group: 2, 4, 6, 10, and 15 light sources per angular bin. In one embodiment, the central angular region is the angular region within one selected from the group: 1, 2, 4, 10, 15, 20, 25, 30, 40, 50, and 60 degrees of the optical axis of the AVLED (such as the direction of nadir in a downlight light fixture or the angularly central angle of light output when all of the light source and the AVLED are emitting light at their largest intensity and flux in all angular bins). In one embodiment, the wider angular region is the angular region within one selected from the group: 1, 2, 4, 10, 15, 20, 25, 30, 40, 50, and 60 degrees of the largest angle of light emitting from the AVLED in an angular bin. In another embodiment, the wider angular region is the angular region greater than one selected from the group: 40, 50, 60, 65, 70, 75, 80 and 85 degrees from the optical axis of the AVLED. [0053] In one embodiment, the AVLED comprises a spatial array of light sources and the AROE redirects the optical axis of the light emitting pixels or regions in the central region of the array to angular bins within the central angular region and the optical axis of the light emitting pixels or regions outside the central region of the array to the wider angular region. In one embodiment, the central region of the array of the light emitting pixels or regions is the area of a circle centered at the geometric center of the spatial array of light source with an area less than one selected from the group of 50%, 40%, 30%, 20%, and 10% of the total area defined by the outer boundaries of light emitting region of the spatial array light source.
In one embodiment, the light emitting pixels or regions are considered within the central region if all or a portion of the light emitting pixel or region is within the central region boundary) into a single image. (Coleman [0253] analyzing the images [first and second image] from the imager from the first AVLED and the imager from the second AVLED may deduce that the light from angular bin 55 from the first AVLED illuminates a wall opposite the desk such that the light reflected from the wall partially illuminates the first region and that light from angular bin 65 from the first AVLED illuminates the ceiling above the desk such that the light reflected from the ceiling illuminates the first region and the combined [merged image from combined illumination] indirect illumination from angular bin 55 and angular bin 65 on the first region may match the neighboring or target illuminance or luminance. Thus, in this case, light from the first AVLED could illuminate what would normally be a shadow region using indirect illumination The indirect illumination could also be supplemented by direct and/or indirect illumination from the second AVLED or other AVLED.)
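A minimal sketch (illustration only, not Coleman's method) of the claim 15 language: tile each image, score each tile by a detail measure (Laplacian variance is assumed here), and assemble the highest-detail tiles into a single image:

import cv2
import numpy as np

def merge_by_detail(images, tile=64):
    # images: same-size 8-bit BGR frames of substantially the same scene
    h, w = images[0].shape[:2]
    out = np.zeros_like(images[0])
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            patches = [im[y:y + tile, x:x + tile] for im in images]
            # Laplacian variance as a proxy for local image detail.
            scores = [cv2.Laplacian(cv2.cvtColor(p, cv2.COLOR_BGR2GRAY),
                                    cv2.CV_64F).var() for p in patches]
            out[y:y + tile, x:x + tile] = patches[int(np.argmax(scores))]
    return out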
Regarding claim 16:
16. Van teaches the observation device of claim 1. Van does not explicitly teach wherein the illumination, and optionally the exposure, is controlled responsive to a brightness of the merged night vision image, and optionally the exposure reduced or increased if the image brightness exceeds or falls below a threshold.
However, Coleman teaches wherein the illumination, and optionally the exposure, is controlled responsive to a brightness of the merged night vision image, and optionally the exposure reduced or increased if the image brightness exceeds or falls below a threshold. (Coleman [0146] one or more AVLEDs automatically determines which images or light property information sources (such as light sensors or images from a particular AVLED) to use for one or more spatial zones based on rules that include one or more selected from the group: location from the one or more AVLEDs to the spatial zone, evaluation of noise of the light property (or pixel noise) of the spatial zone evaluated from the one or more AVLEDs, imagers, or light sensors (such as by evaluating the noise present from reflected light at the spatial zone for a particular illumination/irradiation by a particular AVLED), threshold light property value (such as minimum measured or estimated illuminance or luminance for the spatial zone, for example), threshold exposure time, threshold intensity [brightness] value for the pixel, or threshold value for one or more other light properties for the spatial zone. In one embodiment, the reduction in information from the reduction in spatial zones analyzed reduces the information transferred to one or more central processors. For example, in the example above, an illumination system comprising AVLEDs 1, 2, 3, 4 in the hallway and a processor remote from the AVLEDs, the remote processor may direct AVLED #1 to not process information in the image taken from its camera corresponding to the spatial zone under AVLED #4, and thus the information sent by AVLED #1 (or further processed by AVLED #1) is reduced and if the information is sent to the remote processer, the reduced information facilitates faster transmission/reduced network bandwidth. In one embodiment, the information from one or more AVLEDs and/or imagers or sensors to be used for analysis may be determined individually for the light output from each (or a selection) of angular bins from each or a selection of AVLEDs. In one embodiment, the determination is done initially, at random or regular intervals, or when one or more light property minimums or maximum thresholds has been met. Coleman [0253] analyzing the images [first and second image] from the imager from the first AVLED and the imager from the second AVLED may deduce that the light from angular bin 55 from the first AVLED illuminates a wall opposite the desk such that the light reflected from the wall partially illuminates the first region and that light from angular bin 65 from the first AVLED illuminates the ceiling above the desk such that the light reflected from the ceiling illuminates the first region and the combined [merged image from combined illumination] indirect illumination from angular bin 55 and angular bin 65 on the first region may match the neighboring or target illuminance or luminance. Thus, in this case, light from the first AVLED could illuminate what would normally be a shadow region using indirect illumination The indirect illumination could also be supplemented by direct and/or indirect illumination from the second AVLED or other AVLED.)
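A minimal sketch (illustration only) of the feedback idea in claim 16: raise or lower illumination and, optionally, exposure when the mean brightness of the merged frame crosses thresholds. The threshold values and scaling factors are assumptions of this sketch:

def adjust_for_brightness(merged, illum, exposure, low=60.0, high=190.0):
    # merged: 8-bit merged night-vision frame; illum/exposure: current settings
    brightness = float(merged.mean())
    if brightness > high:    # image too bright: reduce illumination/exposure
        illum *= 0.8
        exposure *= 0.8
    elif brightness < low:   # image too dark: increase illumination/exposure
        illum *= 1.25
        exposure *= 1.25
    return illum, exposure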
Regarding claim 23:
23. Van teaches the method of claim 22. Van does not explicitly teach wherein the first night vision image is captured at a first illumination level, first beam angle and/or first exposure and the second night vision image is captured at a second illumination level, second beam angle and/or second exposure.
However, Coleman teaches wherein the first night vision image is captured at a first illumination level, first beam angle and/or first exposure and the second night vision image is captured at a second illumination level, second beam angle and/or second exposure. (Coleman [0253] analyzing the images [first and second image] from the imager from the first AVLED and the imager from the second AVLED may deduce that the light from angular bin 55 from the first AVLED illuminates a wall opposite the desk such that the light reflected from the wall partially illuminates the first region and that light from angular bin 65 from the first AVLED illuminates the ceiling above the desk such that the light reflected from the ceiling illuminates the first region and the combined [merged image from combined illumination] indirect illumination from angular bin 55 and angular bin 65 on the first region may match the neighboring or target illuminance or luminance. Thus, in this case, light from the first AVLED could illuminate what would normally be a shadow region using indirect illumination The indirect illumination could also be supplemented by direct and/or indirect illumination from the second AVLED or other AVLED.)
Regarding claim 26:
26. Van teaches the method of claim 14. Van does not explicitly teach comprising controlling the illumination, beam angle and/or the exposure responsive to a brightness of the merged night vision image.
However, Coleman teaches comprising controlling the illumination, beam angle and/or the exposure responsive to a brightness of the merged night vision image. (Coleman [0146] one or more AVLEDs automatically determines which images or light property information sources (such as light sensors or images from a particular AVLED) to use for one or more spatial zones based on rules that include one or more selected from the group: location from the one or more AVLEDs to the spatial zone, evaluation of noise of the light property (or pixel noise) of the spatial zone evaluated from the one or more AVLEDs, imagers, or light sensors (such as by evaluating the noise present from reflected light at the spatial zone for a particular illumination/irradiation by a particular AVLED), threshold light property value (such as minimum measured or estimated illuminance or luminance for the spatial zone, for example), threshold exposure time, threshold intensity [brightness] value for the pixel, or threshold value for one or more other light properties for the spatial zone. In one embodiment, the reduction in information from the reduction in spatial zones analyzed reduces the information transferred to one or more central processors. For example, in the example above, an illumination system comprising AVLEDs 1, 2, 3, 4 in the hallway and a processor remote from the AVLEDs, the remote processor may direct AVLED #1 to not process information in the image taken from its camera corresponding to the spatial zone under AVLED #4, and thus the information sent by AVLED #1 (or further processed by AVLED #1) is reduced and if the information is sent to the remote processer, the reduced information facilitates faster transmission/reduced network bandwidth. In one embodiment, the information from one or more AVLEDs and/or imagers or sensors to be used for analysis may be determined individually for the light output from each (or a selection) of angular bins from each or a selection of AVLEDs. In one embodiment, the determination is done initially, at random or regular intervals, or when one or more light property minimums or maximum thresholds has been met. Coleman [0253] analyzing the images [first and second image] from the imager from the first AVLED and the imager from the second AVLED may deduce that the light from angular bin 55 from the first AVLED illuminates a wall opposite the desk such that the light reflected from the wall partially illuminates the first region and that light from angular bin 65 from the first AVLED illuminates the ceiling above the desk such that the light reflected from the ceiling illuminates the first region and the combined [merged image from combined illumination] indirect illumination from angular bin 55 and angular bin 65 on the first region may match the neighboring or target illuminance or luminance. Thus, in this case, light from the first AVLED could illuminate what would normally be a shadow region using indirect illumination The indirect illumination could also be supplemented by direct and/or indirect illumination from the second AVLED or other AVLED.)
Claims 17 and 36 are rejected under 35 U.S.C. 103 as being unpatentable over Van (U.S. Pub. No. 20200126378 A1), in view of Coleman (U.S. Pub. No. 20210112647 A1), further in view of Messinger (U.S. Pub. No. 20140207875 A1).
Regarding claim 17:
17. Van teaches the observation device of claim 1. Van does not explicitly teach wherein the observation device is configured to determine the distance to a target or object of interest, optionally based on the focussing distance of the night vision imaging module, using a rangefinder, and wherein the observation device is configured to control the illumination, and optionally the exposure, responsive to the distance.
However, Messinger teaches wherein the observation device is configured to determine the distance to a target or object of interest, optionally based on the focussing distance of the night vision imaging module, using a rangefinder, and wherein the observation device is configured to control the illumination, and optionally the exposure, responsive to the distance. (Messinger [0052] FIG. 4 Lights 138 may be similarly controlled, for example, to active or deactivate, and to increase or decrease a level of illumination (e.g., lux) to a desired value. Sensors 140, such as a laser rangefinder, may also be mounted onto the PTZ camera 16, suitable for measuring distance to certain objects. Optional features are not required by the claim language, but are nonetheless obvious.)
The motivation for combining Van and Coleman as set forth in claim 1 is equally applicable to claim 17. It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify Van by further incorporating Coleman and Messinger in video/camera technology. One would have been motivated to do so in order to configure the observation device to determine the distance to a target or object of interest, optionally based on the focussing distance of the night vision imaging module, using a rangefinder, and to control the illumination, and optionally the exposure, responsive to the distance. This functionality would improve the user experience with predictable results.
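A minimal sketch (illustration only, not Messinger's implementation) of controlling illuminator output responsive to a rangefinder distance, under an assumed inverse-square falloff so that the irradiance at the target stays roughly constant:

def illumination_for_distance(distance_m, target_irradiance, gain=1.0):
    # Required relative source intensity for a desired irradiance at the
    # measured distance, assuming inverse-square falloff (irradiance ~ I / d^2).
    return gain * target_irradiance * distance_m ** 2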
Regarding claim 36:
36. Van teaches the observation device of claim 16, wherein the image processing means is configured to analyse a histogram and determine an illumination to increase or maximise the number of intensity values within the histogram (this limitation is part of an OR condition, and no rejection of it is required).
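One plausible reading (illustration only) of the histogram analysis recited in claim 36: among candidate illumination levels, choose the one whose captured frame occupies the most distinct intensity bins. The capture callable is a hypothetical interface:

import numpy as np

def best_illumination(capture, levels):
    # capture(level) is an assumed callable returning an 8-bit grayscale frame
    def occupied_bins(img):
        hist, _ = np.histogram(img, bins=256, range=(0, 256))
        return int((hist > 0).sum())
    return max(levels, key=lambda lv: occupied_bins(capture(lv)))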
Van does not explicitly teach and/or to determine the distance to the object of interest and to control the illumination responsive to the distance.
However, Messinger teaches and/or to determine the distance to the object of interest and to control the illumination responsive to the distance. (Messinger [0052] FIG. 4 Lights 138 may be similarly controlled, for example, to active or deactivate, and to increase or decrease a level of illumination (e.g., lux) to a desired value. Sensors 140, such as a laser rangefinder, may also be mounted onto the PTZ camera 16, suitable for measuring distance to certain objects. Optional features are not required by the claim language, but are nonetheless obvious.)
Allowable Subject Matter
Regarding claims 14, 19 and 34:
Claims 14, 19 and 34 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims, because the limitations of these dependent claims are not obvious over the prior art of record when all of the limitations of the independent and intervening claims are taken into account.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to NASIM N NIRJHAR whose telephone number is (571) 272-3792. The examiner can normally be reached on Monday - Friday, 8 am to 5 pm ET.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, William F Kraig can be reached on (571) 272-8660. The fax phone number for the organization where this application or proceeding is assigned is (571) 273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/NASIM N NIRJHAR/Primary Examiner, Art Unit 2896