DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1, 3, 11, and 13 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Chenderovitch1.
Regarding claim 1, Chenderovitch teaches a digital imaging device (note the device is addressed through addressing the limitations in the body of the claim below) comprising:
an image intensifier tube (IIT) configured to receive a scene image corresponding to an observed scene and to produce an enhanced image based upon the scene image (see Chenderovitch, paragraph 0025 teaching “a night vision device 12 includes an image intensifier tube 12A” and as in paragraph 0030 and figure 8, “images pass from the image intensifier tube 12A of the night vision device 12…and into the MTAR-HUD 10” where “In particular, the enhanced image from the night vision device 12 passes through a lens 30 in the MTAR-HUD 10 and into a camera 24” such that here a scene image corresponding to the observed scene goes through the tube and produces an “enhanced image” based upon the images that pass through the tube);
a digital image sensor configured to receive the enhanced image and to generate digital image data corresponding to the enhanced image (note that a “digital image sensor” is considered to be any image sensor or sensing system functioning as a sensor that senses image data and provides a digital image of the sensor data through any provided digitization means; see Chenderovitch, paragraphs 0030-0032 and figure 8, teaching “images pass from the image intensifier tube 12A of the night vision device 12…and into the MTAR-HUD 10” where “In particular, the enhanced image from the night vision device 12 passes through a lens 30 in the MTAR-HUD 10 and into a camera 24” where “camera 24 captures the image and transmits the image to a multiplexer 32” and “multiplexer 32 can transmit the information from the camera 24 and/or the video in connection 26 to a video decoder 34” where “video decoder 34 is an electronic circuit, and can be a single integrated circuit chip, that converts base-band analog video signals to digital video” where here the camera functioning with its provided video decoder to create the digital video is a digital image sensor that is configured to receive the enhanced image data and is configured to generate digital image data corresponding to the enhanced image data in connection with the “video decoder” that “converts base-band analog video signals to digital video”);
a digital display configured to receive the digital image data and to generate a displayed image corresponding to the digital image data (see Chenderovitch, paragraph 0031 and figure 8 teaching as above the digital image data and teaching “video decoder 34 is an electronic circuit, and can be a single integrated circuit chip, that converts base-band analog video signals to digital video” and “video decoder 34 then sends the information to a On screen Display subsystem (OSD) 36” where this “digital video” is digital image data which is sent to a digital display which receives it and displays it as in paragraph 0037 teaching “OSD 36 can combine the information into the desired transmission. For example, the OSD 36 can process the data from the video encoder 46 and combine this information with the information from the ECM 44” and “OSD 36 provides a data output that overlies the information from the IMU 42 and the location determination device 40 onto the video” and “OSD 36 transmits this combined information through three separate outputs to: 1) the video encoder 46; 2) the video display 18; and 3) a digital video reorder (DVR) 48” such that here “video display 18” is a digital display that displays the digital image data from the camera and decoder 34 that provides the digital video of the enhanced image data of the scene); and
pass-through electronics configured to receive the digital image data from the digital image sensor and to provide the digital image data to the digital display without performing data conversion processing on the digital image data (here note that it is important to understand the negative limitation that the “pass-through electronics” are “configured…to provide the digital image data to the digital display without performing data conversion” as “performing data conversion” is extremely broad and data conversion could be seen as including any change or addition or modification to the digital image data as these would convert the data based on such changes, additions, modifications- however, this “conversion” will be interpreted in light of the Specification at paragraph 0060 explaining that “without performing data conversion processing on the digital image data” means that “the digital image data 1250 that is generated by the sensor 1200 is configured as display data suitable for use as input to the display 1300” and as in paragraphs 0088-0090 it is further explained that “digital image data 1250 generated by the second electronics 1240 is formatted for consumption by the display 1320 without conversion” but that also “the pass-through electronics 1500 includes a first add function 1520” and “second add function 1530 of the pass-through electronics 1500 receives augmentation data 1605 from the external electronics 1600, as previously described, and combines the augmentation data 1605 with the digital image data 1250” and specifically it is explained that it is “notable that the augmented image data remains formatted for consumption, without conversion, by the electrical interface of the display 1300 with the addition information (e.g., augmentation data 1605 or calibration data 1635) to generate augmented digital image data 1255. 
Advantageously, the pass-through electronics 1500 receive, from the sensor 1200, digital image data 1250 that is configured to be used as input data by the display 1300 such that the pass-through electronics 1500 need only perform non-conversion, i.e. non-reformatting, operations on the digital image data 1250” such that here then “data conversion” may correspond to formatting or reformatting operations to make a digital signal further compatible for display such that if the pass-through electronics passes the digital data to the display and does so without performing a formatting or reformatting of the digital image data to make it compatible with the digital display this would mean it is provided without performing at least that type of data conversion and if other operations are performed which may correspond to other modifications/changes or even other types of conversion then the claim language is still satisfied as at least with regard to reformatting conversions the digital image data is provided without that data conversion- note that this comports with the meaning in the Specification as noted above and also informs the meaning of dependent claims 3 and 5 where the pass-through electronics may “add” information but can still be seen as passing the digital image to the display without performing data conversion processing as the adding is done without any reformatting operations disclosed with regard to the adding function; see Chenderovitch, paragraphs 0031 and figure 8 teaching as above pass-through electronics in the form of the “OSD 36” where “video decoder 34 is an electronic circuit, and can be a single integrated circuit chip, that converts base-band analog video signals to digital video” and “video decoder 34 then sends the information to a On screen Display subsystem (OSD) 36” such that the image data is not subject to any conversion but is passed through to “video display 18” as in paragraph 0037 teaching “OSD 36 can combine the information into 
the desired transmission. For example, the OSD 36 can process the data from the video encoder 46 and combine this information with the information from the ECM 44” and “OSD 36 provides a data output that overlies the information from the IMU 42 and the location determination device 40 onto the video” and “OSD 36 transmits this combined information through three separate outputs to…video display 18” and as can be seen in figure 8, the already decoded digital image video data from the camera is passed through without any conversion to “video display 18”; note paragraph 0048 teaches that the data from the ECM that is combined with the digital image data by the pass-through OSD 36 “can be displayed or hidden through the device's menu to show an operator or user only information needed a particular situation. Thus, the user can customize the display of information through a user input 56, reducing operator distraction and improving focus on objectives. As seen in FIGS. 2 and 6, the user can operate buttons 56 a-56 d to scroll through a menu or list of options to specifically enable certain parameters or information to be displayed” such that in the case where the overlying display elements are hidden, the “digital video” digital image data is passed through to the video display 18 without any further data conversion or even combination processing, where combination processing is not, in any event, considered conversion processing on the digital image data);
wherein the digital image sensor is configured to output the digital image data utilizing a same electrical interface as the digital display is configured to input (note that here “the image sensor is configured to output the digital image data utilizing a same electrical interface as the digital display is configured to input” is understood in line with the explanation from the Specification at paragraph 0060 explaining “In other words, the digital image data 1250 that is generated by the sensor 1200 is configured as display data suitable for use as input to the display 1300”; see Chenderovitch, paragraphs 0030-0032 and figure 8 where the digital sensor is configured to output the digital image data in connection with the “video decoder 34” to output the “digital video” such that as decoded video it is ready to be passed through the OSD 36 to “video display 18” as further explained in paragraph 0037 teaching “OSD 36 can combine the information into the desired transmission. For example, the OSD 36 can process the data from the video encoder 46 and combine this information with the information from the ECM 44” (note that Chenderovitch mistakenly refers to “video encoder 46” in this passage instead of “video decoder 34” as can be seen with reference to figure 8, where the “video encoder 46” does not pass information to the OSD 36 but rather obtains information from the OSD 36) and “OSD 36 provides a data output that overlies the information from the IMU 42 and the location determination device 40 onto the video” and “OSD 36 transmits this combined information through three separate outputs to: 1) the video encoder 46; 2) the video display 18; and 3) a digital video reorder (DVR) 48” such that here “video display 18” is configured to input the digital image data utilizing the display data that is already suitable for use as input to the digital video display 18 which is the output digital data that the digital image sensor is configured to output in connection with the video decoder 34).
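For illustration only, the pass-through arrangement as interpreted above (sensor output already formatted for the display's electrical interface, with the pass-through stage permitted to add information but not to reformat it) can be sketched as follows; the sketch and all of its names (Frame, Display, pass_through, the "RGB888" format) are hypothetical and appear in neither Chenderovitch nor the instant Specification:

```python
# Illustrative sketch only: a pass-through stage that may ADD overlay
# information to the digital image data but never changes its format.
from dataclasses import dataclass

@dataclass(frozen=True)
class Frame:
    pixels: tuple  # row-major pixel values
    fmt: str       # data/electrical format, e.g. "RGB888" (hypothetical)

class Display:
    def __init__(self, fmt: str):
        self.fmt = fmt
        self.shown = None
    def show(self, frame: Frame):
        # The display consumes the frame only if it already matches its
        # input format, i.e. no data conversion was needed upstream.
        assert frame.fmt == self.fmt, "would require data conversion"
        self.shown = frame

def pass_through(frame: Frame, overlay=None) -> Frame:
    # Forwards the digital image data without reformatting; an optional
    # overlay (e.g. AR information) is combined additively, format preserved.
    if overlay is None:
        return frame
    merged = tuple(p + o for p, o in zip(frame.pixels, overlay))
    return Frame(pixels=merged, fmt=frame.fmt)  # same format as the sensor

# Sensor emits data in the same format the display is configured to input.
sensor_frame = Frame(pixels=(10, 20, 30, 40), fmt="RGB888")
display = Display(fmt="RGB888")
display.show(pass_through(sensor_frame, overlay=(1, 0, 0, 1)))
```

The point of the sketch is only that adding overlay values leaves `fmt` untouched, consistent with the claim interpretation above that "data conversion" corresponds to formatting or reformatting operations.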
Regarding claim 3, Chenderovitch teaches all that is required as applied to claim 1 above and further teaches wherein the pass-through electronics are configured to add information to the digital image data, wherein the information comprises one or more of color information, brightness information, calibration information, and augmented reality (AR) overlay information (see Chenderovitch, paragraphs 0032-0037 and figure 8 teaching the pass-through electronics “OSD 36” “can combine the information into the desired transmission” and “can process the data from the video…and combine this information with the information from the ECM 44” and “provides a data output that overlies the information from the IMU 42 and the location determination device 40 onto the video”, thus adding information to the digital image data comprising augmented reality overlay information).
Regarding claim 11, the instant claim recites a “digital image device” which contains the same effective limitations as claim 1, but does not specifically recite the “pass-through electronics” or the “electrical interface” limitations, instead more broadly reciting wherein the digital image sensor is configured to output the digital image data formatted for consumption by the digital display without conversion. Here the pass-through electronics, combined with the “electrical interface” limitation of claim 1, function as the manner in which the digital image sensor is “configured to output the digital image data formatted for consumption by the digital display without conversion.” Thus claim 11 is simply a broader genus of the more specific limitations of claim 1, and because those more specific limitations have been addressed with respect to claim 1, all of the limitations of claim 11 are likewise addressed. Claim 11 is therefore rejected on the same grounds as claim 1.
Regarding claim 13, Chenderovitch teaches all that is required as applied to claim 1 above and further teaches wherein the IIT tube comprises the digital image sensor, the digital imaging device further comprising pass-through electronics disposed between the IIT tube and the digital image sensor for communicating the digital image data from the digital image sensor to the digital display (see Chenderovitch, paragraphs 0030-0031 and figure 8 as explained above where the IIT tube may be considered to comprise the digital image sensor of the camera and video decoder when they are connected and in that case the pass-through electronics could be considered to correspond to the connection between the camera and the video decoder which passes through the analog signal to the decoding electronics in order to communicate the digital image data from the digital image sensor to the digital video display 18).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 2, 9, and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Chenderovitch in view of Smith2.
Regarding claim 2, Chenderovitch teaches all that is required as applied to claim 1 above but fails to specifically teach a first control clock, wherein: a functional resolution of the digital image sensor is equal to a functional resolution of the digital display; and the digital image sensor and the digital display both operate using the first control clock. Rather, Chenderovitch is silent as to any control clock used by both the sensor and the digital display, and as to the sensor and the display having equal functional resolutions.
In the same field of endeavor relating to digital imaging utilizing enhanced images from IIT devices, Smith teaches that it is known to provide an IIT and a digital image sensor configured to receive the enhanced image and to generate digital image data corresponding to the enhanced image and a digital display configured to receive the digital image data and to generate a displayed image corresponding to the digital image data and a first control clock, wherein a functional resolution of the digital image sensor is equal to a functional resolution of the digital display; and the digital image sensor and the digital display both operate using the first control clock (see Smith, paragraphs 0039-0043 teaching “Night vision system 52…combines an addressable display within analog image intensifier 58” and “electrical signals from digital imager 54 are sent across an electrical bus 55 onto at least one and preferably two electron multipliers 61 a and 61 b” which are digitally addressable displays where “electrons emitted from each emitter can be electrically addressed by control circuitry corresponding to that emitter” and another identical “image intensifier 58b” is provided and “At the backside of the image intensifier 58 b is a digital sensor 56 b mounted on or separate from the backside surface of image intensifier tube 58 b. The digital sensor 56 b comprises a plurality of active or passive pixel sensor devices arranged in an array operating as optical pixels with CMOS circuity to convert the photons emitted from image intensifier 58 b to electrical signals, similar to the pixel array 40 shown in FIG. 3” where “Digital sensor 56 b can be a CMOS imager used as an active pixel sensor device or passive pixel device. Digital sensor 56 b can be a CMOS imager chip or die with integrated amplifiers as an active pixel sensor device that incorporates both the photodiode and a read out amplifier” and “the improved night vision system 52 of FIG. 
6 has a digital channel dimension or width DCW that matches the analog channel dimension or width ACW” where “the viewer will see the digitally derived image overlaid across the entire field of view of the analog derived image” and paragraphs 0029-0031 provide the teachings referenced for the digital imager which are further explained where “the pixel array 40 can be controlled by a timing and control circuit 42, and the signals can be processed by processors 44, which may comprise analog-to-digital converters arranged on each column as the signals are read out by a column select unit 46. The electrical signals corresponding to each pixel output can be then placed on a bus 25” such that here a first control clock is disclosed such as the “timing and control circuit” used for “readout” where the digital image sensor and digital image display both operate according to the control clock as the digital image display operates by receiving the digital image data from the digital image sensor meaning that this is how the digital imager 56b works to image and read out the digitized enhanced image data from the IIT; and further see paragraph 0049 and paragraphs 0059-0060 teaching that the functional resolution of the digital image sensor is equal to a functional resolution of the digital display where as explained above as in paragraph 0042 the digital image sensor 56b converts the enhanced image data to a digital form using “a plurality of active or passive pixel sensor devices arranged in an array operating as optical pixels with CMOS circuity to convert the photons emitted from image intensifier 58 b to electrical signals, similar to the pixel array 40 shown in FIG. 3” where this array corresponds to the addressable array of output emitters responsible for displaying the digitized image as “digital imager 54, and specifically, the CMOS digital sensor 56 a sends the electrical signals corresponding to the optical readings on the pixel array to the backside surfaces 64 a and 64 b of corresponding primary and secondary electron multipliers 61 a and 61 b” and “electrical signals are sent to addressable electron Spindt emitters on the backside surfaces 64 a and 64 b” and “resolution of the screen is important for both the image intensified channel and the incorporated screen” and “electronically addressable screen will have about the same resolution as the image intensifier. Due to each intensifier pixel having the capability including an addressable field emission electron emitter array, the pixel count will also be the same” such that here the functional resolution of the digital image sensor is designed to match the functional resolution of the digital display). Thus Smith teaches known techniques applicable to the base system of Chenderovitch.
Therefore it would have been obvious for one of ordinary skill in the art before the effective filing date of the invention to modify Chenderovitch by applying the known teachings of Smith as doing so would be no more than application of a known technique to a base system ready for improvement, which would yield predictable results and result in an improved system. Here the predictable result of the combination would be that the digital display and digital image sensor in Chenderovitch would be adapted to have the same functional resolution as in Smith and would operate using a control clock that would cause the digital image sensor to output a digital image to display the generated digital display data such that prior to passthrough the data is ready to be sent for display to the digital display, just as it is in Smith. This would result in an improved system as it would eliminate any need for scaling or other like operations to bring the sensor data and display data to the same resolution, allowing more efficient processing that does not have to perform such operations.
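For illustration only, the rationale above (equal functional resolutions and a shared control clock eliminating any scaling or like operations) can be sketched as follows; the sketch and its names (run_shared_clock, RESOLUTION) are hypothetical and drawn from neither reference:

```python
# Illustrative sketch only: when sensor and display have identical functional
# resolutions and are driven by one control clock, each readout tick maps a
# sensor pixel directly to the corresponding display pixel with no scaling
# or resampling stage in between.
RESOLUTION = (4, 3)  # identical for sensor and display (hypothetical values)

def run_shared_clock(sensor_pixels):
    w, h = RESOLUTION
    assert len(sensor_pixels) == w * h  # resolutions match: no scaling needed
    display_pixels = [None] * (w * h)
    for tick in range(w * h):        # one shared clock drives both devices
        value = sensor_pixels[tick]  # sensor readout on this tick
        display_pixels[tick] = value # display latches on the same tick
    return display_pixels

out = run_shared_clock(list(range(12)))
```

The one-to-one tick mapping is the "predictable result" discussed above: with matched resolutions there is simply no conversion step for the pass-through to perform.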
Regarding claim 9, Chenderovitch teaches all that is required as applied to claim 1 above and further teaches the digital image sensor comprises an array of digital pixels, wherein a digital pixel of the array of digital pixels is configured to perform image sensing and analog-to-digital conversion of analog image data to generate the digital image data (see Chenderovitch, as explained above in reference to paragraphs 0030-0032 and figure 8, teaching “images pass from the image intensifier tube 12A of the night vision device 12…and into the MTAR-HUD 10” where “In particular, the enhanced image from the night vision device 12 passes through a lens 30 in the MTAR-HUD 10 and into a camera 24” where “camera 24 captures the image and transmits the image to a multiplexer 32” and “multiplexer 32 can transmit the information from the camera 24 and/or the video in connection 26 to a video decoder 34” where “video decoder 34 is an electronic circuit, and can be a single integrated circuit chip, that converts base-band analog video signals to digital video” and paragraph 0046, where here the camera functioning with its provided video decoder to create the digital video is a digital image sensor that is configured to receive the enhanced image data and is configured to generate digital image data corresponding to the enhanced image data in connection with the “video decoder” that “converts base-band analog video signals to digital video”);
the digital image sensor comprises a sensor array comprising a plurality of sensor pixels (see Chenderovitch, paragraphs 0030-0031 teaching the digital image sensor comprising the camera 24 passes image data to the decoder 34 which generates video signals for video display such that in some manner the sensor comprises a sensor array of pixels as whatever captures the image must convert the image to an image signal that can be digitized into respective pixels), the sensor array comprising a plurality of rows of sensor pixels;
the digital display comprises a display array comprising a plurality of display pixels (see Chenderovitch, paragraph 0037 teaching “the video display” which displays the digital video signal such that it must be displayed on some array of corresponding display pixels), the display array comprising a plurality of rows of display pixels;
the digital image sensor is configured to read out a first row of the plurality of rows of sensor pixels and to bin image data corresponding to the first row in to a first data packet; and the digital display is configured to receive the first data packet and to fill a first row of display pixels with data comprising the first data packet, the first row of display pixels corresponding to the first row of sensor pixels (see Chenderovitch, paragraphs 0030-0031 and figure 8 where in order for the video display to display the image it must receive data read out from the digital image sensor system).
Chenderovitch teaches all of the above but fails to detail the digital image sensor and digital display pixel formats with respect to rows and reading of rows according to some sort of basic row based binning and packet sending of row data from an image sensor to the digital display. Thus Chenderovitch stands as a base device upon which the claimed invention can be seen as an improvement by providing a readout and filling operation that could lead to more efficient operations such as through pipelining and more efficient data access and processing.
In the same field of endeavor relating to capturing image data and providing it for digital display, Smith teaches that it is known to provide a digital image sensor wherein the digital image sensor comprises an array of digital pixels, wherein a digital pixel of the array of digital pixels is configured to perform image sensing and analog-to-digital conversion of analog image data to generate the digital image data (see Smith, paragraphs 0039-0043 teaching “Night vision system 52…combines an addressable display within analog image intensifier 58” and “electrical signals from digital imager 54 are sent across an electrical bus 55 onto at least one and preferably two electron multipliers 61 a and 61 b” which are digitally addressable displays where “electrons emitted from each emitter can be electrically addressed by control circuitry corresponding to that emitter” and another identical “image intensifier 58b” is provided and “At the backside of the image intensifier 58 b is a digital sensor 56 b mounted on or separate from the backside surface of image intensifier tube 58 b. The digital sensor 56 b comprises a plurality of active or passive pixel sensor devices arranged in an array operating as optical pixels with CMOS circuity to convert the photons emitted from image intensifier 58 b to electrical signals, similar to the pixel array 40 shown in FIG. 3” where “Digital sensor 56 b can be a CMOS imager used as an active pixel sensor device or passive pixel device. Digital sensor 56 b can be a CMOS imager chip or die with integrated amplifiers as an active pixel sensor device that incorporates both the photodiode and a read out amplifier” and “the improved night vision system 52 of FIG. 
6 has a digital channel dimension or width DCW that matches the analog channel dimension or width ACW” where “the viewer will see the digitally derived image overlaid across the entire field of view of the analog derived image” and as in paragraph 0042 the digital image sensor 56b converts the enhanced image data to a digital form using “a plurality of active or passive pixel sensor devices arranged in an array operating as optical pixels with CMOS circuity to convert the photons emitted from image intensifier 58 b to electrical signals, similar to the pixel array 40 shown in FIG. 3” where this array corresponds to the addressable array of output emitters responsible for displaying the digitized image and as in paragraphs 0030-0031 “Each pixel produces an electrical output signal in response to incident light or photons. The electrical signals are oftentimes read out, typically one row at a time, to form an image” and “the signals can be processed by processors 44, which may comprise analog-to-digital converters arranged on each column as the signals are read out by a column select unit 46. The electrical signals corresponding to each pixel output can be then placed on a bus 25” such that here the digital sensor 56b operates in this manner and provides a digital pixel array where these digital pixels are configured to be processed by “analog to digital converters” operating on each pixel); the digital image sensor is configured to read out a first row of the plurality of rows of sensor pixels and to bin image data corresponding to the first row in to a first data packet (see Smith, paragraphs 0030-0031 as explained in reference to the above teachings and explanation teaching “Each pixel produces an electrical output signal in response to incident light or photons.
The electrical signals are oftentimes read out, typically one row at a time, to form an image” such that here the sensor is configured to read out a first row such as a first row of “one row at a time” of the multiple rows and the image data is functionally binned as it is read out by “one row at a time” bins and as they are read out in such bins this means the binned image data corresponds to the first row in a first data packet comprising the signals sent to the next stage for display given that the sending of the display data on the bus corresponds to display of the corresponding pixel on the corresponding digital array for display as in paragraphs 0039-0043 as explained above); and the digital display is configured to receive the first data packet and to fill a first row of display pixels with data comprising the first data packet, the first row of display pixels corresponding to the first row of sensor pixels (see Smith, paragraphs 0030-0031 as explained in reference to the above teachings and explanation teaching “Each pixel produces an electrical output signal in response to incident light or photons. The electrical signals are oftentimes read out, typically one row at a time, to form an image” such that here the sensor is configured to read out a first row such as a first row of “one row at a time” of the multiple rows and the image data is functionally binned as it is read out by “one row at a time” bins and as they are read out in such bins this means the binned image data corresponds to the first row in a first data packet comprising the signals sent to the next stage for display given that the sending of the display data on the bus corresponds to display of the corresponding pixel on the corresponding digital array for display as in paragraphs 0039-0043 as explained above). Thus Smith provides known techniques applicable to the base system of Chenderovitch.
Therefore it would have been obvious for one of ordinary skill in the art before the effective filing date of the invention to modify Chenderovitch with the teachings of Smith to arrive at the claimed invention as doing so would be no more than application of a known technique to a base system ready for improvement, which would yield predictable results and result in an improved system. The predictable result of the combination would be that the digital image sensor and camera sensors of Chenderovitch would take the form of those in Smith to provide a digital camera sensor and technique for reading out such sensor data to a digital display such as the video display in Chenderovitch. Thus the modified Chenderovitch digital image sensor would output video according to the same principles to avoid conversion in a pass-through stage while reading out and writing/filling data from the sensor to the output of the digital image sensor as in Smith. This would result in an improved system as the modified Chenderovitch system would then be capable of utilizing different arrangements for reading and processing image data according to techniques for improving reading/writing of digital data in arrays, such as the row-based readout suggested in Smith, which further makes the system compatible with other arrangements such as pipelining and parallel operation.
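For illustration only, the row-based readout mapped to claim 9 above (each sensor row binned into one data packet, with the display filling the corresponding row of display pixels from that packet) can be sketched as follows; the sketch and its names (read_row_packet, fill_row) are hypothetical and drawn from neither reference:

```python
# Illustrative sketch only: row-at-a-time readout in which each sensor row is
# binned into a single data packet and the display fills the corresponding
# row of display pixels with the data comprising that packet.
def read_row_packet(sensor, row):
    # Bin one row of sensor pixels into a single packet tagged with its row.
    return {"row": row, "data": list(sensor[row])}

def fill_row(display, packet):
    # Fill the display row corresponding to the sensor row that was read out.
    display[packet["row"]] = list(packet["data"])

sensor = [[1, 2, 3], [4, 5, 6]]   # 2 rows x 3 sensor pixels (hypothetical)
display = [[0, 0, 0], [0, 0, 0]]  # matching display array
for row in range(len(sensor)):
    fill_row(display, read_row_packet(sensor, row))
```

Row N of the display is filled only from the packet binned out of row N of the sensor, mirroring the claimed one-to-one row correspondence.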
Regarding claim 12, the limitations of the instant claim correspond to the limitations of claim 9 as addressed above; therefore claim 12 is rejected on the same grounds as claim 9.
Claim(s) 6 and 7 is/are rejected under 35 U.S.C. 103 as being unpatentable over Chenderovitch in view of Dobbie et al3 (“Dobbie”).
Regarding claim 6, Chenderovitch teaches all that is required as applied to claim 1 above but fails to specifically teach that the IIT comprises a photocathode, a microchannel plate (MCP), a power supply for providing power to the photocathode and to the MCP, a fiber optic plate, and a phosphor screen. Rather, Chenderovitch clearly teaches an “image intensifier tube 12A” and relies on those having ordinary skill in the art to understand what such an “image intensifier tube” that produces “monochrome green” comprises and how its components function.
In the same field of endeavor relating to converting enhanced images from an image intensifier tube into digital signals for display, Dobbie teaches that it is known, when using an IIT for a night vision system, to provide an image intensified camera module comprising an image intensifier tube which comprises a photocathode, a microchannel plate (MCP), a power supply for providing power to the photocathode and to the MCP, a fiber optic plate, and a phosphor screen (see Dobbie, column 8, line 55 through column 9, line 28, teaching “basic functional architecture for an image intensified camera module” including “the objective lens 90 focuses light from the scene onto the photocathode of the image intensifier 92. The tube also contains a microchannel plate (MCP) for amplifying electrons and a phosphor screen having a screen optic 95” and “auto-gate 94 controls the HVPS 97, which supplies voltage to the microchannel plate and screen, and also controls the gate driver 99 which supplies the cathode voltage” and “image from the image intensifier is fiber optically coupled by screen fiber optic 95 to the imaging chip 96”) and to convert data from such an IIT into a digital signal to be output to a fusion processor for fusion of the video signal from the digital image sensor with digital information (see Dobbie, column 9, lines 11-39 teaching “a CMOS “camera-on-a-chip” at this position in the architecture, although other solid state imaging arrays could also be used” and “CMOS camera functional block has the purpose of sensing the 2-D image on its pixel array and generating a real-time video signal representation of that image” where “Depending on specific type, the CMOS camera may output digital video, analog video, or both signals”).
Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine the teachings of Chenderovitch and Dobbie to arrive at the claimed invention, as doing so would be no more than combining prior art elements according to known methods to yield predictable results. This is because the prior art included each element claimed, although not necessarily in a single prior art reference, with the only difference between the claimed invention and the prior art being the lack of actual combination of the elements in a single prior art reference. One of ordinary skill in the art could have combined the elements as claimed by the known methods above; in combination, each element merely performs the same function as it does separately, and the results of such a combination would have been predictable. Here, Chenderovitch already explicitly suggests use of an IIT as applied to the claims above, and this IIT would function as it does in Chenderovitch to supply an enhanced image to a corresponding processor; functioning as the Dobbie IIT does, it would use the components of the Dobbie IIT to provide the enhanced image to the next stage. The predictable result would then be that enhanced images are supplied through a digital image sensor as in Dobbie to where they are to be passed through as in Chenderovitch as a video signal.
Regarding claim 7, Chenderovitch as modified teaches all that is required as applied to claim 6 above and further teaches wherein the digital image sensor comprises a CMOS sensor and the phosphor screen is deposited onto the CMOS sensor (see Chenderovitch as modified by Dobbie above to utilize the more specific digital image sensor arrangement utilizing the IIT and CMOS camera as explained above, where Dobbie teaches the sensor comprises a CMOS sensor whose purpose is to capture the image of the phosphor screen, which contains the enhanced image from the IIT, as in column 9, lines 11-39 teaching “image from the image intensifier is fiber optically coupled by screen fiber optic 95 to the imaging chip 96” and “a CMOS “camera-on-a-chip” at this position in the architecture, although other solid state imaging arrays could also be used” and “CMOS camera functional block has the purpose of sensing the 2-D image on its pixel array and generating a real-time video signal representation of that image” where “Depending on specific type, the CMOS camera may output digital video, analog video, or both signals”).
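For illustration only (not part of the claims or the cited references; the function name and every gain/efficiency value below are hypothetical assumptions chosen solely to show the staged signal chain), the IIT architecture recited in claim 6 and taught by Dobbie can be sketched as photons flowing through the photocathode, MCP, phosphor screen, and fiber optic plate to the CMOS sensor:

```python
# Hypothetical sketch of the claim 6 / Dobbie IIT signal chain:
# photocathode -> MCP -> phosphor screen -> fiber optic plate -> CMOS sensor.
# All numeric parameters are illustrative assumptions, not values from Dobbie.
def intensify(photon_count: float,
              cathode_qe: float = 0.2,    # photocathode quantum efficiency (assumed)
              mcp_gain: float = 1000.0,   # electron multiplication in the MCP (assumed)
              screen_eff: float = 0.5,    # phosphor screen conversion efficiency (assumed)
              fiber_loss: float = 0.9) -> float:
    """Return the photon count reaching the CMOS sensor for a given scene input."""
    electrons = photon_count * cathode_qe   # photocathode converts photons to electrons
    multiplied = electrons * mcp_gain       # MCP amplifies the electron stream
    emitted = multiplied * screen_eff       # phosphor screen converts electrons back to photons
    return emitted * fiber_loss             # fiber optic plate couples the image to the CMOS chip
```

The point of the sketch is only the ordering of stages, which matches the component list of claim 6; the HVPS of Dobbie corresponds to the claimed power supply for the photocathode and MCP.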
Claim(s) 8 is/are rejected under 35 U.S.C. 103 as being unpatentable over Chenderovitch in view of Vakil4.
Regarding claim 8, Chenderovitch teaches all that is required as applied to claim 1 above and further teaches wherein the digital image sensor and the digital display are aligned along a shared axis (see Chenderovitch, paragraph 0007 teaching “displaying an image, comprising connecting a first end of a housing of a display device to a night vision device, connecting a second end of the housing to an eyepiece, receiving an image from the night vision device, via a camera disposed within the housing, converting the image from the night vision device, via a decoder, from an analogue signal into a digital signal, and displaying the converted image on a display, so that the converted image can be viewed though the eyepiece” and paragraph 0022 and figure 1 teaching “the MTAR-HUD 10 can be threaded to the night vision device 12 and the eyepiece 14. It is noted that such a coupling is merely one embodiment and the MTAR-HUD 10 can couple to a night vision device 12 in any manner desired or be integrated into a night vision device 12” and paragraphs 0028-0029 teaching “housing 16 can be similarly sized to a night vision device 12 and/or an eyepiece 14, such that the night vision device 12 does not become overly bulky and/or maintains the aesthetics or operability of the corresponding night vision device 12 and/or eyepiece 14” and “housing 16 can have a display 18 at a first end 20, a lens for a camera 24 at a second or opposite end” and paragraphs 0050-0053 and figures 11-12, such that in these portions it can be seen that the digital image sensor and the digital display are aligned along a shared axis: the IIT component was previously aligned with the eyepiece 14, such that the digital image sensor taking in the IIT data and its digital display are aligned to the eyepiece, and the video signal is output to the video display, which is viewed through the eyepiece aligned on the same axis). Chenderovitch fails to teach that the digital image sensor is disposed on a first side of a circuit card and the digital display is disposed on a second side of the circuit card; rather, Chenderovitch is silent as to circuit cards. However, Chenderovitch does suggest that the “coupling” of the night vision device to the digital image sensor as exemplified is “merely one embodiment and the MTAR-HUD 10 can couple to a night vision device 12 in any manner desired or be integrated into a night vision device 12.” Thus Chenderovitch stands as a base device upon which the claimed invention can be seen as an improvement through the integration of the digital sensor and digital display with respect to sides of a circuit card, which could lead, for example, to a more beneficial compact form factor.
In the same field of endeavor relating to arranging digital displays and digital image sensors with respect to a circuit card, Vakil teaches that it is known to dispose a digital image sensor on a first side of a circuit card and to dispose a digital display on a second side of the circuit card, where the sensor and display are arranged along a shared axis (see Vakil, paragraphs 0031-0033 and figure 1 teaching “first camera module 100 includes a camera 102 and a substrate 104” and paragraphs 0054-0060 and figure 5A teaching a circuit card such as “substrate 104” aligned with “display 502,” where it can be seen in figure 5B that the image sensor is disposed on a first side of the circuit card 104 and the digital display is disposed on a second side of the circuit card 104, and as in paragraphs 0094-0095 “a display unit (e.g., display unit 502) or some other component of the multi-media device is positioned in a space at least partly defined by the second side of the first substrate” which “advantageously creates a compact form factor and reduces the thickness of the multi-media device housing the third substrate (e.g., substrate 504), the coupled first camera module and second camera module, and the display unit (e.g., display unit 502)”). Thus Vakil teaches a known technique for image sensor and display integration applicable to the base system of Chenderovitch, which is ready for improvements relating to integrating displays and image sensors.
Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify Chenderovitch by applying the known techniques of Vakil, as doing so would be no more than application of a known technique to a base system ready for improvement, which would yield predictable results and result in an improved system. The predictable result of the combination would be that Chenderovitch’s display and image sensor would be disposed on first and second sides of a circuit card and would remain aligned as in Chenderovitch, such that the digital image sensor and display of Chenderovitch utilize the sides of a circuit card to achieve their similar function of taking in image data and converting it to display data for the digital display. This would result in an improved system, as such integration “advantageously creates a compact form factor” as suggested by Vakil. Furthermore, one of ordinary skill in the art would have been motivated to modify Chenderovitch using the teachings of Vakil, as Chenderovitch suggests in paragraph 0022 that “the MTAR-HUD 10 can couple to a night vision device 12 in any manner desired or be integrated into a night vision device 12” and Vakil provides a method for integrating similar types of components that “advantageously creates a compact form factor” (see Vakil, paragraphs 0094-0095).
Claim(s) 10 is/are rejected under 35 U.S.C. 103 as being unpatentable over Chenderovitch as modified as applied to claim 9 above, and further in view of Guidash et al5 (“Guidash”).
Regarding claim 10, Chenderovitch as modified teaches all that is required as applied to claim 9 above but is silent to wherein: the digital image sensor is further configured to, while reading out the first row of sensor pixels, read out a second row of the plurality of rows of sensor pixels and to bin image data corresponding to the second row into a second data packet; and the digital display is further configured to receive the second data packet and, while filling the first row of display pixels, to fill a second row of display pixels with data comprising the second data packet, the second row of display pixels corresponding to the second row of sensor pixels. Rather, while Chenderovitch as modified as explained above would be compatible with such a pipelined concurrent filling approach, as rows are already read out and binned into packets one row at a time to a display for output, there are no teachings as such. Thus Chenderovitch as modified stands as a base device upon which the claimed invention can be seen as an improvement through a reading and filling operation that would result in increased throughput of data to the display and could improve the latency response of the sensor-to-display pipeline.
In the same field of endeavor relating to low latency capturing and output of digital image data arranged in rows and read according to rows, Guidash teaches that it is known to, while reading out the first row of sensor pixels, read out a second row of the plurality of rows of sensor pixels and to bin image data corresponding to the second row into a second data packet; and the next stage is further configured to receive the second data packet and, while filling the first row of output pixels, to fill a second row of output pixels with data comprising the second data packet, the second row of output pixels corresponding to the second row of sensor pixels (see Guidash, paragraph 0201 teaching “readout of digital values (i.e., ADC results) stored within the digital output buffer 657 commences after completion of the large-signal ADC conversion (i.e., after any large-signal confirmed ADC results have been captured within the digital output buffer). In the embodiment shown, the digitized CDS results (i.e., ADC outputs or digital pixel values) may be shifted out of the digital output buffer for transmission to a memory IC and/or image-processing IC via a physical signaling interface (PHY) of the image sensor. In alternative embodiments, multiple digital pixel values may be output in parallel. Also, the digital line buffer may include separate “write-in” and “read-out” buffers (or an alternating buffer pair) to enable pixel data for a given pixel row to be output from the image sensor concurrently with storage of pixel data for the subsequent pixel row” such that utilizing these “write-in” and “read-out” techniques enables “pixel data for a given pixel row to be output from the image sensor concurrently with storage of pixel data for the subsequent pixel row”). Thus Guidash teaches a known technique applicable to the base system of Chenderovitch as modified.
Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to further modify Chenderovitch as modified, as doing so would be no more than application of a known technique to a base system ready for improvement, which would yield predictable results and result in an improved system. The predictable result of the combination would be that the data of the digital image sensor of Chenderovitch, as already modified by Smith to be in row-by-row format, would be read out and written according to Guidash’s technique. Thus the reading out of the sensor data would occur concurrently with writing or storing the next subsequent row to the next stage, such that the data would be output to the display of Chenderovitch as modified by Smith, and the display output stage would thus be filling a first row of display pixels while also filling a second row of display pixels, allowing this concurrent reading and writing to the output stage. This would result in an improved system, as the throughput to the display would be increased given the concurrent writing to the output display stage without having to wait for other rows to finish reading, writing, or filling.
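For illustration only (not from the claims or the cited references; the function and buffer names are hypothetical), the alternating “write-in”/“read-out” buffer pair described in Guidash, paragraph 0201, under which one row is output while the subsequent row is being stored, can be sketched as:

```python
# Hypothetical sketch of an alternating buffer pair: while one buffer is being
# written with the current row, the other buffer's previously stored row is
# read out, modeling Guidash's concurrent store/output of successive pixel rows.
from typing import Iterable, List


def pipelined_readout(rows: Iterable[List[int]]) -> List[List[int]]:
    """Output each pixel row while the subsequent row is written into the other buffer."""
    buffers: List[List[int]] = [[], []]  # the alternating write-in/read-out pair
    output: List[List[int]] = []
    write_idx = 0      # which buffer currently serves as the write-in buffer
    primed = False     # becomes True once the first row has been stored
    for row in rows:
        buffers[write_idx] = row[:]                # store the current row (write-in)
        if primed:
            output.append(buffers[1 - write_idx])  # concurrently read out the prior row
        primed = True
        write_idx = 1 - write_idx                  # swap the roles of the buffer pair
    if primed:
        output.append(buffers[1 - write_idx])      # drain the final stored row
    return output
```

In hardware the store and read-out happen in the same cycle; the sequential loop above only models the ordering, in which row N is emitted during the storage of row N+1.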
Allowable Subject Matter
Claims 4, 5, 14 and 15 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Regarding claim 4, the instant claim provides further limitations to those of claim 3, which specified “wherein the pass-through electronics are configured to add information to the digital image data, wherein the information comprises one or more of color information, brightness information, calibration information, and augmented reality (AR) overlay information.” Claim 4 requires that the pass-through electronics now operate such that the digital display of claim 1 is specifically configured to display YUV color data and the digital image sensor must be configured to generate a Y element of the YUV color data, which the digital display is configured to display. Additionally, “the color information” refers to the “color information” which is “information” that the pass-through electronics are configured to “add” per claim 3; thus the pass-through electronics are required not merely to add color information in the alternative but to be configured to add such color information, which per claim 4 comprises a U element and a V element.
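For illustration only (not from the claims or the cited references; both function names and the specific chroma values are hypothetical), the division of labor discussed for claim 4, in which the sensor supplies the Y (luma) element and the pass-through electronics add the U and V (chroma) elements before display, can be sketched as:

```python
# Hypothetical sketch of the claim 4 YUV arrangement: the digital image sensor
# generates the Y element from the monochrome intensified image, and the
# pass-through electronics add U and V color information for the digital display.
from typing import List, Tuple


def sensor_generate_y(intensities: List[int]) -> List[int]:
    """The monochrome intensified image maps directly to the Y (luma) element,
    clamped to the 8-bit range."""
    return [max(0, min(255, v)) for v in intensities]


def pass_through_add_color(y_plane: List[int], u: int, v: int) -> List[Tuple[int, int, int]]:
    """Pass-through electronics add U and V chroma elements to each luma sample,
    yielding YUV pixels for the digital display."""
    return [(y, u, v) for y in y_plane]


# Example: add a uniform chroma (values below the 128 midpoint give a green cast,
# consistent with a monochrome green intensified image) to one row of luma samples.
yuv_row = pass_through_add_color(sensor_generate_y([0, 128, 300]), 96, 64)
```

The sketch reflects only the claimed partition of work: Y originates at the sensor, while U and V are supplied downstream by the pass-through electronics.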