DETAILED ACTION
Notice of Pre-AIA or AIA Status
1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Notice to Applicant
2. This communication is in response to the communication filed 9/23/2023. Claims 1-22 are currently pending.
Claim Rejections - 35 USC § 103
3. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
3.1. Claim 1 is rejected under 35 U.S.C. 103 as being unpatentable over Shameli et al. (US 2020/0201022), in view of Li et al. (US 2021/0152728), Zhang et al. (US 2017/0185871), and further in view of Talbert et al. (US 2020/0404143).
CLAIM 1
Shameli teaches an apparatus (Shameli: abstract), comprising:
a computer processor and a memory (Shameli: abstract; ¶¶ [0033] “processor (12) of IGS navigation system (10) comprises one or more processing units (e.g., a microprocessor, logic processor, or other circuitry usable to execute programming instructions) communicating with one or more memories”; FIGS. 1-2);
the processor programmed to (Shameli: abstract; ¶¶ [0033] “processor (12) of IGS navigation system (10) comprises one or more processing units (e.g., a microprocessor, logic processor, or other circuitry usable to execute programming instructions) communicating with one or more memories”; FIGS. 1-2):
to receive video image data from an image sensor at the distal end of an endoscope and to display the image data to a surgeon in real time (Shameli: abstract; ¶¶ [0030], [0035] “provide video in real time via display screen (16) which may include showing an endoscopic image (e.g., captured via the dual camera endoscope”, [0037]-[0038] “one or more image sensors or image capture devices positioned at the distal end of the shaft”, [0050]; FIGS. 1-2); and
to process the image data received from the image sensor to simultaneously upsample the image data to a resolution higher than that captured by the image sensor, to sharpen edges, and to enhance local contrast (Shameli: abstract; ¶¶ [0037] “image data can be captured and used to provide a number of advantageous features such as improved resolution”, [0065]-[0066] “system may then prepare (block 234) a super resolution image from the combined, overlapping portions of the two input images, and display (block 236) the resulting image at a higher resolution”; FIGS. 1-13D).
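For illustration only (this sketch is not drawn from Shameli or any other cited reference; all function names, kernel sizes, and constants are hypothetical stand-ins), the simultaneous upsample/edge-sharpen/contrast-enhance operation described above can be approximated with a nearest-neighbor upsample, an unsharp mask, and a contrast stretch:

```python
import numpy as np

def enhance(frame, scale=2):
    """Upsample a grayscale frame beyond sensor resolution, sharpen
    edges, and stretch contrast in a single pass (nearest-neighbor
    upsampling, an unsharp mask, and a global stretch stand in for
    whatever the cited systems actually use)."""
    img = frame.astype(np.float64)
    # Upsample to a resolution higher than that captured by the sensor.
    up = np.repeat(np.repeat(img, scale, axis=0), scale, axis=1)
    # Unsharp mask: subtract a 3x3 box-blurred copy to sharpen edges.
    kernel = np.ones((3, 3)) / 9.0
    blurred = np.zeros_like(up)
    padded = np.pad(up, 1, mode="edge")
    for dy in range(3):
        for dx in range(3):
            blurred += kernel[dy, dx] * padded[dy:dy + up.shape[0],
                                               dx:dx + up.shape[1]]
    sharp = up + 1.0 * (up - blurred)
    # Stretch the result back onto the full 0-255 range.
    lo, hi = sharp.min(), sharp.max()
    out = (sharp - lo) / (hi - lo + 1e-9) * 255.0
    return np.clip(out, 0, 255).astype(np.uint8)
```

A production system would more likely use learned super-resolution and locally adaptive contrast; this sketch only shows the three operations occurring together.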
Shameli may not teach the following:
to control the image sensor and/or an illumination source designed to illuminate a scene viewed by the image sensor, the controlling programmed to underexpose or overexpose every other frame of the video image data;
the video image data having a frame rate at which the image data are generated by the image sensor;
to process the image data received from the image sensor to combine successive pairs of frames of the image data to adjust dynamic range to enhance over-bright or over-dark portions of the image to expose detail, and to generate combined frames at the full frame rate of the video as generated by the image sensor;
via a machine learning model, the machine learning model trained; and
to sum an error for an intensity of the image relative to a setpoint intensity; and
to simultaneously control at least two of gain, exposure, and illumination via a PID control algorithm to achieve image display at the setpoint intensity, maximum change per step of the PID control damped to prevent oscillation.
Li, however, teaches the following:
to control the image sensor and/or an illumination source designed to illuminate a scene viewed by the image sensor, the controlling programmed to underexpose or overexpose every other frame of the video image data (Li: abstract; ¶¶ [0041] “performing exposure only on an odd-numbered frame or only on an even-numbered frame, and a composite frame is generated by combining one fill light frame and one visible light frame that are adjacent”, [0048]-[0049] “Brightness of the fill light frame is higher than brightness of the visible light frame… the fill light frame may be alternatively generated by a sensor through exposure under only infrared light”; FIGS. 1-7);
the video image data having a frame rate at which the image data are generated by the image sensor (Li: abstract; ¶¶ [0047] “processor 150 is configured to control turned-on and turned-off of the fill light lamp 110, control a collection frame rate of the image sensor”; FIGS. 1-7);
to process the image data received from the image sensor to combine successive pairs of frames of the image data to adjust dynamic range to enhance over-bright or over-dark portions of the image to expose detail, and to generate combined frames at the full frame rate of the video as generated by the image sensor (Li: abstract; ¶¶ [0041] “performing exposure only on an odd-numbered frame or only on an even-numbered frame, and a composite frame is generated by combining one fill light frame and one visible light frame that are adjacent”; FIGS. 1-7);
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to include the time division multiplexing fill light imaging apparatus and method that controls light exposure of frames, as taught by Li, with the endoscope with dual image sensors, as taught by Shameli, with the motivation of improving imaging quality (Li: ¶¶ [0002]-[0005]).
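For illustration only (not drawn from Li; the function names and the saturation-based blending weight are hypothetical), combining successive under/overexposed frame pairs while preserving the sensor's full frame rate can be sketched as a sliding pair-wise fusion:

```python
import numpy as np

def fuse_pair(under, over):
    """Blend an underexposed and an overexposed grayscale frame.

    Weights favor the underexposed frame where the overexposed frame
    nears saturation (recovering over-bright detail) and the
    overexposed frame in dark regions (recovering shadow detail)."""
    under = under.astype(np.float64)
    over = over.astype(np.float64)
    w = over / 255.0  # closeness of the bright frame to saturation
    return (w * under + (1.0 - w) * over).astype(np.uint8)

def fuse_stream(frames):
    """Combine each successive pair of alternately exposed frames,
    producing one output per input frame after the first, so the
    output rate tracks the sensor's full frame rate."""
    out = []
    for i in range(1, len(frames)):
        a, b = frames[i - 1], frames[i]
        # Alternate frames are under/overexposed; order the pair.
        under, over = (a, b) if a.mean() <= b.mean() else (b, a)
        out.append(fuse_pair(under, over))
    return out
```

The sliding window is what preserves the frame rate: each new frame is fused with its predecessor rather than waiting for a fresh pair.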
Shameli and Li may not teach the following:
process image data via a machine learning model, the machine learning model trained;
to sum an error for an intensity of the image relative to a setpoint intensity; and
to simultaneously control at least two of gain, exposure, and illumination via a PID control algorithm to achieve image display at the setpoint intensity, maximum change per step of the PID control damped to prevent oscillation.
Zhang, however, teaches the following:
process image data via a machine learning model, the machine learning model trained (Zhang: abstract; ¶¶ [0008]-[0009] “neural network trained…providing processed output; wherein the processed output includes input image data that has been adjusted for at least one image quality attribute… at least one image quality attribute may include image size, aspect ratio, brightness, intensity, bit depth, white value, dynamic range, gray level, contouring, smoothing, speckle, color space values, interleaving, correction, gamma correction, edge enhancement, contrast enhancement, sharpness and demosaicing”, [0013] “imaging device may include a medical imaging device”, [0029] “machine learning”; FIGS. 1-9B).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to include the neural network based image signal processor that uses machine learning to enhance video imaging attributes, as taught by Zhang, with the time division multiplexing fill light imaging apparatus and method that controls light exposure of frames, as taught by Li, with the endoscope with dual image sensors, as taught by Shameli, with the motivation of improving imaging quality (Zhang: ¶¶ [0004]-[0007]).
Shameli, Li and Zhang may not teach the following:
to sum an error for an intensity of the image relative to a setpoint intensity; and
to simultaneously control at least two of gain, exposure, and illumination via a PID control algorithm to achieve image display at the setpoint intensity, maximum change per step of the PID control damped to prevent oscillation.
Talbert, however, teaches the following:
to sum an error for an intensity of the image relative to a setpoint intensity (Talbert: abstract; ¶¶ [0037] “each light pulse is adjusted proportionally based on a calculated error measurement, and the error measurement is calculated by comparing desired exposure levels against measured exposure levels. The measured exposure level may be calculated using the mean pixel value of all pixels or some portion of pixels in the image sensor”; FIGS. 1-21B); and
to simultaneously control at least two of gain, exposure, and illumination via a PID control algorithm to achieve image display at the setpoint intensity, maximum change per step of the PID control damped to prevent oscillation (Talbert: abstract; ¶¶ [0037] “a PID (proportional, integral, and derivative) control algorithm is implemented to ensure the captured scene maintains a desired video exposure level to maximize the dynamic range of the image sensor or to achieve a desired scene response desired by the end user. The PID control algorithm may generally be referred to herein as the automatic shutter control (ASC). In some embodiments, each light pulse is adjusted proportionally based on a calculated error measurement”, [0051]; FIGS. 1-21B).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to include the imaging system including a PID algorithm, as taught by Talbert, with the neural network based image signal processor that uses machine learning to enhance video imaging attributes, as taught by Zhang, with the time division multiplexing fill light imaging apparatus and method that controls light exposure of frames, as taught by Li, with the endoscope with dual image sensors, as taught by Shameli, with the motivation of maintaining image exposure levels and imaging quality (Talbert: ¶¶ [0036]-[0037]).
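For illustration only (not drawn from Talbert; the class name, gains, and clamp value are hypothetical), a PID loop that sums the image-intensity error against a setpoint and damps the maximum change per step to prevent oscillation can be sketched as:

```python
class DampedPID:
    """PID controller driving mean image intensity toward a setpoint.

    The integral term sums the intensity error over time; the output
    change per step is clamped (damped) to prevent oscillation."""

    def __init__(self, kp, ki, kd, setpoint, max_step):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.max_step = max_step
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, measured_intensity):
        error = self.setpoint - measured_intensity
        self.integral += error                 # summed error (I term)
        derivative = error - self.prev_error   # change in error (D term)
        self.prev_error = error
        step = (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)
        # Damp the maximum change per step to prevent oscillation.
        return max(-self.max_step, min(self.max_step, step))
```

In a real system the clamped correction would be apportioned simultaneously across at least two of gain, exposure time, and illumination intensity; the single scalar output here stands in for that split.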
3.2. Claims 2-3 and 8-11 are rejected under 35 U.S.C. 103 as being unpatentable over Shameli et al. (US 2020/0201022), in view of Zhang et al. (US 2017/0185871).
CLAIM 2
Shameli teaches an apparatus (Shameli: abstract), comprising:
a computer processor and a memory (Shameli: abstract; ¶¶ [0033] “processor (12) of IGS navigation system (10) comprises one or more processing units (e.g., a microprocessor, logic processor, or other circuitry usable to execute programming instructions) communicating with one or more memories”; FIGS. 1-2);
the processor programmed to (Shameli: abstract; ¶¶ [0033] “processor (12) of IGS navigation system (10) comprises one or more processing units (e.g., a microprocessor, logic processor, or other circuitry usable to execute programming instructions) communicating with one or more memories”; FIGS. 1-2):
to receive video image data from an image sensor at the distal end of an endoscope and to display the image data to a surgeon in real time (Shameli: abstract; ¶¶ [0030], [0035] “provide video in real time via display screen (16) which may include showing an endoscopic image (e.g., captured via the dual camera endoscope”, [0037]-[0038] “one or more image sensors or image capture devices positioned at the distal end of the shaft”, [0050]; FIGS. 1-2); and
to process the image data received from the image sensor to simultaneously upsample the image data to a resolution higher than that captured by the image sensor, to sharpen edges, and to enhance local contrast (Shameli: abstract; ¶¶ [0037] “image data can be captured and used to provide a number of advantageous features such as improved resolution”, [0065]-[0066] “system may then prepare (block 234) a super resolution image from the combined, overlapping portions of the two input images, and display (block 236) the resulting image at a higher resolution”; FIGS. 1-13D).
Shameli may not teach the following:
process image data via a machine learning model, the machine learning model trained.
Zhang, however, teaches the following:
process image data via a machine learning model, the machine learning model trained (Zhang: abstract; ¶¶ [0008]-[0009] “neural network trained…providing processed output; wherein the processed output includes input image data that has been adjusted for at least one image quality attribute… at least one image quality attribute may include image size, aspect ratio, brightness, intensity, bit depth, white value, dynamic range, gray level, contouring, smoothing, speckle, color space values, interleaving, correction, gamma correction, edge enhancement, contrast enhancement, sharpness and demosaicing”, [0013] “imaging device may include a medical imaging device”, [0029] “machine learning”; FIGS. 1-9B).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to include the neural network based image signal processor that uses machine learning to enhance video imaging attributes, as taught by Zhang, with the endoscope with dual image sensors, as taught by Shameli, with the motivation of improving imaging quality (Zhang: ¶¶ [0004]-[0007]).
CLAIM 3
Shameli may not teach the apparatus of claim 2, the processor being further programmed to: enhance the video image data via dynamic range compensation.
Zhang, however, teaches the processor being further programmed to: enhance the video image data via dynamic range compensation (Zhang: abstract; ¶¶ [0011] “at least one image quality attribute may include…dynamic range”; FIGS. 1-9B).
The motivation to include the teachings of Zhang with the teachings of Shameli is the same as that of claim 2 above and is incorporated herein.
CLAIM 8
Shameli teaches the apparatus of claim 2, the processor being further programmed to enhance the video image data via noise reduction (Shameli: abstract; ¶¶ [0065] “produce an output image of higher resolution than its input image data, and with reduced digital noise and other visual artifacts”; FIG. 9).
CLAIM 9
Shameli may not teach the apparatus of claim 2, the processor being further programmed to: enhance the video image data via lens correction.
Zhang, however, teaches the processor being further programmed to: enhance the video image data via lens correction (Zhang: abstract; ¶¶ [0045] “image quality attributes that may be adjusted include: image size, aspect ratio, brightness, intensity, bit depth, white values, dynamic range, gray levels, contouring, smoothing, speckle (for example, as may be found in medical imaging), color space values, interleaving, correction, gamma correction, edge enhancement, contrast enhancement, sharpness, demosaicing and other aspects”; FIGS. 1-9B).
The motivation to include the teachings of Zhang with the teachings of Shameli is the same as that of claim 2 above and is incorporated herein.
CLAIM 10
Shameli may not teach the apparatus of claim 2, the processor being further programmed to: enhance the video image data via at least two of dynamic range compensation, noise reduction, and lens correction.
Zhang, however, teaches the processor being further programmed to: enhance the video image data via at least two of dynamic range compensation, noise reduction, and lens correction (Zhang: abstract; ¶¶ [0045] “A variety of image quality attributes may be manipulated during image signal processing. For example, the image quality attributes that may be adjusted include: image size, aspect ratio, brightness, intensity, bit depth, white values, dynamic range, gray levels, contouring, smoothing, speckle (for example, as may be found in medical imaging), color space values, interleaving, correction, gamma correction, edge enhancement, contrast enhancement, sharpness, demosaicing and other aspects”; FIGS. 1-9B).
The motivation to include the teachings of Zhang with the teachings of Shameli is the same as that of claim 2 above and is incorporated herein.
CLAIM 11
Shameli teaches the apparatus of claim 2, the processor being further programmed to: rotate the image display to compensate for rotation of the endoscope (Shameli: abstract; ¶¶ [0067]-[0071] “Changes in the visualization window may be displayed gradually to simulate the feeling or experience of motion within a three dimensional space, such as might be experienced when an endoscope providing the image is actually moved”; FIGS. 1-13D).
3.3. Claims 4 and 6 are rejected under 35 U.S.C. 103 as being unpatentable over Shameli et al. (US 2020/0201022), in view of Zhang et al. (US 2017/0185871), and further in view of Li et al. (US 2021/0152728).
CLAIM 4
Shameli and Zhang may not teach the apparatus of claim 3:
the video image data having a frame rate at which the image data are generated by the image sensor;
the processor being further programmed to:
to control the image sensor and/or an illumination source designed to illuminate a scene viewed by the image sensor, the controlling programmed to underexpose or overexpose every other frame of the video image data; and
to process the image data received from the image sensor to combine successive pairs of frames of the image data to adjust dynamic range to enhance over-bright or over-dark portions of the image to expose detail, and to generate combined frames at the full frame rate of the video as generated by the image sensor.
Li, however, teaches the apparatus of claim 3:
the video image data having a frame rate at which the image data are generated by the image sensor (Li: abstract; ¶¶ [0047] “processor 150 is configured to control turned-on and turned-off of the fill light lamp 110, control a collection frame rate of the image sensor”; FIGS. 1-7);
the processor being further programmed to (Li: abstract; ¶¶ [0047] “processor 150 is configured to control turned-on and turned-off of the fill light lamp 110, control a collection frame rate of the image sensor”; FIGS. 1-7):
to control the image sensor and/or an illumination source designed to illuminate a scene viewed by the image sensor, the controlling programmed to underexpose or overexpose every other frame of the video image data (Li: abstract; ¶¶ [0041] “performing exposure only on an odd-numbered frame or only on an even-numbered frame, and a composite frame is generated by combining one fill light frame and one visible light frame that are adjacent”, [0048]-[0049] “Brightness of the fill light frame is higher than brightness of the visible light frame… the fill light frame may be alternatively generated by a sensor through exposure under only infrared light”; FIGS. 1-7); and
to process the image data received from the image sensor to combine successive pairs of frames of the image data to adjust dynamic range to enhance over-bright or over-dark portions of the image to expose detail, and to generate combined frames at the full frame rate of the video as generated by the image sensor (Li: abstract; ¶¶ [0041] “performing exposure only on an odd-numbered frame or only on an even-numbered frame, and a composite frame is generated by combining one fill light frame and one visible light frame that are adjacent”; FIGS. 1-7).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to include the time division multiplexing fill light imaging apparatus and method that controls light exposure of frames, as taught by Li, with the neural network based image signal processor that uses machine learning to enhance video imaging attributes, as taught by Zhang, with the endoscope with dual image sensors, as taught by Shameli, with the motivation of improving imaging quality (Li: ¶¶ [0002]-[0005]).
CLAIM 6
Shameli and Zhang may not teach the apparatus of claim 2, the processor being further programmed to: enhance the video image data via adjustment of exposure time, illumination intensity, and/or gain in image capture to adjust exposure saturation.
Li, however, teaches the processor being further programmed to: enhance the video image data via adjustment of exposure time, illumination intensity, and/or gain in image capture to adjust exposure saturation (Li: abstract; ¶¶ [0070]-[0072]; FIGS. 1-7).
The motivation to include the teachings of Li with the teachings of Shameli and Zhang is the same as that of claim 4 above and is incorporated herein.
3.4. Claims 5 and 7 are rejected under 35 U.S.C. 103 as being unpatentable over Shameli et al. (US 2020/0201022), in view of Zhang et al. (US 2017/0185871), Li et al. (US 2021/0152728), and further in view of Talbert et al. (US 2020/0404143).
CLAIM 5
Shameli, Zhang, and Li may not teach the apparatus of claim 4, the processor being further programmed to: to sum an error for an intensity of the image relative to a setpoint intensity; and to simultaneously control at least two of gain, exposure, and illumination via a PID control algorithm to achieve image display at the setpoint intensity, maximum change per step of the PID control damped to prevent oscillation.
Talbert, however, teaches the processor being further programmed to: to sum an error for an intensity of the image relative to a setpoint intensity (Talbert: abstract; ¶¶ [0037] “each light pulse is adjusted proportionally based on a calculated error measurement, and the error measurement is calculated by comparing desired exposure levels against measured exposure levels. The measured exposure level may be calculated using the mean pixel value of all pixels or some portion of pixels in the image sensor”; FIGS. 1-21B); and to simultaneously control at least two of gain, exposure, and illumination via a PID control algorithm to achieve image display at the setpoint intensity, maximum change per step of the PID control damped to prevent oscillation (Talbert: abstract; ¶¶ [0037] “a PID (proportional, integral, and derivative) control algorithm is implemented to ensure the captured scene maintains a desired video exposure level to maximize the dynamic range of the image sensor or to achieve a desired scene response desired by the end user. The PID control algorithm may generally be referred to herein as the automatic shutter control (ASC). In some embodiments, each light pulse is adjusted proportionally based on a calculated error measurement”, [0051]; FIGS. 1-21B).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to include the imaging system including a PID algorithm, as taught by Talbert, with the time division multiplexing fill light imaging apparatus and method that controls light exposure of frames, as taught by Li, with the neural network based image signal processor that uses machine learning to enhance video imaging attributes, as taught by Zhang, with the endoscope with dual image sensors, as taught by Shameli, with the motivation of maintaining image exposure levels and imaging quality (Talbert: ¶¶ [0036]-[0037]).
CLAIM 7
Shameli, Zhang, and Li may not teach the apparatus of claim 6, the processor being further programmed to: to sum an error for an intensity of the image relative to a setpoint intensity; and to simultaneously control at least two of gain, exposure, and illumination via a PID control algorithm to achieve image display at the setpoint intensity, maximum change per step of the PID control damped to prevent oscillation.
Talbert, however, teaches the processor being further programmed to: to sum an error for an intensity of the image relative to a setpoint intensity (Talbert: abstract; ¶¶ [0037] “each light pulse is adjusted proportionally based on a calculated error measurement, and the error measurement is calculated by comparing desired exposure levels against measured exposure levels. The measured exposure level may be calculated using the mean pixel value of all pixels or some portion of pixels in the image sensor”; FIGS. 1-21B); and to simultaneously control at least two of gain, exposure, and illumination via a PID control algorithm to achieve image display at the setpoint intensity, maximum change per step of the PID control damped to prevent oscillation (Talbert: abstract; ¶¶ [0037] “a PID (proportional, integral, and derivative) control algorithm is implemented to ensure the captured scene maintains a desired video exposure level to maximize the dynamic range of the image sensor or to achieve a desired scene response desired by the end user. The PID control algorithm may generally be referred to herein as the automatic shutter control (ASC). In some embodiments, each light pulse is adjusted proportionally based on a calculated error measurement”, [0051]; FIGS. 1-21B).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to include the imaging system including a PID algorithm, as taught by Talbert, with the time division multiplexing fill light imaging apparatus and method that controls light exposure of frames, as taught by Li, with the neural network based image signal processor that uses machine learning to enhance video imaging attributes, as taught by Zhang, with the endoscope with dual image sensors, as taught by Shameli, with the motivation of maintaining image exposure levels and imaging quality (Talbert: ¶¶ [0036]-[0037]).
3.5. Claims 12, 15-16, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Shameli et al. (US 2020/0201022), in view of Li et al. (US 2021/0152728).
CLAIM 12
Shameli teaches an apparatus (Shameli: abstract), comprising:
a computer processor and a memory (Shameli: abstract; ¶¶ [0033] “processor (12) of IGS navigation system (10) comprises one or more processing units (e.g., a microprocessor, logic processor, or other circuitry usable to execute programming instructions) communicating with one or more memories”; FIGS. 1-2);
the processor being further programmed to (Shameli: abstract; ¶¶ [0033] “processor (12) of IGS navigation system (10) comprises one or more processing units (e.g., a microprocessor, logic processor, or other circuitry usable to execute programming instructions) communicating with one or more memories”; FIGS. 1-2):
to receive video image data from an image sensor at the distal end of an endoscope and to display the image data to a surgeon in real time (Shameli: abstract; ¶¶ [0030], [0035] “provide video in real time via display screen (16) which may include showing an endoscopic image (e.g., captured via the dual camera endoscope”, [0037]-[0038] “one or more image sensors or image capture devices positioned at the distal end of the shaft”, [0050]; FIGS. 1-2).
Shameli may not teach:
the video image data having a frame rate at which the image data are generated by the image sensor;
to control the image sensor and/or an illumination source designed to illuminate a scene viewed by the image sensor, the controlling programmed to underexpose or overexpose every other frame of the video image data; and
to process the image data received from the image sensor to combine successive pairs of frames of the image data to adjust dynamic range to enhance over-bright or over-dark portions of the image to expose detail, and to generate combined frames at the full frame rate of the video as generated by the image sensor.
Li, however, teaches:
the video image data having a frame rate at which the image data are generated by the image sensor (Li: abstract; ¶¶ [0047] “processor 150 is configured to control turned-on and turned-off of the fill light lamp 110, control a collection frame rate of the image sensor”; FIGS. 1-7);
to control the image sensor and/or an illumination source designed to illuminate a scene viewed by the image sensor, the controlling programmed to underexpose or overexpose every other frame of the video image data (Li: abstract; ¶¶ [0041] “performing exposure only on an odd-numbered frame or only on an even-numbered frame, and a composite frame is generated by combining one fill light frame and one visible light frame that are adjacent”, [0048]-[0049] “Brightness of the fill light frame is higher than brightness of the visible light frame… the fill light frame may be alternatively generated by a sensor through exposure under only infrared light”; FIGS. 1-7); and
to process the image data received from the image sensor to combine successive pairs of frames of the image data to adjust dynamic range to enhance over-bright or over-dark portions of the image to expose detail, and to generate combined frames at the full frame rate of the video as generated by the image sensor (Li: abstract; ¶¶ [0041] “performing exposure only on an odd-numbered frame or only on an even-numbered frame, and a composite frame is generated by combining one fill light frame and one visible light frame that are adjacent”; FIGS. 1-7).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to include the time division multiplexing fill light imaging apparatus and method that controls light exposure of frames, as taught by Li, with the endoscope with dual image sensors, as taught by Shameli, with the motivation of improving imaging quality (Li: ¶¶ [0002]-[0005]).
CLAIM 15
Shameli may not teach the apparatus of claim 12, the processor being further programmed to: adjust exposure time, illumination intensity, and/or gain in image capture to adjust exposure saturation.
Li, however, teaches the processor being further programmed to: adjust exposure time, illumination intensity, and/or gain in image capture to adjust exposure saturation (Li: abstract; ¶¶ [0070]-[0072]; FIGS. 1-7).
The motivation to include the teachings of Li with the teachings of Shameli is the same as that of claim 12 above and is incorporated herein.
CLAIM 16
Shameli teaches the apparatus of claim 12, the processor being further programmed to enhance the video image data via noise reduction (Shameli: abstract; ¶¶ [0065] “produce an output image of higher resolution than its input image data, and with reduced digital noise and other visual artifacts”; FIG. 9).
CLAIM 19
Shameli teaches the apparatus of claim 12, the processor being further programmed to: rotate the image display to compensate for rotation of the endoscope (Shameli: abstract; ¶¶ [0067]-[0071] “Changes in the visualization window may be displayed gradually to simulate the feeling or experience of motion within a three dimensional space, such as might be experienced when an endoscope providing the image is actually moved”; FIGS. 1-13D).
3.6. Claims 13 and 17-18 are rejected under 35 U.S.C. 103 as being unpatentable over Shameli et al. (US 2020/0201022), in view of Li et al. (US 2021/0152728), and further in view of Zhang et al. (US 2017/0185871).
CLAIM 13
Shameli teaches the apparatus of claim 12, the processor being further programmed to: to process the image data received from the image sensor to simultaneously upsample the image data to a resolution higher than that captured by the image sensor, to sharpen edges, and to enhance local contrast (Shameli: abstract; ¶¶ [0037] “image data can be captured and used to provide a number of advantageous features such as improved resolution”, [0065]-[0066] “system may then prepare (block 234) a super resolution image from the combined, overlapping portions of the two input images, and display (block 236) the resulting image at a higher resolution”; FIGS. 1-13D).
Shameli and Li may not teach via a machine learning model, the machine learning model trained.
Zhang, however, teaches via a machine learning model, the machine learning model trained (Zhang: abstract; ¶¶ [0008]-[0009] “neural network trained…providing processed output; wherein the processed output includes input image data that has been adjusted for at least one image quality attribute… at least one image quality attribute may include image size, aspect ratio, brightness, intensity, bit depth, white value, dynamic range, gray level, contouring, smoothing, speckle, color space values, interleaving, correction, gamma correction, edge enhancement, contrast enhancement, sharpness and demosaicing”, [0013] “imaging device may include a medical imaging device”, [0029] “machine learning”; FIGS. 1-9B).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to include the neural network based image signal processor that uses machine learning to enhance video imaging attributes, as taught by Zhang, with the time division multiplexing fill light imaging apparatus and method that controls light exposure of frames, as taught by Li, with the endoscope with dual image sensors, as taught by Shameli, with the motivation of improving imaging quality (Zhang: ¶¶ [0004]-[0007]).
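For context on the limitation at issue, the mapped teaching is image enhancement (e.g., upsampling, edge sharpening, contrast enhancement) performed via a trained machine learning model. Purely as an illustrative sketch, and not representing Zhang's actual architecture or any reference's implementation, a single convolutional layer of such a model can be shown with a fixed sharpening-style kernel standing in for weights that would be learned from training data:

```python
import numpy as np

# Illustrative only: in an ML-based image signal processor of the kind
# Zhang describes, the filter weights would be learned from data. Here
# a fixed 3x3 sharpening kernel stands in for one trained layer.

def conv2d_same(img, kernel):
    """Naive 'same'-padded 2D cross-correlation on a single-channel
    float image, with edge-replicate padding at the borders."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)), mode="edge")
    h, w = img.shape
    out = np.empty((h, w), dtype=float)
    for y in range(h):
        for x in range(w):
            out[y, x] = np.sum(padded[y:y + kh, x:x + kw] * kernel)
    return out

# Identity-plus-Laplacian kernel: amplifies local differences (edges)
# while leaving flat regions unchanged (kernel weights sum to 1).
sharpen = np.array([[ 0., -1.,  0.],
                    [-1.,  5., -1.],
                    [ 0., -1.,  0.]])
```

Applying `conv2d_same` with this kernel leaves a uniform image unchanged while exaggerating intensity steps, which is the effect a learned enhancement layer would be trained to produce selectively.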
CLAIM 17
Shameli and Li may not teach the apparatus of claim 12, the processor being further programmed to: enhance the video image data via lens correction.
Zhang, however, teaches the processor being further programmed to: enhance the video image data via lens correction (Zhang: abstract; ¶¶ [0045] “image quality attributes that may be adjusted include: image size, aspect ratio, brightness, intensity, bit depth, white values, dynamic range, gray levels, contouring, smoothing, speckle (for example, as may be found in medical imaging), color space values, interleaving, correction, gamma correction, edge enhancement, contrast enhancement, sharpness, demosaicing and other aspects”; FIGS. 1-9B).
The motivation to include the teachings of Zhang with the teachings of Shameli and Li is the same as that of claim 13 above and is incorporated herein.
CLAIM 18
Shameli and Li may not teach the apparatus of claim 12, the processor being further programmed to: enhance the video image data via lens correction.
Zhang, however, teaches the processor being further programmed to: enhance the video image data via lens correction (Zhang: abstract; ¶¶ [0045] “image quality attributes that may be adjusted include: image size, aspect ratio, brightness, intensity, bit depth, white values, dynamic range, gray levels, contouring, smoothing, speckle (for example, as may be found in medical imaging), color space values, interleaving, correction, gamma correction, edge enhancement, contrast enhancement, sharpness, demosaicing and other aspects”; FIGS. 1-9B).
The motivation to include the teachings of Zhang with the teachings of Shameli and Li is the same as that of claim 13 above and is incorporated herein.
3.7. Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Shameli et al. (US 2020/0201022), in view of Li et al. (US 2021/0152728), and further in view of Talbert et al. (US 2020/0404143).
CLAIM 14
Shameli and Li may not teach the apparatus of claim 12, the processor further programmed to: to sum an error for an intensity of the image relative to a setpoint intensity; and to simultaneously control at least two of gain, exposure, and illumination via a PID control algorithm to achieve image display at the setpoint intensity, maximum change per step of the PID control damped to prevent oscillation.
Talbert, however, teaches the processor being further programmed to: to sum an error for an intensity of the image relative to a setpoint intensity (Talbert: abstract; ¶¶ [0037] “each light pulse is adjusted proportionally based on a calculated error measurement, and the error measurement is calculated by comparing desired exposure levels against measured exposure levels. The measured exposure level may be calculated using the mean pixel value of all pixels or some portion of pixels in the image sensor”; FIGS. 1-21B); and to simultaneously control at least two of gain, exposure, and illumination via a PID control algorithm to achieve image display at the setpoint intensity, maximum change per step of the PID control damped to prevent oscillation (Talbert: abstract; ¶¶ [0037] “a PID (proportional, integral, and derivative) control algorithm is implemented to ensure the captured scene maintains a desired video exposure level to maximize the dynamic range of the image sensor or to achieve a desired scene response desired by the end user. The PID control algorithm may generally be referred to herein as the automatic shutter control (ASC). In some embodiments, each light pulse is adjusted proportionally based on a calculated error measurement”, [0051]; FIGS. 1-21B).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to include the imaging system including a PID algorithm, as taught by Talbert, with the time division multiplexing fill light imaging apparatus and method that controls light exposure of frames, as taught by Li, with the endoscope with dual image sensors, as taught by Shameli, with the motivation of maintaining image exposure levels and imaging quality (Talbert: ¶¶ [0036]-[0037]).
3.8. Claim 20 is rejected under 35 U.S.C. 103 as being unpatentable over Shameli et al. (US 2020/0201022), in view of Talbert et al. (US 2020/0404143).
CLAIM 20
Shameli teaches an apparatus (Shameli: abstract), comprising:
a computer processor and a memory (Shameli: abstract; ¶¶ [0033] “processor (12) of IGS navigation system (10) comprises one or more processing units (e.g., a microprocessor, logic processor, or other circuitry usable to execute programming instructions) communicating with one or more memories”; FIGS. 1-2);
the processor programmed to (Shameli: abstract; ¶¶ [0033] “processor (12) of IGS navigation system (10) comprises one or more processing units (e.g., a microprocessor, logic processor, or other circuitry usable to execute programming instructions) communicating with one or more memories”; FIGS. 1-2):
to receive video image data from an image sensor at the distal end of an endoscope and to display the image data to a surgeon in real time (Shameli: abstract; ¶¶ [0030], [0035] “provide video in real time via display screen (16) which may include showing an endoscopic image (e.g., captured via the dual camera endoscope”, [0037]-[0038] “one or more image sensors or image capture devices positioned at the distal end of the shaft”, [0050]; FIGS. 1-2).
Shameli may not teach the following:
to sum an error for an intensity of the image relative to a setpoint intensity; and
to simultaneously control at least two of gain, exposure, and illumination via a PID control algorithm to achieve image display at the setpoint intensity, maximum change per step of the PID control damped to prevent oscillation.
Talbert, however, teaches the following:
to sum an error for an intensity of the image relative to a setpoint intensity (Talbert: abstract; ¶¶ [0037] “each light pulse is adjusted proportionally based on a calculated error measurement, and the error measurement is calculated by comparing desired exposure levels against measured exposure levels. The measured exposure level may be calculated using the mean pixel value of all pixels or some portion of pixels in the image sensor”; FIGS. 1-21B); and
to simultaneously control at least two of gain, exposure, and illumination via a PID control algorithm to achieve image display at the setpoint intensity, maximum change per step of the PID control damped to prevent oscillation (Talbert: abstract; ¶¶ [0037] “a PID (proportional, integral, and derivative) control algorithm is implemented to ensure the captured scene maintains a desired video exposure level to maximize the dynamic range of the image sensor or to achieve a desired scene response desired by the end user. The PID control algorithm may generally be referred to herein as the automatic shutter control (ASC). In some embodiments, each light pulse is adjusted proportionally based on a calculated error measurement”, [0051]; FIGS. 1-21B).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to include the imaging system including a PID algorithm, as taught by Talbert, with the endoscope with dual image sensors, as taught by Shameli, with the motivation of maintaining image exposure levels and imaging quality (Talbert: ¶¶ [0036]-[0037]).
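For context on the limitation recited above (and in claim 14): error summed against a setpoint intensity, PID control of at least two of gain, exposure, and illumination, and a damped maximum change per step to prevent oscillation. A minimal sketch follows; the gains, setpoint, plant model, and actuator split are all hypothetical and do not represent Talbert's automatic shutter control:

```python
# Minimal sketch of damped PID control toward a setpoint intensity.
# All constants (gains, setpoint, clamp, actuator weighting) are
# hypothetical, not drawn from any cited reference.

def make_pid(kp=0.5, ki=0.1, kd=0.05, max_step=10.0):
    state = {"integral": 0.0, "prev_error": 0.0}

    def step(measured, setpoint=128.0):
        error = setpoint - measured              # error vs. setpoint intensity
        state["integral"] += error               # summed (integral) error
        derivative = error - state["prev_error"]
        state["prev_error"] = error
        out = kp * error + ki * state["integral"] + kd * derivative
        # Damp the maximum change per step to prevent oscillation.
        return max(-max_step, min(max_step, out))

    return step

# Toy closed loop: drive a scene brightness toward the setpoint by
# splitting each correction across two actuators (gain, illumination).
pid = make_pid()
brightness = 40.0
for _ in range(200):
    delta = pid(brightness)
    gain_adj = 0.5 * delta     # half of the correction via sensor gain
    light_adj = 0.5 * delta    # half via illumination intensity
    brightness += gain_adj + light_adj
```

The clamp on the controller output is the "maximum change per step" damping of the claim language; without it, a large accumulated error would slew the actuators hard and ring around the setpoint.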
3.9. Claim 21 is rejected under 35 U.S.C. 103 as being unpatentable over Shameli et al. (US 2020/0201022), in view of Talbert et al. (US 2020/0404143), and further in view of Zhang et al. (US 2017/0185871).
CLAIM 21
Shameli teaches the apparatus of claim 20, the processor being further programmed to: to process the image data received from the image sensor to simultaneously upsample the image data to a resolution higher than that captured by the image sensor, to sharpen edges, and to enhance local contrast (Shameli: abstract; ¶¶ [0037] “image data can be captured and used to provide a number of advantageous features such as improved resolution”, [0065]-[0066] “system may then prepare (block 234) a super resolution image from the combined, overlapping portions of the two input images, and display (block 236) the resulting image at a higher resolution”; FIGS. 1-13D).
Shameli and Talbert may not teach via a machine learning model, the machine learning model trained.
Zhang, however, teaches via a machine learning model, the machine learning model trained (Zhang: abstract; ¶¶ [0008]-[0009] “neural network trained…providing processed output; wherein the processed output includes input image data that has been adjusted for at least one image quality attribute… at least one image quality attribute may include image size, aspect ratio, brightness, intensity, bit depth, white value, dynamic range, gray level, contouring, smoothing, speckle, color space values, interleaving, correction, gamma correction, edge enhancement, contrast enhancement, sharpness and demosaicing”, [0013] “imaging device may include a medical imaging device”, [0029] “machine learning”; FIGS. 1-9B).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to include the neural network based image signal processor that uses machine learning to enhance video imaging attributes, as taught by Zhang, with the imaging system including a PID algorithm, as taught by Talbert, with the endoscope with dual image sensors, as taught by Shameli, with the motivation of improving imaging quality (Zhang: ¶¶ [0004]-[0007]).
3.10. Claim 22 is rejected under 35 U.S.C. 103 as being unpatentable over Shameli et al. (US 2020/0201022), in view of Talbert et al. (US 2020/0404143), and further in view of Li et al. (US 2021/0152728).
CLAIM 22
Shameli and Talbert may not teach the apparatus of claim 20, the processor being further programmed to:
the video image data having a frame rate at which the image data are generated by the image sensor;
the processor being further programmed to:
to control the image sensor and/or an illumination source designed to illuminate a scene viewed by the image sensor, the controlling programmed to underexpose or overexpose every other frame of the video image data; and
to process the image data received from the image sensor to combine successive pairs of frames of the image data to adjust dynamic range to enhance over-bright or over-dark portions of the image to expose detail, and to generate combined frames at the full frame rate of the video as generated by the image sensor.
Li, however, teaches the following:
the video image data having a frame rate at which the image data are generated by the image sensor (Li: abstract; ¶¶ [0047] “processor 150 is configured to control turned-on and turned-off of the fill light lamp 110, control a collection frame rate of the image sensor”; FIGS. 1-7);
to control the image sensor and/or an illumination source designed to illuminate a scene viewed by the image sensor, the controlling programmed to underexpose or overexpose every other frame of the video image data (Li: abstract; ¶¶ [0041] “performing exposure only on an odd-numbered frame or only on an even-numbered frame, and a composite frame is generated by combining one fill light frame and one visible light frame that are adjacent”, [0048]-[0049] “Brightness of the fill light frame is higher than brightness of the visible light frame… the fill light frame may be alternatively generated by a sensor through exposure under only infrared light”; FIGS. 1-7); and
to process the image data received from the image sensor to combine successive pairs of frames of the image data to adjust dynamic range to enhance over-bright or over-dark portions of the image to expose detail, and to generate combined frames at the full frame rate of the video as generated by the image sensor (Li: abstract; ¶¶ [0041] “performing exposure only on an odd-numbered frame or only on an even-numbered frame, and a composite frame is generated by combining one fill light frame and one visible light frame that are adjacent”; FIGS. 1-7).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to include the time division multiplexing fill light imaging apparatus and method that controls light exposure of frames, as taught by Li, with the imaging system including a PID algorithm, as taught by Talbert, with the endoscope with dual image sensors, as taught by Shameli, with the motivation of improving imaging quality (Li: ¶¶ [0002]-[0005]).
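For context on the limitation recited above: every other frame is deliberately under- or over-exposed, and successive pairs are combined so that detail is recovered in over-bright and over-dark regions while combined frames are still produced at the sensor's full frame rate. A minimal sketch follows; the per-pixel blending weight and the sliding-pair scheme are illustrative assumptions, not Li's actual compositing method:

```python
import numpy as np

# Illustrative sketch: alternate under-/over-exposed frames are blended
# pairwise. Sliding over successive pairs (each frame reused in two
# pairs) emits one combined frame per input frame interval, preserving
# the sensor's full frame rate. The luminance-based weight is a
# hypothetical stand-in for a real tone-fusion method.

def combine_pair(under, over):
    """Blend an under- and an over-exposed frame (float arrays in [0, 1]).
    Where the over-exposed frame is bright (near clipping), trust the
    under-exposed frame; where it is dark, keep the over-exposed frame's
    shadow detail."""
    w = np.clip(over, 0.0, 1.0)
    return w * under + (1.0 - w) * over

def combine_stream(frames, first_is_under=True):
    """Combine a stream of alternately exposed frames at full rate."""
    out = []
    for i in range(len(frames) - 1):
        a, b = frames[i], frames[i + 1]
        under_first = first_is_under == (i % 2 == 0)
        under, over = (a, b) if under_first else (b, a)
        out.append(combine_pair(under, over))
    return out
```

Because each input frame participates in two adjacent pairs, N input frames yield N-1 combined frames, i.e., output at essentially the input frame rate rather than half of it.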
Conclusion
4. Any inquiry concerning this communication or earlier communications from the examiner should be directed to Michael Tomaszewski whose telephone number is (313)446-4863. The examiner can normally be reached M-F 5:30 am - 2:30 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Peter H Choi can be reached at (469) 295-9171. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MICHAEL TOMASZEWSKI/Primary Examiner, Art Unit 3681