DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statements (IDS) submitted on 09/12/2023 and 02/06/2025 have been entered and considered. Initialed copies of the PTO-1449 by the Examiner are attached.
Specification
The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed.
Claim Objections
Claim 21 is objected to because of the following informalities:
In claim 21, line 1, the phrase “An method” should read “A method” because the word ‘method’ does not begin with a vowel.
Appropriate correction is required.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 21, 24-25, 27, 29-36 and 38-39 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Zhao (Pub No.: US20190139190A1).
As to independent claim 21, Zhao teaches a method for processing images (color imaging in surgical systems, and are more particularly related to demosaicing a frame of pixels – see [p][0002]), the method comprising: receiving an image frame (100, pixel layout – see Fig 1A) that includes a plurality of first pixel values at a plurality of first locations on the image frame (for e.g. blue pixels, represented by Bs in the pixel layout – see Fig 1A), a plurality of second pixel values at a plurality of second locations on the image frame (for e.g. green pixels, represented by Gs in the pixel layout – see Fig 1A), and a plurality of third pixel values at a plurality of third locations on the image frame (for e.g. red pixels, represented by Rs in the pixel layout – see Fig 1A); removing the plurality of first pixel values from the image frame (100G with red and blue pixels removed – see Fig 1B); determining a plurality of estimated second pixel values at the plurality of first locations and the plurality of third locations on the image frame ([i]n RECONSTRUCT GREEN PIXELS process 310, sometimes referred to as process 310, demosaic module 243 reconstructs missing green pixels so that all locations in frame 100G (FIG. 1B) have a green pixel value – see [p][0060]); determining a plurality of estimated third pixel values at the plurality of first locations and the plurality of second locations on the image frame ([a]t each location missing a red pixel in frame 100R (FIG. 1C), RECONSTRUCT RED AND BLUE PIXELS process 311 reconstructs red pixels so that all these locations have a reconstructed red pixel r.
In frame 100R, red pixels that were captured by image capture sensor 225 are represented by “R,” and are sometimes referred to as original red pixel R – see [p][0065]); and generating a processed image frame based on the plurality of second pixel values, the plurality of estimated second pixel values, the plurality of third pixel values, and the plurality of estimated third pixel values ([t]he color components of the full resolution frame created in the enhanced demosaicing process are defined by a color space. Typical color spaces include: a color space that includes a red color component R, a green color component G, and a blue color component B, which is referred to as a RGB color space – see [p][0146]).
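As an illustrative aside only (not part of the claim mapping), the cited demosaicing flow (splitting the Bayer frame into per-color frames with missing sites, estimating the missing samples, and recombining) can be sketched as follows. All function names are hypothetical, and simple neighbor averaging stands in for Zhao's adaptive reconstruction:

```python
# Toy sketch of the cited flow: split an RGGB Bayer frame into
# per-color frames with missing sites (cf. frames 100G/100R/100B),
# fill the missing samples, and recombine. Neighbor averaging
# replaces Zhao's adaptive, edge-aware reconstruction.

def bayer_color(x, y):
    """Color at (x, y) in an RGGB layout: R at even/even, B at odd/odd."""
    if y % 2 == 0:
        return 'R' if x % 2 == 0 else 'G'
    return 'G' if x % 2 == 0 else 'B'

def split_channel(mosaic, color):
    """Keep only samples of one color; other sites become None."""
    return [[v if bayer_color(x, y) == color else None
             for x, v in enumerate(row)]
            for y, row in enumerate(mosaic)]

def fill_missing(frame):
    """Estimate each missing sample as the mean of its known 8-neighbors."""
    h, w = len(frame), len(frame[0])
    out = [row[:] for row in frame]
    for y in range(h):
        for x in range(w):
            if out[y][x] is None:
                nbrs = [frame[j][i]
                        for j in range(max(0, y - 1), min(h, y + 2))
                        for i in range(max(0, x - 1), min(w, x + 2))
                        if frame[j][i] is not None]
                out[y][x] = sum(nbrs) / len(nbrs)
    return out

def demosaic(mosaic):
    """Full-resolution R, G, and B frames from one Bayer mosaic."""
    return {c: fill_missing(split_channel(mosaic, c)) for c in 'RGB'}
```

On a uniform gray mosaic every reconstructed channel comes out flat, which is a quick way to confirm the plumbing.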
Regarding claim 24, Zhao teaches the method of claim 21, wherein determining the plurality of estimated second pixel values at the plurality of first locations and the plurality of third locations on the image frame comprises: interpolating from one or more of the plurality of second pixel values at one or more of the plurality of second locations neighboring each first location of the plurality of first locations to determine a corresponding estimated second pixel value at the respective first location (ADAPTIVE RECONSTRUCTION OF GREEN PIXELS process 305, sometimes referred to as process 305, generates a reconstructed green pixel g for the location. Herein, when it is stated that a reconstructed green pixel g is generated, it means that a value of the pixel is generated which represents the intensity of that pixel – see [p][0062]; For example, when the current pixel is a green pixel G at location (x, y) (FIG. 12A or 12B), process 302 uses the second order series expansion to generate estimated values for the red and blue pixels at location (x, y) – [p][0132]; and Through simple manipulations, it can be shown that the interpolations f_h(x, y) and f_v(x, y) for missing pixels can be expressed as follows (using the second order Taylor series expansion) Horizontally along x:
f_h(x, y)=[f(x+1, y)+f(x−1, y)]/2 −[f(x+2, y)+f(x−2, y)−2*f(x, y)]/4
Vertically along y: f_v(x, y)=[f(x, y+1)+f(x, y−1)]/2 −[f(x, y+2)+f(x, y−2)−2*f(x, y)]/4 – see [p][0134]); and interpolating from one or more of the plurality of second pixel values at one or more of the plurality of second locations neighboring each third location of the plurality of third locations to determine a corresponding estimated second pixel value at the respective third location (see [p][0137]).
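For reference only, the quoted second order Taylor series interpolations from [p][0134] can be transcribed directly; here `f` is an arbitrary intensity function of integer pixel coordinates, and the sanity check simply confirms that the second-difference correction vanishes on a locally linear ramp:

```python
# Direct transcription of the quoted interpolations (Zhao [p][0134]);
# f is any function of integer pixel coordinates (x, y).

def f_h(f, x, y):
    """Horizontal interpolation along x."""
    return (f(x + 1, y) + f(x - 1, y)) / 2 \
        - (f(x + 2, y) + f(x - 2, y) - 2 * f(x, y)) / 4

def f_v(f, x, y):
    """Vertical interpolation along y."""
    return (f(x, y + 1) + f(x, y - 1)) / 2 \
        - (f(x, y + 2) + f(x, y - 2) - 2 * f(x, y)) / 4

# On a linear ramp the second-difference correction is zero, so the
# estimate reproduces the ramp exactly.
ramp = lambda x, y: 3 * x + 5 * y + 7
assert abs(f_h(ramp, 10, 4) - ramp(10, 4)) < 1e-9
assert abs(f_v(ramp, 10, 4) - ramp(10, 4)) < 1e-9
```

Per the passage cited at [p][0063], Zhao selects between the horizontal and vertical estimates using local edge estimators; the transcription above only restates the quoted formulas.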
Regarding claim 25, Zhao teaches the method of claim 21, wherein determining the plurality of estimated third pixel values at the plurality of first locations and the plurality of second locations on the image frame comprises: interpolating from one or more of the plurality of third pixel values at one or more of the plurality of third locations neighboring each first location of the plurality of first locations to determine a corresponding estimated third pixel value at the respective first location (RECONSTRUCT RED AND BLUE PIXELS process 311 reconstructs red pixels so that all these locations have a reconstructed red pixel r. In frame 100R, red pixels that were captured by image capture sensor 225 are represented by “R,” and are sometimes referred to as original red pixel R. At each location missing a blue pixel in frame 100B (FIG. 1D), process 311 reconstructs blue pixels b so that all these locations have a reconstructed blue pixel b. – see [p][0065]; For example, when the current pixel is a green pixel G at location (x, y) (FIG. 12A or 12B), process 302 uses the second order series expansion to generate estimated values for the red and blue pixels at location (x, y) – [p][0132]; and Through simple manipulations, it can be shown that the interpolations f_h(x, y) and f_v(x, y) for missing pixels can be expressed as follows (using the second order Taylor series expansion) Horizontally along x:
f_h(x, y)=[f(x+1, y)+f(x−1, y)]/2 −[f(x+2, y)+f(x−2, y)−2*f(x, y)]/4
Vertically along y: f_v(x, y)=[f(x, y+1)+f(x, y−1)]/2 −[f(x, y+2)+f(x, y−2)−2*f(x, y)]/4 – see [p][0134]); and interpolating from one or more of the plurality of third pixel values at one or more of the plurality of third locations neighboring each second location of the plurality of second locations to determine a corresponding estimated third pixel value at the respective second location (see [p][0137]).
Regarding claim 27, Zhao teaches the method of claim 21, further comprising: modifying a brightness component for one or more of the plurality of second pixel values or one or more of the plurality of third pixel values to generate a contrast enhanced image frame (ENHANCED DEMOSAICING process 1300 with local contrast enhancement, sometimes referred to as process 1300. In process 1300, RECEIVE CAPTURED FRAME process 301 and RECONSTRUCT GREEN PIXELS process 310 – see [p][0150]); and blending the contrast enhanced image frame with the processed image frame (see [p][0138]).
Regarding claim 29, Zhao teaches the method of claim 28, further comprising: based on one or more of the image frame, the processed image frame, the sharpened image frame, or the contrast enhanced image frame (local contrast enhancement is implemented in the enhanced demosaicing process – see [p][0142]), determining that a current iteration of image processing has met a threshold (see [p][0056]); and generating the partial-resolution image frame in response to the determination (enhancement of contrast are also applicable to a system in which the response for one of a plurality of components in a color space is degraded – see [p][0145]).
Regarding claim 30, Zhao teaches the method of claim 21, wherein the plurality of first pixel values correspond to a first color, the plurality of second pixel values correspond to a second color different from the first color, and the plurality of third pixel values correspond to a third color different from the first color and the second color (see Fig 1A-1D).
Regarding claim 31, Zhao teaches the method of claim 30, wherein the image frame is captured by and received from a sensor including a Bayer color filter array, and wherein the first color, the second color, and the third color correspond to one of red, green, or blue (The pixels in the received frame have the pixel layout of frame 100 (FIG. 1A) due to Bayer color filter array 223 – see [p][0059] and Fig 1A-1D).
As to independent claim 32, Zhao teaches a method for processing images (color imaging in surgical systems, and are more particularly related to demosaicing a frame of pixels – see [p][0002]), the method comprising: receiving an image including a plurality of pixels having a plurality of captured pixel values at a plurality of locations (100, pixel layout – see Fig 1A); excluding, from the image, a first subset of the plurality of captured pixel values for a first subset of the plurality of pixels at a first subset of the plurality of locations (for e.g. blue pixels, represented by Bs in the pixel layout – see Fig 1A); inferring a plurality of unknown pixel values for a remaining second subset of the plurality of pixels at the first subset of the plurality of locations ([i]n RECONSTRUCT GREEN PIXELS process 310, sometimes referred to as process 310, demosaic module 243 reconstructs missing green pixels so that all locations in frame 100G (FIG. 1B) have a green pixel value – see [p][0060]); and generating a processed image using a second subset of the plurality of captured pixel values for the remaining second subset of the plurality of pixels and the plurality of unknown pixel values inferred for the remaining second subset of the plurality of pixels at the first subset of the plurality of locations ([t]he color components of the full resolution frame created in the enhanced demosaicing process are defined by a color space. Typical color spaces include: a color space that includes a red color component R, a green color component G, and a blue color component B, which is referred to as a RGB color space – see [p][0146]).
Regarding claim 33, Zhao teaches the method of claim 32, wherein the first subset of the plurality of pixels includes first color pixels at the first subset of the plurality of locations in the image, and wherein the second subset of the plurality of pixels includes second color pixels at a second subset of the plurality of locations in the image and third color pixels at a third subset of the plurality of locations in the image (see Fig 1A-1D).
Regarding claim 34, Zhao teaches the method of claim 33, further comprising: inferring a plurality of unknown pixel values for the second color pixels at the third subset of the plurality of locations in the image (ADAPTIVE RECONSTRUCTION OF GREEN PIXELS process 305, sometimes referred to as process 305, generates a reconstructed green pixel g for the location. Herein, when it is stated that a reconstructed green pixel g is generated, it means that a value of the pixel is generated which represents the intensity of that pixel – see [p][0062]; For example, when the current pixel is a green pixel G at location (x, y) (FIG. 12A or 12B), process 302 uses the second order series expansion to generate estimated values for the red and blue pixels at location (x, y) – [p][0132]; and Through simple manipulations, it can be shown that the interpolations f_h(x, y) and f_v(x, y) for missing pixels can be expressed as follows (using the second order Taylor series expansion) Horizontally along x:
f_h(x, y)=[f(x+1, y)+f(x−1, y)]/2 −[f(x+2, y)+f(x−2, y)−2*f(x, y)]/4
Vertically along y: f_v(x, y)=[f(x, y+1)+f(x, y−1)]/2 −[f(x, y+2)+f(x, y−2)−2*f(x, y)]/4 – see [p][0134]); inferring a plurality of unknown pixel values for the third color pixels at the second subset of the plurality of locations in the image (RECONSTRUCT RED AND BLUE PIXELS process 311 reconstructs red pixels so that all these locations have a reconstructed red pixel r. In frame 100R, red pixels that were captured by image capture sensor 225 are represented by “R,” and are sometimes referred to as original red pixel R. At each location missing a blue pixel in frame 100B (FIG. 1D), process 311 reconstructs blue pixels b so that all these locations have a reconstructed blue pixel b. – see [p][0065]; For example, when the current pixel is a green pixel G at location (x, y) (FIG. 12A or 12B), process 302 uses the second order series expansion to generate estimated values for the red and blue pixels at location (x, y) – [p][0132]; and Through simple manipulations, it can be shown that the interpolations f_h(x, y) and f_v(x, y) for missing pixels can be expressed as follows (using the second order Taylor series expansion) Horizontally along x:
f_h(x, y)=[f(x+1, y)+f(x−1, y)]/2 −[f(x+2, y)+f(x−2, y)−2*f(x, y)]/4
Vertically along y: f_v(x, y)=[f(x, y+1)+f(x, y−1)]/2 −[f(x, y+2)+f(x, y−2)−2*f(x, y)]/4 – see [p][0134]); and generating the processed image further based on the plurality of unknown pixel values inferred for the second color pixels at the third subset of the plurality of locations in the image and the plurality of unknown pixel values inferred for the third color pixels at the second subset of the plurality of locations in the image ([a]fter the reconstruction, a full resolution frame of pixels is obtained, e.g., a frame with a Y pixel, a U pixel, and a V pixel at each location in the frame. In one aspect, this full resolution frame of pixels is sent to a display unit for display – see [p][0041]).
Regarding claim 35, Zhao teaches the method of claim 33, wherein the image is captured by and received from a sensor including a Bayer color filter array, and wherein the first color pixels, the second color pixels, and the third color pixels correspond to one of red, green, or blue (The pixels in the received frame have the pixel layout of frame 100 (FIG. 1A) due to Bayer color filter array 223 – see [p][0059] and Fig 1A-1D).
Regarding claim 36, Zhao teaches the method of claim 32, further comprising: generating one or more of a sharpened image or a contrast enhanced image from the image (ENHANCED DEMOSAICING process 1300 with local contrast enhancement, sometimes referred to as process 1300. In process 1300, RECEIVE CAPTURED FRAME process 301 and RECONSTRUCT GREEN PIXELS process 310 – see [p][0150]); and generating a partial-resolution image based on the processed image and the one or more of the sharpened image or the contrast enhanced image (see [p][0150]).
Regarding claim 38, Zhao teaches the method of claim 36, wherein generating the contrast enhanced image from the image comprises: modifying a brightness component for one or more of the remaining second subset of the plurality of pixels to generate a contrast enhanced image frame ([t]he key idea of enhancing image contrast is to build and boost a brightness component, e.g., a luminance component Y, from the green component, and build chromatic components, e.g., UV (or Cb, Cr) components, from all of the red, green, and blue color component – see [p][0040]).
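As a non-record illustration of the quoted brightness-boost idea (build a brightness component, boost it, and rebuild the pixel from the chromatic components), the sketch below uses standard BT.601 luma weights and an arbitrary gain of 1.2; both are assumptions, not Zhao's values, and unlike the quoted passage (which builds Y from the green component) the sketch derives Y from all three channels:

```python
# Sketch of "boost a brightness component, keep the chromatic
# components" contrast enhancement. BT.601 weights and gain=1.2
# are illustrative assumptions.

def enhance_contrast(r, g, b, gain=1.2):
    y = 0.299 * r + 0.587 * g + 0.114 * b        # brightness (luma)
    cb, cr = b - y, r - y                        # chromatic components
    y2 = min(255.0, y * gain)                    # boost, clip at full scale
    r2, b2 = y2 + cr, y2 + cb                    # rebuild red and blue
    g2 = (y2 - 0.299 * r2 - 0.114 * b2) / 0.587  # solve luma eq. for green
    return r2, g2, b2
```

With gain set to 1.0 the transform is an identity, which makes the round trip easy to verify.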
As to independent claim 39, Zhao teaches a method for processing images, the method (color imaging in surgical systems, and are more particularly related to demosaicing a frame of pixels – see [p][0002]) comprising: receiving an image including a plurality of captured pixel values (100, pixel layout – see Fig 1A); filtering out a first subset of the plurality of captured pixel values from the image (100G with red and blue pixels removed – see Fig 1B); generating a plurality of estimated pixel values ([i]n RECONSTRUCT GREEN PIXELS process 310, sometimes referred to as process 310, demosaic module 243 reconstructs missing green pixels so that all locations in frame 100G (FIG. 1B) have a green pixel value – see [p][0060]); and generating a processed image using a remaining second subset of the plurality of captured pixel values and the plurality of estimated pixel values, and omitting the filtered out first subset of the plurality of captured pixel values ([t]he color components of the full resolution frame created in the enhanced demosaicing process are defined by a color space. Typical color spaces include: a color space that includes a red color component R, a green color component G, and a blue color component B, which is referred to as a RGB color space – see [p][0146]).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 22 and 23 are rejected under 35 U.S.C. 103 as being unpatentable over Zhao (Pub No.: US20190139190A1) in view of Xia et al. (NPL titled: Endoscopic Raman Spectroscopy for Molecular Fingerprinting of Gastric Cancer: Principle to Implementation).
Regarding claim 22, Zhao teaches the method of claim 21, further comprising: removing the plurality of first pixel values from the image frame (100G with red and blue pixels removed – see Fig 1B);
Zhao does not explicitly teach receiving a shifted wavelength input; and based on the shifted wavelength input and a first color associated with the plurality of first pixel values corresponding to the shifted wavelength input.
Xia explicitly teaches receiving a shifted wavelength input; and based on the shifted wavelength input and a first color associated with the plurality of first pixel values corresponding to the shifted wavelength input (“[t]hese shifted wavelengths then traverse the monochromator and detection system which measures their frequency. The specific frequencies of the shifted scattered light reveal the molecular structure of the sample material” – see section 2.2, [p][001]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Zhao of having a method for processing images, the method comprising: receiving an image frame that includes a plurality of first pixel values at a plurality of first locations on the image frame, with the teachings of Xia having receiving a shifted wavelength input; and based on the shifted wavelength input and a first color associated with the plurality of first pixel values corresponding to the shifted wavelength input.
In the combination, Zhao as modified by Xia receives a shifted wavelength input and removes the plurality of first pixel values based on the shifted wavelength input and a first color associated with the plurality of first pixel values corresponding to the shifted wavelength input.
The motivation behind the modification would have been to obtain a method for detecting an abnormal growth by analyzing images captured using an endoscope, since both Xia and Zhao relate to processing medical images, wherein Zhao implements an efficient demosaicing process that generates a plurality of color difference signals from pixels in the frame and then filters each of the color difference signals, thus constructing a full resolution frame of green pixels by adaptively generating missing green pixels from the filtered color difference signal, while Xia provides an effective method for identifying malignant ulcers and shows the capability to analyze the carcinogenesis process (see Zhao (US20190139190A1), Paragraph [0028], and Xia (NPL titled: Endoscopic Raman Spectroscopy for Molecular Fingerprinting of Gastric Cancer: Principle to Implementation), Abstract).
Regarding claim 23, Zhao in view of Xia teaches the method of claim 22, and Zhao teaches wherein the image frame is of a target site including at least a first feature and a second feature having a substantially similar color to the first color ([c]ontrast enhanced images make it easier for surgeons to differentiate critical structures, such as blood vessels from surrounding tissues, when viewing a surgical site on a display of a surgical system. In surgical imaging, the predominant color is red for both tissues and blood vessel – see [p][0039]).
Claims 26, 28, 37 and 40 are rejected under 35 U.S.C. 103 as being unpatentable over Zhao (Pub No.: US20190139190A1) in view of Zhao’068 (Pub No.: US20170301068A1).
Regarding claim 26, Zhao teaches the method of claim 21, including one or more edges detected in one or more of the plurality of second locations (process 305 generates a horizontal local edge estimator and a vertical local edge estimator for the missing green pixel. – see [p][0063]).
Zhao does not explicitly teach further comprising: increasing a sharpness of the detected edges or of the plurality of third locations to generate a sharpened image frame; and blending the sharpened image frame with the processed image frame.
However, Zhao’068 explicitly teaches increasing a sharpness of or the plurality of third locations to generate a sharpened image frame ([a]n image driven sharpness measurement 162 then measures using MTF (modulation transfer function), local statistics or image gradient to measure sharpness values among the multiple channels 182, and produces a maximum sharpness value – see [p][0025]); and blending the sharpened image frame with the processed image frame (weight pixels added back to original and this resulting value provides sharpness transport among the channels by leveraging the different sharpness values at given levels among the different channels – see [p][0048]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Zhao of having a method for processing images, the method comprising: receiving an image frame that includes a plurality of first pixel values at a plurality of first locations on the image frame, with the teachings of Zhao’068 having increasing a sharpness of or the plurality of third locations to generate a sharpened image frame; and blending the sharpened image frame with the processed image frame.
In the combination, Zhao as modified by Zhao’068 increases a sharpness of the detected edges to generate a sharpened image frame and blends the sharpened image frame with the processed image frame.
The motivation behind the modification would have been to improve the sharpness of endoscopic images, thus enhancing the endoscopic image, since both Zhao’068 and Zhao relate to processing medical images, wherein Zhao implements an efficient demosaicing process that generates a plurality of color difference signals from pixels in the frame and then filters each of the color difference signals, thus constructing a full resolution frame of green pixels by adaptively generating missing green pixels from the filtered color difference signal, while Zhao’068 improves the sharpness of a color image without compromising color (see Zhao (US20190139190A1), Paragraph [0028], and Zhao’068 (US20170301068A1), [p][0011]).
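For context only, the "sharpen, then blend back" operation cited from Zhao’068 resembles generic unsharp masking, sketched below on a 1-D row of samples; this is a stand-in illustration of the technique, not Zhao’068’s MTF-based measurement:

```python
# Generic 1-D unsharp masking: subtract a box blur to get a high-pass
# detail signal, scale it, and add it back to the original samples.

def sharpen_row(row, amount=0.5):
    """Boost edges in a 1-D signal; the two endpoints are left unblurred."""
    blurred = ([row[0]]
               + [(row[i - 1] + row[i] + row[i + 1]) / 3
                  for i in range(1, len(row) - 1)]
               + [row[-1]])
    return [p + amount * (p - b) for p, b in zip(row, blurred)]
```

Flat regions pass through unchanged, while step edges gain overshoot and undershoot, i.e. increased local contrast across the edge.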
Regarding claim 28, Zhao teaches the method of claim 21, further comprising: generating a contrast enhanced image frame from the image frame (ENHANCED DEMOSAICING process 1300 with local contrast enhancement, sometimes referred to as process 1300. In process 1300, RECEIVE CAPTURED FRAME process 301 and RECONSTRUCT GREEN PIXELS process 310 – see [p][0150]); and generating a partial-resolution image frame based on the processed image frame and the contrast enhanced image frame (see [p][0150]).
Zhao does not explicitly teach generating a sharpened image frame from the image frame, or basing the partial-resolution image frame on the sharpened image frame.
However, Zhao’068 explicitly teaches generating a sharpened image frame from the image frame (weight pixels added back to original and this resulting value provides sharpness transport among the channels by leveraging the different sharpness values at given levels among the different channels – see [p][0048]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Zhao of having a method for processing images, the method comprising: receiving an image frame that includes a plurality of first pixel values at a plurality of first locations on the image frame, with the teachings of Zhao’068 having generating a sharpened image frame from the image frame.
In the combination, Zhao as modified by Zhao’068 generates a sharpened image frame from the image frame and bases the partial-resolution image frame on the sharpened image frame.
The motivation behind the modification would have been to improve the sharpness of endoscopic images, thus enhancing the endoscopic image, since both Zhao’068 and Zhao relate to processing medical images, wherein Zhao implements an efficient demosaicing process that generates a plurality of color difference signals from pixels in the frame and then filters each of the color difference signals, thus constructing a full resolution frame of green pixels by adaptively generating missing green pixels from the filtered color difference signal, while Zhao’068 improves the sharpness of a color image without compromising color (see Zhao (US20190139190A1), Paragraph [0028], and Zhao’068 (US20170301068A1), [p][0011]).
Regarding claim 37, Zhao does not explicitly teach the method of claim 36, wherein generating the sharpened image from the image comprises: increasing a sharpness of one or more edges detected in one or more of the plurality of locations associated with the remaining second subset of the plurality of pixels to generate a sharpened image frame.
However, Zhao’068 explicitly teaches the method of claim 36, wherein generating the sharpened image from the image comprises: increasing a sharpness of one or more edges detected in one or more of the plurality of locations associated with the remaining second subset of the plurality of pixels to generate a sharpened image frame ([a]n image driven sharpness measurement 162 then measures using MTF (modulation transfer function), local statistics or image gradient to measure sharpness values among the multiple channels 182, and produces a maximum sharpness value – see [p][0025]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Zhao of having a method for processing images, the method comprising: receiving an image frame that includes a plurality of first pixel values at a plurality of first locations on the image frame, with the teachings of Zhao’068 having wherein generating the sharpened image from the image comprises: increasing a sharpness of one or more edges detected in one or more of the plurality of locations associated with the remaining second subset of the plurality of pixels to generate a sharpened image frame.
In the combination, Zhao as modified by Zhao’068 generates the sharpened image by increasing a sharpness of one or more edges detected in one or more of the plurality of locations associated with the remaining second subset of the plurality of pixels.
The motivation behind the modification would have been to improve the sharpness of endoscopic images, thus enhancing the endoscopic image, since both Zhao’068 and Zhao relate to processing medical images, wherein Zhao implements an efficient demosaicing process that generates a plurality of color difference signals from pixels in the frame and then filters each of the color difference signals, thus constructing a full resolution frame of green pixels by adaptively generating missing green pixels from the filtered color difference signal, while Zhao’068 improves the sharpness of a color image without compromising color (see Zhao (US20190139190A1), Paragraph [0028], and Zhao’068 (US20170301068A1), [p][0011]).
Regarding claim 40, Zhao teaches the method of claim 39, further comprising: generating a contrast enhanced image from the image (ENHANCED DEMOSAICING process 1300 with local contrast enhancement, sometimes referred to as process 1300. In process 1300, RECEIVE CAPTURED FRAME process 301 and RECONSTRUCT GREEN PIXELS process 310 – see [p][0150]); and generating a partial-resolution image based on the processed image and the contrast enhanced image (see [p][0150]).
Zhao does not explicitly teach generating a sharpened image from the image; the sharpened image.
However, Zhao’068 explicitly teaches generating a sharpened image from the image ([a]n image driven sharpness measurement 162 then measures using MTF (modulation transfer function), local statistics or image gradient to measure sharpness values among the multiple channels 182, and produces a maximum sharpness value – see [p][0025]) and the sharpened image (weight pixels added back to original and this resulting value provides sharpness transport among the channels by leveraging the different sharpness values at given levels among the different channels – see [p][0048]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Zhao of having a method for processing images, the method comprising: receiving an image frame that includes a plurality of first pixel values at a plurality of first locations on the image frame, with the teachings of Zhao’068 having generating a sharpened image from the image and the sharpened image.
Wherein having Zhao having generating a sharpened image from the image and the sharpened image.
The motivation behind the modification would have been to improve the sharpness of an endoscopic image, thus enhancing the endoscopic image, since both Zhao’068 and Zhao relate to processing medical images, wherein Zhao implements an efficient demosaicing process that generates a plurality of color difference signals from pixels in the frame, filters each of the color difference signals in the plurality of color difference signals, and constructs a full resolution frame of green pixels by adaptively generating missing green pixels from the filtered color difference signals, while Zhao’068 improves the sharpness of a color image without compromising color (see Zhao (US20190139190A1), Paragraph [0028], and Zhao’068 (US20170301068), Paragraph [0011]).
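For illustration only, the per-channel sharpness measurement relied upon from Zhao’068 can be sketched as a simple gradient-based proxy (Zhao’068 also contemplates MTF and local statistics; the function below is a hypothetical simplification, not the reference’s implementation):

```python
import numpy as np

def channel_sharpness(img):
    """Mean gradient magnitude per color channel, a simple stand-in for an
    image-driven sharpness measurement among multiple channels.

    img: H x W x C array. Returns (per-channel sharpness, maximum value).
    """
    # Gradients along the row and column axes for every channel at once.
    gy, gx = np.gradient(img.astype(float), axis=(0, 1))
    # Average gradient magnitude over the image, one value per channel.
    per_channel = np.sqrt(gx ** 2 + gy ** 2).mean(axis=(0, 1))
    return per_channel, per_channel.max()
```

Under this proxy, a channel with stronger local gradients reports a higher sharpness value, and the maximum across channels identifies the sharpest channel, mirroring the “maximum sharpness value” produced in Zhao’068.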
Double Patenting
The non-statutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A non-statutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on non-statutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA . A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA /25, or PTO/AIA /26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.
Claims 21-22, 26 and 28-31 are rejected on the ground of non-statutory double patenting as being unpatentable over claims 1, 3-5, 7, 9 and 12 of US Patent No.: US 11,790,487 B2. The conflicting claims are not identical because copending application claim 21 defines a method, while claim 1 of the patent defines an apparatus corresponding to the steps performed by the method. However, the conflicting claims are not patentably distinct from each other because:
· Claims 21 and 1 recite common subject matter;
· Whereby patent claim 1, which recites the open ended transitional phrase “comprising”, does not preclude the method as being performed by an apparatus, and
· Whereby the elements of patent claim 1 are fully anticipated by claim 21, and anticipation is “the ultimate or epitome of obviousness” (In re Kalm, 154 USPQ 10 (CCPA 1967), also In re Dailey, 178 USPQ 293 (CCPA 1973) and In re Pearson, 181 USPQ 641 (CCPA 1974)).
This Application No.: 18465206
US Patent No.: US 11,790,487 B2
21. An method for processing images, the method comprising:
1. A medical device, comprising:
receiving an image frame that includes a plurality of first pixel values at a plurality of first locations on the image frame, a plurality of second pixel values at a plurality of second locations on the image frame, and a plurality of third pixel values at a plurality of third locations on the image frame;
a shaft; a sensor coupled to a distal end of the shaft and including a filter array, wherein the sensor is configured to capture a raw image, and the filter array is configured to filter the raw image into a frame of raw pixel values that includes a plurality of first pixel values, a plurality of second pixel values, and a plurality of third pixel values; and a processor and non-transitory computer readable medium storing demosaicing instructions that, when executed by the processor, causes the processor to:
removing the plurality of first pixel values from the image frame;
exclude the plurality of first pixel values from the frame of raw pixel values;
determining a plurality of estimated second pixel values at the plurality of first locations and the plurality of third locations on the image frame;
generate a plurality of estimated second pixel values at locations of the plurality of excluded first pixel values and the plurality of third pixel values on the frame;
determining a plurality of estimated third pixel values at the plurality of first locations and the plurality of second locations on the image frame;
generate a plurality of estimated third pixel values at locations of the plurality of excluded first pixel values and the plurality of second pixel values on the frame;
and generating a processed image frame based on the plurality of second pixel values, the plurality of estimated second pixel values, the plurality of third pixel values, and the plurality of estimated third pixel values.
and create a processed image having a partial-resolution frame from the plurality of second pixel values, the plurality of estimated second pixel values, the plurality of third pixel values, and the plurality of estimated third pixel values.
22 (New) The method of claim 21, further comprising: receiving a shifted wavelength input; and based on the shifted wavelength input and a first color associated with the plurality of first pixel values corresponding to the shifted wavelength input, removing the plurality of first pixel values from the image frame.
7. The medical device of claim 1, wherein the demosaicing instructions stored in the non-transitory computer readable medium cause the processor to: receive a shifted wavelength input to determine a color pixel value of the plurality of first pixels.
26. (New) The method of claim 21, further comprising: increasing a sharpness of one or more edges detected in one or more of the plurality of second locations or the plurality of third locations to generate a sharpened image frame; and blending the sharpened image frame with the processed image frame.
3. The medical device of claim 2, wherein the demosaicing instructions stored in the non-transitory computer readable medium cause the processor to: output a sharpened enhancement image created from performing the sharpened enhancement step; and blend the sharpened enhancement image with the processed image.
28. (New) The method of claim 21, further comprising: generating a sharpened image frame from the image frame; generating a contrast enhanced image frame from the image frame; and generating a partial-resolution image frame based on the processed image frame, the sharpened image frame, and the contrast enhanced image frame.
4. The medical device of claim 3, wherein the demosaicing instructions stored in the non-transitory computer readable medium cause the processor to: set a luminance value for each of the plurality of second pixels and the plurality of third pixels; and perform a contrast enhancement of the plurality of second pixels and the plurality of third pixels by modifying the luminance values to increase a contrast of the processed image.
5. The medical device of claim 4, wherein the demosaicing instructions stored in the non-transitory computer readable medium cause the processor to: output a contrast enhancement image created from performing the contrast enhancement step; and blend the contrast enhancement image with the processed image.
29. (New) The method of claim 28, further comprising: based on one or more of the image frame, the processed image frame, the sharpened image frame, or the contrast enhanced image frame, determining a current iteration of image processing has met a threshold; and generating the partial-resolution image frame in response to the determination.
12. The medical device of claim 1, wherein each location in the partial-resolution frame of pixels includes one captured color pixel value and one reconstructed color pixel value such that at least one color pixel value from the frame of raw pixels is excluded.
30. (New) The method of claim 21, wherein the plurality of first pixel values correspond to a first color, the plurality of second pixel values correspond to a second color different from the first color, and the plurality of third pixel values correspond to a third color different from the first color and the second color.
9. The medical device of claim 1, wherein the sensor includes an RGB image sensor, and the filter array includes a red-green-blue Bayer color filter array; and wherein the plurality of first pixels includes red pixels, the plurality of second pixels includes blue pixels, and the plurality of third pixels includes green pixels.
31. (New) The method of claim 30, wherein the image frame is captured by and received from a sensor including a Bayer color filter array, and wherein the first color, the second color, and the third color correspond to one of red, green, or blue.
9. The medical device of claim 1, wherein the sensor includes an RGB image sensor, and the filter array includes a red-green-blue Bayer color filter array; and wherein the plurality of first pixels includes red pixels, the plurality of second pixels includes blue pixels, and the plurality of third pixels includes green pixels.
Claims 32, 35-36 and 38 are rejected on the ground of non-statutory double patenting as being unpatentable over claims 1, 4-5 and 10 of US Patent No.: US 11,790,487 B2. The conflicting claims are not identical because copending application claim 32 defines a method, while claim 1 of the patent defines an apparatus corresponding to the steps performed by the method. However, the conflicting claims are not patentably distinct from each other because:
· Claims 32 and 1 recite common subject matter;
· Whereby patent claim 1, which recites the open ended transitional phrase “comprising”, does not preclude the method as being performed by an apparatus, and
· Whereby the elements of patent claim 1 are fully anticipated by claim 32, and anticipation is “the ultimate or epitome of obviousness” (In re Kalm, 154 USPQ 10 (CCPA 1967), also In re Dailey, 178 USPQ 293 (CCPA 1973) and In re Pearson, 181 USPQ 641 (CCPA 1974)).
This Application No.: 18465206
US Patent No.: US 11,790,487 B2.
32. (New) A method for processing images, the method comprising: receiving an image including a plurality of pixels having a plurality of captured pixel values at a plurality of locations; excluding, from the image, a first subset of the plurality of captured pixel values for a first subset of the plurality of pixels at a first subset of the plurality of locations; inferring a plurality of unknown pixel values for a remaining second subset of the plurality of pixels at the first subset of the plurality of locations; and generating a processed image using a second subset of the plurality of captured pixel values for the remaining second subset of the plurality of pixels and the plurality of unknown pixel values inferred for the remaining second subset of the plurality of pixels at the first subset of the plurality of locations.
1. A medical device, comprising: a shaft; a sensor coupled to a distal end of the shaft and including a filter array, wherein the sensor is configured to capture a raw image, and the filter array is configured to filter the raw image into a frame of raw pixel values that includes a plurality of first pixel values, a plurality of second pixel values, and a plurality of third pixel values; and a processor and non-transitory computer readable medium storing demosaicing instructions that, when executed by the processor, causes the processor to: exclude the plurality of first pixel values from the frame of raw pixel values; generate a plurality of estimated second pixel values at locations of the plurality of excluded first pixel values and the plurality of third pixel values on the frame; generate a plurality of estimated third pixel values at locations of the plurality of excluded first pixel values and the plurality of second pixel values on the frame; and create a processed image having a partial-resolution frame from the plurality of second pixel values, the plurality of estimated second pixel values, the plurality of third pixel values, and the plurality of estimated third pixel values.
35. (New) The method of claim 33, wherein the image is captured by and received from a sensor including a Bayer color filter array, and wherein the first color pixels, the second color pixels, and the third color pixels correspond to one of red, green, or blue.
10. The medical device of claim 1, wherein the sensor includes an RGB+Ir image sensor, and the filter array includes a red-green-blue-infrared Bayer color filter array; and wherein the plurality of first pixels includes blue pixels, the plurality of second pixels includes red pixels and green pixels, and the plurality of third pixels includes infrared pixels.
36. (New) The method of claim 32, further comprising: generating one or more of a sharpened image or a contrast enhanced image from the image; and generating a partial-resolution image based on the processed image and the one or more of the sharpened image or the contrast enhanced image.
5. The medical device of claim 4, wherein the demosaicing instructions stored in the non-transitory computer readable medium cause the processor to: output a contrast enhancement image created from performing the contrast enhancement step; and blend the contrast enhancement image with the processed image.
38. (New) The method of claim 36, wherein generating the contrast enhanced image from the image comprises: modifying a brightness component for one or more of the remaining second subset of the plurality of pixels to generate a contrast enhanced image frame.
4. The medical device of claim 3, wherein the demosaicing instructions stored in the non-transitory computer readable medium cause the processor to: set a luminance value for each of the plurality of second pixels and the plurality of third pixels; and perform a contrast enhancement of the plurality of second pixels and the plurality of third pixels by modifying the luminance values to increase a contrast of the processed image.
Claims 39-40 are rejected under the judicially created doctrine of obviousness-type double patenting as being unpatentable over claims 15-16 of U.S. Patent No. 11,790,487 B2. The conflicting claims are not identical because patent claim 15 requires the additional steps of “capturing a raw image; filtering the raw image into a frame of raw pixels including a plurality of first pixels”, not required by instant claim 39. However, the conflicting claims are not patentably distinct from each other because:
• Claims 15 and 39 recite common subject matter;
• Whereby claim 39, which recites the open ended transitional phrase “comprising”, does not preclude the additional elements recited by patent claim 15, and
• Whereby the elements of claim 39 are fully anticipated by patent claim 15, and anticipation is “the ultimate or epitome of obviousness” (In re Kalm, 154 USPQ 10 (CCPA 1967), also In re Dailey, 178 USPQ 293 (CCPA 1973) and In re Pearson, 181 USPQ 641 (CCPA 1974)).
This Application No.: 18465206
US Patent No.: US 11,790,487 B2.
39. (New) A method for processing images, the method comprising: receiving an image including a plurality of captured pixel values; filtering out a first subset of the plurality of captured pixel values from the image; generating a plurality of estimated pixel values; and generating a processed image using a remaining second subset of the plurality of captured pixel values and the plurality of estimated pixel values, and omitting the filtered out first subset of the plurality of captured pixel values.
15. An image processing method, comprising: capturing a raw image; filtering the raw image into a frame of raw pixels including a plurality of first pixels, a plurality of second pixels, and a plurality of third pixels; excluding at least the plurality of first pixels; generating missing second pixels along the frame at pixel locations of the plurality of excluded first pixels and the plurality of third pixels; generating missing third pixels along the frame at pixel locations of the plurality of excluded first pixels and the plurality of second pixels; and constructing a partially-sampled digital image from the plurality of second pixels, the plurality of third pixels, the generated second pixels, and the generated third pixels.
40. (New) The method of claim 39, further comprising: generating a sharpened image from the image; generating a contrast enhanced image from the image; and generating a partial-resolution image based on the processed image, the sharpened image, and the contrast enhanced image.
16. The method of claim 15, further comprising detecting edges within the frame of raw pixels and enhancing a sharpness of the edges to increase an edge detail in the partially-sampled digital image.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
TAKAHASHI et al (Pub No.: 20200163538) discloses an image acquisition system including: a light source unit; an imaging unit; a medical tool detection unit; and an image processing unit. The light source unit applies, to an affected part in which a medical tool that emits fluorescence by irradiation of excitation light is present, normal light and the excitation light. The imaging unit images the affected part under irradiation with the excitation light and under irradiation with the normal light, and acquires a first image signal obtained by imaging the fluorescence and a second image signal obtained by imaging the normal light. The medical tool detection unit detects, on the basis of the first image signal, an area in which the medical tool is present in a first image obtained from the first image signal. The image processing unit generates a display image using a result of detecting the area in which the medical tool is present and the second image signal.
Zhao (US Patent No.: 10210599) discloses an efficient demosaicing method includes reconstructing missing green pixels after estimating green-red color difference signals and green-blue color difference signals that are used in the reconstruction, and then constructing missing red and blue pixels using the color difference signals. This method creates a full resolution frame of red, green and blue pixels. The full resolution frame of pixels is sent to a display unit for display. In an efficient demosaicing process that includes local contrast enhancement, image contrast is enhanced to build and boost a brightness component from the green pixels, and to build chromatic components from all of the red, green, and blue pixels. Color difference signals are used in place of the red and blue pixels in building the chromatic components.
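As context for the demosaicing approach summarized above, a minimal color-difference demosaic for an RGGB Bayer pattern can be sketched as follows (an illustrative simplification: the reference’s filtering and adaptive green reconstruction are reduced to plain neighbor averaging, edge handling wraps around for brevity, and all names are hypothetical):

```python
import numpy as np

def _neighbor_avg(img, mask, offsets):
    """Mean over the neighbors at `offsets` where `mask` is True."""
    v = np.where(mask, img, 0.0)
    m = mask.astype(float)
    s = np.zeros_like(v)
    n = np.zeros_like(m)
    for dy, dx in offsets:
        s += np.roll(v, (dy, dx), axis=(0, 1))
        n += np.roll(m, (dy, dx), axis=(0, 1))
    return s / np.maximum(n, 1.0)

CROSS = [(-1, 0), (1, 0), (0, -1), (0, 1)]
RING8 = CROSS + [(-1, -1), (-1, 1), (1, -1), (1, 1)]

def demosaic_color_difference(mosaic):
    """Reconstruct full-resolution green first, then recover red and blue
    through smoothed green-red / green-blue color difference signals."""
    h, w = mosaic.shape
    r_mask = np.zeros((h, w), bool); r_mask[0::2, 0::2] = True
    b_mask = np.zeros((h, w), bool); b_mask[1::2, 1::2] = True
    g_mask = ~(r_mask | b_mask)

    # 1. Full-resolution green: every red/blue site has four green neighbors.
    G = np.where(g_mask, mosaic, _neighbor_avg(mosaic, g_mask, CROSS))

    # 2. Color difference signals, defined at the sampled red / blue sites.
    gr = np.where(r_mask, G - mosaic, 0.0)   # G - R at red sites
    gb = np.where(b_mask, G - mosaic, 0.0)   # G - B at blue sites

    # 3. Spread the smooth difference signals, then undo them against green.
    gr_full = np.where(r_mask, gr, _neighbor_avg(gr, r_mask, RING8))
    gb_full = np.where(b_mask, gb, _neighbor_avg(gb, b_mask, RING8))
    return np.stack([G - gr_full, G, G - gb_full], axis=-1)
```

For a constant-color scene this sketch reconstructs the full-resolution frame exactly; the reference’s contribution lies in how the color difference signals are filtered and the missing green pixels adaptively generated, which this sketch deliberately omits.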
SUZUKI (Pub No.: 20170118477) discloses an apparatus having a luminance/color difference transforming unit for generating, from first and second component data of image data in a Bayer arrangement, luminance component data formed from a luminance component, first color difference component data formed from a first color difference component, second color difference component data formed from a second color difference component, and GH component data. An encoding unit encodes each of the luminance component data, the first color difference component data, the second color difference component data, and the GH component data.
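A luminance/color-difference decomposition of Bayer data of the general kind described can be illustrated with a per-RGGB-quad transform (a hypothetical, invertible analogue for illustration; it is not SUZUKI’s actual transform, and the GH-like component here is simply the difference between the two green samples of a quad):

```python
def bayer_quad_transform(r, g1, g2, b):
    """Transform one RGGB quad into luminance, two color differences, and a
    green-difference component (illustrative analogue only)."""
    ga = (g1 + g2) / 2.0          # average green
    gd = g1 - g2                  # green difference ("GH"-like component)
    y = (r + 2.0 * ga + b) / 4.0  # luminance
    cr = r - ga                   # first color difference
    cb = b - ga                   # second color difference
    return y, cr, cb, gd

def bayer_quad_inverse(y, cr, cb, gd):
    """Exact inverse of bayer_quad_transform."""
    ga = y - (cr + cb) / 4.0      # recover the average green
    return cr + ga, ga + gd / 2.0, ga - gd / 2.0, cb + ga
```

Because the transform is exactly invertible, each component stream can be encoded independently (as in the encoding unit described) without losing the original Bayer samples.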
Zhao (Pub No.: 20150042775) discloses a method that involves adaptively generating (302) missing green pixels in the received frame of pixels, with selection of a color difference signal among the color difference signals using a local edge estimator. A green pixel in the full resolution frame of green pixels is transformed into a brightness pixel by scaling the green pixel with a scale factor. The brightness pixel is provided in a color space different from the red-green-blue color space. The color difference signals are transformed into chrominance pixels in the color space different from the red-green-blue color space.
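The “local edge estimator” idea in this reference — choosing how a missing green pixel is generated based on local edge direction — can be sketched as follows (an illustrative simplification with hypothetical names; interior pixels only):

```python
import numpy as np

def adaptive_green(mosaic, y, x):
    """Estimate a missing green value at (y, x) on a Bayer mosaic by
    interpolating along the direction with the smaller local gradient,
    a stand-in for selection by a local edge estimator."""
    dh = abs(mosaic[y, x - 1] - mosaic[y, x + 1])   # horizontal variation
    dv = abs(mosaic[y - 1, x] - mosaic[y + 1, x])   # vertical variation
    if dh < dv:   # smoother horizontally: interpolate along the row
        return (mosaic[y, x - 1] + mosaic[y, x + 1]) / 2.0
    if dv < dh:   # smoother vertically: interpolate along the column
        return (mosaic[y - 1, x] + mosaic[y + 1, x]) / 2.0
    return (mosaic[y, x - 1] + mosaic[y, x + 1]
            + mosaic[y - 1, x] + mosaic[y + 1, x]) / 4.0
```

Interpolating along, rather than across, the lower-gradient direction keeps edges from being blurred into neighboring rows or columns.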
Inquiries
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANDRAE S ALLISON whose telephone number is (571)270-1052. The examiner can normally be reached on Monday-Friday 9am-5pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chineyere Wills-Burns, can be reached on (571) 272-9752. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ANDRAE S ALLISON/Primary Examiner, Art Unit 2673
February 5, 2026