DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are:
“Processing unit” in claims 1, 33, and 68.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
A review of the published specification shows that the following appears to be the corresponding structure described in the specification for the 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, limitation:
The “processing unit” is described as a processor, a computing device, a logic circuit, or an FPGA in paragraph 82 of the published application.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Objections
Claims 1 and 33 are objected to because of the following informalities:
In claim 1, line 8, “subpixels” should be changed to “sub-pixels” for consistency with the rest of the claims.
In claim 33, line 6, “subpixels” should be changed to “sub-pixels” for consistency with the rest of the claims.
Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 39, 68, 84, and 91-94 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Regarding claim 39, the claim recites “the surgical scene”. This limitation lacks antecedent basis, as no surgical scene has previously been set forth. For examination purposes, this limitation will be interpreted as reciting “a surgical scene”.
Regarding claim 68, the claim recites “based on (a) the estimated amount of blood and (b) at least a subset of the one or more light signals”. It is unclear what these two limitations modify. Do they both modify step (i), step (ii), or both steps (i) and (ii)? Does only one modify step (i) and the other step (ii)? Something else? Additionally, it is unclear how a processing unit would estimate an amount of blood based on the estimated amount of blood; such an interpretation appears circular. Clarification is required. For examination purposes, a reference disclosing determining an amount of blood and an amount of fluorophores or fluorescent material present in the tissue will be interpreted as meeting this limitation.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 68 and 91 are rejected under 35 U.S.C. 102(a)(1) and (a)(2) as being anticipated by Makinouchi (US20170079741).
Regarding claim 68, Makinouchi discloses a system comprising:
a processing unit including an image of a tissue, wherein the image includes data from one or more imaging sensors, wherein each of the one or more imaging sensors includes a plurality of pixels (Makinouchi, Para 39-41; “In the present embodiment, the light detection unit 3 outputs the detection result of the image sensor 13 as a digital signal in an image format (hereinafter referred to as captured image data). In the following description, an image captured by the image sensor 13 is referred to as captured image as appropriate. Data on the captured image is referred to as captured image data. The captured image is of a full-spec high definition (HD format) for the sake of description, but there are no limitations on the number of pixels of the captured image, pixel arrangement (aspect ratio), the gray-scale of pixel values, and the like. […] As described above, the wavelength of infrared light detected by each light receiving element of the image sensor 13 is determined by the position of the light receiving element, and hence each pixel value of captured image data is associated with the wavelength of infrared light detected by the image sensor 13. The position of a pixel on the captured image data is represented by (i,j), and the pixel disposed at (i,j) is represented by P(i,j).”),
wherein at least one pixel of the plurality of pixels includes a first plurality of sub-pixels sensitive to a first band or wavelength of light and a second plurality of sub-pixels sensitive to a second band or wavelength of light, wherein the first band or wavelength of light is distinct from the second band or wavelength of light (Makinouchi, Para 49; “The pixel P(i,j) in FIG. 2 is a pixel corresponding to a light receiving element that detects infrared light having the first wavelength in the image sensor 13. The pixel P(i+1,j) is a pixel corresponding to a light receiving element that detects infrared light having the second wavelength. The pixel P(i+2,j) is a pixel corresponding to a light receiving element that detects infrared light having the third wavelength in the image sensor 13.”) (Makinouchi, Para 45-47; “A first wavelength λ1 can be set to any desired wavelength. […] A second wavelength λ2 can be set to any desired wavelength different from the first wavelength λ1. A third wavelength λ3 can be set to any desired wavelength different from both of the first wavelength λ1 and the second wavelength λ2.”),
wherein the processing unit is configured to perform a quantitative analysis of one or more features or fiducials that are detectable within the image of the tissue based on one or more light signals obtained or registered using each of the first plurality of sub-pixels and the second plurality of sub-pixels (Makinouchi, Para 52; “The calculation unit 15 calculates the index Q(i,j) at the pixel P(i,j) as described above. While changing the values of i and j, the calculation unit 15 calculates indices at other pixels to calculate the distribution of indices. For example, similarly to the pixel P(i,j), a pixel P(i+3,j) corresponds to a light receiving element that detects infrared light having the first wavelength in the image sensor 13, and hence the calculation unit 15 uses the pixel value of the pixel P(i+3,j) instead of the pixel value of the pixel P(i,j) to calculate an index at another pixel. For example, the calculation unit 15 calculates an index Q(i+1,j) by using the pixel value of the pixel P(i+3,j) corresponding to the detection result of infrared light having the first wavelength, the pixel value of a pixel P(i+4,j) corresponding to the detection result of infrared light having the second wavelength, and the pixel value of a pixel P(i+5,j) corresponding to the detection result of infrared light having the third wavelength.”) (Makinouchi, Para 60; “The index Q(i,j) calculated by Expression (1) becomes larger as the site has a larger amount of lipid, and hence the pixel value of the pixel (i,j) becomes larger as the site has a larger amount of lipid. For example, a large pixel value generally corresponds to the result that the pixel is displayed brightly, and hence, as the site has a larger amount of lipid, the site is displayed in a more brightly emphasized manner. An operator may request that the site where the amount of water is large be displayed brightly.”), and
wherein the processing unit is configured to (i) estimate an amount of blood in the tissue and (ii) determine an amount or a concentration of fluorophores or fluorescent material present in the tissue based on (a) the estimated amount of blood and (b) at least a subset of the one or more light signals (Makinouchi, Para 161; “Such a scanning projection apparatus 1 can be utilized for examination and observation of lesions that change the distribution of blood or water, such as edema and inflammation, by projecting a component image indicating the amount of water included in the gum BT1, for example.”) (Makinouchi, Para 88; “For example, the scanning projection apparatus 1 shown in FIG. 1 may detect a fluorescent image of the tissue BT added with a fluorescent substance, and generate a component image of the tissue BT on the basis of the detection result. In this case, a fluorescent substance such as indocyanine green (ICG) is added to the tissue BT (affected area) prior to the processing of capturing the tissue BT. For example, the irradiation unit 2 includes a light source that outputs detection light (excitation light) having a wavelength that excites the fluorescent substance added to the tissue BT, and irradiates the tissue BT with the detection light output from the light source.”) (Makinouchi, Para 89; “The light detection unit 3 includes a light detector having sensitivity to fluorescent light radiated from the fluorescent substance, and captures an image (fluorescent image) of the tissue BT irradiated with the detection light. For extracting fluorescent light from light radiated from the tissue BT, for example, an optical member having characteristics of transmitting fluorescent light and reflecting at least a part of the light other than the fluorescent light may be used as the wavelength selection mirror 23. A filter having such characteristics may be disposed in an optical path between the wavelength selection mirror 23 and the light detector. 
The filter may be insertable in and removable from the optical path between the wavelength selection mirror 23 and the light detector, or may be exchangeable in accordance with the type of fluorescent substance, that is, the wavelength of excitation light.”).
Regarding claim 91, Makinouchi discloses all of the limitations of claim 68 as discussed above.
Makinouchi further discloses wherein the processing unit is configured to generate one or more combined images of the tissue based on image data or image signals derived from each of the first plurality of sub-pixels and the second plurality of sub-pixels (Makinouchi, Para 53-60; “In this manner, the calculation unit 15 calculates an index Q(i,j) of each of a plurality of pixels, thereby calculating the distribution of indices. The calculation unit 15 may calculate an index Q(i,j) for every pixel in the range where pixel values necessary for calculating the indices Q(i,j) are included in captured pixel data. The calculation unit 15 may calculate the distribution of indices Q(i,j) by calculating indices Q(i,j) for part of the pixels and performing interpolation operation by using the calculated indices Q(i,j). […] Thus, the data generation unit 16 in FIG. 1 rounds numerical values as appropriate to convert the index Q(i,j) into data in a predetermined image format. For example, the data generation unit 16 generates data on an image about components of the tissue BT by using the result calculated by the calculation unit 15. In the following description, the image about components of the tissue BT is referred to as component image (or projection image) as appropriate. Data on the component image is referred to as component image data (or projection image data). […] The index Q(i,j) calculated by Expression (1) becomes larger as the site has a larger amount of lipid, and hence the pixel value of the pixel (i,j) becomes larger as the site has a larger amount of lipid. For example, a large pixel value generally corresponds to the result that the pixel is displayed brightly, and hence, as the site has a larger amount of lipid, the site is displayed in a more brightly emphasized manner. An operator may request that the site where the amount of water is large be displayed brightly.”).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3, 18-19, 23, 33, 37-38, and 43-44 are rejected under 35 U.S.C. 103 as being unpatentable over Makinouchi (US20170079741) and Talbert et al. (US20200404131, hereafter Talbert).
Regarding claims 1 and 33, Makinouchi discloses a system and method comprising:
one or more imaging sensors configured to capture an image of a tissue, wherein each of the one or more imaging sensors includes a plurality of pixels (Makinouchi, Para 39-41; “In the present embodiment, the light detection unit 3 outputs the detection result of the image sensor 13 as a digital signal in an image format (hereinafter referred to as captured image data). In the following description, an image captured by the image sensor 13 is referred to as captured image as appropriate. Data on the captured image is referred to as captured image data. The captured image is of a full-spec high definition (HD format) for the sake of description, but there are no limitations on the number of pixels of the captured image, pixel arrangement (aspect ratio), the gray-scale of pixel values, and the like. […] As described above, the wavelength of infrared light detected by each light receiving element of the image sensor 13 is determined by the position of the light receiving element, and hence each pixel value of captured image data is associated with the wavelength of infrared light detected by the image sensor 13. The position of a pixel on the captured image data is represented by (i,j), and the pixel disposed at (i,j) is represented by P(i,j).”),
wherein at least one pixel of the plurality of pixels includes a first plurality of sub-pixels sensitive to a first band or wavelength of light and a second plurality of sub-pixels sensitive to a second band or wavelength of light, wherein the first band or wavelength of light is distinct from the second band or wavelength of light (Makinouchi, Para 49; “The pixel P(i,j) in FIG. 2 is a pixel corresponding to a light receiving element that detects infrared light having the first wavelength in the image sensor 13. The pixel P(i+1,j) is a pixel corresponding to a light receiving element that detects infrared light having the second wavelength. The pixel P(i+2,j) is a pixel corresponding to a light receiving element that detects infrared light having the third wavelength in the image sensor 13.”) (Makinouchi, Para 45-47; “A first wavelength λ1 can be set to any desired wavelength. […] A second wavelength λ2 can be set to any desired wavelength different from the first wavelength λ1. A third wavelength λ3 can be set to any desired wavelength different from both of the first wavelength λ1 and the second wavelength λ2.”),
wherein a modality of the one or more imaging sensors is fluorescence imaging or laser speckle imaging (Makinouchi, Para 88; “For example, the scanning projection apparatus 1 shown in FIG. 1 may detect a fluorescent image of the tissue BT added with a fluorescent substance, and generate a component image of the tissue BT on the basis of the detection result. In this case, a fluorescent substance such as indocyanine green (ICG) is added to the tissue BT (affected area) prior to the processing of capturing the tissue BT.”) (Makinouchi, Para 89; “The light detection unit 3 includes a light detector having sensitivity to fluorescent light radiated from the fluorescent substance, and captures an image (fluorescent image) of the tissue BT irradiated with the detection light. For extracting fluorescent light from light radiated from the tissue BT, for example, an optical member having characteristics of transmitting fluorescent light and reflecting at least a part of the light other than the fluorescent light may be used as the wavelength selection mirror 23. A filter having such characteristics may be disposed in an optical path between the wavelength selection mirror 23 and the light detector.”) (Makinouchi, Para 84; “In the case where the projection unit 5 irradiates the tissue BT with laser light directly as in the present embodiment, flickering called speckle, which is easily visually recognized, occurs in the component image projected on the tissue BT, and hence a user can easily distinguish the component image from the tissue BT owing to the speckle.”), and
a processing unit operatively coupled to the one or more imaging sensors, wherein the processing unit is configured to perform a quantitative analysis of one or more features or fiducials that are detectable within the image of the tissue based on one or more light signals obtained or registered using each of the first plurality of sub-pixels and the second plurality of sub-pixels (Makinouchi, Para 52; “The calculation unit 15 calculates the index Q(i,j) at the pixel P(i,j) as described above. While changing the values of i and j, the calculation unit 15 calculates indices at other pixels to calculate the distribution of indices. For example, similarly to the pixel P(i,j), a pixel P(i+3,j) corresponds to a light receiving element that detects infrared light having the first wavelength in the image sensor 13, and hence the calculation unit 15 uses the pixel value of the pixel P(i+3,j) instead of the pixel value of the pixel P(i,j) to calculate an index at another pixel. For example, the calculation unit 15 calculates an index Q(i+1,j) by using the pixel value of the pixel P(i+3,j) corresponding to the detection result of infrared light having the first wavelength, the pixel value of a pixel P(i+4,j) corresponding to the detection result of infrared light having the second wavelength, and the pixel value of a pixel P(i+5,j) corresponding to the detection result of infrared light having the third wavelength.”) (Makinouchi, Para 60; “The index Q(i,j) calculated by Expression (1) becomes larger as the site has a larger amount of lipid, and hence the pixel value of the pixel (i,j) becomes larger as the site has a larger amount of lipid. For example, a large pixel value generally corresponds to the result that the pixel is displayed brightly, and hence, as the site has a larger amount of lipid, the site is displayed in a more brightly emphasized manner. An operator may request that the site where the amount of water is large be displayed brightly.”).
Makinouchi does not clearly and explicitly disclose wherein each of the first plurality of sub-pixels and the second plurality of sub-pixels are configured to generate image data having distinct image modalities, wherein one of the distinct image modalities is a color image.
In an analogous optical diagnostic device field of endeavor Talbert discloses a sensor configured to generate image data having distinct image modalities, wherein one of the distinct image modalities is a color image and the other imaging modality includes fluorescence imaging (Talbert, Para 7; “The traditional endoscope with the image sensor placed in the handpiece unit is further limited to capturing only color images. However, in some implementations, it may be desirable to capture images with fluorescence, hyperspectral, and/or laser scanning data in addition to color image data. Fluorescence imaging captures the emission of light by a substance that has absorbed electromagnetic radiation and “glows” as it emits a relaxation wavelength.”) (Talbert, Para 119; “Multiple exposure frames may be combined to generate a black-and-white or RGB color image. Additionally, hyperspectral, fluorescence, and/or laser mapping imaging data may be overlaid on a black-and-white or RGB image.”) (Talbert, Para 50; “Disclosed herein are systems, methods, and devices for digital imaging in a light deficient environment that employ minimal area image sensors and can be configured for laser scanning and color imaging”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Makinouchi wherein each of the first plurality of sub-pixels and the second plurality of sub-pixels are configured to generate image data having distinct image modalities, wherein one of the distinct image modalities is a color image, in order to identify critical structures in a body and provide precise and valuable information about a body cavity, as taught by Talbert (Talbert, Para 8-9), which improves diagnosis.
Regarding claim 2, Makinouchi as modified by Talbert above discloses all of the limitations of claim 1 as discussed above.
Makinouchi further discloses wherein the system further comprises a first optical illumination in a third band or wavelength of light and a second optical illumination in a fourth band or wavelength of light (Makinouchi, Para 96; “FIG. 5 is a diagram showing a modification of the irradiation unit 2. The irradiation unit 2 in FIG. 5 includes a plurality of light sources including a light source 10 a, a light source 10 b, and a light source 10 c. The light source 10 a, the light source 10 b, and the light source 10 c each include an LED that outputs infrared light, and output infrared light having different wavelengths. The light source 10 a outputs infrared light in a wavelength band that includes the first wavelength but does not include the second wavelength and the third wavelength. The light source 10 b outputs infrared light in a wavelength band that includes the second wavelength but does not include the first wavelength and the third wavelength. The light source 10 c outputs infrared light in a wavelength band that includes the third wavelength but does not include the first wavelength and the second wavelength.”).
Regarding claim 3, Makinouchi as modified by Talbert above discloses all of the limitations of claim 2 as discussed above.
Makinouchi further discloses wherein the first optical illumination is selected to generate data for the first plurality of sub-pixels sensitive to the first band or wavelength of light, and wherein the second optical illumination is selected to generate data for the second plurality of sub-pixels sensitive to the second band or wavelength of light (Makinouchi, Para 96; “FIG. 5 is a diagram showing a modification of the irradiation unit 2. The irradiation unit 2 in FIG. 5 includes a plurality of light sources including a light source 10 a, a light source 10 b, and a light source 10 c. The light source 10 a, the light source 10 b, and the light source 10 c each include an LED that outputs infrared light, and output infrared light having different wavelengths. The light source 10 a outputs infrared light in a wavelength band that includes the first wavelength but does not include the second wavelength and the third wavelength. The light source 10 b outputs infrared light in a wavelength band that includes the second wavelength but does not include the first wavelength and the third wavelength. The light source 10 c outputs infrared light in a wavelength band that includes the third wavelength but does not include the first wavelength and the second wavelength.”) (Makinouchi, Para 45-47; “A first wavelength λ1 can be set to any desired wavelength. […] A second wavelength λ2 can be set to any desired wavelength different from the first wavelength λ1. A third wavelength λ3 can be set to any desired wavelength different from both of the first wavelength λ1 and the second wavelength λ2.”) (Makinouchi, Para 52; “The calculation unit 15 calculates the index Q(i,j) at the pixel P(i,j) as described above. While changing the values of i and j, the calculation unit 15 calculates indices at other pixels to calculate the distribution of indices. 
For example, similarly to the pixel P(i,j), a pixel P(i+3,j) corresponds to a light receiving element that detects infrared light having the first wavelength in the image sensor 13, and hence the calculation unit 15 uses the pixel value of the pixel P(i+3,j) instead of the pixel value of the pixel P(i,j) to calculate an index at another pixel. For example, the calculation unit 15 calculates an index Q(i+1,j) by using the pixel value of the pixel P(i+3,j) corresponding to the detection result of infrared light having the first wavelength, the pixel value of a pixel P(i+4,j) corresponding to the detection result of infrared light having the second wavelength, and the pixel value of a pixel P(i+5,j) corresponding to the detection result of infrared light having the third wavelength.”).
Regarding claim 18, Makinouchi as modified by Talbert above discloses all of the limitations of claim 1 as discussed above.
Makinouchi further discloses wherein the first band or wavelength of light and the second band or wavelength of light correspond to distinct bands or wavelengths of visible light, infrared light, or ultraviolet light (Makinouchi, Para 45-46; “A first wavelength λ1 can be set to any desired wavelength. For example, the first wavelength λ1 is set to a wavelength at which the absorbance is relatively small in the distribution of the absorbance of the first substance (lipid) in the near-infrared wavelength region and the absorbance is relatively small in the distribution of the absorbance of the second substance (water) in the near-infrared wavelength region. As one example, in the present embodiment, the first wavelength λ1 is set to about 1150 nm. Infrared light having the first wavelength λ1 is small in energy absorbed by lipid and strong in light intensity radiated from lipid. Infrared light having the first wavelength λ1 is small in energy absorbed by water and strong in light intensity radiated from water. A second wavelength λ2 can be set to any desired wavelength different from the first wavelength λ1. For example, the second wavelength λ2 is set to a wavelength at which the absorbance of the first substance (lipid) is higher than the absorbance of the second substance (water). As one example, in the present embodiment, the second wavelength λ2 is set to about 1720 nm. When applied to an object (for example, a tissue), infrared light having the second wavelength λ2 is larger in energy absorbed by the object and weaker in light intensity radiated from the object as the proportion of lipid to water included in the object becomes larger.”).
Regarding claim 19, Makinouchi as modified by Talbert above discloses all of the limitations of claim 1 as discussed above.
Makinouchi further discloses wherein the first band or wavelength of light is within the infrared (Makinouchi, Para 45; “A first wavelength λ1 can be set to any desired wavelength. For example, the first wavelength λ1 is set to a wavelength at which the absorbance is relatively small in the distribution of the absorbance of the first substance (lipid) in the near-infrared wavelength region and the absorbance is relatively small in the distribution of the absorbance of the second substance (water) in the near-infrared wavelength region. As one example, in the present embodiment, the first wavelength λ1 is set to about 1150 nm. Infrared light having the first wavelength λ1 is small in energy absorbed by lipid and strong in light intensity radiated from lipid. Infrared light having the first wavelength λ1 is small in energy absorbed by water and strong in light intensity radiated from water.”), and wherein the second band or wavelength of light is in the visible or the ultraviolet (Makinouchi, Para 87; “For example, the scanning projection apparatus 1 may generate a component image of the tissue BT by detecting visible light radiated from the tissue BT with the light detection unit 3 and using the detection result of the light detection unit 3.”) (Makinouchi, Para 46; “A second wavelength λ2 can be set to any desired wavelength different from the first wavelength λ1.”).
Regarding claim 23, Makinouchi as modified by Talbert above discloses all of the limitations of claim 1 as discussed above.
Makinouchi further discloses one or more band pass filters for filtering out one or more bands or wavelengths of light emitted, reflected, or received from the tissue (Makinouchi, Para 35; “The infrared filter 12 transmits infrared light having a predetermined wavelength band among light passing through the imaging optical system 11, and blocks infrared light in wavelength bands other than the predetermined wavelength band. The image sensor 13 detects at least a part of the infrared light radiated from the tissue BT via the imaging optical system 11 and the infrared filter 12.”) (Makinouchi, Para 89; “A filter having such characteristics may be disposed in an optical path between the wavelength selection mirror 23 and the light detector. The filter may be insertable in and removable from the optical path between the wavelength selection mirror 23 and the light detector, or may be exchangeable in accordance with the type of fluorescent substance, that is, the wavelength of excitation light.”) (Makinouchi, Figure 6).
Regarding claim 37, Makinouchi as modified by Talbert above discloses all of the limitations of claim 33 as discussed above.
Makinouchi further discloses collecting the one or more light signals from each of the first plurality of sub-pixels and the second plurality of sub-pixels substantially in parallel (Makinouchi, Para 36-42; “The image sensor 13 includes a plurality of light receiving elements arranged two-dimensionally, such as a CMOS sensor or a CCD sensor. The light receiving elements are sometimes called pixels or subpixels. […] In the present embodiment, the light detection unit 3 outputs the detection result of the image sensor 13 as a digital signal in an image format (hereinafter referred to as captured image data). In the following description, an image captured by the image sensor 13 is referred to as captured image as appropriate. Data on the captured image is referred to as captured image data. The captured image is of a full-spec high definition (HD format) for the sake of description, but there are no limitations on the number of pixels of the captured image, pixel arrangement (aspect ratio), the gray-scale of pixel values, and the like. […] A first pixel corresponding to a light receiving element of the image sensor 13 that detects infrared light having the first wavelength is, for example, a pixel group satisfying i=3N, where N is a positive integer. A second pixel corresponding to a light receiving element that detects infrared light having the second wavelength is, for example, a pixel group satisfying i=3N+1. A third pixel corresponding to a light receiving element that detects infrared light having the third wavelength is a pixel group satisfying i=3N+2.”).
Regarding claim 38, Makinouchi as modified by Talbert above discloses all of the limitations of claim 37 as discussed above.
Makinouchi further discloses performing the quantitative analysis substantially in real time based on the one or more light signals collected substantially in parallel (Makinouchi, Para 171-172; “The infrared camera 71 is a light detection unit that detects light radiated from the tissue. The projection unit 5 can project an image generated by the control device (not shown) by using the detection result of the infrared camera (captured image). The display device 31 can display an image acquired by the infrared camera 71 and a component image generated by the control device. For example, a visible camera is provided to the surgery lamp 80, and the display device 31 can also display an image acquired by the visible camera. […] The surgery support system SYS projects an image indicating information on a tissue on the tissue. Thus, a legion, as well as nerves, solid organs such as pancreas, fat tissue, blood vessels, and the like can be easily recognized to reduce invasiveness of an operation or treatment and enhance the efficiency of an operation or treatment.”).
Regarding claim 43, Makinouchi as modified by Talbert above discloses all of the limitations of claim 33 as discussed above.
Makinouchi further discloses wherein the quantitative analysis comprises an identification or classification of one or more tissue regions in the tissue based on the one or more light signals (Makinouchi, Para 84; “Such a component image can be used, for example, to determine whether the tissue BT has an affected area, such as a tumor. For example, when the tissue BT has a tumor, the ratio of lipid or water included in the tumor area differs from that in a tissue with no tumor. The ratio may differ depending on the type of tumor.”) (Makinouchi, Para 81; “At Step S2, the light detection unit 3 detects light (for example, infrared light) that is radiated from the tissue BT irradiated with the detection light. At Step S3, the calculation unit 15 in the image generation unit 4 calculates component information on the amount of lipid and the amount of water in the tissue BT.”) (Makinouchi, Para 52; “The calculation unit 15 calculates the index Q(i,j) at the pixel P(i,j) as described above. While changing the values of i and j, the calculation unit 15 calculates indices at other pixels to calculate the distribution of indices. For example, similarly to the pixel P(i,j), a pixel P(i+3,j) corresponds to a light receiving element that detects infrared light having the first wavelength in the image sensor 13, and hence the calculation unit 15 uses the pixel value of the pixel P(i+3,j) instead of the pixel value of the pixel P(i,j) to calculate an index at another pixel. 
For example, the calculation unit 15 calculates an index Q(i+1,j) by using the pixel value of the pixel P(i+3,j) corresponding to the detection result of infrared light having the first wavelength, the pixel value of a pixel P(i+4,j) corresponding to the detection result of infrared light having the second wavelength, and the pixel value of a pixel P(i+5,j) corresponding to the detection result of infrared light having the third wavelength.”) (Makinouchi, Para 60; “The index Q(i,j) calculated by Expression (1) becomes larger as the site has a larger amount of lipid, and hence the pixel value of the pixel (i,j) becomes larger as the site has a larger amount of lipid. For example, a large pixel value generally corresponds to the result that the pixel is displayed brightly, and hence, as the site has a larger amount of lipid, the site is displayed in a more brightly emphasized manner. An operator may request that the site where the amount of water is large be displayed brightly.”).
Regarding claim 44, Makinouchi as modified by Talbert above discloses all of the limitations of claim 33 as discussed above.
Makinouchi further discloses wherein the quantitative analysis comprises a multispectral classification of one or more tissue regions in the tissue based on the one or more light signals having a plurality of different wavelengths (Makinouchi, Para 84; “Such a component image can be used, for example, to determine whether the tissue BT has an affected area, such as a tumor. For example, when the tissue BT has a tumor, the ratio of lipid or water included in the tumor area differs from that in a tissue with no tumor. The ratio may differ depending on the type of tumor.”) (Makinouchi, Para 81; “At Step S2, the light detection unit 3 detects light (for example, infrared light) that is radiated from the tissue BT irradiated with the detection light. At Step S3, the calculation unit 15 in the image generation unit 4 calculates component information on the amount of lipid and the amount of water in the tissue BT.”) (Makinouchi, Para 52; “The calculation unit 15 calculates the index Q(i,j) at the pixel P(i,j) as described above. While changing the values of i and j, the calculation unit 15 calculates indices at other pixels to calculate the distribution of indices. For example, similarly to the pixel P(i,j), a pixel P(i+3,j) corresponds to a light receiving element that detects infrared light having the first wavelength in the image sensor 13, and hence the calculation unit 15 uses the pixel value of the pixel P(i+3,j) instead of the pixel value of the pixel P(i,j) to calculate an index at another pixel. 
For example, the calculation unit 15 calculates an index Q(i+1,j) by using the pixel value of the pixel P(i+3,j) corresponding to the detection result of infrared light having the first wavelength, the pixel value of a pixel P(i+4,j) corresponding to the detection result of infrared light having the second wavelength, and the pixel value of a pixel P(i+5,j) corresponding to the detection result of infrared light having the third wavelength.”) (Makinouchi, Para 60; “The index Q(i,j) calculated by Expression (1) becomes larger as the site has a larger amount of lipid, and hence the pixel value of the pixel (i,j) becomes larger as the site has a larger amount of lipid. For example, a large pixel value generally corresponds to the result that the pixel is displayed brightly, and hence, as the site has a larger amount of lipid, the site is displayed in a more brightly emphasized manner. An operator may request that the site where the amount of water is large be displayed brightly.”).
Claim 39 is rejected under 35 U.S.C. 103 as being unpatentable over Makinouchi and Talbert as applied to claim 33 above, and further in view of Ramsteiner (US20140213860).
Regarding claim 39, Makinouchi as modified by Talbert above discloses all of the limitations of claim 33 as discussed above.
Makinouchi further discloses wherein the quantitative analysis comprises a quantification of an amount of fluorescence emitted from the surgical scene or a concentration of a fluorescing material or substance of the one or more features or fiducials (Makinouchi, Para 88; “For example, the scanning projection apparatus 1 shown in FIG. 1 may detect a fluorescent image of the tissue BT added with a fluorescent substance, and generate a component image of the tissue BT on the basis of the detection result. In this case, a fluorescent substance such as indocyanine green (ICG) is added to the tissue BT (affected area) prior to the processing of capturing the tissue BT. For example, the irradiation unit 2 includes a light source that outputs detection light (excitation light) having a wavelength that excites the fluorescent substance added to the tissue BT, and irradiates the tissue BT with the detection light output from the light source.”) (Makinouchi, Para 52; “The calculation unit 15 calculates the index Q(i,j) at the pixel P(i,j) as described above. While changing the values of i and j, the calculation unit 15 calculates indices at other pixels to calculate the distribution of indices. For example, similarly to the pixel P(i,j), a pixel P(i+3,j) corresponds to a light receiving element that detects infrared light having the first wavelength in the image sensor 13, and hence the calculation unit 15 uses the pixel value of the pixel P(i+3,j) instead of the pixel value of the pixel P(i,j) to calculate an index at another pixel. 
For example, the calculation unit 15 calculates an index Q(i+1,j) by using the pixel value of the pixel P(i+3,j) corresponding to the detection result of infrared light having the first wavelength, the pixel value of a pixel P(i+4,j) corresponding to the detection result of infrared light having the second wavelength, and the pixel value of a pixel P(i+5,j) corresponding to the detection result of infrared light having the third wavelength.”) (Makinouchi, Para 60; “The index Q(i,j) calculated by Expression (1) becomes larger as the site has a larger amount of lipid, and hence the pixel value of the pixel (i,j) becomes larger as the site has a larger amount of lipid. For example, a large pixel value generally corresponds to the result that the pixel is displayed brightly, and hence, as the site has a larger amount of lipid, the site is displayed in a more brightly emphasized manner. An operator may request that the site where the amount of water is large be displayed brightly.”).
Makinouchi does not clearly and explicitly disclose wherein the quantification is determined using spectral fitting or absorption spectroscopy.
In an analogous optical diagnostic device field of endeavor Ramsteiner discloses wherein quantification is determined using spectral fitting or absorption spectroscopy (Ramsteiner, Para 21; “The fluorescent-light-emitting region 113 can be configured here such that it constitutes, viewed from the light-detecting device 12, a nearly point-type light source arranged in the tissue, which light source can be used for absorption spectroscopy of the tissue located between the two devices 11 and 12, the fluids located between the two devices 11 and 12, and/or the particles located between the two devices 11 and 12. In this manner, the light 14′, which is trapped in the device 11, is coupled out and emitted in a point-type fashion through the region 113 (see arrows 14). The absorption spectroscopy of the fluorescent light 14 emitted according to the disclosure can then be carried out by the light-detecting device 12 or another suitable device using methods known therefor.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Makinouchi wherein the quantification is determined using spectral fitting or absorption spectroscopy in order to reliably quantify different analytes as taught by Ramsteiner (Ramsteiner, Para 3-4), which improves diagnosis.
Claim 46 is rejected under 35 U.S.C. 103 as being unpatentable over Makinouchi and Talbert as applied to claim 33 above, and further in view of Ayaz et al. (US20150038812, hereafter Ayaz).
Regarding claim 46, Makinouchi as modified by Talbert above discloses all of the limitations of claim 33 as discussed above.
Makinouchi does not clearly and explicitly disclose wherein the quantitative analysis comprises a determination of the real-time blood oxygenation based on the one or more light signals.
In an analogous optical diagnostic device field of endeavor Ayaz discloses wherein quantitative analysis comprises a determination of real-time blood oxygenation based on one or more light signals (Ayaz, Para 50; “was performed at real-time during an experiment in order to calculate the oxygenation changes in the prefrontal cortex of a subject at real-time”) (Ayaz, Para 73; “The fNIR-BCI Server software on the Protocol-Computer received the raw fNIR signals, calculated the oxygenation changes at real-time using modified Beer Lambert Law and transformed oxygenation changes to a number between 0 to 100, called fNIR-BCI index as described above. fNIR-BCI index is transmitted to the game at real-time through TCP/IP networking”) (Ayaz, Para 33; “FIG. 2 is another depiction of fNIR and the light path that fNIR may take in passing through tissue in the brain of a test subject. Although not limiting, an example experimental description of fNIR is now included. Typically, an optical apparatus for fNIR Spectroscopy consists of at least one light source that shines light to the head and a light detector that receives light after it has interacted with the tissue. Photons”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Makinouchi wherein the quantitative analysis comprises a determination of the real-time blood oxygenation based on the one or more light signals in order to measure tissue activity as taught by Ayaz (Ayaz, Para 7-9), which aids diagnosis.
Claim 47 is rejected under 35 U.S.C. 103 as being unpatentable over Makinouchi and Talbert as applied to claim 33 above, and further in view of Lennartz et al. (US20210228287, hereafter Lennartz).
Regarding claim 47, Makinouchi as modified by Talbert above discloses all of the limitations of claim 33 as discussed above.
Makinouchi does not clearly and explicitly disclose wherein the quantitative analysis comprises a quantitative speckle analysis based on the one or more light signals.
In an analogous optical diagnostic device field of endeavor, Lennartz discloses wherein quantitative analysis comprises a quantitative speckle analysis based on one or more light signals (Lennartz, Para 90; “The light sensor 540 may be an image sensor such as CCD or CMOS array and may output the sensed results, which may be laser speckle data for processing into laser speckle contrast images (LSCIs). The LSCI allows a clinician to see a quantitative mapping of local blood flow dynamics in a wide area so that the clinician can quickly and accurately assess the blood flows within the tissue 330”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Makinouchi wherein the quantitative analysis comprises a quantitative speckle analysis based on the one or more light signals in order to quickly and accurately assess the blood flows within the tissue as taught by Lennartz (Lennartz, Para 90), which aids diagnosis.
Claim 84 is rejected under 35 U.S.C. 103 as being unpatentable over Makinouchi and Felce et al. (US20240027464, hereafter Felce).
Regarding claim 84, Makinouchi discloses all of the limitations of claim 68 as discussed above.
Makinouchi does not clearly and explicitly disclose wherein the processing unit is configured to quantify an amount of fluorescence emitted from the tissue or an amount of fluorescent material present in the tissue based on a lighting condition of the image, wherein the lighting condition includes an illumination bias, an illumination profile, or an illumination gradient of the image.
In an analogous optical diagnostic device field of endeavor Felce discloses quantifying an amount of fluorescence emitted from a tissue or an amount of fluorescent material present in the tissue (Felce, Para 11; “in a first aspect, the invention provides a method of identifying pathogens in a sample using a fluorescence imaging system configured to illuminate the sample with an excitation light source and detect resulting fluorescence in multiple colour channels”) based on a lighting condition of an image, wherein the lighting condition includes an illumination bias, an illumination profile, or an illumination gradient of the image (Felce, Para 163; “The intensity profile of the external light source may vary across the focal plane, in which case the intensity of detected candidate objects may depend on the position of the candidate object within the external light source beam. This may be corrected for by determining the intensity profile of the external light source and applying a correction to the measured intensity data based on the position of the candidate object”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Makinouchi wherein the processing unit is configured to quantify an amount of fluorescence emitted from the tissue or an amount of fluorescent material present in the tissue based on a lighting condition of the image, wherein the lighting condition includes an illumination bias, an illumination profile, or an illumination gradient of the image in order to correct for the determined light intensity as taught by Felce (Felce, Para 163), which leads to more accurate measurements and improves diagnosis.
Claims 92-94 are rejected under 35 U.S.C. 103 as being unpatentable over Makinouchi and Valdes et al. (US20160278678, hereafter Valdes).
Regarding claim 92, Makinouchi discloses all of the limitations of claim 91 as discussed above.
Makinouchi does not clearly and explicitly disclose wherein the processing unit is configured to generate a quantitative map of fluorescence in the tissue based on the image data or image signals.
In an analogous optical diagnostic device field of endeavor Valdes discloses generating a quantitative map of fluorescence in tissue based on image data or image signals (Valdes, Para 71; “The processor 180 also uses the hyperspectral camera 128 to capture a hyperspectral fluorescent image stack and executes dFI (depth-resolved fluorescent imaging) and/or qdFI (quantified depth-resolved fluorescent imaging) routines from memory 178 to process the hyperspectral fluorescent image stack to map depth and quantity of fluorophore in tissue […] tomographic display of mapped depth and quantity of fluorophore.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Makinouchi wherein the processing unit is configured to generate a quantitative map of fluorescence in the tissue based on the image data or image signals in order to improve the ability of a user to distinguish tissue types as taught by Valdes (Valdes, Para 8), which improves diagnosis.
Regarding claim 93, Makinouchi as modified by Valdes above discloses all of the limitations of claim 92 as discussed above.
Makinouchi does not clearly and explicitly disclose wherein the quantitative map of fluorescence indicates an amount or concentration of fluorescence material present in one or more regions of the tissue.
Valdes further discloses wherein the quantitative map of fluorescence indicates an amount or concentration of fluorescence material present in one or more regions of the tissue (Valdes, Para 25; “Hyperspectral images taken under specific wavelengths of light are displayed as fluorescent images, and corrected for optical properties of tissue to provide quantitative maps of fluorophore concentration”) (Valdes, Para 91; “thereby generate a map of tissue classifications up to one centimeter deep in the brain surface. The classifier operates on chromophore concentrations, including oxygenated and deoxygenated hemoglobin and ratios of oxygenated to deoxygenated hemoglobin, fluorophore concentrations, and optical properties as determined for that voxel. Finally, the images and generated maps are displayed”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Makinouchi wherein the quantitative map of fluorescence indicates an amount or concentration of fluorescence material present in one or more regions of the tissue in order to improve the ability of a user to distinguish tissue types as taught by Valdes (Valdes, Para 8), which improves diagnosis.
Regarding claim 94, Makinouchi as modified by Valdes above discloses all of the limitations of claim 93 as discussed above.
Makinouchi does not clearly and explicitly disclose wherein the processing unit is configured to perform a calibration that correlates an amount of fluorescent light detected by the one or more imaging sensors to the amount or concentration of fluorescent material present in the one or more regions.
Valdes further discloses wherein the processing unit is configured to perform a calibration that correlates an amount of fluorescent light detected by the one or more imaging sensors to the amount or concentration of fluorescent material present in the one or more regions (Valdes, Para 25; “Hyperspectral images taken under specific wavelengths of light are displayed as fluorescent images, and corrected for optical properties of tissue to provide quantitative maps of fluorophore concentration”) (Valdes, Para 91; “thereby generate a map of tissue classifications up to one centimeter deep in the brain surface. The classifier operates on chromophore concentrations, including oxygenated and deoxygenated hemoglobin and ratios of oxygenated to deoxygenated hemoglobin, fluorophore concentrations, and optical properties as determined for that voxel. Finally, the images and generated maps are displayed”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Makinouchi wherein the processing unit is configured to perform a calibration that correlates an amount of fluorescent light detected by the one or more imaging sensors to the amount or concentration of fluorescent material present in the one or more regions in order to improve the ability of a user to distinguish tissue types as taught by Valdes (Valdes, Para 8), which improves diagnosis.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to John Li, whose telephone number is (313) 446-4916. The examiner can normally be reached Monday to Thursday, 5:30 AM to 3:30 PM Eastern.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Pascal Bui-Pho, can be reached at (571) 272-2714. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JOHN D LI/Primary Examiner, Art Unit 3798