Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The two information disclosure statements (“IDS”) filed on 01/10/2024 and 02/08/2024 were reviewed and the listed references were noted.
Drawings
The 2-page drawings have been considered and placed on record in the file.
Status of Claims
Claims 1-18 are pending.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-4, 6-14, and 16-18 are rejected under 35 U.S.C. 103 as being unpatentable over Jarc et al. (US 2026/0004552 A1, with an effective filing date of June 16, 2022), in view of Kim et al. (US 2024/0188797 A1, with an effective filing date of December 9, 2022).
Regarding claim 1, Jarc teaches “A method for processing fluorescent image guided surgical or diagnostic imagery” (Jarc, Abstract discloses; “An illustrative system may access a first image sequence captured by an imaging device during a medical procedure, the first image sequence comprising first images, the first images based on illumination of a scene associated with the medical procedure using visible-spectrum light; access a second image sequence captured by the imaging device during the medical procedure, the second image sequence comprising second images, the second images based on illumination of the scene using non-visible-spectrum light; and provide the first image sequence and the second image sequence to a machine learning module.”), “capturing non-fluorescent images and fluorescent images of an operating field in which at least one fluorescent dye is present with a surgical or diagnostics imaging device” (Jarc, Para. 0023 discloses; “Given a first sequence of images (e.g., visible-spectrum images) of a scene associated with a medical procedure and a second sequence of images (e.g., non-visible-spectrum images) of the scene”; Jarc, Para. 0024 discloses; “Illustrative non-visible-spectrum images include fluorescence images, hyperspectral images, and other types of images that do not rely solely on visible-spectrum illumination.”), “processing the fluorescent images with respect to brightness and coloring” (Jarc, Para. 0049 discloses; “For example, pixels having color or intensity values within one color range or intensity range may correspond to one type of organ (or tissue, or object)”), “generating composite images by overlaying the fluorescent images over the non-fluorescent images” (Jarc, Para. 0055 discloses; “Each second image received by the image blending module 230 is paired with one or more first images (for brevity, first images will be referred to in the singular) that are also received by the image blending module 230. The image blending module 230 may perform geometric transforms to geometrically align the first image and the second image (e.g., so that the images represent a same view of the scene (respectively corresponding pixels represent a same point of the scene).”), “wherein the processing of the fluorescent images comprises inputting the fluorescent images into at least one artificial intelligence model trained on one or more of fluorescent still images and fluorescent video images” (Jarc, Para. 0044 discloses; “Although the working data 180 may include two image sequences of different types of images (e.g., visible-spectrum images and non-visible-spectrum images, respectively), as with the training data 170, in varying embodiments, the working data 180 that is passed to the machine learning model”), and “to identify one or more types of structures by fluorescent dye emission” (Jarc, Para. 0049 discloses; “Features 202 may be detected and identified based on known traits of pixels for the particular type of non-visible-spectrum imaging technology (e.g., fluorescent imaging) used for the second image sequence 116.”). Jarc does not explicitly teach “performing color coding of the fluorescent images by coloring fluorescent parts of the fluorescent images with pre-determined false colors assigned to different types of structures according to the respective one or more identified types of structures”. In an analogous field of endeavor, Kim discloses “performing color coding of the fluorescent images by coloring fluorescent parts of the fluorescent images with pre-determined false colors assigned to different types of structures according to the respective one or more identified types of structures” (Kim, Para. 0010 discloses; “When a suitable pseudo coloring AI model can be selected in accordance with whether a tissue is normal or abnormal (e.g., cancer), it would be possible to obtain a pseudo-coloring resultant image”).
Jarc and Kim are both considered to be analogous to the claimed invention because they are in the same field of using machine learning or artificial intelligence to process fluorescent medical images. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Jarc to incorporate the teachings of Kim in order to perform color coding by adding pre-determined false colors (as disclosed by Kim) to different types of structures (as disclosed by Jarc). One of ordinary skill in the art would have been motivated to combine the previously described methods of Jarc with the teachings of Kim to further enhance a user’s visual experience in differentiating the different types of structures appearing in a medical image when they are presented in a color-coded manner. Accordingly, it would have been obvious to combine Jarc and Kim to obtain the claimed invention of claim 1.
Regarding claim 2, the combination of Jarc in view of Kim teaches the method for processing fluorescent images as claimed in claim 1, “further comprising inputting the non-fluorescent images into the at least one artificial intelligence model such that each non-fluorescent image is coupled to a fluorescent image” (Jarc, Abstract discloses; “the first image sequence comprising first images, the first images based on illumination of a scene associated with the medical procedure using visible-spectrum light” … “the second image sequence comprising second images, the second images based on illumination of the scene using non-visible-spectrum light; and provide the first image sequence and the second image sequence to a machine learning module.”), and “the at least one artificial intelligence model having been trained for using the non-fluorescent images as additional input for identifying the one or more types of structures” (Jarc, Para. 0006 discloses; “wherein the trained machine learning model has been trained using a second image sequence comprising second images.”).
Regarding claim 3, the combination of Jarc in view of Kim teaches the method for processing fluorescent images as claimed in claim 1, “comprising several artificial intelligence models that have been trained on at least one of several different dyes, several different structures and several different imaging systems” (Kim, Para. 0214 discloses; “Referring to FIG. 10, an AI model may learn a tissue image and a condition (e.g., the wavelength band of light, organ information of cells or a tissue, etc.) under which the tissue image was obtained.”). Examiner interprets “several different dyes” as the wavelength band of light, “several different structures” as organ information of cells or a tissue, and “several different imaging systems” as a condition under which the tissue image was obtained. The proposed combination and the motivation for combining the Jarc and Kim references presented in the rejection of claim 1 apply to claim 3 and are incorporated herein by reference. Thus, the method recited in claim 3 is met by Jarc and Kim.
Regarding claim 4, the combination of Jarc in view of Kim teaches “The method according to claim 1, further comprising providing information about the at least one of the fluorescent dye and the imaging system used in a fluorescent image guided surgical or diagnostic procedure as input to the at least one artificial intelligence model” (Kim, Para. 0019 discloses; “… and generate a first H&E image corresponding to the first image by inputting the first image, the dye information, the operation information, and the organ information into a first artificial intelligence model…”). Examiner interprets information about an imaging system to fall within operation information. The proposed combination and the motivation for combining the Jarc and Kim references presented in the rejection of claim 1 apply to claim 4 and are incorporated herein by reference. Thus, the method recited in claim 4 is met by Jarc and Kim.
Regarding claim 6, the combination of Jarc in view of Kim teaches the method for processing fluorescent images as claimed in claim 1, “further comprising generating color coded text labeling of the structures identified in the fluorescent images and integrating the text labeling into the composite images” (Jarc, Para. 0025 discloses; “As used herein, a “label” refers to any type of data indicative of an object or other feature represented in an image including, but not limited to, graphical or text-based annotations, tags, highlights, augmentations, and overlays.”).
Regarding claim 7, the combination of Jarc in view of Kim teaches the method for processing fluorescent images as claimed in claim 1, “further comprising selecting the pre-determined false colors from a set of several coloring schemes upon request by an operator” (Kim, Para. 0247 discloses; “For example, the endomicroscope system may identify or select a pseudo coloring mode based on whether an AI model that generates medical diagnosis assistance information has been trained in advance, whether an AI model for pseudo coloring for each diagnosis result has been trained, or selection by the user of the endomicroscope system.”). The proposed combination and the motivation for combining the Jarc and Kim references presented in the rejection of claim 1 apply to claim 7 and are incorporated herein by reference. Thus, the method recited in claim 7 is met by Jarc and Kim.
Claim 8 recites a system with elements corresponding to the steps recited in Claim 1. Therefore, the recited elements of this claim are mapped to the proposed combination in the same manner as the corresponding steps in its corresponding method claim. Additionally, the rationale and motivation to combine the Jarc and Kim references, presented in the rejection of Claim 1, apply to this claim. Finally, the combination of the Jarc and Kim references discloses an image sensor (Jarc, Para. 0032 discloses; “The first image capture device 122 and the second image capture device 124 may be separate sensors within a single camera, or they may be separate sensors in separate respective cameras.”), a light source (Jarc, Para. 0028 discloses; “As noted above, the light sources 110 might include any combination of a white light source, a narrow-band light source (whether in the visible spectrum or not, e.g., an ultraviolet lamp), a laser, an infrared light emitting diode (LED), etc. If fluoresced light is to be captured, the type of light source may depend on the fluorescing agent or protein being used during the medical procedure.”), and a computer (Jarc, Para. 0035 discloses; “The image processing system 126 may be implemented by one or more computing devices and/or computer resources (e.g., processors, memory devices, storage devices, etc.) as may serve a particular implementation.”).
Regarding claim 9, the combination of Jarc in view of Kim teaches the system for processing fluorescent images as claimed in claim 8, “further comprising a screen” (Jarc, Para. 0066 discloses; “In some instances, the display monitor 714 may be implemented by a touchscreen display and provide user input functionality.”).
Regarding claim 10, the combination of Jarc in view of Kim teaches the system for processing fluorescent images as claimed in claim 8, “wherein the imaging device is one of a video endoscope, a video exoscope, or a combination of one of a telescope for endoscopic procedures, an exoscope and an attachment lens releasably attached to a camera head” (Jarc, Para. 0032 discloses; “In some embodiments, the imaging device 112 and the light sources 110 may be part of (or optically connected with) an endoscope.”).
Regarding claim 11, the combination of Jarc in view of Kim teaches the system for processing fluorescent images as claimed in claim 8, “wherein the generation of the composite images is implemented one of in the at least one artificial intelligence model and in a separate set of instructions receiving input about the identified types of structures from the one or more artificial intelligence models” (Jarc, Para. 0055 discloses; “The image blending module 230 may create a synthetic image based on image data from the first image and the second image” and Jarc, Para. 0057 discloses; “For example, the machine learning model might predict a stage of a medical procedure, a category of any feature detected in an image (e.g., a type of object present, a type of tissue or organ present, etc.), or others.”).
Claims 12, 13, 14, 16, and 17 recite systems with elements corresponding to the steps recited in method claims 2, 3, 4, 6, and 7, respectively. Therefore, the recited elements of these claims are mapped to the proposed combination in the same manner as the corresponding steps in their corresponding method claims. Additionally, the rationale and motivation to combine the Jarc and Kim references, presented in the rejection of claim 1, apply to these claims.
Claim 18 recites a computer-readable storage medium storing a program with instructions corresponding to the steps recited in Claim 1. Therefore, the recited programming instructions of this claim are mapped to the proposed combination in the same manner as the corresponding steps in its corresponding method claim. Additionally, the rationale and motivation to combine the Jarc and Kim references, presented in the rejection of Claim 1, apply to this claim. Finally, the combination of the Jarc and Kim references discloses a computer-readable storage medium (Jarc, Para. 0074 discloses; “For example, the storage device 806 may include, but is not limited to, any combination of the non-volatile media and/or volatile media described herein.”).
Claims 5 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Jarc, in view of Kim, and further in view of Friedman et al. (US 8,471,866 B2).
Regarding claim 5, the combination of Jarc in view of Kim does not explicitly teach “The method according to claim 1 further comprising generating a color legend of colors used for the structures identified in the fluorescent images and one of integrating the legend in the composite images and displaying the legend separately from the composite images.” However, in an analogous field of endeavor, Friedman discloses “Thus, as shown in FIG. 3, six different colors may be used to color code and identify the labels 134 of the legend 132 with the corresponding segment 124” (Friedman, Column 5, Lines 65-67).
Jarc, Kim, and Friedman are all considered to be analogous to the claimed invention because they are all in the same field of processing and displaying medical images. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of the combination of Jarc and Kim to incorporate the teachings of Friedman in order to display the colors that coordinate with the structures separately from the composite image. One of ordinary skill in the art would have been motivated to combine the previously described methods of the combination of Jarc and Kim with the teachings of Friedman to allow the user to quickly and easily differentiate the types of structures and false colors. Accordingly, it would have been obvious to combine Jarc, Kim, and Friedman to obtain the claimed invention of claim 5.
Claim 15 recites a system with elements corresponding to the steps recited in method claim 5. Therefore, the recited elements of this claim are mapped to the proposed combination in the same manner as the corresponding steps in its corresponding method claim. Additionally, the rationale and motivation to combine the Jarc, Kim, and Friedman references, presented in the rejection of claim 5, apply to this claim.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JUSTIN M. OAKES whose telephone number is (571) 272-9379. The examiner can normally be reached from 7:30am to 5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Amandeep Saini can be reached at (571) 272-3382. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JUSTIN M OAKES/Examiner, Art Unit 2662
/Siamak Harandi/Primary Examiner, Art Unit 2662