DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant's arguments filed 12 February 2026 have been fully considered but they are not persuasive.
Claims 1-4 and 6-27 are pending in this application. Claim 5 has been canceled by the applicant, and claim 16 was not elected; the remaining claims have been considered below.
Argument:
The applicant argues that the cited portion of Boyd, in context, only identifies different stages and different rates of progression. Applicant then states that the addition to claim 1 is far more specific, as it describes that the lesion shape evaluation index not only corresponds to different predefined stages of the lesion but also how it is used. Applicant also notes that the addition describes that the processor is configured to process the mask to simulate a change from a first medical stage to a second medical stage such that the pseudo mask image represents a progress of the lesion different from that (progress of the lesion) in the original image. Applicant then argues that the cited portion of Boyd does not disclose such a feature, and that the Saikou reference does not disclose how the pseudo mask image is derived based on the lesion shape evaluation index or how the mask is processed to simulate changes in the progress of the lesion across different medical stages.
Response:
US Patent Publication 2022/0207729 A1 (Boyd et al.) shows the limitation wherein the lesion shape evaluation index corresponds to predefined medical stages of the lesion, and the processor is configured to process the masks to simulate a change from a first medical stage to a second medical stage such that the pseudo mask image represents a progress of the lesion different from that in the original image ("these complex 2D patterns may represent different subtypes of disease, different stages of disease, different diseases, different likelihoods of progressing, different rates of progression, different prognoses, different responses to treatment, different underlying biology, different concurrent diseases, different concurrent medical therapies, and/or different lifestyle choices (e.g., smoking)," paragraph [0079], and "Such a map may be used for visualizing disease state, e.g., upon presentation to a graphic user interface. Maps generated at different time periods for the same eye may be used to visualize disease progression," paragraph [0404]).
Our reviewing Court has made clear that examined claims are interpreted as broadly as is reasonable using ordinary and accustomed term meanings so as to be consistent with the Specification. In re Thrift, 298 F.3d 1357, 1364 (Fed. Cir. 2002). Moreover, there is no ipsissimis verbis test for determining whether a reference discloses a claim element, i.e., identity of terminology is not required. In re Bond, 910 F.2d 831, 832 (Fed. Cir. 1990). Further, references are evaluated by what they suggest to one versed in the art, rather than by their specified disclosures. In re Bozek, 163 USPQ 545 (CCPA 1969); In re Hoeschele, 406 F.2d 1403, 1406-07, 160 USPQ 809, 811-812 (CCPA 1969) (“[I]t is proper to take into account not only specific teachings of the references but also the inferences which one skilled in the art would reasonably be expected to draw therefrom...”). Thus, the test for whether a feature is obvious over a given set of references is what the combined teachings of the references would have suggested to those of ordinary skill in the art. In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981).
Argument:
The applicant argues that claim 2 states that the pseudo mask image and the pseudo image are used as training data. Applicant then states that the ground of rejection based on Saikou paragraph [0063] states that the training data includes multiple mask images used as correct answer data while a captured image is used as an input image, and that the training minimizes the loss between the two.
Applicant then cites the specification, specifically paragraph [0063]: “... The training data is sets of a plurality of mask images that are used as correct answer data and a captured image that is used as an input image. In learning, for example, the parameters of the mask image generation model are determined by the gradient descent method, the error back propagation method, or the like so that the error (loss) between the output by the mask image generation model when the input image is inputted thereto and the correct answer data is minimized.”
Applicant then argues that this is different from claim 2, which describes that the pseudo mask image and the pseudo image (not the original image nor the mask image) are used as training data. Applicant then states that, while claim 2 requires the pseudo mask image and the pseudo image to be used as training data, the prior art teaches that the mask images and a captured image (original image) are used for training, contrary to claim 2.
Response:
US Patent Publication 2024/0378840 A1 (Saikou) shows the limitation wherein the pseudo mask image and the pseudo image are used as training data for learning a segmentation model that segments the object included in an image ("The training data is sets of a plurality of mask images that are used as correct answer data and a captured image that is used as an input image," paragraph [0063]).
Specifically, claim 2 depends on claim 1, and claim 1 recites “An image generation apparatus comprising.” Since claim 1 uses the open-ended transitional phrase “comprising,” additional elements can be shown in the art, in this case that additional images are used for training. Our reviewing Court has made clear that examined claims are interpreted as broadly as is reasonable using ordinary and accustomed term meanings so as to be consistent with the Specification. In re Thrift, 298 F.3d 1357, 1364 (Fed. Cir. 2002). The Court further has explained that the interpretations are to be made while “taking into account whatever enlightenment by way of definitions or otherwise that may be afforded by the written description contained in the applicant’s specification,” In re Morris, 127 F.3d 1048, 1054 (Fed. Cir. 1997), but without reading limitations from examples given in the Specification into the claims, In re Zletz, 893 F.2d 319, 321-22 (Fed. Cir. 1989).
Priority
Receipt is acknowledged that the application claims priority to foreign applications JP2022-050635, filed 25 March 2022, and JP2022-150250, filed 21 September 2022. Certified copies of the priority documents required by 37 CFR 1.55 have been received. Priority is acknowledged under 35 USC 119(a)-(d) and 37 CFR 1.55.
Information Disclosure Statement
The information disclosure statement (IDS) dated 17 February 2025 has been previously considered and remains of record in the application file.
Election/Restrictions
Claim 16 remains withdrawn from further consideration pursuant to 37 CFR 1.142(b), as being drawn to a nonelected Group, there being no allowable generic or linking claim.
Claim Interpretation
Under MPEP 2143.03, "All words in a claim must be considered in judging the patentability of that claim against the prior art." In re Wilson, 424 F.2d 1382, 1385, 165 USPQ 494, 496 (CCPA 1970). As a general matter, the grammar and ordinary meaning of terms as understood by one having ordinary skill in the art used in a claim will dictate whether, and to what extent, the language limits the claim scope. Language that suggests or makes a feature or step optional but does not require that feature or step does not limit the scope of a claim under the broadest reasonable claim interpretation. In addition, when a claim requires selection of an element from a list of alternatives, the prior art teaches the element if one of the alternatives is taught by the prior art. See, e.g., Fresenius USA, Inc. v. Baxter Int’l, Inc., 582 F.3d 1288, 1298, 92 USPQ2d 1163, 1171 (Fed. Cir. 2009).
Claims 7 and 15 recite “at least one.” Since “at least one” is disjunctive, any one of the elements found in the prior art is sufficient to reject the claim. While citations have been provided for completeness and rapid prosecution, only one element is required. On balance, the disjunctive interpretation appears to enjoy the most specification support, and for that reason the disjunctive interpretation (one of A, B, or C) is being adopted for the purposes of this Office Action. Applicant’s comments and/or amendments relating to this issue are invited to clarify the claim language and the prosecution history.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-4, 6-15 and 17-27 (all pending claims other than nonelected claim 16) are rejected under 35 U.S.C. 103 as being unpatentable over US Patent Publication 2024/0378840 A1 (Saikou) in view of US Patent Publication 2022/0207729 A1 (Boyd et al.).
Claim 1
[Image: media_image1.png, 760 × 558, greyscale]
Regarding Claim 1, Saikou teaches an image generation apparatus ("an image processing device, an image processing method, and storage medium for processing images acquired in endoscopic examination," paragraph [0001]) comprising:
[AltContent: textbox (Saikou, Fig. 12, showing an image generating device.)]
at least one processor ("The processor 11 is a processor such as a CPU (Central Processing Unit), a GPU (Graphics Processing Unit), and a TPU (Tensor Processing Unit)," paragraph [0045]), wherein the processor is configured to:
acquire an original image ("The image processing device 1 acquire images (also referred to as "captured images 1a")," paragraph [0040]) and a mask image in which masks are applied to one or more regions respectively representing one or more objects including a target object in the original image ("mask images each of which is an image indicating a biopsy part in a captured image 1a," paragraph [0047] where a biopsy part is a target object);
derive a pseudo mask image by processing the mask in the mask image ("the image selection unit 34 determines that any pair of mask images between which the degree of coincidence of the score maximum positions is equal to or larger than a predetermined threshold value (also referred to as "first similarity determination threshold value") is a pair of similar mask images," paragraph [0092]); and
derive a pseudo image that has a region based on a mask included in the pseudo mask image ("The method of generating the biopsy part map 71 is not limited to the method of integrating or selecting mask image(s) of the output image Io. In another example, the output control unit 35 may input the output image Io to a segmentation model configured to extract (segment) a region corresponding to the biopsy part such as a lesion region from an image inputted thereto, and generate the biopsy part map 71 based on the result outputted by the segmentation model in response to the input," paragraph [0117]).
[Image: media_image2.png, 478 × 636, greyscale]
Saikou does not explicitly teach the claimed lesion shape evaluation index.
[AltContent: textbox (Boyd et al., Fig. 52, showing lesion progression over time.)]
However, Boyd et al. teach based on a lesion shape evaluation index used as an evaluation index in medical practice for a medical image ("the computer-implemented method further comprises correlating one or more of the defined shapes with the presence of phagocytic immune cells such as macrophages," paragraph [0036], where correlating defined shapes teaches using a shape evaluation index), wherein the lesion shape evaluation index corresponds to predefined medical stages of the lesion, and the processor is configured to process the masks to simulate a change from a first medical stage to a second medical stage such that the pseudo mask image represents a progress of the lesion different from that in the original image ("these complex 2D patterns may represent different subtypes of disease, different stages of disease, different diseases, different likelihoods of progressing, different rates of progression, different prognoses, different responses to treatment, different underlying biology, different concurrent diseases, different concurrent medical therapies, and/or different lifestyle choices (e.g., smoking)," paragraph [0079], and "Such a map may be used for visualizing disease state, e.g., upon presentation to a graphic user interface. Maps generated at different time periods for the same eye may be used to visualize disease progression," paragraph [0404]); and has the same representation format as the original image, based on features derived from the original image using an encoder and the pseudo mask image ("Output 1404 illustrates an automatically segmented image, or mask, derived using classifier 1000," paragraph [0458]), wherein the pseudo image is synthesized by a decoder of a neural network using the features derived from the original image and the pseudo mask image as inputs ("FIG. 52 shows an example sequence of masked images from four timepoints. As shown, the mask images may be generated for multi-modal image input, e.g., FAF and DNIRA images," paragraph [0459]).
Therefore, taking the teachings of Saikou and Boyd et al. as a whole, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention of the instant application to modify “Image Processing Device, Image Processing Method and Storage Medium” as taught by Saikou to use “Detection, Prediction and Classification for Ocular Disease” as taught by Boyd et al. The suggestion/motivation for doing so would have been that "the present evaluation methods find use in evaluation of a tumor of the eye, such as tumor being one or more of (e.g. a metastasis to the eye of) a basal cell carcinoma, biliary tract cancer; bladder cancer; bone cancer; brain and central nervous system cancer; breast cancer; cancer of the peritoneum; cervical cancer; choriocarcinoma; colon and rectum cancer; connective tissue cancer; cancer of the digestive system; endometrial cancer; esophageal cancer; eye cancer; cancer of the head and neck; gastric cancer (including gastrointestinal cancer); glioblastoma; hepatic carcinoma; hepatoma; intraepithelial neoplasm; kidney or renal cancer; larynx cancer; leukemia; liver cancer; lung cancer (e.g., small-cell lung cancer, non-small cell lung cancer, adenocarcinoma of the lung, and squamous carcinoma of the lung); melanoma; myeloma; neuroblastoma; oral cavity cancer (lip, tongue, mouth, and pharynx); ovarian cancer; pancreatic cancer; prostate cancer; retinoblastoma; rhabdomyosarcoma; rectal cancer," as noted by the Boyd et al. disclosure in paragraph [0354]. This also motivates the combination because the combination would predictably have greater adaptability, as there is a reasonable expectation that the type of cancer may not be known in advance; and/or because doing so merely combines prior art elements according to known methods to yield predictable results.
The rejection of apparatus claim 1 above applies mutatis mutandis to the corresponding limitations of method claim 22 and computer-readable medium claim 25, while noting that the rejection above cites both device and method disclosures. Claims 22 and 25 are mapped below for clarity of the record and to specify any new limitations not included in claim 1.
Claim 2
Regarding claim 2, Saikou teaches the image generation apparatus according to claim 1, wherein the pseudo mask image and the pseudo image are used as training data for learning a segmentation model that segments the object included in an image ("The training data is sets of a plurality of mask images that are used as correct answer data and a captured image that is used as an input image," paragraph [0063]).
Claim 3
Regarding claim 3, Saikou teaches the image generation apparatus according to claim 2, wherein the processor is configured to accumulate the pseudo mask image and the pseudo image as the training data ("The training data is sets of a plurality of mask images that are used as correct answer data and a captured image that is used as an input image," paragraph [0063]).
Claim 4
Regarding claim 4, Saikou teaches the image generation apparatus according to claim 1, wherein the processor is configured to derive the pseudo mask image that is able to generate the pseudo image including a target object of a class different from a class indicated by the target object ("The image processing device 1 may use a classification model configured to perform classification into three or more classes," paragraph [0126]).
Claim 6
Regarding claim 6, Saikou teaches the image generation apparatus according to claim 1, as noted above.
Saikou does not explicitly teach the claimed lesion shape evaluation index.
However, Boyd et al. teach wherein the processor is configured to derive the pseudo mask image by processing the mask until a normal organ has a shape to be evaluated as a lesion based on a measurement index in medical practice for a medical image ("these complex 2D patterns may represent different subtypes of disease, different stages of disease, different diseases, different likelihoods of progressing different rates of progression, different prognoses, different responses to treatment, different underlying biology, different concurrent diseases, different concurrent medical therapies, and/or different lifestyle choices ( e.g., smoking)," paragraph [0079]).
Saikou and Boyd et al. are combined as per claim 1.
Claim 7
Regarding claim 7, Saikou teaches the image generation apparatus according to claim 1, as noted above.
Saikou does not explicitly teach the claimed style image having predetermined density, color, or texture.
However, Boyd et al. teach wherein the processor is configured to refer to at least one style image having predetermined density, color, or texture and generate the pseudo image having density, color, or texture depending on the style image selected by a user from a displayed list of predefined style templates, each style template explicitly representing a predetermined density, color, or texture, and generate the pseudo image having density, color, or texture depending on the selected style image template ("When visualized in an integrated format, the difference images may be overlaid in different colors so a user can observe how the changes have manifested themselves over time. Delta image analysis can also present an overlaid formatting where each image is superimposed over the previous one in the visual representation. By allowing for the image differences to be displayed on the display device relative to the eye structures, the user can be afforded an opportunity to see how the differences compare," paragraph [0207]).
Saikou and Boyd et al. are combined as per claim 1.
Claim 8
Regarding claim 8, Saikou teaches the image generation apparatus according to claim 1, wherein the processor is configured to receive an instruction for a degree of processing of the mask and derive the pseudo mask image by processing the mask based on the instruction ("In this case, the image selection unit 34 calculates the degree of similarity based on any image similarity index such as cosine similarity, MSE (Mean Squared Error), and SSIM (Structural Similarity) for all NC2 combinations of N mask images. Then, the image selection unit 34 determines that a pair of mask images between which the degree of similarity is equal to or larger than a predetermined threshold value (also referred to as "second similarity determination threshold value") is a pair of similar mask images," paragraph [0094] where a predetermined threshold teaches a degree of processing).
Claim 9
Regarding claim 9, Saikou teaches the image generation apparatus according to claim 8, wherein the processor is configured to receive designation of a position of an end point of the mask after processing and designation of a processing amount as the instruction for the degree of processing ("In this instance, the output control unit 35 causes the display device 2 to display an image based on the output image Io or the output image Io as it is as information regarding the biopsy part. Thus, the output control unit 35 can present the existence and position of the identified biopsy part to the user," paragraph [0118] where the existence and position of the identified biopsy part is a position of an end point of a mask).
Claim 10
Regarding claim 10, Saikou teaches the image generation apparatus according to claim 8, wherein the processor is configured to receive the instruction for the degree of processing of the mask under a constraint condition set in advance ("In this case, the image selection unit 34 calculates the degree of similarity based on any image similarity index such as cosine similarity, MSE (Mean Squared Error), and SSIM (Structural Similarity) for all NC2 combinations of N mask images. Then, the image selection unit 34 determines that a pair of mask images between which the degree of similarity is equal to or larger than a predetermined threshold value (also referred to as "second similarity determination threshold value") is a pair of similar mask images," paragraph [0094] where a predetermined threshold teaches a constraint condition set in advance).
Claim 11
Regarding claim 11, Saikou teaches the image generation apparatus according to claim 1, as noted above.
Saikou does not explicitly teach the claimed inclusion relation in the mask image.
However, Boyd et al. teach wherein, in a case where the original image includes a plurality of the objects, and the target object and a partial region of another object other than the target object have an inclusion relation, in the mask image, a region having the inclusion relation is given with a mask different from a region having no inclusion relation ("the registered DNIRA and FAF images are compared and the regions of hypofluorescent DNIRA that extend beyond the boundary of dark FAF or that exist in regions distinct from the dark FAF, are identified (both in a separate image and as part of a two-layer overlay) and quantified (following segmentation)," paragraph [0636] where a two-layer overlay is an inclusion relation).
Saikou and Boyd et al. are combined as per claim 1.
Claim 12
Regarding claim 12, Saikou teaches the image generation apparatus according to claim 11, as noted above.
Saikou does not explicitly teach the claimed inclusion relation.
However, Boyd et al. teach wherein the processor is configured to, in a case where the other object having the inclusion relation is an object fixed in the original image, derive the pseudo mask image by processing the mask applied to the target object conforming to a shape of a mask applied to the fixed object ("the registered DNIRA and FAF images are compared and the regions of hypofluorescent DNIRA that extend beyond the boundary of dark FAF or that exist in regions distinct from the dark FAF, are identified (both in a separate image and as part of a two-layer overlay) and quantified (following segmentation)," paragraph [0636] where a two-layer overlay is an inclusion relation).
Saikou and Boyd et al. are combined as per claim 1.
Claim 13
Regarding claim 13, Saikou teaches the image generation apparatus according to claim 1, as noted above.
Saikou does not explicitly teach that the original image is a three-dimensional image.
However, Boyd et al. teach wherein the processor is configured to, in a case where the original image is a three-dimensional image, derive the pseudo mask image by processing the mask while maintaining three-dimensional continuity of the mask applied to the region of the target object ("such techniques include, but are not limited to fundus photography, cSLO, FAF, angiography, OCT, OCTA, including three dimensional reconstructions of such," paragraph [0288]).
Saikou and Boyd et al. are combined as per claim 1.
Claim 14
Regarding claim 14, Saikou teaches the image generation apparatus according to claim 1, wherein the target object is a lesion included in the medical image ("In other words, the biopsy part indicates a part suspected of a lesion. It is noted that examples of the biopsy part include not only a part suspected of a lesion but also include a peripheral region of the above-mentioned part and any other part where a biopsy is determined to be necessary," paragraph [0039]), as noted above.
Saikou does not explicitly teach that the original image is a three-dimensional image.
However, Boyd et al. teach wherein the original image is a three-dimensional medical image ("such techniques include, but are not limited to fundus photography, cSLO, FAF, angiography, OCT, OCTA, including three dimensional reconstructions of such," paragraph [0288]).
Saikou and Boyd et al. are combined as per claim 1.
Claim 15
Regarding claim 15, Saikou teaches the image generation apparatus according to claim 14, as noted above.
Saikou does not explicitly teach that the medical image includes a rectum.
However, Boyd et al. teach wherein the medical image includes a rectum of a human body ("the present evaluation methods find use in evaluation of a tumor of the eye, such as tumor being one or more of (e.g. a metastasis to the eye of) a basal cell carcinoma, biliary tract cancer; bladder cancer; bone cancer; brain and central nervous system cancer; breast cancer; cancer of the peritoneum; cervical cancer; choriocarcinoma; colon and rectum cancer; connective tissue cancer; cancer of the digestive system; endometrial cancer; esophageal cancer; eye cancer; cancer of the head and neck; gastric cancer (including gastrointestinal cancer); glioblastoma; hepatic carcinoma; hepatoma; intraepithelial neoplasm; kidney or renal cancer; larynx cancer; leukemia; liver cancer; lung cancer (e.g., small-cell lung cancer, non-small cell lung cancer, adenocarcinoma of the lung, and squamous carcinoma of the lung); melanoma; myeloma; neuroblastoma; oral cavity cancer (lip, tongue, mouth, and pharynx); ovarian cancer; pancreatic cancer; prostate cancer; retinoblastoma; rhabdomyosarcoma; rectal cancer," paragraph [0354]), and
the target object is a rectal cancer, and another object other than the target object is
at least one of a mucous membrane layer of the rectum, a submucosal layer of the rectum, a muscularis propria of the rectum, a subserous layer of the rectum, or a background other than the layers ("the present evaluation methods find use in evaluation of … rectal cancer," paragraph [0354]).
Saikou and Boyd et al. are combined as per claim 1.
Claim 17
Regarding claim 17, Saikou teaches a learning apparatus ("a mask image generation model is learned in advance based on training data (training dataset)," paragraph [0063]) comprising:
at least one processor ("realized by the processor 11 executing a program," paragraph [0075]), wherein the processor is configured to:
construct a segmentation model that segments a region of one or more objects including a target object included in an input image, by performing machine learning using a plurality of sets of pseudo images and pseudo mask images generated by the image generation apparatus according to claim 1 as training data ("The mask image generation model is a machine learning model or a statistical model that is trained to output a plurality of mask images indicating candidate regions for a biopsy part in the inputted captured image 1a with different levels of granularity (i.e., resolutions) when a captured image 1a is inputted thereto," paragraph [0063]).
Claim 18
Regarding claim 18, Saikou teaches the learning apparatus according to claim 17, wherein the processor is configured to: construct the segmentation model by performing machine learning using a plurality of sets of original images and mask images as training data ("The mask image generation model is a machine learning model or a statistical model that is trained to output a plurality of mask images indicating candidate regions for a biopsy part in the inputted captured image 1a with different levels of granularity (i.e., resolutions) when a captured image 1a is inputted thereto," paragraph [0062]).
Claim 19
Regarding claim 19, Saikou teaches a segmentation model constructed by the learning apparatus according to claim 17 ("The method of generating the biopsy part map 71 is not limited to the method of integrating or selecting mask image(s) of the output image Io. In another example, the output control unit 35 may input the output image Io to a segmentation model configured to extract (segment) a region corresponding to the biopsy part such as a lesion region from an image inputted thereto, and generate the biopsy part map 71 based on the result outputted by the segmentation model in response to the input," paragraph [0111]).
Claim 20
Regarding claim 20, Saikou teaches an image processing apparatus comprising:
at least one processor ("realized by the processor 11 executing a program," paragraph [0075]), wherein the processor is configured to
derive a mask image in which one or more objects included in a target image to be processed are masked, by segmenting a region of one or more objects including a target object included in the target image using the segmentation model according to claim 19 ("The method of generating the biopsy part map 71 is not limited to the method of integrating or selecting mask image(s) of the output image Io. In another example, the output control unit 35 may input the output image Io to a segmentation model configured to extract (segment) a region corresponding to the biopsy part such as a lesion region from an image inputted thereto, and generate the biopsy part map 71 based on the result outputted by the segmentation model in response to the input," paragraph [0111]).
Claim 21
Regarding claim 21, Saikou teaches the image processing apparatus according to claim 20, wherein the processor is configured to discriminate a class of the target object masked in the mask image using a discrimination model that discriminates a class of a target object included in a mask image ("The image processing device 1 may use a classification model configured to perform classification into three or more classes," paragraph [0126]).
Claim 22
Regarding claim 22, Saikou teaches an image generation method ("an image processing device, an image processing method, and storage medium for processing images acquired in endoscopic examination," paragraph [0001]) used by an image generation apparatus having a processor, the method implemented by the processor comprising:
acquiring an original image ("The image processing device 1 acquire images (also referred to as "captured images 1a")," paragraph [0040]) and a mask image in which masks are applied to one or more regions respectively representing one or more objects including a target object in the original image ("mask images each of which is an image indicating a biopsy part in a captured image 1a," paragraph [0047] where a biopsy part is a target object);
deriving a pseudo mask image by processing the mask in the mask image ("the image selection unit 34 determines that any pair of mask images between which the degree of coincidence of the score maximum positions is equal to or larger than a predetermined threshold value (also referred to as "first similarity determination threshold value") is a pair of similar mask images," paragraph [0092]); and
deriving a pseudo image that has a region based on a mask included in the pseudo mask image ("The method of generating the biopsy part map 71 is not limited to the method of integrating or selecting mask image(s) of the output image Io. In another example, the output control unit 35 may input the output image Io to a segmentation model configured to extract (segment) a region corresponding to the biopsy part such as a lesion region from an image inputted thereto, and generate the biopsy part map 71 based on the result outputted by the segmentation model in response to the input," paragraph [0117]).
Saikou does not explicitly teach all of the limitations directed to a lesion shape evaluation index.
However, Boyd et al. teach wherein the lesion shape evaluation index corresponds to predefined medical stages of the lesion, and the processor is configured to process the masks to simulate a change from a first medical stage to a second medical stage such that the pseudo mask image represents a progress of the lesion different from that in the original image ("these complex 2D patterns may represent different subtypes of disease, different stages of disease, different diseases, different likelihoods of progressing, different rates of progression, different prognoses, different responses to treatment, different underlying biology, different concurrent diseases, different concurrent medical therapies, and/or different lifestyle choices (e.g., smoking)," paragraph [0079] and "Such a map may be used for visualizing disease state, e.g., upon presentation to a graphic user interface. Maps generated at different time periods for the same eye may be used to visualize disease progression." paragraph [0404]);
based on a lesion shape evaluation index used as an evaluation index in medical practice for a medical image ("the computer-implemented method further comprises correlating one or more of the defined shapes with the presence of phagocytic immune cells such as macrophages" paragraph [0036] where correlating defined shapes teaches using a shape evaluation index); and
has the same representation format as the original image, based on features derived from the original image using an encoder and the pseudo mask image ("Output 1404 illustrates an automatically segmented image, or mask, derived using classifier 1000," paragraph [0458]), wherein the pseudo image is synthesized by a decoder of a neural network using the features derived from the original image and the pseudo mask image as inputs ("FIG. 52 shows an example sequence of masked images from four timepoints. As shown, the mask images may be generated for multi-modal image input, e.g., FAF and DNIRA images," paragraph [0459]).
Saikou and Boyd et al. are combined for the same reasons as set forth in the rejection of claim 1.
Claim 23
Regarding claim 23, Saikou teaches a learning method of constructing a segmentation model that segments a region of one or more objects including a target object included in an input image, by performing machine learning using a plurality of sets of pseudo images and pseudo mask images generated by the image generation method according to claim 22 as training data ("The mask image generation model is a machine learning model or a statistical model that is trained to output a plurality of mask images indicating candidate regions for a biopsy part in the inputted captured image 1a with different levels of granularity (i.e., resolutions) when a captured image 1a is inputted thereto," paragraph [0063]).
Claim 24
Regarding claim 24, Saikou teaches an image processing method comprising: deriving a mask image in which one or more objects included in a target image to be processed are masked, by segmenting a region of one or more objects including a target object included in the target image using the segmentation model according to claim 19 ("The method of generating the biopsy part map 71 is not limited to the method of integrating or selecting mask image(s) of the output image Io. In another example, the output control unit 35 may input the output image Io to a segmentation model configured to extract (segment) a region corresponding to the biopsy part such as a lesion region from an image inputted thereto, and generate the biopsy part map 71 based on the result outputted by the segmentation model in response to the input," paragraph [0111]).
Claim 25
Regarding claim 25, Saikou teaches a non-transitory computer-readable storage medium that stores an image generation program ("an image processing device, an image processing method, and storage medium for processing images acquired in endoscopic examination," paragraph [0001]) causing a processor of an image generation apparatus to execute:
acquiring an original image ("The image processing device 1 acquire images (also referred to as "captured images 1a")," paragraph [0040]) and a mask image in which masks are applied to one or more regions respectively representing one or more objects including a target object in the original image ("mask images each of which is an image indicating a biopsy part in a captured image 1a," paragraph [0047] where a biopsy part is a target object);
deriving a pseudo mask image by processing the mask in the mask image ("the image selection unit 34 determines that any pair of mask images between which the degree of coincidence of the score maximum positions is equal to or larger than a predetermined threshold value (also referred to as "first similarity determination threshold value") is a pair of similar mask images," paragraph [0092]); and
deriving a pseudo image that has a region based on a mask included in the pseudo mask image ("The method of generating the biopsy part map 71 is not limited to the method of integrating or selecting mask image(s) of the output image Io. In another example, the output control unit 35 may input the output image Io to a segmentation model configured to extract (segment) a region corresponding to the biopsy part such as a lesion region from an image inputted thereto, and generate the biopsy part map 71 based on the result outputted by the segmentation model in response to the input," paragraph [0117]).
Saikou does not explicitly teach all of the limitations directed to a lesion shape evaluation index.
However, Boyd et al. teach based on a lesion shape evaluation index used as an evaluation index in medical practice for a medical image ("the computer-implemented method further comprises correlating one or more of the defined shapes with the presence of phagocytic immune cells such as macrophages" paragraph [0036] where correlating defined shapes teaches using a shape evaluation index), wherein the lesion shape evaluation index corresponds to predefined medical stages of the lesion, and the processor is configured to process the masks to simulate a change from a first medical stage to a second medical stage such that the pseudo mask image represents a progress of the lesion different from that in the original image ("these complex 2D patterns may represent different subtypes of disease, different stages of disease, different diseases, different likelihoods of progressing, different rates of progression, different prognoses, different responses to treatment, different underlying biology, different concurrent diseases, different concurrent medical therapies, and/or different lifestyle choices (e.g., smoking)," paragraph [0079] and "Such a map may be used for visualizing disease state, e.g., upon presentation to a graphic user interface. Maps generated at different time periods for the same eye may be used to visualize disease progression." paragraph [0404]); and
has the same representation format as the original image, based on features derived from the original image using an encoder and the pseudo mask image ("Output 1404 illustrates an automatically segmented image, or mask, derived using classifier 1000," paragraph [0458]), wherein the pseudo image is synthesized by a decoder of a neural network using the features derived from the original image and the pseudo mask image as inputs ("FIG. 52 shows an example sequence of masked images from four timepoints. As shown, the mask images may be generated for multi-modal image input, e.g., FAF and DNIRA images," paragraph [0459]).
Saikou and Boyd et al. are combined for the same reasons as set forth in the rejection of claim 1.
Claim 26
Regarding claim 26, Saikou teaches a non-transitory computer-readable storage medium that stores a learning program causing a computer to execute: a procedure of constructing a segmentation model that segments a region of one or more objects including a target object included in an input image, by performing machine learning using a plurality of sets of pseudo images and pseudo mask images generated by the image generation method according to claim 22 as training data ("The mask image generation model is a machine learning model or a statistical model that is trained to output a plurality of mask images indicating candidate regions for a biopsy part in the inputted captured image 1a with different levels of granularity (i.e., resolutions) when a captured image 1a is inputted thereto," paragraph [0063]).
Claim 27
Regarding claim 27, Saikou teaches a non-transitory computer-readable storage medium that stores an image processing program causing a computer to execute: a procedure of deriving a mask image in which one or more objects included in a target image to be processed are masked, by segmenting a region of one or more objects including a target object included in the target image using the segmentation model according to claim 19 ("The method of generating the biopsy part map 71 is not limited to the method of integrating or selecting mask image(s) of the output image Io. In another example, the output control unit 35 may input the output image Io to a segmentation model configured to extract (segment) a region corresponding to the biopsy part such as a lesion region from an image inputted thereto, and generate the biopsy part map 71 based on the result outputted by the segmentation model in response to the input," paragraph [0111]).
References Cited
The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure.
US Patent Publication 2022 0198668 A1 to Park et al. discloses a method for analyzing a lesion based on a medical image, which is performed by a computing device. The method may include: obtaining positional information of a suspicious nodule which exists in the medical image; generating a mask for the suspicious nodule based on a patch of the medical image corresponding to the positional information; and determining a class for a state of the suspicious nodule based on the patch of the medical image and the mask for the suspicious nodule.
US Patent Publication 2019 0378278 A1 to Bose et al. discloses a method for generating an augmented segmented image set. The method may include: obtaining a first image including a first anatomical structure of a first object; determining first feature data of the first anatomical structure; determining one or more first transformations related to the first anatomical structure, wherein a first transformation includes a transformation type and one or more transformation parameters related to the transformation type; applying the one or more first transformations to the first feature data of the first anatomical structure to generate second feature data of the first anatomical structure; and generating a second image based on the second feature data of the first anatomical structure.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to HEATH E WELLS whose telephone number is (703)756-4696. The examiner can normally be reached Monday-Friday 8:00-4:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ms. Jennifer Mehmood can be reached on 571-272-2976. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Heath E. Wells/Examiner, Art Unit 2664
Date: 25 February 2026