DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “first setting unit”, “processing unit”, “output unit”, “display unit”, and/or “second setting unit” in claims 1-19. In light of the specification, the “first setting unit”, “processing unit” and “second setting unit” are interpreted to be a combination of a processor executing instructions for carrying out the actions recited in the claim – or equivalents thereof. In light of the specification, the “output unit” and “display unit” are interpreted to be a display device and display screen or screen region, respectively – or equivalents thereof.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-6, 8-10, and 17-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a mental process abstract idea without significantly more.
Claim(s) 1 recite(s):
“set sample regions in an analysis target region of an image”, which can reasonably be interpreted as a human observer viewing a displayed current image and mentally selecting or setting sample regions in a mentally determined salient sub-region of the image – via visual perception; and
“select at least one reference image from a plurality of reference images associated with a plurality of cases, on a basis of images of the sample regions”, which can reasonably be interpreted as a human observer viewing a plurality of concurrently displayed reference images and mentally selecting one of the reference images based on similarity with the aforementioned sample regions – via visual perception.
This judicial exception is not integrated into a practical application because additional elements of:
“A medical image analysis apparatus”, “a first setting unit configured to”, “on a basis of an algorithm”, and “a processing unit configured to” are generically recited computer elements that do not add a meaningful limitation to the abstract idea because they amount to simply implementing the abstract idea on a computer;
“obtained by imaging of a biologically-originated sample” is generically recited insignificant extra-solution activity of data gathering;
“an output unit configured to output the selected reference image” is generically recited insignificant extra-solution activity of data outputting.
The claim(s) does/do not include additional elements that are sufficient to amount to significantly more than the judicial exception because additional elements of:
“A medical image analysis apparatus”, “a first setting unit configured to”, “on a basis of an algorithm”, and “a processing unit configured to” are mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea - see MPEP 2106.05(f);
“obtained by imaging of a biologically-originated sample” is insignificant extra-solution activity of data gathering;
“an output unit configured to output the selected reference image” is insignificant extra-solution activity of data outputting.
The listed dependent claims do not remedy these deficiencies:
Claims 2-6 further recite mental process abstract ideas involving visual perception.
Claims 8 and 9 further recite mental process and mathematical abstract ideas.
Claims 10 and 17 further recite additional elements of insignificant extra-solution activity of data outputting that are generically recited and well-understood, routine, and conventional.
As per claim(s) 18, arguments made in rejecting claim(s) 1 are analogous.
Dependent claim 19 does not remedy these deficiencies, since it recites additional elements that are generically recited computer elements that do not add a meaningful limitation to the abstract idea: they amount to mere instructions to implement the abstract idea on a computer, or merely use a computer as a tool to perform an abstract idea - see MPEP 2106.05(f).
As per claim(s) 20, arguments made in rejecting claim(s) 1 are analogous. Note that claim 20 does not recite all of the computer component additional elements recited in claim 1. Claim 20 only recites “on a basis of an algorithm”, which was addressed above regarding claim 1.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claim(s) 1, 6-10, 12-15, and 17-20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by US 2021/0118136 A1 (Hassan-Shafique).
As per claim 1, Hassan-Shafique teaches a medical image analysis apparatus comprising (Hassan-Shafique: Fig. 1 (shown below): para 28: “The computing environment 100 includes a slide scanner 120, a pathology database 110, a client device 105, and a historical database 150 in addition to the personalized oncology system 125. The personalized oncology system 125 may include a user interface unit 135, a regions-of-interest (ROI) selection unit 140, a search unit 145, and a data processing unit 160.”;
[Image: media_image1.png (greyscale)]
[Image: media_image2.png (greyscale)]
para 30):
a first setting unit configured to set sample regions in an analysis target region of an image obtained by imaging of a biologically-originated sample, on a basis of an algorithm (Hassan-Shafique:
abstract: “accessing a first histopathological image of a histopathological slide of a sample taken from a first patient”;
Fig. 1 (shown above): mainly 120 “slide scanner”; Fig. 6: mainly 605-610; paras 28, 29, 59: “slide scanner”
para 3: “accessing a first histopathological image of a histopathological slide of a sample taken from a first patient; analyzing the first histopathological image using a first machine learning model configured to extract first features from the first histopathological image, wherein the first features are indicative of cancerous tissue in the sample taken from the first patient”;
para 26: “identifying regions of interest (ROI) in a patient's histopathology imagery to be analyzed”;
para 31: “The user interface unit 135 may be configured to render the various user interfaces described herein, such as those shown in FIGS. 8, 9, and 10”;
para 32: “The ROI selection unit 140 allows a user to select one or more regions of interest in a histopathological image for a patient. The user may request the histopathological image may be accessed from the pathology database 110. The ROI selection unit 140 may provide tools that enable the user to select one or more ROI. The ROI selection unit 140 may also implement an automated process for selecting one or more ROI in the image. The automated ROI selection process may be implemented in addition to the manual ROI selection process and/or instead of the manual ROI selection process.”: note that the target region can be interpreted to be the entire image requested by the user.
para 59: “The user may select the whole-slide image from a pathology database 110 or other data store of patient information accessible to the personalized oncology system 105.”: note that the target region can be interpreted to be the entire image requested by the user.
Para 60: “The process 600 may include an operation 610 in which the regions of interest (ROI) in the whole-slide image are selected. The user interface unit 135 of the personalized oncology system 125 may display the slide that was accessed in operation 605. The ROI selection unit 140 may provide tools on the user interface that enable the user to manually select one or more ROI. In the example shown in FIG. 6, the user may draw a square or rectangular region around an ROI. The ROI selection unit 140 may determine the coordinates of the selected ROI by mapping the square or rectangle drawn on the whole-slide image. The ROI selection unit 140 may also allow the user to draw other shapes around a ROI or draw a freehand shape around an ROI. The ROI selection unit 140 may also be configured to automatically detect one or more ROI.”;
[Image: media_image3.png (greyscale)]
Para 63: “The techniques implemented by the personalized oncology system 125 solve this technical problem by utilizing the expertise of the pathologist to identify the regions of interest (ROI) in a patient's histopathology imagery. The ROI, also referred to herein as a “patch” of a histopathology image, is a portion of the whole-slide image.”;
Fig. 12 (shown below): 1210-1220;);
a processing unit configured to select at least one reference image from a plurality of reference images associated with a plurality of cases, on a basis of images of the sample regions (Hassan-Shafique:
para 3: “searching a histological database that includes a plurality of second histopathological images and corresponding clinical data for a plurality of second patients to generate search results, wherein the search results include a plurality of third histopathological images and corresponding clinical data from the plurality of second histopathological images and corresponding clinical data that match the first features from the first histopathological image, and wherein the third histopathological images and corresponding clinical data are associated with a plurality of third patients of the plurality of second patients”;
para 27: “mine large histopathological imagery databases, (ii) novel deep learning methods to extract meaningful features from histopathological images… retrieval of image databases, and (iv) the knowledge and expertise of trained pathologists to recognize and interpret subtle histologic features”;
para 33: “The search unit 145 may be configured to search the historical database 150 to find histopathological imagery stored therein that is similar to the ROI identified by the user.”;
para 34: “The historical database 150 may store historical histopathological imagery that has been collected from numerous patients. The historical database 150 may be provided by a third party which is separate from the entity which implements the personalized oncology system 125. The historical database 150 may be provided as a service in some implementations, which may be accessed by the personalized oncology system 125 via a network and/or via the Internet. The histopathological imagery stored in the historical database 150 may be associated with clinical data, which may include information associated with the patient associated with the selected historical imagery, such as but not limited to diagnoses, disease progression, clinical outcomes, time-to-events information.”;
para 41: “The matched cases 830 include cases from the historical histopathological database. The matched cases 830 may include histopathological imagery that includes characteristics that the oncologist may compare with histopathological imagery of the patient. The matched cases 830 may show details of cases from the database that may help to guide the oncologist treating the patient by providing key insights and clinical outcomes based on the patient's own histological and other personal factors. The oncologist may use this information to identify an optimal therapeutic plan for the patient.”;
para 42: “The histopathological imagery stored in the pathology database 110 and the historical database 150 play a critical role in the cancer diagnosis process. Pathologists evaluate histopathological imagery for a number of characteristics, that include nuclear atypia, mitotic activity, cellular density, and tissue architecture to identify cancer cells as well as the stage of the cancer. This information enables the patient's doctors to create optimal therapeutic schedules to effectively control the metastasis of tumor cells.”;
para 46: “FIG. 2 shows an example of the structure of an example CNN 200 which may be implemented by the search unit 145 of the personalized oncology system 125. The CNN 200 includes an input image 205, which is a histopathology image to be analyzed. The histopathology image may be obtained from the pathology database 110 for a patient for whom a personalized therapeutic plan is being developed. The input image 205 may be quite large and include a scan of an entire slide. However, as will be discussed in the examples which follow, “patches” of the input image 205 that correspond to one or more ROI identified by the user and/or automatically identified by the ROI selection unit 140 may be provided to the CNN 200 for analysis rather than the entire input image 205.”;
[Image: media_image4.png (greyscale)]
Para 47: “The first convolutional layer 210 applies filters and/or feature detectors to the input image 205 and outputs feature maps.”;
Para 61: “The process 600 may include an operation 615 in which the regions of interest (ROI) of the whole-slide image are provided to a DCNN of the search unit 145 of the personalized oncology system 125 for analysis. The DCNN is configured to extract features from the selected ROIs and match these features with pre-indexed features from the historic imagery stored in the historical histopathological database in operation 620.”;
Para 63: “The CNNs of the system may then (1) analyze and refine the ROI data and (2) match the refined ROI data with the histopathology imagery and associated clinical data stored in the historical database 150.”
Fig. 12 (shown below): 1230;); and
an output unit configured to output the selected reference image (Hassan-Shafique:
para 3: “analyzing the plurality of third histopathological images and the corresponding clinical data associated with the plurality of third histopathological images using statistical analysis techniques to generate associated statistics and metrics associated with mortality, morbidity, time-to-event, or a combination thereof for the plurality of third patients associated with the third histopathological images; and presenting an interactive visual representation of the associated statistics and metrics on a display of the system”;
para 27: “provide oncologists key insights about clinical outcomes, such as but not limited to survival rate, reoccurrence rate, and time-to-reoccurrence, and the efficacy of treatments based on patient's histological and other personal factors. Thus, the personalized oncology system enables the oncologist to identify optimal treatment plan for the patient”;
para 61: “The matching historical imagery and associated clinical data are obtained from the historical histopathological database in operation 625 and provided to the personalized oncology system 125 for presentation to the user. The associated clinical data may include information associated with the patient associated with the selected historical imagery, such as but not limited to diagnoses, disease progression, clinical outcomes, time-to-events information”;
Fig. 12 (shown below): 1240;
[Image: media_image5.png (greyscale)]
).
As per claim 6, Hassan-Shafique teaches the medical image analysis apparatus according to claim 1, wherein the first setting unit clusters a plurality of the sample regions to generate a plurality of clusters, and selects the sample region from the clusters, and the processing unit selects the reference image on a basis of an image of the sample region selected from the clusters (Hassan-Shafique: See arguments and citations offered in rejecting claim 1 above; the plural ROIs are interpreted to be the clusters.).
As per claim 7, Hassan-Shafique teaches the medical image analysis apparatus according to claim 1, wherein the first setting unit determines one or more magnifications corresponding to a case to be analyzed among a plurality of magnifications of an image, and the first setting unit sets the sample region in the analysis target region of the image at the determined magnification (Hassan-Shafique: See arguments and citations offered in rejecting claim 1 above;
Para 97 (shown below): “The query feature(s) are computed at appropriate magnification based on the magnification of the query image 1805. Similarly, the magnification of query image 1805 may be used by the search unit 145 to determine which learned dictionary and associated indexes are used in subsequent processing.”: The user query with appropriate magnification for corresponding learned dictionary).
As per claim 8, Hassan-Shafique teaches the medical image analysis apparatus according to claim 1, wherein the processing unit calculates a similarity between the images of the sample regions and the reference images, and selects the reference image on a basis of the similarity (Hassan-Shafique: See arguments and citations offered in rejecting claim 1 above).
As per claim 9, Hassan-Shafique teaches the medical image analysis apparatus according to claim 8, wherein the processing unit calculates feature values of the images of the sample regions, and calculates the similarity on a basis of the feature values and feature values of the reference images (Hassan-Shafique: See arguments and citations offered in rejecting claim 8 above;
Para 53: features; para 57: “extract, and match image-features… searching large databases of histopathology images and associated clinical data.”).
As per claim 10, Hassan-Shafique teaches the medical image analysis apparatus according to claim 1, further comprising: a display unit configured to display a part or all of the image obtained by imaging of the biologically-originated sample; and a second setting unit configured to set the analysis target region in the image displayed on the display unit (Hassan-Shafique: See arguments and citations offered in rejecting claim 1 above;
Fig. 6 (shown above): mainly 610 (shown below):
[Image: media_image6.png (greyscale)]
See para 60 (cited above);
Note that the patch can be interpreted as the analysis target region that is selected from the whole-slide image. The ROI is selected from the patch image.).
As per claim 12, Hassan-Shafique teaches the medical image analysis apparatus according to claim 10, wherein the second setting unit sets the analysis target region in the image on a basis of instruction information from an operator (Hassan-Shafique: See arguments and citations offered in rejecting claim 1 above).
As per claim 13, Hassan-Shafique teaches the medical image analysis apparatus according to claim 8, wherein the output unit displays a part or all of the image obtained by imaging of the biologically-originated sample on a first screen portion in a screen of an application, and displays the selected reference image on a second screen portion in the screen of the application (Hassan-Shafique: See arguments and citations offered in rejecting claim 8 above; Fig. 6 (shown above)).
As per claim 14, Hassan-Shafique teaches the medical image analysis apparatus according to claim 13, wherein
the output unit places the selected reference image in the second screen portion in an order according to the similarity (Hassan-Shafique: See arguments and citations offered in rejecting claim 13 above;
para 97 (shown below): “The ranked results may then be presented to the user on a user interface provided by the user interface unit 135. The ranked results may be presented with the source image 1840 and the position within the image 1845 from which the ranked results is found”:
[Image: media_image7.png (greyscale)]
[Image: media_image8.png (greyscale)]
).
As per claim 15, Hassan-Shafique teaches the medical image analysis apparatus according to claim 14, wherein the output unit selects one sample region from the sample regions on a basis of instruction information from an operator, and places the reference images for which the similarity to the selected sample region has been calculated, on the second screen portion in an order according to the similarity to the image of the selected sample region (Hassan-Shafique: See arguments and citations offered in rejecting claim 13 above;
Note that the patch image, selected by the user, is one of plural patch images that are stitched together to make up the whole-slide image).
As per claim 17, Hassan-Shafique teaches the medical image analysis apparatus according to claim 1, wherein the plurality of reference images are associated with clinical information about the plurality of cases, and the output unit further outputs the clinical information regarding the selected reference image (Hassan-Shafique: See arguments and citations offered in rejecting claim 1 above).
As per claim(s) 18, arguments made in rejecting claim(s) 1 are analogous. Hassan-Shafique also teaches an imaging device configured to image a biologically-originated sample (Hassan-Shafique: See arguments and citations offered in rejecting claim 1 above;
abstract: “accessing a first histopathological image of a histopathological slide of a sample taken from a first patient”; Fig. 1: mainly 120 “slide scanner”; Fig. 6: mainly 605-610; paras 28, 29, 59: “slide scanner”).
As per claim 19, Hassan-Shafique teaches the medical image analysis system according to claim 18, further comprising a computer program executed by a computer to cause the computer to function as the first setting unit, the processing unit, and the output unit (Hassan-Shafique: See arguments and citations offered in rejecting claim 1 above;
Figs. 1 and 11; para 116: instructions;
Para 119: “program instructions”; para 120: program).
As per claim(s) 20, arguments made in rejecting claim(s) 1 are analogous.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 2, 3, 5, and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Hassan-Shafique as applied to claims 1 and 10 above, and further in view of Official Notice.
As per claim 2, Hassan-Shafique teaches the medical image analysis apparatus according to claim 1. Hassan-Shafique does not teach the first setting unit sets the sample regions in random positions in the analysis target region. Examiner provides Official Notice that these limitations were well known prior to filing.
One of ordinary skill in the art, prior to filing, would have recognized the advantages of improving robustness, coverage of potential features, and avoiding bias. The teachings of the prior art could have been incorporated into Hassan-Shafique in that the first setting unit sets the sample regions in random positions in the analysis target region.
As per claim 3, Hassan-Shafique teaches the medical image analysis apparatus according to claim 1. Hassan-Shafique does not teach wherein the first setting unit sets the sample regions at equal intervals in the analysis target region. Examiner provides Official Notice that these limitations were well known prior to filing.
One of ordinary skill in the art, prior to filing, would have recognized the advantage of ensuring that meaningful parts of an image are not missed. The teachings of the prior art could have been incorporated into Hassan-Shafique in that the first setting unit sets the sample regions at equal intervals in the analysis target region.
As per claim 5, Hassan-Shafique teaches the medical image analysis apparatus according to claim 1. Hassan-Shafique does not teach the first setting unit selects a sample region from the sample regions on a basis of a size of a cell nucleus included in the sample regions, and the processing unit selects the reference image on a basis of an image of the selected sample region. Examiner provides Official Notice that these limitations were well known prior to filing.
One of ordinary skill in the art, prior to filing, would have recognized the advantage of improved diagnostic accuracy. The teachings of the prior art could have been incorporated into Hassan-Shafique in that the first setting unit selects a sample region from the sample regions on a basis of a size of a cell nucleus included in the sample regions, and the processing unit selects the reference image on a basis of an image of the selected sample region.
As per claim 11, Hassan-Shafique teaches the medical image analysis apparatus according to claim 10, wherein a position of the image displayed on the display unit is movable by an operator (Hassan-Shafique: See arguments and citations offered in rejecting claim 10 above).
Hassan-Shafique does not teach the second setting unit sets a region of the image included in a predetermined region in a display region of the image as the analysis target region in a case where the image is kept unmoved for a certain period of time. Examiner provides Official Notice that these limitations were well known prior to filing.
One of ordinary skill in the art, prior to filing, would have recognized the advantage of preventing user errors and providing a brief window for the user to reconsider or cancel the action. The teachings of the prior art could have been incorporated into Hassan-Shafique in that the second setting unit sets a region of the image included in a predetermined region in a display region of the image as the analysis target region in a case where the image is kept unmoved for a certain period of time.
Claim(s) 4 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Hassan-Shafique as applied to claims 1 and 14 above, and further in view of US 2021/0019342 A1 (Peng).
As per claim 4, Hassan-Shafique teaches the medical image analysis apparatus according to claim 1. Hassan-Shafique does not teach the first setting unit selects a sample region from the sample regions on a basis of a density of cell nuclei included in the sample regions, and the processing unit selects the reference image on a basis of an image of the selected sample region. Peng teaches these limitations (Peng: para 87: “Further queries can also be used to focus the search results along a specific subset that shares a very specific feature (i.e. the initial results could all be somehow similar, but the refined subset could focus on e.g. images that are similar with respect to the nuclei density”).
Thus, it would have been obvious to one of ordinary skill in the art, prior to filing, to implement the teachings of Peng into Hassan-Shafique, since both Hassan-Shafique and Peng are directed to the same field of endeavor of retrieving and displaying similar histopathological cases based on similarity to a current case, and Peng additionally teaches that regions may be selected according to cell nuclei density: "So the further queries would be a way of user interaction to 'tell' the system what kind of similarity the user is interested in" (Peng: para 87). Furthermore, one of ordinary skill in the art could have combined the elements as claimed by known methods and, in combination, each component functions the same as it does separately. One of ordinary skill in the art would have recognized that the results of the combination would be predictable.
As per claim 16, Hassan-Shafique teaches the medical image analysis apparatus according to claim 14, wherein the output unit selects one reference image from the reference images displayed on the second screen portion on a basis of instruction information from an operator, and the output unit outputs a (Hassan-Shafique: See arguments and citations offered in rejecting claim 14 above).
Hassan-Shafique does not teach a split display screen.
Peng teaches a split display screen (Peng: para 22: "The user interface may also include a feature for display of a similarity ranking for the one or more additional portions to the input query. The one or more additional portions in a ranked list or a pivot table";
[0023] FIG. 1 is an illustration of a display on a workstation showing an input or query image and a set of similar images to the query image in a results pane or pivot table. In FIG. 1, there are results shown for different models that selected similar images from a reference library.
[Greyscale images media_image9.png through media_image13.png, reproduced from Peng, FIG. 1]
).
Thus, it would have been obvious to one of ordinary skill in the art, prior to filing, to implement the teachings of Peng into Hassan-Shafique, since both Hassan-Shafique and Peng are directed to the same field of endeavor of retrieving and displaying similar histopathological cases based on similarity to a current case, and Peng additionally teaches that the input and retrieved images may be displayed in a split screen as a "comparison mode" (Peng: para 64). Furthermore, one of ordinary skill in the art could have combined the elements as claimed by known methods and, in combination, each component functions the same as it does separately. One of ordinary skill in the art would have recognized that the results of the combination would be predictable.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Atiba Fitzpatrick whose telephone number is (571) 270-5255. The examiner can normally be reached Monday-Friday, 10:00 am-6:00 pm.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Andrew Bee can be reached on (571) 270-5183. The fax phone number for Atiba Fitzpatrick is (571) 270-6255.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
Atiba Fitzpatrick
/ATIBA O FITZPATRICK/
Primary Examiner, Art Unit 2677