Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Specification
The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed.
Information Disclosure Statement
The listing of references in the specification is not a proper information disclosure statement. 37 CFR 1.98(b) requires a list of all patents, publications, or other information submitted for consideration by the Office, and MPEP § 609.04(a) states, "the list may not be incorporated into the specification but must be submitted in a separate paper." Therefore, unless the references have been cited by the examiner on form PTO-892, they have not been considered.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are:
a derivation unit that derives an estimation result in Claims 2-15 (when considered to include all of the limitations of parent Claim 1), but not in stand-alone Claim 1 by itself (see the “Single Means Claim” discussion below);
a training unit that creates the multi-stage trained model in Claim 2 and its dependents, but not in Claim 16 (see the “Single Means Claim” discussion below); and
a display control unit that causes a display device to display in Claim 12 and its dependents.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 1 and 16 are rejected under 35 U.S.C. 112(a) or pre-AIA 35 U.S.C. 112, first paragraph, because the claims purport to invoke 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, but fail to recite a combination of elements as required by that statutory provision and thus cannot rely on the specification to provide the structure, material, or acts to support the claimed function. As such, each claim recites a function that has no limits and covers every conceivable means for achieving the stated function, while the specification discloses at most only those means known to the inventor. Accordingly, the disclosure is not commensurate with the scope of the claims.
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
Claims 2-15 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 4 recites the limitations “the class selected by a user from the first/second class classification.” There is a lack of proper antecedent basis in the claims for these limitations. For the purpose of examination, the claims will be interpreted as if they had read “a class selected by a user from the first/second class classification.”
Claim 7 recites the limitation “the user.” There is a lack of proper antecedent basis in the claims for this limitation. For the purpose of examination, the claim will be interpreted as if it had read “a user.”
The term “similar” in Claims 10 and 11 is a relative term which renders the claims indefinite. The term “similar” is not defined by the claims, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention.
Claim limitations “derivation unit,” “training unit,” and “display control unit” in Claims 2-15 invoke 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. However, the written description fails to disclose the corresponding structure, material, or acts for performing the entire claimed function and to clearly link the structure, material, or acts to the function. While the specification mentions CPUs, it does not do so in specific reference to structure for any of the recited units. Therefore, the claims are indefinite and are rejected under 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph.
For the purpose of examination, Claims 2-15 will be interpreted such that the units comprise a processor and an algorithm operating on that processor that performs the recited functions.
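The interpretation above (each recited unit read as a processor executing an algorithm that performs the recited function) can be sketched minimally as follows. The sketch is purely illustrative; every name in it (DerivationUnit, derive, the placeholder two-stage model) is hypothetical and is drawn neither from the application nor from the cited art:

```python
# Illustrative sketch only: models the interpretation of a claimed "unit" as a
# processor executing an algorithm that performs the recited function.
# All names and the placeholder model are hypothetical.

class DerivationUnit:
    """A 'derivation unit' read as a processor plus an algorithm on that processor."""

    def __init__(self, multi_stage_model):
        # the corresponding structure: an algorithm (here, a callable model)
        self.multi_stage_model = multi_stage_model

    def derive(self, pathological_image):
        # the recited function: derive an estimation result of determination
        return self.multi_stage_model(pathological_image)


# hypothetical two-stage model returning one estimated class per stage
two_stage = lambda image: ("stage1_class", "stage2_class")
unit = DerivationUnit(two_stage)
```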
Applicant may:
(a) Amend the claim so that the claim limitation will no longer be interpreted as a limitation under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph;
(b) Amend the written description of the specification such that it expressly recites what structure, material, or acts perform the entire claimed function, without introducing any new matter (35 U.S.C. 132(a)); or
(c) Amend the written description of the specification such that it clearly links the structure, material, or acts disclosed therein to the function recited in the claim, without introducing any new matter (35 U.S.C. 132(a)).
If applicant is of the opinion that the written description of the specification already implicitly or inherently discloses the corresponding structure, material, or acts and clearly links them to the function so that one of ordinary skill in the art would recognize what structure, material, or acts perform the claimed function, applicant should clarify the record by either:
(a) Amending the written description of the specification such that it expressly recites the corresponding structure, material, or acts for performing the claimed function and clearly links or associates the structure, material, or acts to the claimed function, without introducing any new matter (35 U.S.C. 132(a)); or
(b) Stating on the record what the corresponding structure, material, or acts, which are implicitly or inherently set forth in the written description of the specification, perform the claimed function. For more information, see 37 CFR 1.75(d) and MPEP §§ 608.01(o) and 2181.
Dependent claims are rejected for inheriting the indefiniteness of their parent claims.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1 and 16 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claims do not fall within at least one of the four categories of patent eligible subject matter because software per se embodiments fall within the scope of the claim language (as the claims do not invoke 35 U.S.C. 112(f)), and software itself is not a machine, article of manufacture, process, or composition of matter.
Claims 1-4, 7, 8, and 12-17 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Claim 1 can be amended to recite a device comprising a processor, and thus an article of manufacture, one of the four statutory categories of patentable subject matter. However, Claim 1 further recites “to derive an estimation result of determination on a first pathological image obtained by imaging a biological sample … in which class classification can be set in each stage,” which is a mental process of determination (estimating two classes based on a particular image). Thus, Claim 1 recites an abstract idea.
The claim further recites the additional elements “a determination support device comprising a derivation unit” and using a “multi-stage trained model” to perform the abstract idea step of deriving the particular results, which cannot integrate the abstract idea into a practical application since they merely recite using a computer or other machinery as a tool to perform the abstract idea (see MPEP 2106.05(f)(2)). Thus, the claim is directed to the abstract idea of estimating a result of two classifications from a pathological image.
Finally, the additional elements of the claim cannot provide an inventive concept or significantly more than the abstract idea itself, because they consist only of using a computer or other machinery as a tool to perform the abstract idea (see MPEP 2106.05(f)(2)). Therefore, the claim is subject-matter ineligible.
Claim 2, dependent upon Claim 1, merely recites an additional element of generically training a model to perform the abstract idea of Claim 1, which again is using a computer or other machinery to perform the abstract idea, which neither integrates the abstract idea into a practical application nor provides an inventive concept (see MPEP 2106.05(f)(2)).
Claim 3, dependent upon Claim 1, recites additional mental process steps of annotating or labeling training data (determining a label for a portion of the image). Claim 3 does not recite any additional elements which could integrate the abstract idea into a practical application, because the additional elements consist of specifying that two models perform the classification (first and second trained models) and that the models are trained, which again is using a computer or other machinery to perform the abstract idea, which neither integrates the abstract idea into a practical application nor provides an inventive concept (see MPEP 2106.05(f)(2)).
Claim 4, dependent upon Claim 3, merely states that the annotation labels come from a user, which is an additional element specifying what the data used in the abstract idea is, i.e. particular field of use, which can neither integrate the abstract idea into a practical application nor provide significantly more than the abstract idea itself (see MPEP 2106.05(h)).
Claim 7, dependent upon Claim 3, recites an additional element “to present, to the user, a grid,” which is insignificant extra-solution activity of data display (see MPEP 2106.05(g); it neither uses nor applies the abstract idea in any way) and which is well-understood, routine, and conventional (see the discussion of bounding boxes as standard in creating training data in Zhang et al., “Hierarchical Convolutional Neural Networks for Segmentation of Breast Tumors in MRI with Applications to Radiogenomics,” Introduction).
Claim 8, dependent upon Claim 3, recites an additional element of gathering annotation data from a user, which is insignificant extra-solution activity of data gathering (see MPEP 2106.05(g)) and also well-understood, routine, and conventional (see the discussion of labeling training data in Zhang, Introduction).
Claims 12-14, dependent upon Claim 1, recite an additional element to display the estimation result, which is insignificant extra-solution activity of outputting the result of the abstract idea (see MPEP 2106.05(g)) and is further well-understood, routine, and conventional per MPEP 2106.05(d), “transmitting data over a network.” Superimposition is also a well-understood, routine, and conventional method of displaying results (see Zhang, Figs. 1 & 2; Branson, “Active Annotation Translation,” Fig. 2; Jain, “Active Image Segmentation Propagation,” Figs. 1 & 2).
Claim 15, dependent upon claim 2, merely specifies the particular technological environment in which the abstract idea takes place (a multi-layer neural network) which by MPEP 2106.05(h) and 2106.05(g)(2) “using generic computer components,” neither integrates the abstract idea into a practical application nor provides significantly more than the abstract idea itself.
Claim 16 recites an information processing device comprising the training unit of Claim 2, and is thus rejected for reasons set forth in the rejection of Claim 2. Claim 17 recites the method performed by the information processing device of Claim 16, and is thus rejected for reasons set forth in the rejection of Claim 16.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-3, 12, 15, 16, and 17 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Zhang et al., “Hierarchical Convolutional Neural Networks for Segmentation of Breast Tumors in MRI With Applications to Radiogenomics.”
Regarding Claim 1, Zhang teaches a determination support device comprising a derivation unit (Zhang, pg. 442, 2nd column, 1st paragraph, “Python … our program was run on a single GPU” the method is performed on a computer) that derives an estimation result of determination on a first pathological image obtained by imaging a biological sample (Zhang, title, “Segmentation of Breast Tumors in MRI” & pg. 439, Fig. 1), the derivation being performed using a multi-stage trained model (Zhang, pg. 439, Fig. 1) in which class classification can be set in each stage (Zhang, pg. 439, Fig. 1, “Breast Mask” & “Tumor Mask”).
Regarding Claim 2, Zhang teaches the determination support device according to Claim 1 (and thus the rejection of Claim 1 is incorporated). Zhang further teaches a training unit that creates the multi-stage trained model by training a learning model using training data including annotations of different class classifications in each stage (Zhang, pg. 439, Fig. 1, “Breast Mask” & “Tumor Mask” are different class classifications at each stage; pg. 438, 2nd column, 2nd paragraph, “The radiologist manually annotated all tumors” & 2nd column, last paragraph, “For generating ground-truth breast masks, we employ a curve-fitting and active contour based method to obtain breast masks for training the proposed breast segmentation model” & pg. 439, 1st column, 2nd paragraph, “We first train an FCN model … for estimating the … breast mask … we train additional two FCN models to … detect all tumors in the input image”).
Regarding Claim 3, Zhang teaches the determination support device according to Claim 2 (and thus the rejection of Claim 2 is incorporated). Zhang has already been shown to teach wherein the multi-stage trained model includes: a first trained model; and a second trained model having the class classification different from the class classification of the first trained model (Zhang, pg. 439, 1st column, 2nd paragraph, “We first train an FCN model … for estimating the … breast mask … we train additional two FCN models to … detect all tumors in the input image”) and the training unit performs processes including: creating first training data by annotating a region included in a second pathological image with one of the classes of a first class classification (Zhang, pg. 438, 2nd column, last paragraph, “For generating ground-truth breast masks, …”); creating second training data by annotating a region included in the second pathological image with one of the classes of a second class classification different in class classification from the first class classification (Zhang, pg. 438, 2nd column, 2nd paragraph, “The radiologist manually annotated all tumors”); creating the first trained model by training the learning model using the first training data, and creating the second trained model by training the learning model using the second training data (Zhang, pg. 439, 1st column, 2nd paragraph, “We first train an FCN model … for estimating the … breast mask … we train additional two FCN models to … detect all tumors in the input image”).
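The claim mapping above describes a two-stage training flow: first training data annotated with breast-mask classes, second training data annotated with tumor classes, and one trained model per stage. That flow can be sketched numerically as follows. The sketch is illustrative only; the per-pixel logistic model and every name in it are hypothetical, and it does not reproduce Zhang's FCN architecture:

```python
import numpy as np


def train_stage(images, masks, lr=0.5, epochs=200):
    """Fit a per-pixel logistic model (one weight on pixel intensity) for one stage."""
    w, b = 0.0, 0.0
    x = images.reshape(-1).astype(float)
    y = masks.reshape(-1).astype(float)
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(w * x + b)))  # per-pixel sigmoid prediction
        w -= lr * np.mean((p - y) * x)          # logistic-loss gradient steps
        b -= lr * np.mean(p - y)
    return w, b


def predict(model, image):
    w, b = model
    return 1.0 / (1.0 + np.exp(-(w * image + b))) > 0.5


def train_multi_stage(images, breast_masks, tumor_masks):
    """Stage 1 learns the breast-mask classification; stage 2 learns the tumor
    classification on the stage-1 region, so each stage has its own classes."""
    stage1 = train_stage(images, breast_masks)
    stage2 = train_stage(images * breast_masks, tumor_masks)
    return stage1, stage2


def derive_estimation(models, image):
    """Apply the two trained stages in sequence to a new pathological image."""
    stage1, stage2 = models
    breast = predict(stage1, image)
    tumor = predict(stage2, image * breast)
    return breast, tumor
```

The sequential structure (the second stage operating on the first stage's region) mirrors the mapped arrangement of a breast-mask model followed by tumor-detection models.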
Regarding Claim 12, Zhang teaches the determination support device according to Claim 3 (and thus the rejection of Claim 3 is incorporated). Zhang further teaches a display control unit that causes a display device to display the estimation result of determination derived by each of the multi-stage trained models (Zhang, pg. 443, Fig. 5, the results are displayed).
Regarding Claim 15, Zhang teaches the determination support device according to Claim 2 (and thus the rejection of Claim 2 is incorporated). Zhang further teaches training the learned model by deep learning using a multi-layer neural network (Zhang, pg. 440, Fig. 3).
Regarding Claim 16, Zhang teaches an information processing device (Zhang, pg. 442, 2nd column, 1st paragraph, “Python … our program was run on a single GPU” the method is performed on a computer) for creating a multi-stage trained model that derives an estimation result of determination from a first pathological image obtained by imaging a biological sample (Zhang, title, “Segmentation of Breast Tumors in MRI” & pg. 439, Fig. 1), the information processing device comprising a training unit that creates the multi-stage trained model by training a learning model using training data including labels for annotation indicating different class classifications in each stage (Zhang, pg. 439, Fig. 1, “Breast Mask” & “Tumor Mask” are different class classifications at each stage; pg. 438, 2nd column, 2nd paragraph, “The radiologist manually annotated all tumors” & 2nd column, last paragraph, “For generating ground-truth breast masks, we employ a curve-fitting and active contour based method to obtain breast masks for training the proposed breast segmentation model” & pg. 439, 1st column, 2nd paragraph, “We first train an FCN model … for estimating the … breast mask … we train additional two FCN models to … detect all tumors in the input image”).
Claim 17 recites precisely the method performed by the device of Claim 16, and is thus rejected for reasons set forth in the rejection of Claim 16.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 4, 7, 8, and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Zhang et al., “Hierarchical Convolutional Neural Networks for Segmentation of Breast Tumors in MRI With Applications to Radiogenomics,” in view of Jain et al., “Active Image Segmentation Propagation.”
Regarding Claim 4, Zhang teaches the determination support device according to Claim 3 (and thus the rejection of Claim 3 is incorporated). Zhang has already been shown to teach creating the first training data by annotating the second pathological image with [a] class … from the first class classification; and creating the second training data by annotating the second pathological image with [a] class selected by the user from the second class classification (Zhang, pg. 438, 2nd column, 2nd paragraph, “The radiologist manually annotated all tumors” & 2nd column, last paragraph, “For generating ground-truth breast masks, we employ a curve-fitting and active contour based method to obtain breast masks for training the proposed breast segmentation model”).
Zhang automatically annotates the first class classification/breast mask ground truth, and thus does not teach the class selected by a user from the first class classification. However, Jain teaches user annotations of masks vs. background in order to train a segmentation model (Jain, pg. 2867, Fig. 2 shows foreground vs background masks & pg. 2865, 1st column, 1st paragraph, “The idea is to actively request human annotation for select images that, once labeled with their foreground, are most expected to help co-segment the remaining unlabeled images”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include human annotations in the labeled data for mask determination, as does Jain. The motivation to do so is “to improve the joint segmentation of other unlabeled images in the collection” (Jain, pg. 2870, 1st column, 2nd paragraph).
Regarding Claim 7, Zhang teaches the determination support device according to Claim 3 (and thus the rejection of Claim 3 is incorporated). Zhang further teaches in creating training data … for creating … the trained model (Zhang, pg. 438, 2nd column, 2nd paragraph, “The radiologist manually annotated all tumors with the smallest cuboid bounding box covering each tumor region” where “box” denotes a grid of pixels displayed).
Zhang automatically annotates the first class classification/breast mask ground truth, and thus teaches displaying a grid only for the second/tumor stage, not for each stage of the multi-stage models. However, Jain teaches user annotations of masks vs. background in order to train a segmentation model (Jain, pg. 2867, Fig. 2 shows foreground vs. background masks & pg. 2865, 1st column, 1st paragraph, “The idea is to actively request human annotation for select images that, once labeled with their foreground, are most expected to help co-segment the remaining unlabeled images”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include human annotations in the labeled data for mask determination, thus also including bounding boxes/a grid, as does Jain. The motivation to do so is “to improve the joint segmentation of other unlabeled images in the collection” (Jain, pg. 2870, 1st column, 2nd paragraph).
Regarding Claim 8, Zhang teaches the determination support device according to Claim 3 (and thus the rejection of Claim 3 is incorporated). Zhang further teaches in creating training data … for creating … the trained model (Zhang, pg. 438, 2nd column, 2nd paragraph, “The radiologist manually annotated all tumors with the smallest cuboid bounding box covering each tumor region” where “box” denotes a grid of pixels displayed).
Zhang automatically annotates the first class classification/breast mask ground truth, and thus teaches annotating a region only for the second/tumor stage, not for each stage. However, Jain teaches user annotations of masks vs. background in order to train a segmentation model (Jain, pg. 2867, Fig. 2 shows foreground vs. background masks & pg. 2865, 1st column, 1st paragraph, “The idea is to actively request human annotation for select images that, once labeled with their foreground, are most expected to help co-segment the remaining unlabeled images”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include human annotations of regions in the labeled data for mask determination, as does Jain. The motivation to do so is “to improve the joint segmentation of other unlabeled images in the collection” (Jain, pg. 2870, 1st column, 2nd paragraph).
Regarding Claim 14, Zhang teaches the determination support device according to Claim 12 (and thus the rejection of Claim 12 is incorporated). Zhang further teaches to display the estimation result of determination derived in [the second stage] of the multi-stage trained model so as to be superimposed on the first pathological image (Zhang, pg. 443, Fig. 5, the tumor results are displayed superimposed on the original image, also see Fig. 2).
However, Zhang does not teach superimposing the breast mask/first stage (foreground vs. background) results on the first image. Jain teaches this limitation (Jain, Figs. 1 & 2). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to superimpose both first-stage and second-stage classes (breast mask and tumor segmentation) on the images. The motivation to do so is to see all the results produced by the machine learning classification system.
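Superimposing both stages' results on the original image, as combined above, can be sketched as follows. The sketch is illustrative only; the color scheme and all names are hypothetical and are not drawn from the claims or the cited art:

```python
import numpy as np


def superimpose(image, breast_mask, tumor_mask):
    """Overlay stage-1 (breast) and stage-2 (tumor) estimation results on a
    grayscale image: the breast region is tinted green and the tumor region red,
    so the results of both stages are visible at once."""
    rgb = np.stack([image, image, image], axis=-1).astype(float)
    rgb[breast_mask, 1] = 0.5 * rgb[breast_mask, 1] + 0.5  # green tint: stage 1
    rgb[tumor_mask, 0] = 0.5 * rgb[tumor_mask, 0] + 0.5    # red tint: stage 2
    return rgb
```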
Claims 5, 6, 10, and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Zhang, in view of Jain, and further in view of Branson et al., “Active Annotation Translation.”
Regarding Claim 6, the Zhang/Jain combination of Claim 4 teaches the determination support device according to Claim 4 (and thus the rejection of Claim 4 is incorporated). The combination does not teach, but Branson does teach, in creating training data … for creating … trained models (Branson, Abstract, “a general framework for quickly annotating an image dataset when previous annotations exist”) to present, to the user, the estimation result of determination estimated by each of the multi-stage trained models together with the second pathological image (Branson, pg. 3, Fig. 3, “an annotator interactively corrects a set of predicted part locations” and “a computer vision predicted segmentation is interactively corrected using brush strokes”; that is, both mask and object locations, predicted by a model, can be corrected by a user in order to annotate the training data, see pg. 3, 1st column, 2nd paragraph, “as we incrementally label new target annotations Y, we progressively obtain more training data to learn an … automated computer-vision based structured predictor”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include Branson’s method of speedier annotation, by displaying a prediction of what the annotations would be for both breast masks and tumor identification, in the Zhang/Jain combination. The motivation to do so is “to form an improved prediction system which accelerates the annotator’s work through a smart GUI” (Branson, Abstract).
Claim 5 is strictly broader than Claim 6 (only the first stage is required to have the predictions presented while annotating, i.e. the breast masks, rather than both stages) and is thus rejected for reasons set forth in the rejection of Claim 6.
Regarding Claim 10, the Zhang/Jain/Branson combination of Claim 5 teaches the decision support device according to Claim 5 (and thus the rejection of Claim 5 is incorporated). Claim 10 merely recites presenting and annotating a third image (i.e., a second training image) similar to the second image in the same manner that the second image is annotated, and as Zhang/Jain/Branson’s annotation is not merely for a single image (Branson, Abstract, “quickly annotating an image dataset”; in the combination, the dataset consists entirely of breast images and thus of similar images), Claim 10 is rejected for reasons set forth in the rejection of Claim 5.
Regarding Claim 11, the Zhang/Jain/Branson combination of Claim 10 teaches the decision support device according to Claim 10 (and thus the rejection of Claim 10 is incorporated). Claim 11 merely recites that the third image is annotated with a label similar to a label recommended to be used for annotating the region of the second pathological image. Since all labels in the first stage are breast foreground/background, and all labels in the second stage are “tumor” or “no tumor”, all the annotated labels are similar to the other labels in their respective stage.
Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Zhang, in view of Jain and Branson, and further in view of Lyman, US PG Pub 2020/0161005.
Regarding Claim 9, the Zhang/Jain/Branson combination of Claim 5 teaches the decision support device according to Claim 5 (and thus the rejection of Claim 5 is incorporated). The combination does not teach, but Lyman (also teaching examination of annotated data for training medical scan analysis systems) does teach wherein in creating training data … for creating … models, the training unit increases magnification of the second pathological image to be presented to the user (Lyman, [0453], “The interactive interface 7075 can automatically … zoom-in on … the annotation data”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to zoom in on the annotation data while the user is annotating both stages. The motivation to do so is to allow easier editing (Lyman, [0453]).
Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Zhang, in view of Lenga, US PG Pub 2022/0101626 (with an effective filing date of 1/21/2020).
Regarding Claim 13, Zhang teaches the determination support device according to Claim 12 (and thus the rejection of Claim 12 is incorporated). Zhang does not teach to display the estimation result of determination … together with reliability of each of the estimation results, but Lenga (also in medical image segmentation) teaches this limitation (Lenga, [0084-0085], “For … semantic segmentation … visualization, the confidence can be coded as the colour saturation or opacity in a display”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to compute a confidence for both task stages of Zhang and display them, in the manner of Lenga. The motivation to do so is to “allow an expert … to quickly assess results of a model by focusing on results/outputs that are associated with higher confidence measures” (Lenga, [0014]).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Ha, US PG Pub 2020/0372637 also teaches manual annotation of mask images.
Venkatesan, US Patent 12,277,192, also teaches multi-stage multi-task hierarchical classifiers.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to BRIAN M SMITH whose telephone number is (469) 295-9104. The examiner can normally be reached Monday - Friday, 8:00 am - 4:00 pm Pacific.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kakali Chaki can be reached at (571) 272-3719. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/BRIAN M SMITH/Primary Examiner, Art Unit 2122