Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Specification
The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed.
The following title is suggested: “Estimation Apparatus, Estimation Method, and Program for Presenting a Determination Reason and Basis”.
Claim Interpretation
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are:
-- “a data acquisition unit configured to acquire input data …” and “an estimation unit configured to estimate information …”, in claim 1.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 5-6 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. Claim 5 recites the limitation “the estimation processing unit” in line 1. There is insufficient antecedent basis for this limitation in the claim, because “the estimation processing unit” is not introduced earlier. It is suggested that “the estimation processing unit” be changed to “the estimation unit”, to refer to “an estimation unit” of independent claim 1. Claim 6 is rejected by virtue of its dependency on rejected claim 5.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-2, 4, and 7-13 are rejected under 35 U.S.C. 103 as being unpatentable over Yong et al. (KR 20280138107, based on English machine translation) in view of Suzuki et al. (“Computer-Aided Diagnostic Scheme for Distinction Between Benign and Malignant Nodules in Thoracic Low-Dose CT by Use of Massive Training Artificial Neural Network”, IEEE Transactions on Medical Imaging, vol. 24, no. 9, September 2005).
Regarding claim 1, Yong discloses an estimation apparatus, (CAD system), comprising:
a data acquisition unit configured to acquire input data serving as image data, (Fig. 2, and Par. 0056, implicit by acquiring the medical image (10), which may be an image or video, using at least one of MRI, CT, X-ray, PET, and EIT);
an estimation unit, (110 in Fig. 1), configured to estimate information relating to an estimation object by inputting the input data into a trained model, (see at least: Fig. 1, and Par. 0037-0040, the lesion diagnosis unit (110), “i.e., estimation unit”, analyzes the medical image using a deep neural network, extracts feature information within the medical image based on the image analysis, and can diagnose whether there is a lesion, “i.e., object”, based on the extracted feature information, [i.e., estimate information relating to an estimation object, “extracts feature information to determine whether there is a lesion”, by inputting the input data into a trained model, “implicitly by inputting the medical image to the deep neural network”]); and
a display unit, (121 in Fig. 2), configured to display the estimated information and a reason why the estimated information is estimated, (see at least: Fig. 1, and Par. 0045-0049, the diagnosis basis derivation unit (120) derives basis information that explains the basis for lesion diagnosis using the extracted feature information and diagnostic information according to the diagnosis result, “i.e., estimated information and a reason why the estimated information is estimated”, where the diagnostic basis derivation unit (120) may include a visual description information generation unit (121) that generates visual explanation information to visually provide which information from the medical image was extracted and utilized to derive the diagnosis result, [i.e., a display unit, (121 in Fig. 2), configured to display the estimated information and a reason why the estimated information is estimated, “implicit by generating visual description information to provide which information from the medical image was extracted and utilized to derive the diagnosis result by the lesion diagnosis factor description information”]);
Yong does not expressly disclose wherein the trained model includes a plurality of modularized networks constructed in such a manner that each of the modularized networks are trained in advance by different characteristics of the estimation object in image data for first training and estimation, and a fusion network for estimating information relating to the estimation object in input images constructed in such a manner that a plurality of output signals obtained by inputting image data for the second training and estimation into the plural modularized networks are inputted.
However, Suzuki discloses wherein the trained model includes a plurality of modularized networks, (Fig. 3, “MTANNs”), constructed in such a manner that each of the modularized networks are trained in advance by different characteristics of the estimation object in image data for first training and estimation, (see at least: Page 1140, section III.C, the multi-MTANN consists of plural MTANNs that are arranged in parallel, where each MTANN is trained by use of benign nodules representing a different benign type, but with the same malignant nodules, [i.e., wherein the trained model includes a plurality of modularized networks, “multi-MTANN”], and each MTANN acts as an expert for distinguishing malignant nodules from a specific type of benign nodule, e.g., MTANN no. 1 is trained to distinguish malignant nodules from small benign nodules, and MTANN no. 2 is trained to distinguish malignant nodules from medium-sized benign nodules with fuzzy edges; and so on, [i.e., MTANN is implicitly constructed in such a manner that each of the modularized networks, “MTANN no. 1, MTANN no. 2…”, are trained in advance by different characteristics of the estimation object in image data for first training and estimation, “MTANN no. 1 is trained to distinguish malignant nodules from small benign nodules, and MTANN no. 2 is trained to distinguish malignant nodules from medium-sized benign nodules”]); and
a fusion network for estimating information relating to the estimation object in input images constructed in such a manner that a plurality of output signals obtained by inputting image data for the second training and estimation into the plural modularized networks are inputted, (see at least: Fig. 3, where the integration ANN corresponds to the fusion network; and Page 1141, section D, left-hand column, the scores from the expert MTANNs in the multi-MTANN are combined by use of an integration ANN such that different types of benign nodules can be distinguished from malignant nodules, where scores of each MTANN are entered to each input unit in the integration ANN, “i.e., a plurality of output signals obtained from the modularized networks (MTANNs) are input to the integration ANN (fusion network)”, and after training, the integration ANN is expected to output a higher value for a malignant nodule, and a lower value for a benign nodule, “i.e., estimating information relating to the estimation object”).
Yong and Suzuki are combinable because they are both concerned with medical imaging diagnosis. Therefore, it would have been obvious to a person of ordinary skill in the art to modify Yong to use the MTANNs and integration ANN, as taught by Suzuki, with Yong's lesion diagnosis unit (110), in order to make a distinction between malignant and benign nodules, (see Abstract, and Page 1141, section D, left-hand column).
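For illustration only, the parallel-expert arrangement relied upon above, in which plural modularized networks produce output signals that are entered into a fusion (integration) network, may be sketched as follows. This sketch is hypothetical: the PyTorch framework, class names, layer sizes, and the choice of four experts are the Examiner's illustration and are not code from Yong or Suzuki; the 81-dimensional input merely echoes Suzuki's description of an 81-D input vector.

import torch
import torch.nn as nn

class ExpertNet(nn.Module):
    # Stands in for one modularized network (one MTANN expert): it scores
    # one characteristic, e.g., one benign-nodule type versus malignant.
    def __init__(self, in_dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, x):
        return self.net(x)  # one output signal (score) per input

class FusionNet(nn.Module):
    # Stands in for the fusion (integration) network: it maps the experts'
    # output signals to a single estimate for the estimation object.
    def __init__(self, n_experts: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_experts, 8), nn.ReLU(),
                                 nn.Linear(8, 1), nn.Sigmoid())

    def forward(self, scores):
        return self.net(scores)

n_experts, in_dim = 4, 81  # 81 echoes Suzuki's 81-D input vector; 4 is arbitrary
experts = nn.ModuleList([ExpertNet(in_dim) for _ in range(n_experts)])
fusion = FusionNet(n_experts)

x = torch.randn(2, in_dim)                          # dummy image-derived vectors
scores = torch.cat([e(x) for e in experts], dim=1)  # parallel output signals
estimate = fusion(scores)                           # e.g., likelihood of malignancy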
Regarding claim 2, the combined teaching of Yong and Suzuki as a whole discloses the limitations of claim 1.
Yong further discloses wherein the information relating to the estimation object includes some or all of name, type, and attribute of the object and a numerical value related to the object, (Yong, see at least: Par. 0058, the diagnosis result may include numerical values and disease names, “i.e., some of the diagnosis result, including numerical values and disease names related to the object”).
Regarding claim 4, the combined teaching of Yong and Suzuki as a whole discloses the limitations of claim 1.
Yong further discloses wherein the estimation unit further outputs information indicating the degree of influence of the plurality of output signals on a result of estimation, (Yong, see at least: Par. 0099, the lesion diagnosis factor description information generation unit (522) can generate highly reliable lesion diagnosis factor description information …, and degree of spiculation, through a cost function (541) regarding the accuracy of the lesion diagnosis factor description information, “implicitly, the degree of influence of the plurality of output signals on a result of estimation”).
Regarding claim 7, the combined teaching of Yong and Suzuki as a whole discloses the limitations of claim 1.
Suzuki further discloses wherein the estimation unit is configured to construct the plurality of trained modularized networks by receiving the image data for the first training and estimation from the data acquisition unit, and inputting the image data for the first training and estimation into the plurality of modularized networks, (Suzuki, see at least: Page 1140, Fig. 3, where the MTANNs are constructed to receive the input image (nodule ROI), implicitly acquired from the data acquisition unit, and the image data (nodule ROI) is input to the MTANN components for the first training), and construct the trained fusion network by receiving the image data for the second training and estimation from the data acquisition unit and inputting the plurality of output signals obtained by the input of the image data for the second training and estimation into the plurality of modularized networks into the fusion network to implement supervised training, (Suzuki, see at least: Page 1140, Fig. 3, the integration ANN is constructed to receive the output scoring from the plurality of MTANN components to perform a supervised second training, to determine the likelihood of malignancy).
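For illustration only, the two-stage training flow recited in claim 7 (first training of the modularized networks on the first training data, then supervised training of the fusion network on their output signals for the second training data) may be sketched as follows. The function names, optimizer, and loss functions are the Examiner's hypothetical choices, not taken from either reference; the experts and fusion module are any torch modules shaped like the ExpertNet/FusionNet sketch above.

import torch
import torch.nn as nn

def train_experts(experts, first_stage_sets, epochs=10, lr=1e-3):
    # Stage 1: each modularized network is trained on its own data set
    # (x, y), where y marks the characteristic that network specializes in.
    loss_fn = nn.BCEWithLogitsLoss()
    for expert, (x, y) in zip(experts, first_stage_sets):
        opt = torch.optim.Adam(expert.parameters(), lr=lr)
        for _ in range(epochs):
            opt.zero_grad()
            loss = loss_fn(expert(x), y)  # y: float targets shaped like expert(x)
            loss.backward()
            opt.step()

def train_fusion(experts, fusion, x2, y2, epochs=10, lr=1e-3):
    # Stage 2: the already-trained experts are held fixed; only their output
    # signals on the second training data are entered into the fusion network,
    # which is trained in a supervised manner against labels y2.
    with torch.no_grad():
        scores = torch.cat([e(x2) for e in experts], dim=1)
    loss_fn = nn.BCELoss()  # the fusion sketch above ends in a sigmoid
    opt = torch.optim.Adam(fusion.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(fusion(scores), y2)
        loss.backward()
        opt.step()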
Regarding claim 8, the combined teaching of Yong and Suzuki as a whole discloses the limitations of claim 1.
Suzuki further discloses wherein the image data for the first training and estimation includes a plurality of data sets trained in each of the plurality of modularized networks, the plurality of data sets corresponding to the information relating to the estimation object, and the plurality of modularized networks, which have been trained, are constructed in such a manner that the plurality of data sets are input into the plurality of modularized networks to be subjected to training, respectively, (Suzuki, see at least: Page 1140, Fig. 3, where each of the plurality of MTANN components is trained on the image data (nodule ROI) together with the nodule information, (implicitly, information related to the object), by inputting the data set with respect to the image data (nodule ROI) to the MTANN components).
Regarding claim 9, the combined teaching of Yong and Suzuki as a whole discloses the limitations of claim 1.
Suzuki further discloses wherein each of the plurality of output signals obtained by inputting the image data for the second training and estimation into the plurality of modularized networks is a signal corresponding to one type of the characteristics of the estimation object, (Suzuki, see at least: Page 1140, Fig. 3, where the output signals from the MTANN components are input to the integration ANN; the multi-MTANN consists of plural MTANNs that are arranged in parallel, and each MTANN is trained by use of benign nodules representing a different benign type, “i.e., different characteristics of the estimation object”).
Regarding claim 10, the combined teaching of Yong and Suzuki as a whole discloses the limitations of claim 1.
Yong further discloses wherein the estimation unit estimates, as the information relating to the estimation object, discrimination information relating to a state of the estimation object discriminated based on a type of the characteristics of the estimation object, (Yong, see at least: Par. 0065, the diagnostic network (320) can classify malignant masses and benign masses according to the lesion characteristics, such as the margin and shape of a lesion (Mass) in the medical image, and a random noise vector).
Regarding claim 11, the combined teaching of Yong and Suzuki as a whole discloses the limitations of claim 1.
Suzuki further discloses wherein the fusion network constructed by training of the image data for the first training and estimation and the image data for the second training and estimation, including a boundary surface that is formed in a vector space, to which a multidimensional vector having the output signals from the plurality of modularized networks belongs, to discriminate the state of the estimation object, (Suzuki, see at least: Page 1140, Fig. 3, where the MTANNs are constructed to receive the input image (nodule ROI), implicitly acquired from the data acquisition unit, and the image data (nodule ROI) is input to the MTANN components for the first training, and the integration ANN is constructed to receive the output scoring from the plurality of MTANN components to perform a supervised second training, to determine the likelihood of malignancy. Further, from Page 1147, left-hand column, the segmentation was performed by use of the radial search of edge candidates based on edge magnitude and contour smoothness for determining the regions of the nodules, “i.e., implicitly determining the boundary surface of the lesion”, and from Page 1148, paragraph below Fig. 18, all 76 malignant nodules in the database are shown in the principal component (PC) vector space, “i.e., the boundary surface is implicitly formed in a vector space to implicitly discriminate the state of the estimation object”, where the MTANN input can be considered as an 81-dimensional (81-D) vector, “i.e., the multidimensional vector output from the plurality of MTANN components”).
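For illustration only, one way to read the “boundary surface … formed in a vector space” limitation onto such an arrangement is as the decision boundary that a trained fusion network induces in the space of expert score vectors. The following sketch is hypothetical (the network, 0.5 threshold, and probe grid are the Examiner's illustration, not taken from Suzuki).

import torch
import torch.nn as nn

# Hypothetical 2-expert fusion network (untrained, random weights).
fusion = nn.Sequential(nn.Linear(2, 8), nn.ReLU(), nn.Linear(8, 1), nn.Sigmoid())

def on_malignant_side(score_vec, threshold=0.5):
    # The threshold on the fusion output defines a surface in score space;
    # vectors on opposite sides of it receive different discriminations.
    with torch.no_grad():
        return fusion(score_vec).item() > threshold

# Probe a coarse grid of 2-D score vectors; changes between neighboring
# grid points trace where the boundary surface lies.
for a in (0.0, 0.5, 1.0):
    for b in (0.0, 0.5, 1.0):
        v = torch.tensor([[a, b]])
        print((a, b), on_malignant_side(v))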
Regarding claim 12, claim 12 recites substantially similar limitations as those set forth in claim 1. As such, claim 12 is rejected for at least a similar rationale.
The Examiner further acknowledges the following additional limitation: “an estimation method”. However, Yong discloses the “estimation method”, (see at least: Par. 0001, “method for generating a diagnosis reason explanation”).
Regarding claim 13, claim 13 recites substantially similar limitations as those set forth in claim 1. As such, claim 13 is rejected for at least a similar rationale.
The Examiner further acknowledges the following additional limitation: “a non-transitory computer readable medium storing a program”. However, Yong discloses the “non-transitory computer readable medium storing a program”, (see at least: Par. 0123 and 0125, software and data may be stored on one or more computer-readable recording media).
Claims 3 and 5-6 are rejected under 35 U.S.C. 103 as being unpatentable over Yong et al. and Suzuki et al., as applied to claims 2 and 4 above, and further in view of LaLonde et al. (“Encoding Visual Attributes in Capsules for Explainable Medical Diagnoses”, Springer Nature Switzerland AG, 2020).
Regarding claim 3, the combined teaching of Yong and Suzuki as a whole discloses the limitations of claim 1.
Yong further discloses that the lesion diagnosis unit (110) can generate a feature vector by fusing feature information extracted from a medical image based on a deep neural network, (Yong, Par. 0043); however, the combined teaching of Yong and Suzuki as a whole does not expressly disclose wherein the fusion network is constructed by training of a multidimensional vector having the plurality of output signals as components.
However, LaLonde discloses training of a multidimensional vector having the plurality of output signals as components, (see at least: Abstract, “X-Caps encodes high-level visual object attributes within the vectors of its capsules”; and Fig. 3, Page 298, the proposed network provides (1) visually-interpretable high-level features encoded in the X-Caps capsule vectors, “i.e., implicitly training of a multidimensional vector having the plurality of output signals as components, (capsule vectors), using the network (1)”).
Yong, Suzuki, and LaLonde are combinable because they are all concerned with medical imaging diagnosis. Therefore, it would have been obvious to a person of ordinary skill in the art to modify the combined teaching of Yong and Suzuki to use the X-Caps, as taught by LaLonde, in order to encode high-level visual object attributes within the vectors, (LaLonde, see Abstract), to thereby diagnose the nodule on different scales, (LaLonde, see paragraph below Fig. 2).
Regarding claim 5, the combined teaching of Yong and Suzuki as a whole discloses the limitations of claim 4.
Yong further discloses wherein the estimation processing unit outputs the degree of influence on the result of estimation, (see Par. 0099), and further outputs information indicating the selected output signals, (see at least: Par. 0045-0049, and Par. 0104, derive visual explanation information and lesion diagnosis factor explanation information that explain the basis for lesion diagnosis).
The combined teaching of Yong and Suzuki as a whole does not expressly disclose selecting the predetermined number of the output signals in descending order.
However, LaLonde discloses selecting the predetermined number of the output signals in descending order, (see at least: Fig. 3, X-Caps generates signals A1, A2, …, An, implicitly in descending order).
Yong, Suzuki, and LaLonde are combinable because they are all concerned with medical imaging diagnosis. Therefore, it would have been obvious to a person of ordinary skill in the art to modify the combined teaching of Yong and Suzuki to use the X-Caps for generating attribute capsule signals in descending order, as taught by LaLonde, to diagnose the nodule on different scales, (LaLonde, see paragraph below Fig. 2).
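For illustration only, selecting a predetermined number of output signals in descending order, as recited in claim 5, can be sketched as a top-k selection. The scores and the value of k below are placeholders chosen by the Examiner for illustration, not values from any reference.

import torch

scores = torch.tensor([0.12, 0.87, 0.35, 0.60])  # placeholder output signals
k = 2                                            # the predetermined number
top_vals, top_idx = torch.topk(scores, k)        # values returned in descending order
print(top_vals.tolist(), top_idx.tolist())       # -> [0.87, 0.60] [1, 3]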
Regarding claim 6, the combined teaching of Yong, Suzuki, and LaLonde as a whole discloses the limitations of claim 4.
Yong further discloses wherein the display unit displays the information indicating the selected output signals as information indicating the reason, or information indicating the reason and a basis on which the reason is obtained, (see at least: Par. 0045-0049, and Par. 0104, implicit by deriving visual explanation information and lesion diagnosis factor explanation information that explain the basis for lesion diagnosis).
Contact Information
Any inquiry concerning this communication or earlier communications from the examiner should be directed to AMARA ABDI whose telephone number is (571)272-0273. The examiner can normally be reached 9:00am-5:30pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Vu Le can be reached at (571) 272-7332. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/AMARA ABDI/Primary Examiner, Art Unit 2668 01/23/2026