Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on January 21, 2026 has been entered.
Response to Arguments
Applicant’s arguments and amendments have overcome the previous objections and rejections. New objections and rejections appear below.
Claim Objections
Claims 1, 11, and 12 are objected to because of the following informalities:
Claims 1, 11, and 12 recite “a Alzheimer’s,” but “a” should be “an.”
Claims 1, 11, and 12 recite “regularization factor gradually reduced,” which is grammatically incorrect. One option is to recite “regularization factor is gradually reduced.”
Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 3, 4, and 7 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claims 3 and 4 recite first and second loss functions that do not have explicit antecedent basis in the first and second loss functions of claim 1. It is unclear whether these are intended to be the same first and second loss functions. If they are intended to be the same loss functions, reciting “the” instead of “a” would resolve the issue. If they are intended to be different, labels other than “first” and “second” would clarify this.
Claim 7 is rejected because it is unclear whether “the” loss functions refer to those of claim 1 or claim 4.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 3, 4, 8, 9, 11, and 12 (all claims except 7) are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Pat. Pub. 20190325215 (“Wang”) in view of KR101995383B1 (“Park”) (citations are to the previously attached machine translation) and U.S. Pat. Pub. 20200372344 (“Mavroeidis”). References are listed in the Notice of References Cited of the Office action in which they were first cited. If a reference is not identifiable (e.g., due to a typo), it can be identified by searching for the quoted text.
1. A method for classification by using a deep learning model, the method being performed by a computing device including at least one processor, the method comprising: (Wang, Fig. 3)
extracting a feature vector interpretable based on domain knowledge by inputting an image patch including at least one (Wang, Fig. 3, Attention map. See also [0051] “The M-Net is mainly used to extract a Feature Map and generate a feature vector according to the feature map. The A-Net is mainly used to generate an attention map and a category score map according to the feature map.” [0051] explains that the attention map is generated from the feature map.)
the first neural network is a convolutional neural network (Wang, [0043] “a neural network (including, but not limited to, a convolutional neural network)”)
the feature vector comprising one or more of:
a shape feature, (Wang, [0098] “The C-Net is used to perform feature extraction on the key sub-region, and generate a feature vector” and [0064] “Since the key sub-region corresponds to the first sub-region, the shape of the key sub-region is the same as that of the first sub-region.”)
a texture feature,
a volumetric feature,
a geometric feature of the (Wang, [0098] “The C-Net is used to perform feature extraction on the key sub-region, and generate a feature vector”)
estimating a probability value regarding presence of a (Wang, Fig. 3 and [0046] “The category detection result is used for indicating which of the predetermined plurality of categories the second image belongs to, and/or the probability that the second image belongs to the predetermined plurality of categories”)
the second neural network is a fully-connected neural network (Wang, [0050] “the present application can also be applied to a non-convolutional neural network, such as a fully-connected neural network …”)
wherein the deep learning model is pre-trained based on a loss function (Wang, [0098] “The embodiment of the present application can train the convolutional neural network only by using sample data of a weak supervision condition.”)
wherein the deep learning model is a model pre-trained based on a difference between the feature vector generated by the first neural network and a guide vector, and (Wang, [0098] “The embodiment of the present application can train the convolutional neural network only by using sample data of a weak supervision condition. Taking a scenario applied to retinal image detection as an example, the convolutional neural network training is performed on training data containing only a retinal image and a diagnosis result … .” Wang’s weak supervision condition teaches the claimed training based on a difference (i.e., the claimed guide vector corresponds to the diagnosis result).)
wherein the guide vector includes features describing characteristics of the at least one (Wang, [0098] “The trained convolutional neural network can also generate a threshold attention image while detecting the disease level of retinopathy, which displays the role of each pixel in the retinal image in the disease diagnosis”)
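For orientation, the architecture mapped above — a first, convolutional network extracting a feature vector from an image patch, and a second, fully-connected network estimating a probability value from that vector — can be sketched as a toy example. All names, shapes, and weights below are hypothetical illustrations; they are not taken from Wang or from the claims.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv_feature_extractor(patch, kernel):
    # Minimal stand-in for the first (convolutional) neural network:
    # one valid 2-D convolution followed by simple pooling statistics,
    # yielding a small "feature vector" from the image patch.
    kh, kw = kernel.shape
    h, w = patch.shape
    out = np.array([[np.sum(patch[i:i + kh, j:j + kw] * kernel)
                     for j in range(w - kw + 1)]
                    for i in range(h - kh + 1)])
    return np.array([out.mean(), out.max(), out.min()])

def fully_connected_classifier(feature_vec, weights, bias):
    # Minimal stand-in for the second (fully-connected) network:
    # a single linear layer with a sigmoid, estimating a probability value.
    z = feature_vec @ weights + bias
    return 1.0 / (1.0 + np.exp(-z))

patch = rng.random((8, 8))      # hypothetical image patch
kernel = rng.random((3, 3))     # hypothetical learned filter
weights, bias = rng.random(3), 0.0
prob = fully_connected_classifier(conv_feature_extractor(patch, kernel),
                                  weights, bias)
```

The two-stage split mirrors only the claim's structure (feature extraction, then probability estimation); it says nothing about the references' actual network sizes or training.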
[Note that the last few limitations of the claim are reproduced below.]
Wang is not relied on for the below claim language.
However, Park teaches classification for Alzheimer’s disease, and that the subregion is of the brain rather than Wang’s eye. (Park, [0009] “(b) a step of allowing the computing device to divide a plurality of brain regions, which are previously designated to be related to brain diseases, from the brain image by using a partition network or to support another device to divide the brain regions (c) The step that the computing device uses the , disease prediction network and it produces the classification information about the brain disorder based on the result information.” Park specifies that the brain disease is Alzheimer’s, see, e.g., [0002].)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the teachings of Park to the teachings of Wang such that Wang is applied to brain conditions in order to determine if someone has a brain disease. Park, abstract.
Based on the above, this is an example of “combining prior art elements according to known methods to yield predictable results.” MPEP 2143.
The combination of Wang and Park is not relied on for the below claim language.
However, Mavroeidis teaches:
a loss function comprising a first loss function and a second loss function; and (Mavroeidis, abstract, “This process is iteratively repeated until the loss functions both converge”)
wherein the pre-training comprises reducing a difference between the feature vector generated by the first neural network and the guide vector by using the second loss function, the second loss function being applied with a regularization factor, (Mavroeidis, abstract, “Loss functions of the neural network for both the training data and the test data are used to modify the regularization parameter, and the neural network model is retrained using the modified regularization parameter”)
wherein a size of the regularization factor gradually reduced until the number of learning cycles reaches a predetermined reference during the pre-training so that a relative weight between the first loss function and the second loss function is adjusted. (Mavroeidis, abstract, “This process is iteratively repeated until the loss functions both converge”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the teachings of Mavroeidis to the teachings of the combination of Wang and Park such that Mavroeidis is applied to avoid overfitting and thus improve performance. Mavroeidis [0005].
Based on the above, this is an example of “combining prior art elements according to known methods to yield predictable results.” MPEP 2143.
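To illustrate the claim language addressed above — a first loss plus a second, feature-matching loss whose regularization factor is gradually reduced until a predetermined number of learning cycles is reached — a minimal sketch follows. The function names, the squared-error form of the second loss, and the linear decay schedule are assumptions chosen for illustration only; they are not taken from Wang, Park, or Mavroeidis.

```python
import numpy as np

def combined_loss(pred_prob, label, feature_vec, guide_vec, reg_factor):
    # First loss: binary cross-entropy on the probability value
    # estimated by the second neural network.
    eps = 1e-12
    first_loss = -(label * np.log(pred_prob + eps)
                   + (1 - label) * np.log(1 - pred_prob + eps))
    # Second loss: squared difference between the feature vector
    # produced by the first neural network and the guide vector.
    second_loss = float(np.mean((np.asarray(feature_vec)
                                 - np.asarray(guide_vec)) ** 2))
    # The regularization factor sets the relative weight of the two losses.
    return first_loss + reg_factor * second_loss

def regularization_factor(cycle, reference_cycles=10, initial=1.0):
    # Hypothetical schedule: linearly reduce the factor each learning
    # cycle until the predetermined reference number of cycles is reached.
    return initial * max(0.0, 1.0 - min(cycle, reference_cycles)
                         / reference_cycles)
```

As the factor shrinks, the first (classification) loss dominates the total, which is one way the "relative weight between the first loss function and the second loss function" could be adjusted.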
3. The method of claim 1, wherein the loss function includes:
a first loss function having the probability value estimated by the second neural network as an input variable; and (Wang, [0098] “The convolutional neural network of the embodiment of the present application can be divided into the following three parts according to functions, i.e., an M-Net, an A-Net, and a C-Net. … The embodiment of the present application can train the convolutional neural network only by using sample data of a weak supervision condition.” The probability value is an input to the third part, i.e., the C-Net)
a second loss function having the feature vector extracted by the first neural network as an input variable. (Wang, [0098] “The convolutional neural network of the embodiment of the present application can be divided into the following three parts according to functions, i.e., an M-Net, an A-Net, and a C-Net. … The embodiment of the present application can train the convolutional neural network only by using sample data of a weak supervision condition.” Wang’s M-Net’s feature vector e.g., [0051] is used to train the M-Net.)
4. The method of claim 1, wherein the loss function is a sum of a first loss function used for the classification task and a second loss function used for a regression task. (Wang, [0091] “When the category detection result of the second image is generated according to the first category score vector and the second category score vector, the first category score vector and the second category score vector are averaged or weighted, and the averaged value or the weighted vector is converted into a category probability vector by a regression operation.” Wang’s averaging teaches the claimed sum (note that the claim is open ended), Wang’s first category score vector teaches the claimed “used for the classification task,” and the later use of Wang’s weighted vector for regression teaches the claimed “used for a regression task.” In other words, the claim requires that the second loss function is summed and that the second loss function is used for a regression task, but it imposes no requirements between the summation and the regression.)
8. The method of claim 1, wherein the extracting of the feature vector includes:
extracting the image patch including the at least one brain subregion from a medical image including a brain region; and inputting the image patch into the first neural network of the deep learning model to generate a feature vector corresponding to the brain subregion. (Park, [0009] “(b) a step of allowing the computing device to divide a plurality of brain regions, which are previously designated to be related to brain diseases, from the brain image by using a partition network or to support another device to divide the brain regions (c) The step that the computing device uses the , disease prediction network and it produces the classification information about the brain disorder based on the result information”)
9. The method of claim 8, wherein the feature vector includes a feature in a form interpretable based on the domain knowledge in relation to a characteristic of the brain region including at least one of volume, shape, length, or texture of the brain subregion. (Park, [0006] “The purpose of the present invention is to improve accuracy and efficiency of brain disease diagnosis by extracting image features (shape, texture, global position, structural features, etc.)”)
Claims 11 and 12 are rejected as per claim 1. Wang, claim 19 teaches claim 11’s “computer program stored in a computer-readable storage medium,” and Wang, claim 16 teaches claim 12’s computing device.
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over U.S. Pat. Pub. 20190325215 (“Wang”), KR101995383B1 (“Park”) and U.S. Pat. Pub. 20200372344 (“Mavroeidis”) in view of U.S. Pat. 11334807 (“O’Shea”).
7. The method of claim 4, wherein: the first loss function includes a cross-entropy loss function; and (Wang, [0098] “The embodiment of the present application can train the convolutional neural network only by using sample data of a weak supervision condition.” Wang’s weak supervision teaches the claimed cross-entropy because weak supervision divides outcomes into correct or incorrect, and these two distributions are the probability distributions compared by cross-entropy.)
The combination of Wang, Park, and Mavroeidis is not relied on for the below claim language.
However, O’Shea teaches the second loss function includes a hyperbolic log loss function. (O’Shea, claim 19, “a cross-entropy loss function, a log-cosine hyperbolic loss function.”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the teachings of O’Shea to the teachings of the combination of Wang, Park, and Mavroeidis such that the loss functions of O’Shea are used with the loss functions of the combination of Wang, Park, and Mavroeidis for the purpose of having more opportunity to minimize loss functions. O’Shea, claim 19.
Based on the above, this is an example of “combining prior art elements according to known methods to yield predictable results.” MPEP 2143.
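For reference, the two loss functions at issue in this rejection can be written out in a short sketch. These are standard textbook forms, not implementations from the cited references: cross-entropy compares a predicted probability against a label, and the hyperbolic log (log-cosh) loss penalizes a regression residual, behaving like squared error for small residuals and like absolute error for large ones.

```python
import numpy as np

def cross_entropy_loss(pred_prob, label, eps=1e-12):
    # Binary cross-entropy: distance between the predicted distribution
    # (pred_prob, 1 - pred_prob) and the label distribution.
    return -(label * np.log(pred_prob + eps)
             + (1 - label) * np.log(1 - pred_prob + eps))

def log_cosh_loss(pred, target):
    # Hyperbolic log (log-cosh) loss on the regression residual:
    # approximately residual**2 / 2 for small residuals,
    # approximately |residual| - log(2) for large ones.
    residual = np.asarray(pred) - np.asarray(target)
    return float(np.mean(np.log(np.cosh(residual))))
```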
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
U.S. Pat. Pub. 20190180441 – [0052] attention maps, [0068] likelihoods
U.S. Pat. 12354256 – “FIG. 18 is an example model architecture that may be implemented to estimate a fluid intelligence score from brain surface morphology. The model contains a pre-convolutional layer, four residual blocks, and a post residual block, followed by a fully connected layer.”
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DAVID ORANGE whose telephone number is (571)270-1799. The examiner can normally be reached Mon-Fri, 9-5.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Gregory Morse can be reached at 571-272-3838. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DAVID ORANGE/Primary Examiner, Art Unit 2663