DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Election/Restrictions
Applicant’s election without traverse of claims 1-4 and 15-19 (claims 16 and 18, from species II, are rejoined due to their similarity with claim 1) and non-election of claims 5-14 in the reply filed on 12/8/2025 is acknowledged.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(d):
(d) REFERENCE IN DEPENDENT FORMS.—Subject to subsection (e), a claim in dependent form shall contain a reference to a claim previously set forth and then specify a further limitation of the subject matter claimed. A claim in dependent form shall be construed to incorporate by reference all the limitations of the claim to which it refers.
The following is a quotation of pre-AIA 35 U.S.C. 112, fourth paragraph:
Subject to the following paragraph [i.e., the fifth paragraph of pre-AIA 35 U.S.C. 112], a claim in dependent form shall contain a reference to a claim previously set forth and then specify a further limitation of the subject matter claimed. A claim in dependent form shall be construed to incorporate by reference all the limitations of the claim to which it refers.
Claims 16 and 18 are rejected under 35 U.S.C. 112(d) or pre-AIA 35 U.S.C. 112, 4th paragraph, as being of improper dependent form for failing to further limit the subject matter of the claims upon which they depend, or for failing to include all the limitations of the claims upon which they depend. Claims 16 and 18 each refer back to two different claims. The statute requires reference back to a single claim (“a reference to a claim previously set forth”). See MPEP 608.01(n).
Applicant may cancel the claim(s), amend the claim(s) to place the claim(s) in proper dependent form, rewrite the claim(s) in independent form, or present a sufficient showing that the dependent claim(s) complies with the statutory requirements.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1 and 15-19 are rejected under 35 U.S.C. 103 as being unpatentable over Adler (US-PAT-NO: 10672153 B2) in view of FISHER (WO 2021041128 A1).
Regarding claims 1, 15, 17, and 19, Adler teaches a method for training a deep learning model comprising:
receiving a training dataset comprising a plurality of input/output pairs (see Fig. 3, Col. 13, lines 20-22, the initial image can be fed to a Generator 306 convolutional network of a Conditional Generative Adversarial Network (CGAN); see Col. 13, lines 44-46, these posterior distribution simulated images at 312 can be used at run-time to provide information about the uncertainty associated with the initial image 304); and
training a conditional generative adversarial network (cGAN) using the training dataset (see Fig. 3, Col. , lines , during CGAN training of the Generator 306, the Discriminator 308 also receives the initial image at 304);
wherein the training comprises a regularization process configured to enforce consistency with a posterior mean (see Col. 16, lines 46-57, the above-described Deep Posterior Sampling approach for quantifying uncertainty in image reconstruction can use generative models from machine learning to create random samples si from the probability distribution given by P(x=x|y=y). Using such generated random samples, a wide range of one or more estimators can be evaluated. For example, according to the law of large numbers, the posterior mean can be approximated according to …).
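For context, the posterior-mean approximation Adler describes (averaging CGAN samples under the law of large numbers) can be sketched as follows. This is a minimal illustration only; the `generator` callable, its latent input, and the toy posterior are hypothetical stand-ins, not Adler's actual implementation:

```python
import numpy as np

def posterior_mean(generator, y, n_samples=5000, seed=0):
    """Approximate the posterior mean E[x | y] by averaging samples
    x_i = G(z_i, y) drawn from the generator, per the law of large
    numbers (illustrative sketch only)."""
    rng = np.random.default_rng(seed)
    samples = [generator(rng.standard_normal(y.shape), y)
               for _ in range(n_samples)]
    return np.mean(samples, axis=0)

# Toy generator whose "posterior" is y plus unit Gaussian noise,
# so the exact posterior mean is y itself.
toy_generator = lambda z, y: y + z
y = np.zeros(8)
estimate = posterior_mean(toy_generator, y)
```

With 5000 samples the Monte Carlo error per component is on the order of 1/sqrt(5000), so the estimate converges to the true posterior mean (here, the zero vector).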
However, Adler does not expressly teach a posterior covariance or trace-covariance.
FISHER teaches that point estimates and uncertainties of the treatment effect can be estimated using a Laplace approximation of the resulting posterior distribution. In practice, various methods including (but not limited to) exact integration, Markov Chain Monte Carlo calculations, and/or variational approximations could be used to obtain a posterior distribution. Using the Laplace approximation (i.e., a series expansion about the maximum of the posterior distribution), it is possible to derive an estimate for the covariance matrix of the posterior distribution (see paragraph 68).
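FISHER's Laplace approximation (a quadratic expansion about the maximum of the posterior distribution) can be illustrated with a minimal numerical sketch. The finite-difference Hessian and the toy Gaussian posterior below are assumptions chosen for illustration, not FISHER's implementation:

```python
import numpy as np

def laplace_covariance(neg_log_posterior, theta_map, eps=1e-3):
    """Laplace approximation: estimate the posterior covariance as the
    inverse Hessian of the negative log posterior evaluated at its
    maximum (the MAP point). Hessian via central finite differences."""
    d = theta_map.size
    hess = np.zeros((d, d))
    for i in range(d):
        for j in range(d):
            ei = np.zeros(d); ei[i] = eps
            ej = np.zeros(d); ej[j] = eps
            hess[i, j] = (neg_log_posterior(theta_map + ei + ej)
                          - neg_log_posterior(theta_map + ei - ej)
                          - neg_log_posterior(theta_map - ei + ej)
                          + neg_log_posterior(theta_map - ei - ej)) / (4 * eps ** 2)
    return np.linalg.inv(hess)

# Toy check: for an exactly Gaussian posterior the approximation is exact.
true_cov = np.array([[2.0, 0.5],
                     [0.5, 1.0]])
precision = np.linalg.inv(true_cov)
neg_log_post = lambda t: 0.5 * t @ precision @ t   # mode at the origin
cov = laplace_covariance(neg_log_post, np.zeros(2))
trace_cov = np.trace(cov)   # scalar "trace-covariance" uncertainty summary
```

The trace of the estimated covariance gives a single scalar uncertainty summary, which is one way the claimed "trace-covariance" could be read.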
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Adler by FISHER to obtain a posterior distribution and to derive an estimate for the covariance matrix of that posterior distribution, in order to provide a posterior covariance or trace-covariance as taught by FISHER. Combining these elements from the prior art according to known methods and techniques, such as the posterior distribution and its covariance matrix, would yield predictable results.
Regarding claim 16, Adler in view of FISHER teaches the method of claim 15, further comprising training the cGAN according to claim 1 (see the rejection of claim 1 above).
Regarding claim 18, Adler in view of FISHER teaches the method of claim 17, further comprising training the cGAN according to claim 1 (see the rejection of claim 1 above).
Claims 2-4 are rejected under 35 U.S.C. 103 as being unpatentable over Adler (US-PAT-NO: 10672153 B2) in view of FISHER (WO 2021041128 A1), and further in view of SAWADA (PGPUB: 20180025271 A1).
Regarding claim 2, the combination teaches the method of claim 1 but does not expressly teach wherein the regularization process uses a supervised L1 loss in conjunction with a standard deviation reward.
SAWADA teaches that in the case of performing supervised learning on the target neural network 102B, for example, a loss function (L1 or L2) representing an error between the answer vector Z and the output vector Y may be defined by using input data X, weights W, and answer labels (for example, L=|Y−Z|, where | | represents an absolute value) (see paragraph 163); the relation vector adjusting section 107 adjusts the value of a first relation vector and the value of a second relation vector so that the value of the first relation vector is within a range of a constant multiple of a first standard deviation calculated from a plurality of first output vectors, that the value of the second relation vector is within a range of a constant multiple of a second standard deviation calculated from a plurality of second output vectors, and that a difference value between the first relation vector and the second relation vector is large. That is, the relation vector adjusting section 107 adjusts the relation vectors generated by the relation vector generating section 101 so that the difference between the relation vectors increases within a predetermined range (see paragraph 170); and the relation vector adjusting section 107 determines whether each of the new relation vectors is within N times the standard deviation of the output vector Y calculated based on the target learning data attached with a corresponding answer label (see paragraph 178).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination by SAWADA to define a loss function (L1 or L2) representing an error between the answer vector Z and the output vector Y, thereby providing a regularization process that uses a supervised L1 loss, and to adjust the relation vectors so that each remains within a range of a constant multiple of a standard deviation calculated from the output vectors, thereby providing the standard deviation component of the claimed reward. Combining these elements from the prior art according to known methods and techniques, such as an L1 loss defined from the input data and a relation vector adjustment bounded by a constant multiple of a standard deviation, would yield predictable results.
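A supervised L1 loss combined with a weighted standard-deviation reward, as recited in claims 2 and 3, might look like the following sketch. The objective shape, the `beta` weight, and the sample layout are assumptions for illustration only, not the applicant's or SAWADA's actual formulation:

```python
import numpy as np

def l1_with_std_reward(samples, target, beta=0.1):
    """Supervised L1 loss on the sample mean, minus a weighted reward on
    the per-element standard deviation across posterior samples, so that
    matching the ground truth does not collapse sample diversity.
    `samples` has shape (n_samples, *target.shape)."""
    l1_term = np.abs(samples.mean(axis=0) - target).mean()
    std_reward = samples.std(axis=0).mean()
    return l1_term - beta * std_reward

target = np.ones(4)
# Two sample sets with the same mean (the target): one collapsed,
# one with spread 0.5, which earns the standard-deviation reward.
collapsed = np.vstack([target, target])
spread = np.vstack([target + 0.5, target - 0.5])
```

Here the collapsed samples incur zero L1 loss and zero reward, while the spread samples keep zero L1 loss and earn a reward of beta times the spread, so the combined objective favors preserving diversity around an accurate mean.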
Regarding claim 3, the combination teaches the method of claim 2, wherein the standard deviation reward is weighted (see SAWADA, paragraph 163, in the case of performing supervised learning on the target neural network 102B, for example, a loss function (L1 or L2) representing an error between the answer vector Z and the output vector Y may be defined by using input data X, weights W, and answer labels (for example, L=|Y−Z|, where | | represents an absolute value), and the weights W may be updated along a gradient for decreasing the loss function by using the gradient descent method or back propagation).
Regarding claim 4, the combination teaches the method of claim 2, further comprising autotuning the standard deviation reward (see SAWADA, paragraph 188, the relation vector adjusting section 107 determines whether each of the new relation vector R1′=[0.20, 0.0, −0.51, 1.15, −0.09, 0.10, −0.07, 0.03, 0.0] and the new relation vector R2′=[0.18, 0.04, −0.07, 0.14, 0.09, 0.02, 0.15, 0.20, 0.10] is within five times the standard deviation calculated from the respective output vectors. If the determination result is affirmative, the relation vector adjusting section 107 outputs the new relation vectors R1′ and R2′ to the identifying apparatus 20).
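SAWADA's acceptance check in paragraph 188, i.e., accepting an adjusted relation vector only when each component lies within N times the standard deviation calculated from the output vectors, can be sketched as follows (a minimal illustration; the function and variable names are assumptions, not SAWADA's code):

```python
import numpy as np

def within_n_std(relation_vector, output_vectors, n=5):
    """Return True when every component of the adjusted relation vector
    lies within n times the per-component standard deviation of the
    output vectors (cf. SAWADA's five-times-standard-deviation check)."""
    std = np.std(output_vectors, axis=0)
    return bool(np.all(np.abs(relation_vector) <= n * std))

outputs = np.array([[1.0, -1.0],
                    [-1.0, 1.0]])          # per-component std is 1.0
print(within_n_std(np.array([4.9, -4.9]), outputs))  # True: within 5 sigma
print(within_n_std(np.array([5.1, 0.0]), outputs))   # False: exceeds 5 sigma
```

Only vectors passing this bound would be forwarded, which is one way the bound could serve as an automatic tuning gate on the standard deviation reward.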
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to XIN JIA whose telephone number is (571)270-5536. The examiner can normally be reached 9:00 am - 7:30 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Gregory Morse can be reached at (571)272-3838. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/XIN JIA/Primary Examiner, Art Unit 2663