Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
This Office Action is in response to the application filed on 06/29/2023.
Claims 1-9 are pending.
Information Disclosure Statement
The information disclosure statement (IDS) filed on 06/29/2023 has been considered (see form-1449, MPEP 609).
Drawings
The drawings filed on 06/29/2023 are accepted.
Examiner Notes
Examiner cites particular columns, paragraphs, figures and line numbers in the references as applied to the claims below for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claim, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the applicant fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the examiner.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-2, 4-5 and 7-8 are rejected under 35 U.S.C. 103 as being unpatentable over Rolfe et al. (US PGPUB 2019/0244680, hereinafter Rolfe), in view of Baker et al. (US PGPUB 2019/0095798, hereinafter Baker).
As per claim 1, Rolfe discloses:
A non-transitory computer-readable recording medium storing a machine learning program causing a computer to execute a process comprising:
calculating an average and a variance of a latent variable by inputting input data to an encoder (Rolfe, e.g., [0032], [0038-0043], “...sum of a genetic and a non-genetic component, the first being distributed within families as a normal random variable centered at the average of the parental genetic components, and with a variance independent of the parental traits... model has been trained, an average variance due to a selected latent variable can be evaluated when other latent variables are held constant...”);
sampling a noise based on a normal distribution of the variance (Rolfe, e.g., [0038], “...distributed within families as a normal random variable centered at the average of the parental genetic components...” and [0080], [0163], [0190], “...samples and a random noise component), and/or otherwise determining appropriately-correlated samples. For instance, a genetic latent variable c ...”);
calculating the latent variable by adding the noise to the average (Rolfe, e.g., [0162-0165], “...latent variable for the mother, ƒ is the corresponding latent variable for the father, and n is a noise component (e.g. corresponding to Mendelian genetic randomization)”);
calculating output data by inputting the calculated latent variable to a decoder (Rolfe, e.g., [0165-0169], “...latent variables may be sampled from differently by the decoder... one may pass an individual's information through the encoder to produce a latent representation of the individual and then, in the decoder, generate predictions based on the latent representation (provided as input to the decoder)...”); and
training the encoder and the decoder in accordance with a loss function, the loss function including a value and an error between the input data and the output data, the value being obtained by multiplying encoding information by a correction coefficient based on the noise, the encoding information being information of a probability distribution of the latent variable and a prior distribution of the latent variable (Rolfe, e.g., [0216-0218], “...allows the use of the gradient of the decoder to estimate the change in the loss function, since the gradient of the decoder captures the effect of small changes in the location of a selected packet in the latent space... low-variance provided the motion of most probability packets has a similar effect on the loss function...packets are tightly clustered (e.g., if the encoder produces a Gaussian distribution with low variance) or if the movements of well-separated packets have a similar effect on the loss function (e.g., if the decoder is roughly linear)...” and further see [0234], “value and error” of loss function).
To make the record clearer regarding the language of “training the encoder and the decoder in accordance with a loss function,” the examiner notes that, although as stated above Rolfe functionally discloses this feature (Rolfe, e.g., [0216-0218]), a secondary reference is also cited for this limitation.
However, Baker, in an analogous art, discloses “training the encoder and the decoder in accordance with a loss function” (Baker, e.g., [0015-0019], “...hyperparameters for the encoder and decoder networks... learning of an autoencoder network may use stochastic gradient descent training just as in supervised training... The loss function used in training a VAE incorporates a measure of divergence between reconstructed data and the source data as well as a second term representing the Kullback-Leibler divergence between the latent variables in the stochastic layer and zero-mean unit Gaussians or other specified simple statistical distributions...”). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Baker and Rolfe in order to access a much richer parametric family of distributions, provide more effective knowledge transmission from the encoder to the decoder, and better represent multiple distinct latent data classes or clusters, thereby preventing mode collapse or identical copying of training data (Baker, e.g., [0004]).
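Examiner note: for convenience of the record, the process recited in claim 1 tracks the well-known variational-autoencoder reparameterization technique (encode to an average and variance, sample Gaussian noise, add the scaled noise to the average, decode, and train on a reconstruction error plus a weighted divergence term). The following is a minimal illustrative sketch only; all weights, dimensions, and names below are hypothetical and do not appear in the application or the cited references:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions: 4-D input, 2-D latent variable.
d_in, d_z = 4, 2
W_mu = rng.normal(size=(d_z, d_in))      # encoder weights for the average
W_logvar = rng.normal(size=(d_z, d_in))  # encoder weights for the (log) variance
W_dec = rng.normal(size=(d_in, d_z))     # decoder weights

def encode(x):
    # Calculate an average (mu) and a variance of the latent variable
    # by inputting the input data to the encoder.
    return W_mu @ x, W_logvar @ x  # (mu, log-variance)

def reparameterize(mu, logvar):
    # Sample noise based on a normal distribution of the variance,
    # then calculate the latent variable by adding the noise to the average.
    eps = rng.normal(size=mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def decode(z):
    # Calculate output data by inputting the latent variable to the decoder.
    return W_dec @ z

def loss(x, beta=1.0):
    mu, logvar = encode(x)
    z = reparameterize(mu, logvar)
    x_hat = decode(z)
    recon = np.sum((x - x_hat) ** 2)  # error between input data and output data
    # Encoding information: divergence between the latent posterior and a
    # zero-mean unit-Gaussian prior, multiplied by a coefficient (beta).
    kl = 0.5 * np.sum(np.exp(logvar) + mu ** 2 - 1.0 - logvar)
    return recon + beta * kl

x = rng.normal(size=d_in)
print(loss(x))
```

In a real training loop the weights would be updated by gradient descent on this loss, as described in Baker at [0015-0019].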
As per claim 2, the combination of Baker and Rolfe discloses:
The non-transitory computer-readable recording medium according to claim 1, wherein the machine learning program causes the computer to further execute a process of calculating the correction coefficient based on the noise and the average (Rolfe, e.g., [0038-0046], “.... average variance due to a selected latent variable can be evaluated when other latent variables are held constant... k results can then be averaged to provide an estimation of the log-likelihood... correspond to a weighted group of diseases that are strongly correlated, and independent of other weighted groups of disease...”).
Claims 4-5 are essentially the same as claims 1-2 except that they set forth the claimed invention as a method rather than a non-transitory computer-readable recording medium, respectively and correspondingly, and are therefore rejected for the same reasons set forth in the rejections of claims 1-2.
Claims 7-8 are essentially the same as claims 1-2 except that they set forth the claimed invention as an information processing apparatus rather than a non-transitory computer-readable recording medium, respectively and correspondingly, and are therefore rejected for the same reasons set forth in the rejections of claims 1-2.
Allowable Subject Matter
The prior art does not teach “wherein in the calculating of the correction coefficient, the correction coefficient is calculated for each dimension of the latent variable, and in the training, the encoder and the decoder are trained to minimize values of a sum value of results obtained by multiplying the encoding information for each dimension by the correction coefficient for each dimension and the error.” Per the instant Office action, claims 3, 6 and 9 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
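For clarity of the record, the objected-to limitation can be expressed in general variational-autoencoder notation (the symbols below are illustrative and are not drawn from the claims or the cited references): a per-dimension correction coefficient $w_d$ multiplies the per-dimension encoding information before summation with the reconstruction error,

```latex
\mathcal{L} \;=\; \mathrm{Err}(x,\hat{x}) \;+\; \sum_{d=1}^{D} w_d \, D_{\mathrm{KL}}\!\left( q(z_d \mid x) \,\middle\|\, p(z_d) \right)
```

where $D$ is the dimensionality of the latent variable, $q(z_d \mid x)$ is the probability distribution of dimension $d$ of the latent variable, and $p(z_d)$ is its prior distribution. The cited art applies a single global weighting rather than a coefficient calculated per dimension.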
Additional Art Considered
The prior art made of record and not relied upon is considered pertinent to the Applicants’ disclosure.
The following patents and papers are cited to further show the state of the art at the time of Applicants’ invention with respect to a machine learning program causing a computer to execute a process including: calculating an average and a variance of a latent variable by inputting input data to an encoder; sampling a noise based on a normal distribution of the variance; calculating the latent variable by adding the noise to the average; calculating output data by inputting the calculated latent variable to a decoder; and training the encoder and the decoder in accordance with a loss function.
a. Yamaguchi et al. (US PGPUB 2021/0327456, hereinafter Yamaguchi), “Anomaly Detection Apparatus, Probability Distribution Learning Apparatus, Autoencoder Learning Apparatus, Data Transformation Apparatus, and Program,” discloses an anomaly detection apparatus that includes an anomaly degree estimating unit configured to estimate an anomaly degree, indicating a degree of anomaly of anomaly detection target equipment, from sound emitted from that equipment.
Yamaguchi also teaches latent variable estimation that calculates an average and a variance [0239].
Yamaguchi further teaches an autoencoder which restores normal data [0024-0025], [0153-0157].
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to TUAN A PHAM whose telephone number is (571)270-3173. The examiner can normally be reached M-F 7:45 AM - 6:30 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Tony Mahmoudi can be reached on 571-272-4078. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/TUAN A PHAM/Primary Examiner, Art Unit 2163