Prosecution Insights
Last updated: April 19, 2026
Application No. 18/120,566

NON-TRANSITORY COMPUTER-READABLE RECORDING MEDIUM, MACHINE LEARNING METHOD, AND INFORMATION PROCESSING DEVICE

Non-Final OA: §101, §103
Filed: Mar 13, 2023
Examiner: PHAM, TUAN A
Art Unit: 2163
Tech Center: 2100 (Computer Architecture & Software)
Assignee: Fujitsu Limited
OA Round: 1 (Non-Final)
Grant Probability: 84% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 11m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 84% (583 granted / 697 resolved; +28.6% vs TC avg, above average)
Interview Lift: +27.8% (resolved cases with interview)
Avg Prosecution: 2y 11m (typical timeline; 32 currently pending)
Total Applications: 729 (career history, across all art units)
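As a sanity check, the headline allow-rate figure follows directly from the raw counts above. A minimal sketch; note that the Tech Center average here is back-derived from the stated "+28.6% vs TC avg" delta, not an independent data point:

```python
# Reproduce the headline examiner statistics from the raw counts above.
granted, resolved = 583, 697

allow_rate = granted / resolved * 100            # career allow rate, percent
print(f"Career allow rate: {allow_rate:.1f}%")   # ~83.6%, displayed rounded as 84%

# The stated "+28.6% vs TC avg" delta implies a Tech Center average of about:
tc_avg = allow_rate - 28.6
print(f"Implied TC 2100 average: {tc_avg:.1f}%")
```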

Statute-Specific Performance

§101: 19.3% (-20.7% vs TC avg)
§103: 47.1% (+7.1% vs TC avg)
§102: 8.1% (-31.9% vs TC avg)
§112: 10.4% (-29.6% vs TC avg)
Tech Center averages are estimates. Based on career data from 697 resolved cases.

Office Action

Rejections: §101, §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

This Office Action is in response to the application filed on 03/13/2023. Claims 1-7 are pending.

Information Disclosure Statement

The information disclosure statement (IDS) filed on 03/13/2023 has been considered (see form-1449, MPEP 609).

Drawings

The drawings filed on 03/13/2023 are accepted.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: "Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title."

Claims 1-3 and 6-7 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claim recites the language of "inputting data into a machine learning model; acquiring a first value output from the machine learning model in response to the inputting, a second value output from the machine learning model based on a variable obtained by modifying a latent variable that is calculated by the machine learning model in response to the inputting, and information entropy of the latent variable; and training the machine learning model based on the first value, the second value and the information entropy of the latent variable."

Claim 1 recites the limitation of "inputting data into a machine learning model" which, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components. That is, nothing in the claim element precludes the step from practically being performed in the mind. For example, "inputting" in the context of this claim encompasses the user entering information.
Similarly, the limitation of acquiring a first value output from the machine learning model in response to the inputting, a second value output from the machine learning model based on a variable obtained by modifying a latent variable that is calculated by the machine learning model in response to the inputting, and information entropy of the latent variable, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components. For example, "acquiring" in the context of this claim encompasses the user manually receiving the information.

Similarly, the limitation of training the machine learning model based on the first value, the second value and the information entropy of the latent variable, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components. For example, "training" in the context of this claim encompasses the user manually learning about information.

If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the "Mental Processes" grouping of abstract ideas. Accordingly, the claim recites an abstract idea.

This judicial exception is not integrated into a practical application. In particular, the claim only recites one additional element: using one or more storage devices to perform the inputting, acquiring, and training steps. The processor and memory in those steps are recited at a high level of generality (i.e., as a generic processor performing a generic computer function of learning information) such that they amount to no more than mere instructions to apply the exception using a generic computer component.
Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.

The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of using a processor to perform the inputting, acquiring, and training steps amounts to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claim is not patent eligible.

Claim 2 is dependent on independent claim 1 and includes all the limitations of claim 1. Claim 2 recites "an estimator having a second parameter, and a decoder having a third parameter, and the training includes optimizing the first parameter, the second parameter, and the third parameter so as to ensure minimization of first-type reconfiguration data that is output from the decoder in response to the inputting, second-type reconfiguration data that is output from the encoder based on a variable obtained by modifying a latent variable which is calculated by the encoder in response to the inputting, and information entropy of the latent variable based on probability distribution of the latent variable as estimated by the estimator". The claim language provides only further estimating, which is directed towards the abstract idea and does not amount to significantly more.
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements, when considered both individually and as an ordered combination, do not amount to significantly more than the abstract idea.

Claim 3 is dependent on independent claim 1 and includes all the limitations of claim 1. Claim 3 recites "the machine learning model is an autoencoder configured to encode the data and generates the latent variable, and decode the data from the latent variable, and the training includes calculating a first cost based on difference between first-type reconfiguration data, which is obtained by decoding the latent variable, and the data, calculating a second cost based on difference between second-type reconfiguration data, which is obtained by adding a noise to the latent variable, and the first-type reconfiguration data, calculating, as a third cost, information entropy of the latent variable based on probability distribution of the latent variable, and training the machine learning model to ensure minimization of the first cost, the second cost, and the third cost". The claim language provides only further learning, which is directed towards the abstract idea and does not amount to significantly more. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements, when considered both individually and as an ordered combination, do not amount to significantly more than the abstract idea.

Regarding claim 6: claim 6 is essentially the same as claim 1 except that it sets forth the claimed invention as a method rather than a computer readable recording medium, and is therefore rejected for the same reasons set forth in the rejection of claim 1.
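For readers unfamiliar with the underlying ML subject matter, the three-cost training objective recited in claim 3 can be sketched as follows. This is an illustrative toy example only: the linear encoder/decoder and the Gaussian entropy estimate are assumptions made for the sketch, not taken from the application or the cited art.

```python
import numpy as np

# Toy stand-ins for the claimed autoencoder: linear encode/decode maps.
rng = np.random.default_rng(0)
W_enc = rng.normal(size=(4, 8))   # encoder: 8-dim data -> 4-dim latent
W_dec = rng.normal(size=(8, 4))   # decoder: latent -> reconstruction

def encode(x): return W_enc @ x
def decode(z): return W_dec @ z

x = rng.normal(size=8)            # input data
z = encode(x)                     # latent variable

recon1 = decode(z)                                  # first-type reconfiguration data
recon2 = decode(z + rng.normal(scale=0.1, size=z.shape))  # decode latent + noise

cost1 = np.mean((recon1 - x) ** 2)        # first cost: reconstruction vs. data
cost2 = np.mean((recon2 - recon1) ** 2)   # second cost: noisy recon vs. recon
# Third cost: information entropy of the latent under an assumed density;
# a Gaussian fit to the latent is used here purely for illustration.
var = np.var(z) + 1e-8
cost3 = 0.5 * np.log(2 * np.pi * np.e * var)

total = cost1 + cost2 + cost3   # training minimizes the sum of the three costs
```

In an actual training loop these costs would be summed into a loss and minimized by gradient descent over the encoder, decoder, and estimator parameters.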
Regarding claim 7: claim 7 is essentially the same as claim 1 except that it sets forth the claimed invention as a processing device rather than a computer readable recording medium, and is therefore rejected for the same reasons set forth in the rejection of claim 1. Accordingly, claims 1-3 and 6-7 are not patent eligible.

Examiner Notes

The examiner cites particular columns, paragraphs, figures and line numbers in the references as applied to the claims below for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claim, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the applicant fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the examiner.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C.
103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-3 and 6-7 are rejected under 35 U.S.C. 103 as being unpatentable over Kato et al. ("Rate-Distortion Optimization Guided Autoencoder For Generative Approach With Quantitatively Measurable Latent Space", hereinafter Kato), in view of Niwa et al. (US PGPUB 2021/0158226, hereinafter Niwa).
As per claim 1, Kato discloses: A non-transitory computer-readable recording medium having stored therein a program that causes a computer to execute a process comprising: inputting data into a machine learning model (Kato, e.g., [abstract], [page 3] (input data, machine learning model)); acquiring a first value output from the machine learning model in response to the inputting, a second value output from the machine learning model based on a variable obtained by modifying a latent variable that is calculated by the machine learning model in response to the inputting, and information entropy of the latent variable (Kato, e.g., [pages 3-5], "output with noise to latent variable and entropy of latent variable....distribution function of latent space with parameters...", and further see fig. 2 with its associated text description, [page 5], for modify/update/further calculation); and training the machine learning model based on the first value, the second value and the information entropy of the latent variable (Kato, e.g., [pages 5-6], "...models trained with these h() as RaDOGAGA(D) and RaDOGAGA...that entropy is minimized when the diagonal component...").

To make the record clearer regarding the language "modifying a latent variable that is calculated by the machine learning model in response to the inputting" (although, as stated above, Kato functionally discloses the features of modifying/updating/changing a latent variable (Kato, e.g., [pages 3-5])): Niwa, in an analogous art, discloses "modifying a latent variable that is calculated by the machine learning model in response to the inputting" (Niwa, e.g., [0031], [0087-0090], "... updates the latent variable for each node...ith latent variable and jth latent variable... matrix to represent variables which have been separately written in V sets...", and further see [0116-0121], "... the function F.sub.1*, Formula (3-3) contains updating of the latent variable p and the dual variable w.
As a way of updating these variables..."). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Niwa and Kato to use a framework that acquires mapping between collected data and information that should be output through learning with data and teaching data, so that a cost function is used instead of teaching data (Niwa, e.g., [0002-0004]).

As per claim 2, the combination of Niwa and Kato discloses: The non-transitory computer-readable recording medium according to claim 1, wherein the machine learning model is an autoencoder that includes an encoder having a first parameter, an estimator having a second parameter, and a decoder having a third parameter, and the training includes optimizing the first parameter, the second parameter, and the third parameter so as to ensure minimization of first-type reconfiguration data that is output from the decoder in response to the inputting (Kato, e.g., [pages 3-4], "... auto-encoder is minimizing cost function...automatically finds an appropriate scale of latent space....parametric encoder, decoder, and probability distribution function of latent space with parameters ..."), second-type reconfiguration data that is output from the encoder based on a variable obtained by modifying a latent variable which is calculated by the encoder in response to the inputting, and information entropy of the latent variable based on probability distribution of the latent variable as estimated by the estimator (Kato, e.g., [pages 3-4], "...estimated entropy of the latent distribution, D (x1, x2) in the second and the third....calculating distance function..."), and further see (Niwa, e.g., [0031], [0087-0090], "... updates the latent variable for each node...ith latent variable and jth latent variable... matrix to represent variables which have been separately written in V sets...", and further see [0116-0121], "...
the function F.sub.1*, Formula (3-3) contains updating of the latent variable p and the dual variable w. As a way of updating these variables...").

As per claim 3, the combination of Niwa and Kato discloses: The non-transitory computer-readable recording medium according to claim 1, wherein the machine learning model is an autoencoder configured to encode the data and generates the latent variable, and decode the data from the latent variable, and the training includes calculating a first cost based on difference between first-type reconfiguration data, which is obtained by decoding the latent variable, and the data (Kato, e.g., [pages 3-4], "...parametric encoder, decoder, and probability distribution function of latent... sum of latent variable....a cost (1-SSIM) can be also approximated in a quadratic...deriving parameters that minimize this value, the encoder, decoder and probability distribution of the latent space are trained ..."), calculating a second cost based on difference between second-type reconfiguration data, which is obtained by adding a noise to the latent variable, and the first-type reconfiguration data, calculating, as a third cost, information entropy of the latent variable based on probability distribution of the latent variable, and training the machine learning model to ensure minimization of the first cost, the second cost, and the third cost (Kato, e.g., [pages 3-4], disclosing adding a noise to the decoder output to produce the cost of difference terms (first cost, second cost and third cost); further see (Niwa, e.g., [0050], [0060], [0123], [0138]), disclosing calculating and outputting the difference costs).

Regarding claim 6: claim 6 is essentially the same as claim 1 except that it sets forth the claimed invention as a method rather than a computer readable recording medium, and is therefore rejected for the same reasons set forth in the rejection of claim 1.
Regarding claim 7: claim 7 is essentially the same as claim 1 except that it sets forth the claimed invention as a processing device rather than a computer readable recording medium, and is therefore rejected for the same reasons set forth in the rejection of claim 1.

Allowable Subject Matter

The prior art does not teach "wherein the machine learning model is an autoencoder configured to encode the data and generates the latent variable, and decode the data from the latent variable, and the training includes calculating a first cost based on difference between first-type reconfiguration data, which is obtained by decoding the latent variable, and the data, calculating a second cost based on a Jacobian matrix in which a value is used that is obtained when difference between second-configuration data, which is obtained by decoding the latent variable after adding a noise thereto, and the first-type reconfiguration data is divided by a specific component of the noise, calculating a third cost based on each row element vector of the Jacobian matrix, calculating, as a fourth cost, information entropy of the latent variable based on probability distribution of the latent variable, and training the machine learning model to ensure minimization of the first cost, the second cost, the third cost, and the fourth cost." (Claim 4), "wherein the machine learning model is an autoencoder configured to encode the data and generates the latent variable, and decode the data from the latent variable, and the training includes calculating a first cost based on difference between first-type reconfiguration data, which is obtained by decoding the latent variable, and the data, calculating a second cost based on a Jacobian matrix in which a value is used that is obtained when difference between second-configuration data, which is obtained by decoding the latent variable after adding a noise thereto, and the first-type reconfiguration data is divided by a specific
component of the noise, and a matrix that defines measure, calculating a third cost based on difference between Hermitian inner product of each row element vector of the Jacobian matrix, the matrix that defines measure, and transpose of each row element vector of the Jacobian matrix, and a constant number, calculating, as a fourth cost, information entropy of the latent variable based on probability distribution of the latent variable, and training the machine learning model to ensure minimization of the first cost, the second cost, the third cost, and the fourth cost" (Claim 5).

Per the instant office action, claims 4-5 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Additional Art Considered

The prior art made of record and not relied upon is considered pertinent to the Applicants' disclosure. The following patents and papers are cited to further show the state of the art at the time of Applicants' invention with respect to machine learning that involves inputting data into a machine learning model, acquiring a first value output from the machine learning model in response to the inputting, a second value output from the machine learning model based on a variable obtained by modifying a latent variable that is calculated by the machine learning model in response to the inputting, and information entropy of the latent variable, and training the machine learning model based on the first value, the second value and the information entropy of the latent variable.

a.
Shuji Senda (US PGPUB 2013/0064451, hereinafter Senda), "Image Conversion Parameter Calculation Device, Image Conversion Parameter Calculation Method, and Program", discloses "an image conversion parameter calculation device capable of accurately calculating a conversion parameter for image alignment with a processing amount that does not depend on a size of an image to be aligned is provided and performing processing for selected pixels which are the pixels selected by the pixel selection element, the conversion parameter being a parameter for converting, to the first image, a second image that is subject to image alignment with the first image". Senda further discloses calculating/processing a cost of a parameter [0091-0092] and extracting a noise region [0191]. Senda also teaches calculating a Jacobian matrix, an update/modify calculation, and a Hessian matrix [0191].

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to TUAN A PHAM, whose telephone number is (571) 270-3173. The examiner can normally be reached M-F 7:45 AM - 6:30 PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Tony Mahmoudi, can be reached at 571-272-4078. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/TUAN A PHAM/
Primary Examiner, Art Unit 2163
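The Jacobian-based costs in the allowable subject matter (claims 4-5) amount to a finite-difference estimate of the decoder Jacobian: the change in decoder output divided by the noise component that caused it. A minimal sketch under assumed toy definitions, where unit-vector perturbations stand in for the claim's noise components and the linear decoder is purely illustrative:

```python
import numpy as np

# Toy linear decoder; for a linear map the finite-difference Jacobian
# should recover the weight matrix exactly (up to floating-point error).
rng = np.random.default_rng(1)
W_dec = rng.normal(size=(8, 4))   # decoder: 4-dim latent -> 8-dim data

def decode(z):
    return W_dec @ z

z = rng.normal(size=4)            # latent variable
recon1 = decode(z)                # first-type reconfiguration data
eps = 1e-4                        # magnitude of the added "noise"

# Column j: (decode(z + eps*e_j) - decode(z)) / eps -- the claim's
# "difference ... divided by a specific component of the noise".
J = np.stack(
    [(decode(z + eps * np.eye(4)[j]) - recon1) / eps for j in range(4)],
    axis=1,
)
# The third/fourth costs of claims 4-5 are then built from the row vectors
# of J (e.g., inner products against a metric matrix) and a latent-entropy term.
```

This shows only the Jacobian construction step; the metric-weighted inner products and entropy term of the full claims are omitted for brevity.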

Prosecution Timeline

Mar 13, 2023
Application Filed
Nov 26, 2025
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596679: METHOD AND APPARATUS PROVIDING A TIERED ELASTIC CLOUD STORAGE TO INCREASE DATA RESILIENCY (2y 5m to grant; granted Apr 07, 2026)
Patent 12596758: IoT Enhanced Search Results (2y 5m to grant; granted Apr 07, 2026)
Patent 12585718: System and Method for Feature Determination and Content Selection (2y 5m to grant; granted Mar 24, 2026)
Patent 12572561: METHOD AND APPARATUS FOR SYNCHRONOUSLY UPDATING METADATA IN DISTRIBUTED DATABASE (2y 5m to grant; granted Mar 10, 2026)
Patent 12566777: SYSTEMS AND METHODS OFFLINE DATA SYNCHRONIZATION (2y 5m to grant; granted Mar 03, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 84%
With Interview: 99% (+27.8%)
Median Time to Grant: 2y 11m
PTA Risk: Low
Based on 697 resolved cases by this examiner. Grant probability derived from career allow rate.
