Prosecution Insights
Last updated: April 19, 2026
Application No. 18/113,794

ELECTRONIC DEVICE PERFORMING SIMULATION OF TARGET ROW REFRESH LOGIC OF DYNAMIC RANDOM ACCESS MEMORY AND OPERATING METHOD OF ELECTRONIC DEVICE

Non-Final OA: §112, Double Patenting
Filed: Feb 24, 2023
Examiner: GIROUX, GEORGE
Art Unit: 2128
Tech Center: 2100 (Computer Architecture & Software)
Assignee: Samsung Electronics Co., Ltd.
OA Round: 1 (Non-Final)
Grant Probability: 66% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 4y 6m
Grant Probability with Interview: 93%

Examiner Intelligence

Career Allow Rate: 66% (above average; 401 granted / 612 resolved; +10.5% vs TC avg)
Interview Lift: strong, +27.1% for resolved cases with interview
Typical Timeline: 4y 6m average prosecution; 28 applications currently pending
Career History: 640 total applications across all art units

Statute-Specific Performance

§101: 11.0% (-29.0% vs TC avg)
§102: 16.0% (-24.0% vs TC avg)
§103: 45.5% (+5.5% vs TC avg)
§112: 15.5% (-24.5% vs TC avg)

Based on career data from 612 resolved cases; Tech Center averages are estimates.
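The headline figures above are simple ratios over the examiner's reported career counts; a quick sketch (with this page's numbers hard-coded, and the Tech Center average treated as an assumption implied by the stated delta) shows how they line up:

```python
# Illustrative only: recomputes the headline examiner statistics from the
# career counts reported on this page. The TC average below is an assumed
# value implied by the "+10.5% vs TC avg" figure, not official USPTO data.

granted = 401             # career grants reported above
resolved = 612            # career resolved applications reported above
tc_avg_allow_rate = 0.55  # assumption implied by the reported delta

allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")                         # ~65.5%, shown as 66%
print(f"Delta vs TC average: {allow_rate - tc_avg_allow_rate:+.1%}")  # ~+10.5%
```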

Office Action

Rejections: §112 (indefiniteness) and nonstatutory double patenting
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Specification

The lengthy specification has not been checked to the extent necessary to determine the presence of all possible minor errors. Applicant’s cooperation is requested in correcting any errors of which applicant may become aware in the specification.

Drawings

The applicant’s submitted drawings appear to be acceptable for examination purposes. Applicant’s cooperation is requested in correcting any errors of which applicant may become aware in the drawings.

Information Disclosure Statement

As required by M.P.E.P. 609(c), the applicant's submission of the Information Disclosure Statements, dated 24 February 2023 and 25 August 2023, is acknowledged by the examiner, and the cited references have been considered in the examination of the claims now pending. As required by M.P.E.P. 609 C(2), copies of the PTOL-1449 forms, initialed and dated by the examiner, are attached to the instant Office action.

Priority

Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-11, 14-17, and 19 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 
112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Claim 1 recites the limitations “the number of times of iteration” and “the maximum number of times of iteration” in lines 10-11 (as well as further recitations of the same). There is insufficient antecedent basis for these limitations in the claim. Claims 2-11 depend upon claim 1, and thus include the aforementioned limitation(s).

Claim 7 also recites the limitations “the second number of times of iteration” and “the second maximum number of times of iteration” in lines 8-9. There is insufficient antecedent basis for these limitations in the claim. Claims 8-10 depend upon claim 7, and thus include the aforementioned additional limitation(s).

The intended scope of claim 8 is also unclear because it is unclear what is “being stochastic” in the context of the claim. For the purposes of examination, the examiner has assumed that the generator network is stochastic.

Claim 14 recites the limitation "the given number of times” in lines 2-3. There is insufficient antecedent basis for this limitation in the claim. Claims 15-17 depend upon claim 14, and thus include the aforementioned limitation(s).

Claim 19 recites the limitation "the input tensor allowing the second score to be close to the first score” in lines 2-3. There is insufficient antecedent basis for this limitation in the claim. The term “close to” in claim 19 is also a relative term which renders the claim indefinite. The term “close to” is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. 
Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 1-7, 11-15, and 18-20 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-7 of copending Application No. 17/941448 in view of Keller (US 2016/0098561). This is a provisional nonstatutory double patenting rejection because the patentably indistinct claims have not in fact been patented.

As per claim 1, the claim is compared with claim 4 (which depends upon claim 1) of Application No. 
17/941448, where any differences between them have been highlighted, as follows:

Instant Application: an operating method of an electronic device which includes a processor performing simulation of target row refresh logic of a dynamic random access memory, the method comprising:
Application 17/941448: A device comprising: … initiate a simulator, trained to output a simulator parameter for a design of a semiconductor device [claim 1]

Instant Application: generating an input tensor by the processor by using a generator network
Application 17/941448: generate an input tensor from the trained generator network, [claim 1]

Instant Application: obtaining a first score by the processor by inputting the input tensor to a target row refresh logic module
Application 17/941448: initiate a simulator, trained to output a simulator parameter for a design of a semiconductor device, to perform a black box operation on the input tensor and to output an output score as a result of the black box operation, [claim 1]

Instant Application: storing a pair of the generator network and the first score in an evolution pool by the processor when the first score is greater than a threshold value
Application 17/941448: and to store the input tensor and the output score in the memory as the input tensor-score pair, store information used in the black box operation in the evolution pool when the output score is greater than a minimum score stored in the memory, [claim 1]

Instant Application: training a critic network based on the input tensor and the first score by the processor when the number of times of iteration is smaller than the maximum number of times of iteration
Application 17/941448: initiate a critic network to train the sampled generator network in a back-propagation manner, [claim 1] … wherein the processor is configured to train the critic network and update the evolution pool until a number of iterations thereof reaches a preset maximum number [claim 4]

Instant Application: training the generator network based on a training result of the critic network by the processor when the number of times of iteration is smaller than the maximum number of times of iteration
Application 17/941448: initiate a critic network to train the sampled generator network in a back-propagation manner, [claim 1] … wherein the processor is configured to train the critic network and update the evolution pool until a number of iterations thereof reaches a preset maximum number [claim 4]

Instant Application: again performing the generating of the input tensor, the obtaining of the first score, and the storing the pair of the generator network and the first score in the evolution pool by the processor when the number of times of iteration is smaller than the maximum number of times of iteration
Application 17/941448: initiate a critic network to train the sampled generator network in a back-propagation manner, [claim 1] … wherein the processor is configured to train the critic network and update the evolution pool until a number of iterations thereof reaches a preset maximum number [claim 4]

As illustrated above, claim 4 of Application No. 17/941448 claims all of the limitations set forth in the instant application, except for the simulation being performed being simulation of target row refresh logic of a dynamic random access memory. Keller teaches simulation of target row refresh logic of a dynamic random access memory as [various simulators may be used to simulate the computer assets (para. 0103, etc.) including refresh logic of rows of a DRAM memory (para. 0119, etc.), which can be used by the pattern modeling to produce a score for the hardware (paras. 0130, 0143, 0202, etc.)]. Application ‘448 and Keller are analogous art, as they are within the same field of endeavor, namely simulating and malicious/attack/risk scoring computer hardware components. 
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to simulate row refresh logic of the DRAM memory as part of the modeling/scoring of the hardware components, as taught by Keller, as the simulator producing a simulator parameter for a design of a semiconductor device, in the claimed invention of Application ‘448. Keller provides motivation as [The read process in DRAM is destructive and removes the charge on the memory cells in an entire row, so there is a row of specialized latches on the chip called sense amplifiers, one for each column of memory cells, to temporarily hold the data. During a normal read operation the sense amplifiers, after reading and latching the data, rewrite the data in the accessed row before sending the bit from a single column to output. The normal read electronics on the chip has the ability to refresh an entire row of memory in parallel, significantly speeding up the refresh process. The refresh circuitry must perform a refresh cycle on each of the rows on the chip within the refresh time interval, to make sure that each cell gets refreshed (para. 0119, etc.) and particular exemplary embodiments analyze these characteristics which conceptually provide a fingerprint or fingerprints of what is anticipated and hence form a pattern or patterns. These patterns can be tracked, monitored and verified to be certain to a degree of probability or alternatively a quantified score that the hardware, firmware or software is not substantially different or modified and is only doing what was originally intended to be accomplished and that added, disabled or modified circuitry, code or algorithms have not been implemented and are not doing something unwanted behind the scenes (para. 0130)]. 
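For orientation, the iterative procedure recited in instant claim 1 (and mapped to '448 claims 1 and 4 above) can be sketched in a few lines. Everything here, including the toy scoring function standing in for the target row refresh logic module, the generator, the threshold, and the iteration cap, is a hypothetical illustration, not the applicant's or the reference's implementation:

```python
# Hypothetical sketch of the loop recited in instant claim 1. The "networks"
# and the scoring function are toy stand-ins; real training of the critic and
# generator is reduced to comments.
import random

def trr_score(tensor):
    """Stand-in for the target row refresh logic module: returns a first score."""
    return sum(tensor) / len(tensor)

random.seed(0)
generator = lambda: [random.random() for _ in range(8)]  # toy generator network

threshold = 0.6
max_iterations = 50
evolution_pool = []                        # pairs of (generator, first_score)

for iteration in range(max_iterations):
    input_tensor = generator()             # generating an input tensor
    first_score = trr_score(input_tensor)  # obtaining a first score
    if first_score > threshold:            # storing the pair in the evolution pool
        evolution_pool.append((generator, first_score))
    # while iteration < max_iterations: train the critic network on
    # (input_tensor, first_score), then train the generator from the
    # critic's result, and repeat the steps above (training omitted here)

print(len(evolution_pool), "pairs stored")
```

The point of the sketch is only the control flow the claim recites: generate, score, conditionally store, train, repeat until a maximum iteration count.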
As per claim 2, Application ’448/Keller teaches wherein the training of the critic network, the training of the generator network, the generating of the input tensor, the obtaining of the first score, and the storing the pair of the generator network and the first score in the evolution pool are iteratively performed until the number of times of iteration reaches the maximum number of times of iteration [wherein the processor is configured to train the critic network and update the evolution pool until a number of iterations thereof reaches a preset maximum number (Application ‘448: claim 4)].

As per claim 3, Application ’448/Keller teaches initializing a generator pool including a plurality of generator networks [a memory configured to store an evolution pool and an input tensor-score pair; and a processor configured to sample a generator network in the evolution pool (Application ‘448: claim 1)], wherein the generating of the input tensor, the obtaining of the first score, and the storing the pair of the generator network and the first score in the evolution pool, the training of the critic network, and the training of the generator network are performed on each of the plurality of generator networks [wherein the processor is configured to train the critic network and update the evolution pool until a number of iterations thereof reaches a preset maximum number (Application ‘448: claim 4), which includes sampling a generator from the evolution pool (Application ‘448: claim 1), thereby performing the training on each of the plurality of generator networks (of the pool)]. 
As per claim 4, Application ’448/Keller teaches wherein the training of the critic network includes: training the critic network such that the first score is generated from the input tensor [wherein the processor is configured to train the critic network by updating the critic network such that a predicted score calculated by the black box operation on the input tensor is closer to the output score, based on the input tensor-score pair stored in the memory (Application ‘448: claim 3)].

As per claim 5, Application ’448/Keller teaches wherein the training of the generator network includes: inferring a second score from the input tensor by using the critic network; and training the generator network by using a difference between the first score and the second score as a loss function [wherein the processor is configured to train the critic network by updating the critic network such that a predicted score calculated by the black box operation on the input tensor is closer to the output score, based on the input tensor-score pair stored in the memory (Application ‘448: claim 3); where training to bring the scores closer together is using the difference between the scores as a loss function]. 
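The claim 5 mapping above turns on using the gap between the simulator's score and the critic's inferred score as a loss. A toy numeric illustration (the linear "critic" and all values are hypothetical):

```python
# Illustrative only: the "difference between the first score and the second
# score" used as a loss, per the claim 5 discussion above. The linear critic
# and the numbers are made up for demonstration.

def critic(tensor, weights):
    """Toy critic network: infers a second score as a weighted sum."""
    return sum(w * x for w, x in zip(weights, tensor))

input_tensor = [0.2, 0.8, 0.5]
weights = [0.5, 0.5, 0.5]
first_score = 0.9                              # from the target row refresh logic module

second_score = critic(input_tensor, weights)   # inferred second score
loss = (first_score - second_score) ** 2       # squared score difference
print(round(second_score, 2), round(loss, 4))  # → 0.75 0.0225
```

Minimizing such a loss pushes the critic's prediction toward the simulator's score, which is how the examiner reads '448 claim 3 onto the limitation.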
As per claim 6, Application ’448/Keller teaches sorting generator network-first score pairs of the evolution pool based on the first score [wherein the updating of the evolution pool includes: sorting the stored information based on a level of the output score (Application ‘448: claim 5)]; and when the number of the generator network-first score pairs of the evolution pool reaches a second threshold value, removing a generator network-first score pair including the first score having the smallest value from among the generator network-first score pairs of the evolution pool from the evolution pool such that the number of the generator network-first score pairs of the evolution pool is maintained below the second threshold value [wherein the updating of the evolution pool includes: sorting the stored information based on a level of the output score, leaving only the preset number of information in a descending order from a maximum score, and deleting a remaining portion of the information (Application ‘448: claim 5)]. 
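The claim 6 pool-maintenance step (sort by score, then drop the lowest-scoring pair once the pool reaches a size cap) can be sketched as follows; the names, cap, and scores are hypothetical:

```python
# Illustrative sketch of the evolution-pool maintenance discussed for claim 6:
# pairs are sorted on the first score and, when the pool reaches the second
# threshold (a size cap), the smallest-score pair is removed. All values are
# made up.

def store_pair(pool, pair, cap):
    pool.append(pair)
    pool.sort(key=lambda p: p[1], reverse=True)  # sort pairs by score, descending
    if len(pool) > cap:
        pool.pop()                               # drop the smallest-score pair
    return pool

evolution_pool = [("gen_a", 0.91), ("gen_b", 0.42), ("gen_c", 0.77)]
store_pair(evolution_pool, ("gen_d", 0.60), cap=3)
print(evolution_pool)  # → [('gen_a', 0.91), ('gen_c', 0.77), ('gen_d', 0.60)]
```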
As per claim 7, Application ’448/Keller teaches selecting one of the generator networks of the evolution pool [sample a generator network in the evolution pool (Application ‘448: claim 1); and wherein the processor is configured to train the critic network and update the evolution pool until a number of iterations thereof reaches a preset maximum number (Application ‘448: claim 4), such that the sampling will occur again and again until the iteration number/threshold is met]; generating a second input tensor by using the selected generator network [generate an input tensor from the trained generator network (Application ‘448: claim 1); repeating as described above (to generate a second input tensor)]; obtaining a third score by inputting the second input tensor to the target row refresh logic module [perform a black box operation on the input tensor and to output an output score as a result of the black box operation (Application ‘448: claim 1); where various simulators may be used to simulate the computer assets (Keller: para. 0103, etc.) including refresh logic of rows of a DRAM memory (para. 0119, etc.), which can be used by the pattern modeling to produce a score for the hardware (Keller: paras. 
0130, 0143, 0202, etc.)]; when the third score is greater than a threshold value, storing a pair of the selected generator network and the third score in the evolution pool [and to store the input tensor and the output score in the memory as the input tensor-score pair, store information used in the black box operation in the evolution pool when the output score is greater than a minimum score stored in the memory (Application ‘448: claim 1)]; when the second number of times of iteration is smaller than the second maximum number of times of iteration, training the critic network based on the second input tensor and the third score [wherein the processor is configured to train the critic network and update the evolution pool until a number of iterations thereof reaches a preset maximum number (Application ‘448: claim 4)]; when the second number of times of iteration is smaller than the second maximum number of times of iteration, training the selected generator network based on a training result of the critic network [wherein the processor is configured to train the critic network and update the evolution pool until a number of iterations thereof reaches a preset maximum number (Application ‘448: claim 4); where updating the evolution pool includes training the generator network (see Application ‘448: claim 1)]; when the second number of times of iteration is smaller than the second maximum number of times of iteration, again performing the generating the second input tensor, the obtaining of the third score, and the storing the pair of the selected generator network and the third score in the evolution pool [wherein the processor is configured to train the critic network and update the evolution pool until a number of iterations thereof reaches a preset maximum number (Application ‘448: claim 4), such that the sampling and producing tensor/scores will occur again and again until the iteration number/threshold is met]. 
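Claim 7's second round, as mapped above, re-samples a generator from the evolution pool and repeats the generate/score/store steps with a second input tensor and third score. A minimal hypothetical sketch:

```python
# Hypothetical sketch of the claim 7 steps: select a generator from the
# evolution pool, generate a second input tensor, obtain a third score, and
# store the pair when it clears the threshold. Generators and scores are toys.
import random

def trr_score(tensor):                    # stand-in for the TRR logic module
    return sum(tensor) / len(tensor)

gen_a = lambda: [0.9, 0.8]                # toy generator networks
gen_b = lambda: [0.2, 0.3]
evolution_pool = [(gen_a, 0.85), (gen_b, 0.25)]  # (generator, first_score) pairs
threshold = 0.4

random.seed(3)
selected_generator, _ = random.choice(evolution_pool)  # selecting from the pool
second_input_tensor = selected_generator()             # second input tensor
third_score = trr_score(second_input_tensor)           # third score
if third_score > threshold:
    evolution_pool.append((selected_generator, third_score))
# repeat the steps above until the second maximum number of times of
# iteration is reached (second training loop omitted here)
```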
As per claim 11, Application ’448/Keller teaches wherein the input tensor does not have a boundary condition [generate an input tensor from the trained generator network (Application ‘448: claim 1), which does not include a boundary condition].

As per claim 12, the claim is compared with claim 4 (which depends upon claim 1) of Application No. 17/941448, where any differences between them have been highlighted, as follows:

Instant Application: An electronic device comprising: a processor; and a memory
Application 17/941448: A device comprising: a memory configured to store an evolution pool and an input tensor-score pair; and a processor [claim 1]

Instant Application: wherein the processor is configured to execute a simulator performing simulation of target row refresh logic of a dynamic random access memory by using the memory
Application 17/941448: initiate a simulator, trained to output a simulator parameter for a design of a semiconductor device, to perform a black box operation on the input tensor and to output an output score as a result of the black box operation, [claim 1]

Instant Application: wherein the simulator includes: a first module configured to execute an algorithm of the target row refresh logic and to output a risk level as a first score
Application 17/941448: initiate a simulator, trained to output a simulator parameter for a design of a semiconductor device, to perform a black box operation on the input tensor and to output an output score as a result of the black box operation, [claim 1]

Instant Application: and a second module configured to perform the simulation by using the first module
Application 17/941448: and to store the input tensor and the output score in the memory as the input tensor-score pair, store information used in the black box operation in the evolution pool when the output score is greater than a minimum score stored in the memory, [claim 1]

Instant Application: wherein the second module includes: a generator pool including a plurality of generator networks each configured to generate an input tensor of the first module
Application 17/941448: a processor configured to sample a generator network in the evolution pool … generate an input tensor from the trained generator network [claim 1] … wherein the processor is configured to train the critic network and update the evolution pool until a number of iterations thereof reaches a preset maximum number [claim 4]

Instant Application: and a critic network configured to be trained to replicate the first module and to infer a second score from the input tensor
Application 17/941448: initiate a critic network to train the sampled generator network in a back-propagation manner, [claim 1] … wherein the processor is configured to train the critic network and update the evolution pool until a number of iterations thereof reaches a preset maximum number [claim 4]

Instant Application: wherein each of the plurality of generator networks is repeatedly trained together with the critic network
Application 17/941448: initiate a critic network to train the sampled generator network in a back-propagation manner, [claim 1] … wherein the processor is configured to train the critic network and update the evolution pool until a number of iterations thereof reaches a preset maximum number [claim 4]

Instant Application: and wherein, in each iteration where the training is repeated, when the first score is greater than a threshold value, a generator network corresponding to the first score is stored in an evolution pool together with the first score
Application 17/941448: and to store the input tensor and the output score in the memory as the input tensor-score pair, store information used in the black box operation in the evolution pool when the output score is greater than a minimum score stored in the memory [claim 1]

As illustrated above, claim 4 of Application No. 17/941448 claims all of the limitations set forth in the instant application, except for the simulation being performed being simulation of target row refresh logic of a dynamic random access memory. Keller teaches simulation of target row refresh logic of a dynamic random access memory as [various simulators may be used to simulate the computer assets (para. 0103, etc.) including refresh logic of rows of a DRAM memory (para. 0119, etc.), which can be used by the pattern modeling to produce a score for the hardware (paras. 0130, 0143, 0202, etc.)]. Application ‘448 and Keller are analogous art, as they are within the same field of endeavor, namely simulating and malicious/attack/risk scoring computer hardware components. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to simulate row refresh logic of the DRAM memory as part of the modeling/scoring of the hardware components, as taught by Keller, as the simulator producing a simulator parameter for a design of a semiconductor device, in the claimed invention of Application ‘448. Keller provides motivation as [The read process in DRAM is destructive and removes the charge on the memory cells in an entire row, so there is a row of specialized latches on the chip called sense amplifiers, one for each column of memory cells, to temporarily hold the data. During a normal read operation the sense amplifiers, after reading and latching the data, rewrite the data in the accessed row before sending the bit from a single column to output. The normal read electronics on the chip has the ability to refresh an entire row of memory in parallel, significantly speeding up the refresh process. The refresh circuitry must perform a refresh cycle on each of the rows on the chip within the refresh time interval, to make sure that each cell gets refreshed (para. 0119, etc.) and particular exemplary embodiments analyze these characteristics which conceptually provide a fingerprint or fingerprints of what is anticipated and hence form a pattern or patterns. 
These patterns can be tracked, monitored and verified to be certain to a degree of probability or alternatively a quantified score that the hardware, firmware or software is not substantially different or modified and is only doing what was originally intended to be accomplished and that added, disabled or modified circuitry, code or algorithms have not been implemented and are not doing something unwanted behind the scenes (para. 0130)].

As per claim 13, see the rejection of claim 6, above. As per claim 14, see claim 4 of Application No. 17/941448. As per claim 15, see claim 4 of Application No. 17/941448. As per claim 18, see claim 3 of Application No. 17/941448. As per claim 19, see claim 3 of Application No. 17/941448. As per claim 20, see the rejections of claims 1 and 12, above.

Claims 8-10, 16, and 17 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-7 of copending Application No. 17/941448, in view of Keller (US 2016/0098561), and further in view of Esmaeilzadeh (US 2022/0269928). This is a provisional nonstatutory double patenting rejection because the patentably indistinct claims have not in fact been patented.

As per claim 8, Application ’448/Keller teaches periodically regenerating the third score of a generator network from among the generator networks of the evolution pool [wherein the updating of the evolution pool includes: recalculating and rearranging scores of the stored information every set period, and leaving only information satisfying a preset condition and deleting a remaining portion of the information (Application ‘448: claim 6)]. While Application ’448/Keller teaches periodically regenerating the third score of generator networks in the evolution pool, it has not been relied upon for teaching the network being stochastic. Esmaeilzadeh teaches a generator network being stochastic [the generative models can include both stochastic and non-stochastic versions and/or layers (para. 
0034, fig. 1; etc.)]. Application ’448/Keller and Esmaeilzadeh are analogous art, as they are within the same field of endeavor, namely training generator models. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to include stochastic and non-stochastic generator models, as taught by Esmaeilzadeh, in the evolution pool of generator networks of Application ‘448/Keller. Esmaeilzadeh provides motivation as [the stochastic and non-stochastic layers/models allow each model parameter to have a unique respective learned distribution (para. 0024, etc.) and repeated inputs of the same record may produce variation in corresponding outputs of those stochastic noise layers, as different values are expected to be sampled from the probability distributions in each iteration (para. 0025, etc.)].

As per claim 9, Application ’448/Keller teaches removing an oldest generator network from among the generator networks of the evolution pool [wherein the updating of the evolution pool includes: sorting the stored information in an oldest order based on a time at which the information is stored, leaving only information having a storing time thereof after a preset timing, and deleting a remaining portion of the information (Application ‘448: claim 7)]. While Application ’448/Keller teaches removing an oldest generator from the evolution pool of generator networks, it has not been relied upon for teaching (some of) the networks being non-stochastic. Esmaeilzadeh teaches a generator network being non-stochastic [the generative models can include both stochastic and non-stochastic versions and/or layers (para. 0034, fig. 1; etc.)]. Application ’448/Keller and Esmaeilzadeh are analogous art, as they are within the same field of endeavor, namely training generator models. 
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to include stochastic and non-stochastic generator models, as taught by Esmaeilzadeh, in the evolution pool of generator networks of Application ‘448/Keller. Esmaeilzadeh provides motivation as [the stochastic and non-stochastic layers/models allow each model parameter to have a unique respective learned distribution (para. 0024, etc.) and repeated inputs of the same record may produce variation in corresponding outputs of those stochastic noise layers, as different values are expected to be sampled from the probability distributions in each iteration (para. 0025, etc.)].

As per claim 10, Application ’448/Keller/Esmaeilzadeh teaches training the selected generator network based on a random number [Some embodiments augment otherwise deterministic neural networks with one or more stochastic layers in which parameters of the layers (e.g., some or all of the weights of a subset of layers in a neural network) are randomly (e.g., pseudo-randomly) sampled from probability distributions (also called noise distributions, or just distributions) learned during training (Esmaeilzadeh: para. 0024, etc.) and training can include random sampling of outputs (Esmaeilzadeh: paras. 0042-47; etc.); which uses a random number in training of the generator model].

As per claim 16, see the rejection of claim 8, above. As per claim 17, see the rejection of claim 9, above.

Allowable Subject Matter

Examiner’s Note: The cited art teaches various systems including multiple generator models, critic models, and simulation of components. 
However, besides the double patenting rejections described above, none of the cited art, either alone or in combination, appears to include motivation for combining the claimed elements in the manner claimed, including generating an input tensor by a generator network, inputting the input tensor to target row refresh logic module(s) to obtain a score, storing the generator and score in an evolution pool if the score is greater than a threshold, training a critic network based on the generated input tensor and score, etc., repeatedly until a threshold/max number of iteration is reached (see, e.g., claim 1).

Conclusion

The following is a summary of the treatment and status of all claims in the application as recommended by M.P.E.P. 707.07(i): claims 1-20 are rejected.

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

Hong (US 2023/0196759), Dorium (US 2023/0196760), and Hong (US 2023/0194298) – disclose systems/methods including updating an image map database with generated data based upon a discriminator score being above a specified threshold.

Lee (US 2020/0381039) – discloses a memory architecture including row refresh logic of DRAM cells, used for layers of a neural network.

Pierre (US 2023/0342454) – discloses row-hammer attack simulations including generating system degradation scores for simulated components.

Hoang et al. (MGAN: Training Generative Adversarial Nets with Multiple Generators, Oct 2017, pgs. 1-23) – discloses a system/method of training a GAN that includes training multiple generators with a single discriminator network.

Olsson et al. (Skill Rating for Generative Models, Aug 2018, pgs. 1-28) – discloses a system/method for training and evaluating generative models of GANs, including utilizing a tournament system for the generators/discriminators and assigning skill ratings.

Ohsawa et al. (Optimizing the DRAM Refresh Count for Merged DRAM/Logic LSIs, 1998, pgs. 
82-87) – discloses several architectures for DRAM refresh logic to eliminate unnecessary refreshes. Mathew et al. (Using Run-Time Reverse-Engineering to Optimize DRAM Refresh, Oct 2017, pgs. 1-11) – discloses a system/method for optimizing DRAM refresh timing parameters. The examiner requests, in response to this Office action, that support be shown for language added to any original claims on amendment and any new claims. That is, indicate support for newly added claim language by specifically pointing to page(s) and line number(s) in the specification and/or drawing figure(s). This will assist the examiner in prosecuting the application. When responding to this office action, Applicant is advised to clearly point out the patentable novelty which he or she thinks the claims present, in view of the state of the art disclosed by the references cited or the objections made. He or she must also show how the amendments avoid such references or objections. See 37 CFR 1.111(c). Any inquiry concerning this communication or earlier communications from the examiner should be directed to GEORGE GIROUX whose telephone number is (571)272-9769. The examiner can normally be reached M-F 10am-6pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Omar Fernandez Rivas can be reached at 571-272-2589. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. 
To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/GEORGE GIROUX/
Primary Examiner, Art Unit 2128
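The claim-1 loop summarized in the Examiner's Note above (a generator network produces an input tensor, a simulated target-row-refresh logic module scores it, high-scoring generators are kept in an evolution pool, a critic network is trained on the tensor/score pair, and the cycle repeats up to a maximum iteration count) can be sketched roughly as follows. Every name here (`generator`, `trr_logic_score`, `train_critic`, the threshold values) is a hypothetical stand-in for illustration, not the applicant's actual implementation:

```python
import random

# Hypothetical stand-ins for the claimed components; the real application
# would use trained neural networks and a detailed TRR logic simulation.

def generator(seed):
    """Stochastic generator: maps a random number to an 'input tensor'
    (here, a toy list of simulated row-activation counts)."""
    rng = random.Random(seed)
    return [rng.randint(0, 100_000) for _ in range(8)]

def trr_logic_score(tensor):
    """Simulated target-row-refresh logic module: returns a score for how
    strongly the access pattern stresses the refresh logic (toy metric)."""
    return max(tensor) / 100_000

def train_critic(critic_state, tensor, score):
    """Critic update stub: records the (tensor, score) pairs it has seen."""
    critic_state.append((tensor, score))
    return critic_state

SCORE_THRESHOLD = 0.9
MAX_ITERATIONS = 50

evolution_pool = []   # (generator seed, score) pairs kept for evolution
critic_state = []

for iteration in range(MAX_ITERATIONS):
    seed = random.randrange(2**32)      # cf. claim 10: training uses a random number
    tensor = generator(seed)            # generate input tensor
    score = trr_logic_score(tensor)     # score via simulated TRR logic
    if score > SCORE_THRESHOLD:         # keep strong generators in the pool
        evolution_pool.append((seed, score))
    critic_state = train_critic(critic_state, tensor, score)
```

In the claims the generator and critic are neural networks and the TRR module models DRAM refresh behavior; the toy metric above only mirrors the claimed control flow, not any real scoring function.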

Prosecution Timeline

Feb 24, 2023
Application Filed
Mar 07, 2026
Non-Final Rejection — §112, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12572807
Neural Network Methods for Defining System Topology
2y 5m to grant Granted Mar 10, 2026
Patent 12572818
DEVICE AND METHOD FOR RANDOM WALK SIMULATION
2y 5m to grant Granted Mar 10, 2026
Patent 12554986
WEIGHT QUANTIZATION IN NEURAL NETWORKS
2y 5m to grant Granted Feb 17, 2026
Patent 12554983
MACHINE LEARNING-BASED SYSTEMS AND METHODS FOR IDENTIFYING AND RESOLVING CONTENT ANOMALIES IN A TARGET DIGITAL ARTIFACT
2y 5m to grant Granted Feb 17, 2026
Patent 12541696
ENHANCED VALIDITY MODELING USING MACHINE-LEARNING TECHNIQUES
2y 5m to grant Granted Feb 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
66%
Grant Probability
93%
With Interview (+27.1%)
4y 6m
Median Time to Grant
Low
PTA Risk
Based on 612 resolved cases by this examiner. Grant probability derived from career allow rate.
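The projection figures above can be reproduced from the examiner statistics reported earlier on this page (401 granted of 612 resolved, +27.1-point interview lift). A minimal arithmetic sketch, assuming simple rounding to whole percentages:

```python
# Examiner career statistics as reported on this page.
granted, resolved = 401, 612
interview_lift_points = 27.1          # percentage-point lift with interview

base_rate = granted / resolved        # career allow rate, ~0.655

grant_probability = round(base_rate * 100)                         # -> 66
with_interview = round(base_rate * 100 + interview_lift_points)    # -> 93
```

The rounding convention is an assumption; the page does not state how it combines the base rate with the interview lift, but simple addition of percentage points reproduces the displayed 66% and 93%.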
