Prosecution Insights
Last updated: April 19, 2026
Application No. 18/254,158

METHOD AND APPARATUS FOR MODEL TRAINING AND DATA ENHANCEMENT, ELECTRONIC DEVICE AND STORAGE MEDIUM

Non-Final OA: §101, §102, §112
Filed: May 23, 2023
Examiner: ALGIBHAH, MAHER N
Art Unit: 2165
Tech Center: 2100 (Computer Architecture & Software)
Assignee: Jingdong City (Beijing) Digits Technology Co. Ltd.
OA Round: 1 (Non-Final)
Grant Probability: 88% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 8m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 88% (214 granted / 244 resolved; +32.7% vs TC avg; above average)
Interview Lift: +19.3% (resolved cases with interview)
Avg Prosecution: 2y 8m (typical timeline); 16 applications currently pending
Total Applications: 260 (across all art units)
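The headline allow rate can be reproduced from the raw counts shown above. A minimal sketch (the counts come from this page; the formula is a plain ratio):

```python
# Reproduce the career allow rate from the counts shown above.
granted = 214
resolved = 244

allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")  # about 87.7%, displayed as 88%
```

The displayed 88% is this ratio rounded to the nearest whole percent.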

Statute-Specific Performance

§101: 22.2% (-17.8% vs TC avg)
§103: 44.0% (+4.0% vs TC avg)
§102: 6.1% (-33.9% vs TC avg)
§112: 13.3% (-26.7% vs TC avg)
Tech Center averages are estimates. Based on career data from 244 resolved cases.

Office Action

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

Claims 1-13 remain pending and are ready for examination.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 05/27/2024 was filed. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term "means" or "step" or a term used as a substitute for "means" that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term "means" or "step" or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word "for" (e.g., "means for") or another linking word or phrase, such as "configured to" or "so that"; and

(C) the term "means" or "step" or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word "means" (or "step") in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word "means" (or "step") in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word "means" (or "step") are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
Conversely, claim limitations in this application that do not use the word "means" (or "step") are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word "means," but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitations use a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitations are in claim 8:

a generation module, configured for generating, by the generator, reference sample data;
a first calculation module, configured for calculating, by the first discriminator, a first distance between the reference sample data and preset negative sample data;
a second calculation module, configured for calculating, by the second discriminator, a second distance between negative class data composed of the reference sample data and the preset negative sample data and preset positive sample data;
a selection module, configured for determining an objective function based on the first distance and the second distance; and
a training module, configured for training the generative adversarial network model by using the objective function until the generative adversarial network model converges, to obtain the generative adversarial network model.

And in claim 9:

a generating module, configured for generating second negative sample data by using a generative adversarial network model, wherein the generative adversarial network model is trained by using a method for model training according to claim 1; and
an adding module, configured for adding the second negative sample data to an original data set to obtain a new data set, wherein the original data set comprises preset positive sample data and preset negative sample data.

Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.

If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitations to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function so as to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION. The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 8-9 are rejected under 35 U.S.C. 112(b) or 35 U.S.C.
112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

The module limitations of claims 8 and 9 identified in the Claim Interpretation section above invoke 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. However, the written description fails to disclose the corresponding structure, material, or acts for performing the entire claimed function and to clearly link the structure, material, or acts to the function. Therefore, the claims are indefinite and are rejected under 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph.

Applicant may: (a) amend the claim so that the claim limitation will no longer be interpreted as a limitation under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph; (b) amend the written description of the specification such that it expressly recites what structure, material, or acts perform the entire claimed function, without introducing any new matter (35 U.S.C. 132(a)); or (c) amend the written description of the specification such that it clearly links the structure, material, or acts disclosed therein to the function recited in the claim, without introducing any new matter (35 U.S.C. 132(a)).

If applicant is of the opinion that the written description of the specification already implicitly or inherently discloses the corresponding structure, material, or acts and clearly links them to the function so that one of ordinary skill in the art would recognize what structure, material, or acts perform the claimed function, applicant should clarify the record by either: (a) amending the written description of the specification such that it expressly recites the corresponding structure, material, or acts for performing the claimed function and clearly links or associates the structure, material, or acts to the claimed function, without introducing any new matter (35 U.S.C. 132(a)); or (b) stating on the record what corresponding structure, material, or acts, which are implicitly or inherently set forth in the written description of the specification, perform the claimed function. For more information, see 37 CFR 1.75(d) and MPEP §§ 608.01(o) and 2181.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-13 are rejected under 35 U.S.C.
101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1: Step 1 is satisfied for claims 1-13.

Step 2A, Prong One: The claims recite mental process steps of:

generating, by the generator, reference sample data;
calculating, by the first discriminator, a first distance between the reference sample data and preset negative sample data;
calculating, by the second discriminator, a second distance between negative class data composed of the reference sample data and the preset negative sample data and preset positive sample data;
determining an objective function based on the first distance and the second distance; and
training the generative adversarial network model by using the objective function until the generative adversarial network model converges, to obtain the generative adversarial network model.

Each of these steps recites a mathematical algorithm or mental process that is applied and performed in a computing environment, i.e., an abstract idea. See MPEP § 2106.04(a)(2)(I); see also Elec. Power Grp., 830 F.3d at 1354 ("[A]nalyzing information by steps people go through in their minds, or by mathematical algorithms, without more, [are] essentially mental processes within the abstract-idea category.").

Step 2A, Prong Two: The claims do not recite any additional elements. The judicial exception is not integrated into a practical application because the additional elements amount to nothing more than generic components (e.g., processor and memory) recited at a high level of generality, such that they amount to no more than mere instructions to apply the exception using generic computer components. See MPEP 2106.04(d)(I) and 2106.05(f).

Step 2B: The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above, the additional elements amount to nothing more than mere instructions to apply the exception using generic computer components. These cannot provide an inventive concept, and thus the claims are patent-ineligible.

Claims 2-7 and 9-13 are directed to the same abstract idea without significantly more. These claims recite additional mental processes or mathematical algorithms. There are no additional elements recited in these claims that integrate the abstract idea into a practical application or amount to significantly more than the abstract idea.
Therefore, the claims are rejected under the same abstract idea as claim 1 or 8.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless - (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-13 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by "Dual Discriminator Generative Adversarial Nets," 2017, Nguyen (hereinafter "Nguyen").

Regarding claim 1, Nguyen discloses a method for model training, wherein a generative adversarial network model comprises a generator and two discriminators, an output of the generator is used as an input of the two discriminators (see abstract, wherein the dual discriminator generative adversarial net (D2GAN), unlike GAN, has two discriminators and, together with a generator, also has the analogy of a minimax game; see also the Introduction section, wherein a generator G generates data by mapping samples from a noise space to the input space), the method comprising:

generating, by the generator, reference sample data (see the Introduction and Generative Adversarial Nets sections, wherein the generator first maps a noise vector z drawn from a prior P(z) to the data space, obtaining a sample G(z) that resembles the training data, and then uses this sample to challenge the discriminator);

calculating, by the first discriminator, a first distance between the reference sample data and preset negative sample data (see the Dual Discriminator Generative Adversarial Nets section, wherein D1(x) rewards a high score if x is drawn from the data distribution Pdata, and gives a low score if generated from the model distribution PG; in contrast, D2(x) returns a high score for x generated from PG whilst giving a low score for a sample drawn from Pdata);

calculating, by the second discriminator, a second distance between negative class data composed of the reference sample data and the preset negative sample data and preset positive sample data (see the Dual Discriminator Generative Adversarial Nets section, cited above);

determining an objective function based on the first distance and the second distance (see sections 3 and 3.1, wherein the objective function can be determined); and

training the generative adversarial network model by using the objective function until the generative adversarial network model converges, to obtain the generative adversarial network model (see sections 3 and 3.1, wherein the model is trained until it reaches a Nash equilibrium, where the generator distribution PG recovers the data distribution Pdata).
Regarding claim 2, Nguyen further discloses wherein an optimization objective of the objective function is to minimize the first distance and maximize the second distance (see sections 3 and 3.1, wherein this loss shows that increasing one hyperparameter promotes the optimization towards minimizing the KL divergence DKL(Pdata‖PG), thus helping the generative distribution cover multiple modes but potentially including undesirable samples, whereas increasing the other encourages the minimization of the reverse KL divergence DKL(PG‖Pdata), hence enabling the generator to capture a single mode better but possibly missing many modes; by empirically adjusting these two hyperparameters, the effect of the two divergences can be balanced, effectively avoiding the mode collapsing issue).

Regarding claim 3, Nguyen further discloses wherein training the generative adversarial network model by using the objective function until the generative adversarial network model converges, to obtain the generative adversarial network model comprises: training the generative adversarial network model by using the objective function to obtain generator parameters of the generator, first discriminator parameters of the first discriminator and second discriminator parameters of the second discriminator (see sections 3 and 3.1, wherein the model is trained by alternately updating D1, D2 and G); and inputting the generator parameters, the first discriminator parameters and the second discriminator parameters into the generative adversarial network model to obtain the generative adversarial network model (see section 4, wherein the trained model parameters are released on TensorFlow as the final result of the training process).

Regarding claim 4, Nguyen further discloses wherein the objective function is: [equation reproduced as an image in the original Office action] wherein posData represents positive class data, negData represents negative class data, allData represents a union of generated negative class data and original negative class data, D1 represents a first discriminator parameter, D2 represents a second discriminator parameter, and G represents a generator parameter (see sections 2-3.2).

Regarding claim 5, Nguyen further discloses wherein the structure of the first discriminator and the structure of the second discriminator are the same (see section 3), the first discriminator comprises a plurality of cascaded discriminant units and sigmoid layers, the output of the last discriminant unit serves as an input to the sigmoid layer, and each of the discriminant units comprises a cascaded fully connected layer, leaky-ReLU layer and sigmoid layer (see section 1, wherein the discriminators are described as deep generative models utilizing neural networks with multiple hidden layers; the use of leaky-ReLU is disclosed in section 4).

Regarding claim 6, Nguyen further discloses wherein the generator comprises a plurality of cascaded generation units, each of the generation units comprises cascaded fully connected layers, normalization layers, and leaky-ReLU layers (see section 1, wherein the discriminators are described as deep generative models utilizing neural networks with multiple hidden layers; the use of leaky-ReLU is disclosed in section 4).
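For orientation, the D2GAN value function that this rejection repeatedly cites (Nguyen, sections 3-3.1) can be sketched numerically. This is a hypothetical illustration of the published formulation, not code from the application or the reference; the function name, the score inputs, and the unit alpha/beta defaults are choices made here:

```python
import numpy as np

def d2gan_objective(d1_real, d1_fake, d2_real, d2_fake, alpha=1.0, beta=1.0):
    """Value function of D2GAN (Nguyen et al., 2017) on batches of scores.

    d1_real, d1_fake: D1 outputs on data samples and on generated samples
    d2_real, d2_fake: D2 outputs on data samples and on generated samples
    alpha, beta: hyperparameters trading off the KL and reverse-KL terms

    D1 is rewarded for scoring real data high (log D1 on data) and is
    penalized linearly on generated samples; D2 is the mirror image.
    G minimizes this quantity while D1 and D2 maximize it.
    """
    return (alpha * np.mean(np.log(d1_real)) - np.mean(d1_fake)
            - np.mean(d2_real) + beta * np.mean(np.log(d2_fake)))

# Example scores: D1 rates a data sample 1.0 and a fake 0.5;
# D2 does the opposite.
value = d2gan_objective(
    d1_real=np.array([1.0]), d1_fake=np.array([0.5]),
    d2_real=np.array([0.5]), d2_fake=np.array([1.0]),
)
print(value)  # -1.0
```

The two hyperparameters balance the forward and reverse KL terms discussed in the claim 2 mapping above.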
Regarding claim 7, Nguyen further discloses a method for data enhancement, comprising: generating second negative sample data by using a generative adversarial network model, wherein the generative adversarial network model is trained by using a method for model training according to claim 1 (see abstract, wherein D2GAN is designed to recover the data distribution and generate diverse samples that were missing or under-represented); and adding the second negative sample data to an original data set to obtain a new data set, wherein the original data set comprises preset positive sample data and preset negative sample data (see abstract, wherein the D2GAN experiments demonstrate that the generator can produce good-quality and diverse samples to augment or represent a complete data distribution).

Claim 8 is rejected under the same rationale as claim 1. Claim 9 is rejected under the same rationale as claim 7. Claim 10 is rejected under the same rationale as claim 1. Claim 11 is rejected under the same rationale as claim 1. Claim 12 is rejected under the same rationale as claim 7. Claim 13 is rejected under the same rationale as claim 7.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MAHER N ALGIBHAH, whose telephone number is (571) 272-0718. The examiner can normally be reached Monday-Thursday.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Aleksandr Kerzhner, can be reached at (571) 270-1760. The fax phone number for the organization where this application or proceeding is assigned is 571-273-1264.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MAHER N ALGIBHAH/
Primary Examiner, Art Unit 2165

Prosecution Timeline

May 23, 2023
Application Filed
Jan 07, 2026
Non-Final Rejection — §101, §102, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602441: Self-Supervised Learning through Data Augmentation for Recommendation Systems (2y 5m to grant; granted Apr 14, 2026)
Patent 12602366: Distributed Table Lock Application Methods, Apparatuses, Storage Media, and Electronic Devices (2y 5m to grant; granted Apr 14, 2026)
Patent 12602405: Cross-Platform Content Management (2y 5m to grant; granted Apr 14, 2026)
Patent 12602360: Methods and Apparatus to Estimate Audience Sizes of Media Using Deduplication Based on Binomial Sketch Data (2y 5m to grant; granted Apr 14, 2026)
Patent 12591585: Systems and Methods for Advanced Enterprise Data Storage and Retrieval (2y 5m to grant; granted Mar 31, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
88%
Grant Probability
99%
With Interview (+19.3%)
2y 8m
Median Time to Grant
Low
PTA Risk
Based on 244 resolved cases by this examiner. Grant probability derived from career allow rate.
