Prosecution Insights
Last updated: April 19, 2026
Application No. 18/076,711

DATA PROCESSING METHOD AND APPARATUS, ELECTRONIC DEVICE, AND STORAGE MEDIUM

Status: Final Rejection — §102
Filed: Dec 07, 2022
Examiner: GONZALES, VINCENT
Art Unit: 2124
Tech Center: 2100 — Computer Architecture & Software
Assignee: BEIJING BAIDU NETCOM SCIENCE TECHNOLOGY CO., LTD.
OA Round: 2 (Final)
Grant Probability: 78% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 6m
Grant Probability with Interview: 89%

Examiner Intelligence

Career Allow Rate: 78% (410 granted / 522 resolved; +23.5% vs TC avg) — above average
Interview Lift: +10.5% (moderate) — resolved cases with interview vs. without
Typical Timeline: 3y 6m average prosecution; 26 applications currently pending
Career History: 548 total applications across all art units

Statute-Specific Performance

§101: 21.2% (-18.8% vs TC avg)
§103: 39.9% (-0.1% vs TC avg)
§102: 13.2% (-26.8% vs TC avg)
§112: 14.6% (-25.4% vs TC avg)
Tech Center averages are estimates • Based on career data from 522 resolved cases

Office Action — §102 (Final)
DETAILED ACTION

This action is written in response to the remarks and amendments dated 12/15/25. This action is made final. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

In view of the Applicant's arguments, as well as the current amendments to the claims, the Examiner withdraws all outstanding rejections under §101. The Applicant argues that the prior art of record does not anticipate or render obvious the claims as currently amended. The Examiner is not persuaded, and provides updated prior art rejections below, necessitated by the current amendments.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.

(b) the invention was patented or described in a printed publication in this or a foreign country or in public use or on sale in this country, more than one year prior to the date of application for patent in the United States.

Claims 1, 3, 5-6, 8, 10, 12-13, 15, 17 and 19-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Wang (EP 3836037 A1, cited by Applicant in IDS dated 7/21/23).

Regarding claims 1, 8 and 15, Wang discloses a data processing method, comprising:

acquiring a target directed acyclic graph (DAG) corresponding to a service processing logic of a model self-taught learning service, wherein the service processing logic comprises:

[0058] "Specifically, the directed acyclic graph (DAG diagram) shown in the middle part of FIG. 7 shows 6 nodes: "feedback data" node, "behavioral data" node, "data splitting" node, and "feature engineering" node, "LR (logistic regression) algorithm" node, "GBDT (gradient boosting decision tree) algorithm" node, "HE-TreeNet (high-dimensional discrete embedded tree network) algorithm" node, and "NN (neural network) algorithm" node." (Emphasis added.)

[Image: Wang, fig. 7 (greyscale reproduction omitted).]

execution logic for acquiring service data generated by an online released service model, execution logic for training a to-be-trained service model based on the service data, and execution logic for releasing the trained service model online; and

[0022] "The model auto-training unit 130 may, according to a configured model updating scheme, generate updated training samples based on the collected prediction data and corresponding real results thereof and continuously obtain the updated machine learning models by using the updated training samples." (Emphasis added.)

[0024] "As an example, before the model auto-training unit 130 continuously obtains the updated machine learning models, there is already an initial machine learning model in the system 100, and the initial machine learning model may be a machine learning model previously trained by the system 100 (for example, the model auto-training unit 130) by using a model training scheme, …"

[0067] "In step S205, the service providing unit 140 may automatically save the prediction data included in the prediction service request, and the data collecting unit 110 may continuously collect the prediction data from the service providing unit, wherein the collected prediction data (with corresponding real results) will be used to obtain the updated machine learning models by the model auto-training unit 130, which will be described in detail later. Through the step S205, the automatic backflow of data may be realized, thereby providing a necessary data source for the continuous loop of the automatic machine learning processes." (Emphasis added.)

performing self-taught learning on the to-be-trained service model according to the target DAG;

[0026] "After the updated training samples are generated, the model auto-training unit 130 may further continuously obtain the update machine learning models by using the updated training samples according to settings regarding model training (for example, the model algorithms, the parameter adjusting and optimizing, etc.) defined in the configured model updating scheme. As described above, the configured model updating scheme may be generated by the model auto-training unit 130 on the basis of the model training scheme based on which the initial machine learning model is trained, or it may be any scheme for continuously training and obtaining the machine learning models, the model updating scheme herein aims to emphasize that the scheme may be used to more automatically and continuously generate models, but does not limit the manners of model generation to full retraining or incremental learning training." (Emphasis added.)

wherein a service model is a resource recommendation model, and the service data is interactive data of a recommended resource or

[0031] "For the updating resource auto-configuration manner, the model auto-training unit 130 needs to know how to utilize system resources (for example, CPU, bus, bandwidth, memory and other resources) during the process of obtaining the updated machine learning models. Here, the auto-training unit 130 may configure the resources according to a data amount together with a rule, but the disclosure is not limited thereto." (Emphasis added.)

[The Examiner notes that only the first limitation of this Markush group is taught by Wang.]

wherein the target DAG comprises at least two DAG subgraphs, different DAG subgraphs are configured to implement different execution logic, and the different DAG subgraphs construct the target DAG based on a data flow direction of the service processing logic;

[0059] "Referring to FIG. 7, through corresponding configuration at the "data splitting" node in the DAG diagram, the model auto-training unit 130 may split the historical data into the training data and the verification data. Thereafter, through corresponding configuration at the "feature engineering" node in the DAG graph, the model auto-training unit 130 may perform automatic feature generation on the split training data/validation data to extract at least one feature, preferably, the model auto-training unit 130 may also perform automatic feature combination after automatic feature generation to obtain various features including combined features."

[0109] "Referring to FIG. 7, through the corresponding configuration at the "data splitting" node in the DAG diagram, the training data obtained after the splicing of the behavioral data and the feedback data may be split into a training set and a validation set. Thereafter, through the corresponding configuration at the "feature engineering" node in the DAG diagram, automatic feature generation may be performed on the training set and the validation set to extract at least one feature to generate a training sample. At the three nodes corresponding to the lowest layer in the DAG diagram (i.e. "LR algorithm" node, "GBDT algorithm" node, "HE-TreeNet algorithm" node and "NN algorithm" node), the training samples is utilized to perform at least one round of training with respect to the four preset algorithms, respectively, and then the corresponding multiple machine learning models are trained."

wherein in a case where the at least two DAG subgraphs comprise a training DAG subgraph that implements the execution logic for training the to-be-trained service model based on the service data, performing the self-taught learning on the to-be-trained service model according to the target DAG comprises: operating the model training DAG subgraph to train the to-be-trained service model according to the service data in a case where a training condition is satisfied;

[0029] "the model auto-training unit 130 may update the machine learning model according to a certain model updating cycle (i.e., generate a new machine learning model). The model updating cycle may be pre-configured by the user, or may be modified in real time according to a specific condition based on a certain rule." (Emphasis added.)

[0079] "Specifically, the service providing unit 140 may select one or more machine learning models as the online machine learning model from the machine learning models obtained and stored by the model auto-training unit 130 according to the model selecting rule included in the model application scheme, wherein the model selecting rule may include a rule for selecting the machine learning model with the highest AUC, a rule for selecting the newly generated machine learning model or the like. For example, the service providing unit 140 may select the machine learning model with the highest AUC from the stored machine learning models as the online machine learning model according to the AUC value." (Emphasis added.)

wherein the satisfied training condition comprises at least one of the following: start training time of a preset training period being reached, and duration of acquisition of the service data reaching preset duration. [The Examiner notes that this is a Markush group.]

[0105] "In Fig. 15, a configuration of a self-learning cycle is provided, the user may select the operating mode as "single run", "cyclic run" and "crontab expression", and select a task start time as "2019-06-17 11:38:43", and a self-learning data configuration is further provided, the users may perform selection of data source, data slices, model naming result, and task timeout duration, etc." (Emphasis added.)

start training time :: "task start time"
duration of acquisition of the service data :: "task timeout duration"

Regarding independent claims 8 and 15, the recited computing components (i.e., "at least one processor", "a memory", and a "non-transitory computer-readable storage medium") are inherent throughout the Wang disclosure.

Regarding claims 3, 10 and 17, Wang discloses the further limitations wherein in a case where the at least two DAG subgraphs comprise an acquisition DAG subgraph that implements the execution logic for acquiring the service data generated by the online released service model, performing the self-taught learning on the to-be-trained service model according to the target DAG comprises: operating the acquisition DAG subgraph to acquire the service data in a case where an acquisition condition is satisfied when the online released service model generates the service data in response to a service request.

[0029] "the model auto-training unit 130 may update the machine learning model according to a certain model updating cycle (i.e., generate a new machine learning model). The model updating cycle may be pre-configured by the user, or may be modified in real time according to a specific condition based on a certain rule." (Emphasis added.)

[0079] "Specifically, the service providing unit 140 may select one or more machine learning models as the online machine learning model from the machine learning models obtained and stored by the model auto-training unit 130 according to the model selecting rule included in the model application scheme, wherein the model selecting rule may include a rule for selecting the machine learning model with the highest AUC, a rule for selecting the newly generated machine learning model or the like. For example, the service providing unit 140 may select the machine learning model with the highest AUC from the stored machine learning models as the online machine learning model according to the AUC value." (Emphasis added.)

Regarding claims 5, 12 and 19, Wang discloses the further limitations wherein in a case where the at least two DAG subgraphs comprise a model online DAG subgraph that implements the execution logic for releasing the trained service model online, performing the self-taught learning on the to-be-trained service model according to the target DAG comprises: operating the model online DAG subgraph to release the trained service model online in a case where a releasing online condition is satisfied.

[0029] "the model auto-training unit 130 may update the machine learning model according to a certain model updating cycle (i.e., generate a new machine learning model). The model updating cycle may be pre-configured by the user, or may be modified in real time according to a specific condition based on a certain rule." (Emphasis added.)

[0073] "For example, the model updating cycle may be set to 1 week, the data selecting rule may be set to select data according to a time range (for example, a data range is set to "last 7 days"), and the model storage location may be set to the model center inside the system 100, and the updating resource auto-configuration manner is set to configure the resources according to the data amount in conjunction with a rule."

Regarding claims 6, 13 and 20, Wang discloses the further limitations wherein the model online DAG subgraph comprises a model releasing DAG subgraph and a model push DAG subgraph; and operating the model online DAG subgraph to release the trained service model online when the releasing online condition is satisfied comprises: operating the model releasing DAG subgraph to release the trained service model to a model center when a releasing condition is satisfied; and

[0029] "the model auto-training unit 130 may update the machine learning model according to a certain model updating cycle (i.e., generate a new machine learning model). The model updating cycle may be pre-configured by the user, or may be modified in real time according to a specific condition based on a certain rule." (Emphasis added.)

[0030] "For the model storage location, due to the continuous updating of the machine learning model, multiple machine learning models will be obtained, in order to enable the service providing unit 140 to select an online machine learning model used to provide an online prediction service from the multiple machine learning models, the model auto-training unit 130 needs to determine locations for storing the updated machine learning models which are continuously obtained. For example, the machine learning models may be stored in a model center inside the system 100, which may also enable the user to view model-related interpretations and reports." (Emphasis added.)

operating the model push DAG subgraph to control to push the trained service model from the model center to an online platform according to a preset push requirement in a case where a push condition is satisfied. Id.

Additional Relevant Prior Art

The following references were identified by the Examiner as being relevant to the disclosed invention, but are not relied upon in any particular prior art rejection:

Jin (US 10,380,185 B2) discloses a system for graph-based analysis of job flow data which includes directed acyclic graphs (DAGs). See, e.g., fig. 12.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Vincent Gonzales whose telephone number is (571) 270-3837. The examiner can normally be reached on Monday-Friday 7 a.m. to 4 p.m. MT. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Miranda Huang, can be reached at (571) 270-7092. Information regarding the status of an application may be obtained from the USPTO Patent Center.

/Vincent Gonzales/
Primary Examiner, Art Unit 2124
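The claim structure at issue in the rejection above (a target DAG whose subgraphs implement acquire/train/release execution logic, each operated only when its condition is satisfied) can be sketched in code for readers less familiar with the claim language. This is an illustrative sketch only; every name below is hypothetical and appears in neither the application nor Wang.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class DagSubgraph:
    """One subgraph of the target DAG: a condition gate plus execution logic."""
    name: str
    condition: Callable[[dict], bool]  # e.g. "start training time reached"
    run: Callable[[dict], None]        # the subgraph's execution logic

@dataclass
class TargetDag:
    # Subgraphs ordered by the data flow direction of the service logic
    subgraphs: List[DagSubgraph] = field(default_factory=list)

    def self_taught_learning(self, state: dict) -> None:
        # Operate each subgraph only when its condition is satisfied.
        for sg in self.subgraphs:
            if sg.condition(state):
                sg.run(state)

def acquire(state: dict) -> None:
    # "acquisition DAG subgraph": collect service data from the online model
    state["service_data"] = ["interaction-1", "interaction-2"]

def train(state: dict) -> None:
    # "training DAG subgraph": train the to-be-trained model on that data
    state["model"] = f"model trained on {len(state['service_data'])} records"

def release(state: dict) -> None:
    # "model online DAG subgraph": release the trained model online
    state["online_model"] = state["model"]

dag = TargetDag([
    DagSubgraph("acquisition", lambda s: True, acquire),
    DagSubgraph("training", lambda s: "service_data" in s, train),
    DagSubgraph("model-online", lambda s: "model" in s, release),
])

state: dict = {}
dag.self_taught_learning(state)
print(state["online_model"])  # model trained on 2 records
```

The linear list stands in for the DAG's topological order; a real pipeline (as in Wang's fig. 7) would fan out to multiple algorithm nodes rather than run a single chain.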

Prosecution Timeline

Dec 07, 2022: Application Filed
Sep 19, 2025: Non-Final Rejection — §102
Dec 15, 2025: Response Filed
Feb 13, 2026: Final Rejection — §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12585920: PREDICTING OPTIMAL PARAMETERS FOR PHYSICAL DESIGN SYNTHESIS
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12580040: DIFFUSION MODEL FOR GENERATIVE PROTEIN DESIGN
Granted Mar 17, 2026 (2y 5m to grant)

Patent 12566984: METHODS AND SYSTEMS FOR EXPLAINING ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING
Granted Mar 03, 2026 (2y 5m to grant)

Patent 12561402: IDENTIFICATION OF A SECTION OF BODILY TISSUE FOR PATHOLOGY TESTS
Granted Feb 24, 2026 (2y 5m to grant)

Patent 12547647: Unsupervised Machine Learning System to Automate Functions On a Graph Structure
Granted Feb 10, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 78%
Grant Probability with Interview: 89% (+10.5%)
Median Time to Grant: 3y 6m
PTA Risk: Moderate
Based on 522 resolved cases by this examiner. Grant probability derived from career allow rate.
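The headline figures above are reproducible from the stated career data with simple arithmetic. This is a guess at the tool's method based on its own note ("derived from career allow rate"), not a documented formula:

```python
# Reproducing the dashboard's headline figures from its stated inputs.
granted, resolved = 410, 522          # "410 granted / 522 resolved"
allow_rate = granted / resolved       # ~0.7854
interview_lift = 0.105                # stated +10.5 point interview lift
with_interview = allow_rate + interview_lift  # ~0.8904

# The displayed 78% and 89% match when the percentages are truncated
# to whole points rather than rounded:
print(int(allow_rate * 100), int(with_interview * 100))  # 78 89
```

Note that adding the lift as flat percentage points is the simplest reading; the tool may instead condition on interview occurrence in its case data, which these numbers alone cannot distinguish.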
