Prosecution Insights
Last updated: April 19, 2026
Application No. 18/148,779

DATA PROCESSING METHOD AND ELECTRONIC DEVICE

Status: Non-Final OA (§103)
Filed: Dec 30, 2022
Examiner: MILLER, ALEXANDRIA JOSEPHINE
Art Unit: 2142
Tech Center: 2100 — Computer Architecture & Software
Assignee: NEC Corporation
OA Round: 1 (Non-Final)
Grant Probability: 18% (At Risk)
Est. OA Rounds: 1-2
Est. Time to Grant: 4y 5m
With Interview: 90%

Examiner Intelligence

Career Allow Rate: 18% (5 granted / 27 resolved; -36.5% vs TC avg)
Interview Lift: +71.4% (resolved cases with vs. without interview)
Avg Prosecution: 4y 5m (typical timeline)
Currently Pending: 40
Total Applications: 67 (career history, across all art units)
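The headline examiner figures can be checked against the raw counts in the card above. A minimal sketch; the per-case interview records behind the +71.4% lift are not included in this report, so that figure is not recomputed here:

```python
# Reproduce the derivable examiner statistics from the raw counts shown
# above ("5 granted / 27 resolved", "-36.5% vs TC avg").

granted = 5        # career grants
resolved = 27      # career resolved cases

allow_rate = granted / resolved * 100
print(f"Career allow rate: {allow_rate:.1f}%")   # ~18.5%, shown as 18%

# The report lists the allow rate as 36.5 points below the Tech Center
# average, which implies a TC-average estimate of roughly:
tc_avg = allow_rate + 36.5
print(f"Implied TC average: {tc_avg:.1f}%")      # ~55.0%
```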

Statute-Specific Performance

§101: 32.6% (-7.4% vs TC avg)
§103: 52.4% (+12.4% vs TC avg)
§102: 3.3% (-36.7% vs TC avg)
§112: 8.5% (-31.5% vs TC avg)

Black line = Tech Center average estimate • Based on career data from 27 resolved cases

Office Action

§103
DETAILED ACTION

Claims 1-20 are presented for examination. This Office action is in response to the submission of the application on 30 December 2022.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 30 July 2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3, 5-7, 10-11, 13-14, and 17-20 are rejected under 35 U.S.C. 103 as being unpatentable over Ko et al. (Pub. No. US 20210027172 A1, filed July 24, 2020, hereinafter Ko) in view of Matei et al. (Pub. No. US 20220253578 A1, filed February 9, 2021, hereinafter Matei).
Regarding claim 1: Claim 1 recites: A data processing method, comprising: acquiring data to be processed, the data to be processed indicating at least one of: first state information, a first action, and second state information after executing the first action when the first state information is satisfied; determining result data based on the data to be processed using a trained data generation model, the result data indicating third state information after executing a second action when the first state information is satisfied, and the data generation model being obtained based on a training set and a causal model corresponding to at least one data item in the training set; and outputting the result data.

Ko discloses acquiring data to be processed, the data to be processed indicating at least one of: first state information, a first action, and second state information after executing the first action when the first state information is satisfied: Ko teaches that state, action, and next-state data are used for a reinforcement learning model (Paragraph 62), wherein the next-state data refers to a new environment based on the reward for the action. Therefore, the state data is analogous to the first state information, the action is analogous to a first action, and the next-state data is analogous to second state information after executing the first action when the first state information is satisfied, since the first action has created the environment of the second state, where "next" indicates that it follows the initial action.

Ko discloses determining result data based on the data to be processed using a trained data generation model, the result data indicating third state information after executing a second action when the first state information is satisfied: Ko teaches that the reinforcement learning model that uses this information as input is a method for selecting an action and changing a state by the action (Paragraph 61). This would be the result data based on the data to be processed, wherein the third state information is the change in the state after executing a second action when the first state information is satisfied. Therefore, through the generation of result data, this model would be a data generation model.

Ko discloses the data generation model being obtained based on a training set and a causal model corresponding to at least one data item in the training set; and outputting the result data: Ko teaches that a first semantic vector and a second semantic vector may be used as the state and action data (Paragraph 63). Furthermore, these vectors may be used to train the AI model (Paragraph 12). Therefore, the data generation model (the AI model) is obtained based on a training set, wherein the result data is output (Paragraph 61) through the action and state selection. Furthermore, these vectors correspond to at least one data item in the training set, as they are used to train the model. However, Ko does not teach a causal model.

Matei discloses the use of a causal model: Matei, in the same field of endeavor of machine learning, teaches using a causal model as a surrogate for a nonlinear block in a dynamic model (Paragraph 7). Matei and the present application are analogous art because they are in the same field of endeavor of machine learning. It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to implement a methodology that combined the teachings of Ko and the teachings of Matei. This would have provided the advantage of faster-running models with improved accuracy (Matei, Paragraphs 32-33).
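The claim-1 mapping above treats a reinforcement-learning transition log as the data to be processed and a model fit on it as the data generation model. A minimal illustrative sketch of that reading; the state/action names and the trivial tabular model are hypothetical, not taken from Ko or the application:

```python
# Hypothetical sketch: (state, action, next_state) transitions are the
# "data to be processed"; a model fit on them yields the "third state"
# that would follow an alternative (second) action from the same state.

# Training set: observed transitions (first state, first action, second state).
transitions = [
    ("s0", "a0", "s1"),
    ("s0", "a1", "s2"),
    ("s1", "a0", "s0"),
]

# A trivial tabular "data generation model": next state per (state, action).
model = {}
for state, action, next_state in transitions:
    model[(state, action)] = next_state

def result_data(first_state, second_action):
    """Third state information after executing a second action
    when the first state information is satisfied."""
    return model.get((first_state, second_action))

# Counterfactual-style query: from s0, what if a1 is taken instead of a0?
print(result_data("s0", "a1"))  # s2
```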
Regarding claim 2, which depends upon claim 1: Claim 2 recites: The method according to claim 1, wherein the data generation model comprises a first submodel and a second submodel, and wherein determining the result data comprises: inputting at least one of the first state information, the first action, and the second state information into the first submodel, to obtain an influence parameter corresponding to the data to be processed; and inputting the first state information, the second action, and the influence parameter into the second submodel, to obtain the third state information.

Ko in view of Matei disclose the method of claim 1. Furthermore, Ko discloses wherein the data generation model comprises a first submodel and a second submodel, and wherein determining the result data comprises: Ko teaches at least two AI models in its overall method, a first and a second model that are used in conjunction with each other (Paragraphs 57-58), making them a first and second submodel.

Ko discloses inputting at least one of the first state information, the first action, and the second state information into the first submodel, to obtain an influence parameter corresponding to the data to be processed: Ko teaches that the first semantic vector may be used as input to the first AI model, wherein the first semantic vector contains the first state data (Paragraph 70). Therefore, at least one of the above is used as input for the first AI model, wherein a weight of the model is also provided as an influence parameter for the data to be processed by the second model (Paragraph 71).

Ko discloses inputting the first state information, the second action, and the influence parameter into the second submodel, to obtain the third state information: Ko teaches that, using the above influence parameter as well as the first semantic vector (the first state information) and the second semantic vector (which may be action data serving as the second action), the second AI model may generate a first semantic vector for a user (Paragraph 70), wherein first semantic vectors are descriptive of state information and may therefore be third state information.

Regarding claim 3, which depends upon claim 2: Claim 3 recites: The method according to claim 2, wherein the influence parameter comprises at least one of: attribute information of an object represented by the first state information, or a noise parameter.

Ko in view of Matei disclose the method of claim 2. Furthermore, Ko discloses the limitations of claim 3: Ko teaches that a weight of the first AI model may be obtained as a result of learning as an influence parameter (Paragraph 50). The weight of the first AI model would be attribute information of an object represented by the first state information, as the first state information is used as input into the first AI model and the weight value is calculated through the processing of this state information; the weight is therefore a representation of attributes of the object of the first state information.

Regarding claim 5, which depends upon claim 1: Claim 5 recites: The method according to claim 1, further comprising: acquiring input information from a user, the input information comprising input state information; determining at least one target decision based on the input information using a trained decision model generated at least based on the result data; and outputting the at least one target decision.

Ko in view of Matei disclose the method of claim 1.
Furthermore, Ko discloses the limitations of claim 5: Ko teaches that a recommendation item can be recommended to a user based on user interaction, including log data left by online activities (Paragraph 7). Log data would be input state information, as it describes the state of the environment at the time of the user interaction, e.g., which objects existed for the user to interact with. Therefore, the recommendation being based on user interaction demonstrates that input information is acquired from a user, wherein the input information comprises input state information, and the recommendation would be a target decision based on the input information that is output at the time the recommendation is given to the user. This is given as an example of implementing a trained decision model (Paragraph 6), wherein this recommendation may be given as a result of the second AI model's processing, which outputs the result data (Paragraph 72).

Regarding claim 6, which depends upon claim 1: Claim 6 recites: The method according to claim 1, further comprising: constructing the training set, the training set comprising a plurality of data items, each of the plurality of data items comprising at least one of: first state information, an action, and second state information after executing the action when the first state information is satisfied; acquiring the causal model; and generating the trained data generation model at least based on the training set and the causal model.

Ko in view of Matei disclose the method of claim 1. Furthermore, Ko discloses constructing the training set, the training set comprising a plurality of data items, each of the plurality of data items comprising at least one of: first state information, an action, and second state information after executing the action when the first state information is satisfied: Ko teaches storing a first semantic vector and a second semantic vector, which are state data and action data respectively (Paragraph 63), to be used as training data (Paragraph 12). Their use as training data demonstrates their construction into a training set, wherein the training set comprises a plurality of data items, each comprising first state information and an action as provided by the vectors, which is at least one of the listed elements.

Ko discloses generating the trained data generation model at least based on the training set and the causal model: Ko teaches training a model based on the training set (Paragraph 12), wherein training a model would be generating a trained data generation model. Ko does not disclose the use of a causal model, which is taught by Matei below.

Matei discloses acquiring the causal model: Matei teaches using a causal model as a surrogate for a nonlinear block in a dynamic model, wherein the causal model is constructed based on the blocks, which is a form of acquiring the causal model (Paragraph 7). It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to implement a methodology that combined the teachings of Ko and the teachings of Matei. This would have provided the advantage of faster-running models with improved accuracy (Matei, Paragraphs 32-33).
Regarding claim 7, which depends upon claim 6: Claim 7 recites: The method according to claim 6, wherein each of the plurality of data items further comprises attribute information of an object represented by the first state information.

Ko in view of Matei disclose the method of claim 6. Furthermore, Ko discloses the limitations of claim 7: Ko teaches that some of the semantic vectors, such as the first semantic vector, may express user data as a vector of user data and item data, wherein the item data comprises attribute information of an object (i.e., the item) represented by the first state information, i.e., the first semantic vector (Paragraph 40).

Regarding claim 10, which depends upon claim 6: Claim 10 recites: The method according to claim 6, wherein acquiring the causal model comprises: generating the causal model based on at least one data item in the plurality of data items, wherein the causal model indicates causal relations among a plurality of factors in the at least one data item.

Ko in view of Matei disclose the method of claim 6. Furthermore, Matei discloses the limitations of claim 10: Matei teaches that for each of a series of nonlinear blocks, a causal model may be generated, which would be generating a causal model based on at least one data item in the plurality of data items, where the data items are the nonlinear blocks (Paragraph 7). Furthermore, the causal model is used to convert the nonlinear blocks, i.e., a plurality of factors, into a linear relationship, which indicates a causal relationship among the factors in the data item (Paragraph 33). It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to implement a methodology that combined the teachings of Ko and the teachings of Matei. This would have provided the advantage of faster-running models with improved accuracy (Matei, Paragraphs 32-33).
Regarding claim 11, which depends upon claim 6: Claim 11 recites: The method according to claim 6, wherein generating the trained data generation model comprises: constructing a model structure of the data generation model based on the causal model; and training the model structure at least based on the training set, to generate the trained data generation model.

Ko in view of Matei disclose the method of claim 6. Furthermore, Matei discloses constructing a model structure of the data generation model based on the causal model; and training the model structure at least based on the training set, to generate the trained data generation model: Matei teaches that the causal models are used as substitutes for the original nonlinear blocks, making the greater model a model structure based on the causal model. Furthermore, the causal models are trained (Paragraph 7) via training data (Paragraph 33), e.g., a training set, which trains the model structure they are applied to and yields a trained model. Matei does not disclose a data generation model, but this has been disclosed by Ko in claim 1, upon which this claim relies. It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to implement a methodology that combined the teachings of Ko and the teachings of Matei. This would have provided the advantage of faster-running models with improved accuracy (Matei, Paragraphs 32-33).
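Claims 6 and 11 turn on constructing a model structure from a causal model. One common reading of that idea is that each variable's predictor takes only its causal parents as inputs; a minimal illustrative sketch under that assumption (the graph, node names, and wiring are hypothetical examples, not Ko's or Matei's actual models):

```python
# Illustrative sketch of "constructing a model structure based on the
# causal model": each non-root node gets a predictor wired to take only
# its causal parents as inputs.

causal_graph = {          # node -> its causal parents
    "state": [],
    "action": [],
    "next_state": ["state", "action"],
}

def topological_order(graph):
    """Order nodes so every parent precedes its children."""
    order, seen = [], set()
    def visit(node):
        if node in seen:
            return
        seen.add(node)
        for parent in graph[node]:
            visit(parent)
        order.append(node)
    for node in graph:
        visit(node)
    return order

# Model structure derived from the causal graph: one predictor per
# non-root node, its input list taken from the node's parents.
structure = {
    node: {"inputs": parents}
    for node, parents in causal_graph.items() if parents
}

print(topological_order(causal_graph))
print(structure)
```

Training would then fit each predictor on the training-set columns named in its `inputs` list, which is the sense in which the structure is "trained at least based on the training set."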
Regarding claim 13: Claim 13 recites: A model training method, comprising: constructing a training set, the training set comprising a plurality of data items, each of the plurality of data items comprising at least one of: first state information, an action, and second state information after executing the action when the first state information is satisfied; acquiring a causal model corresponding to at least one data item in the training set; and generating a trained data generation model at least based on the training set and the causal model.

Ko discloses constructing the training set, the training set comprising a plurality of data items, each of the plurality of data items comprising at least one of: first state information, an action, and second state information after executing the action when the first state information is satisfied: Ko teaches storing a first semantic vector and a second semantic vector, which are state data and action data respectively (Paragraph 63), to be used as training data (Paragraph 12). Their use as training data demonstrates their construction into a training set, wherein the training set comprises a plurality of data items, each comprising first state information and an action as provided by the vectors, which is at least one of the listed elements.

Ko discloses generating the trained data generation model at least based on the training set and the causal model: Ko teaches training a model based on the training set (Paragraph 12), wherein training a model would be generating a trained data generation model. Ko does not disclose the use of a causal model, which is taught by Matei below.

Matei discloses acquiring a causal model corresponding to at least one data item in the training set: Matei teaches using a causal model as a surrogate for a nonlinear block in a dynamic model, wherein the causal model is constructed based on the blocks, which is a form of acquiring the causal model (Paragraph 7). The causal model corresponds to at least one data item in the training set, as the causal models correspond to the nonlinear blocks (Paragraph 7) and are trained by a particular set of training data (Paragraph 51). It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to implement a methodology that combined the teachings of Ko and the teachings of Matei. This would have provided the advantage of faster-running models with improved accuracy (Matei, Paragraphs 32-33).

Regarding claim 14, which depends upon claim 13: Claim 14 recites: The method according to claim 6, wherein each of the plurality of data items further comprises attribute information of an object represented by the first state information.

Ko in view of Matei disclose the method of claim 13. Furthermore, Ko discloses the limitations of claim 14: Ko teaches that some of the semantic vectors, such as the first semantic vector, may express user data as a vector of user data and item data, wherein the item data comprises attribute information of an object (i.e., the item) represented by the first state information, i.e., the first semantic vector (Paragraph 40).
Regarding claim 17, which depends upon claim 13: Claim 17 recites: The method according to claim 13, wherein acquiring the causal model comprises: generating the causal model based on at least one data item in the plurality of data items, wherein the causal model indicates causal relations among a plurality of factors in the at least one data item.

Ko in view of Matei disclose the method of claim 13. Furthermore, Matei discloses the limitations of claim 17: Matei teaches that for each of a series of nonlinear blocks, a causal model may be generated, which would be generating a causal model based on at least one data item in the plurality of data items, where the data items are the nonlinear blocks (Paragraph 7). Furthermore, the causal model is used to convert the nonlinear blocks, i.e., a plurality of factors, into a linear relationship, which indicates a causal relationship among the factors in the data item (Paragraph 33). It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to implement a methodology that combined the teachings of Ko and the teachings of Matei. This would have provided the advantage of faster-running models with improved accuracy (Matei, Paragraphs 32-33).

Regarding claim 18, which depends upon claim 13: Claim 18 recites: The method according to claim 13, wherein generating the trained data generation model comprises: constructing a model structure of the data generation model based on the causal model; and training the model structure at least based on the training set, to generate the trained data generation model.

Ko in view of Matei disclose the method of claim 13.
Furthermore, Matei discloses constructing a model structure of the data generation model based on the causal model; and training the model structure at least based on the training set, to generate the trained data generation model: Matei teaches that the causal models are used as substitutes for the original nonlinear blocks, making the greater model a model structure based on the causal model. Furthermore, the causal models are trained (Paragraph 7) via training data (Paragraph 33), e.g., a training set, which trains the model structure they are applied to and yields a trained model. Matei does not disclose a data generation model, but this has been disclosed by Ko previously. It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to implement a methodology that combined the teachings of Ko and the teachings of Matei. This would have provided the advantage of faster-running models with improved accuracy (Matei, Paragraphs 32-33).

Claims 19 and 20 recite a device that parallels the method of claims 1 and 6, respectively. Therefore, the analysis discussed above with respect to claims 1 and 6 also applies to claims 19 and 20, respectively. Accordingly, claims 19 and 20 are rejected based on substantially the same rationale as set forth above with respect to claims 1 and 6, respectively.

Claims 4, 8-9, 12, and 15-16 are rejected under 35 U.S.C. 103 as being unpatentable over Ko in view of Matei, further in view of Gutierrez et al. (Pub. No. US 20220215142 A1, filed April 26, 2021, hereinafter Gutierrez).
Regarding claim 4, which depends upon claim 2: Claim 4 recites: The method according to claim 2, wherein the data generation model further comprises a third submodel, and the method further comprises: inputting the first state information, the second action, and the third state information into the third submodel, to determine availability of the result data.

Ko in view of Matei disclose the method of claim 2, upon which claim 4 depends. However, Ko in view of Matei do not fully disclose the limitations of claim 4.

Gutierrez, in the same field of endeavor of machine learning, discloses wherein the data generation model further comprises a third submodel, and the method further comprises: inputting the first state information, the second action, and the third state information into the third submodel, to determine availability of the result data: Gutierrez teaches a machine learning model that determines the validity of a given synthetic data generation model by inputting into it the output synthetic datasets, which would be a form of result data (Paragraph 185). It furthermore determines the availability of the result data because, as seen in the specification (present application, Paragraph 97), availability may be checking whether the result data is real data, and part of the goal of the synthetic data generation model is scrubbing actual real data from the results (Paragraph 14). This model could act as a third model appended to the methodology of Ko, as it would provide Ko with the advantage of improving the validation of models (Gutierrez, Paragraph 13). Furthermore, the first state information, the second action, and the third state information previously taught by Ko could be used as the input information of Gutierrez for similar reasoning.

Gutierrez and the present application are analogous art because they are in the same field of endeavor of machine learning. It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to implement a methodology that combined the teachings of Ko and Matei with the teachings of Gutierrez. This would have provided the advantage of improving the validation of models (Gutierrez, Paragraph 13).

Regarding claim 8, which depends upon claim 7: Claim 8 recites: The method according to claim 7, wherein the data generation model comprises a first submodel, a second submodel, and a third submodel, and wherein an input of the first submodel comprises the first state information, the action, and the second state information, an input of the second submodel comprises the first state information, the action, and the attribute information, and the third submodel is used to determine discrepancies between an output of the second submodel and the second state information.

Ko in view of Matei disclose the method of claim 7, upon which claim 8 depends. Furthermore, Ko discloses wherein the data generation model comprises a first submodel, a second submodel, and a third submodel, and wherein an input of the first submodel comprises the first state information, the action, and the second state information: Ko teaches at least two AI models in its overall method, a first and a second model that are used in conjunction with each other (Paragraphs 57-58), making them a first and second submodel. Furthermore, Ko teaches that the state, action, and next-state data, which would be analogous to the first state information, the action, and the second state information, may be used as input to the model (Paragraph 62). Ko does not teach a third submodel, which is taught by Gutierrez below.
Ko discloses an input of the second submodel comprises the first state information, the action, and the attribute information: Ko teaches that, using the previously disclosed influence parameter, which may act as attribute information (Paragraph 50), as well as the first semantic vector (the first state information) and the second semantic vector (which may be action data), the second AI model may generate a first semantic vector for a user (Paragraph 70).

Gutierrez discloses the third submodel is used to determine discrepancies between an output of the second submodel and the second state information: Gutierrez teaches a machine learning model that determines the validity of a given synthetic data generation model by inputting into it the output synthetic datasets, which would be a form of result data (Paragraph 185), and furthermore determines discrepancies between an output of the model and real data (Paragraph 186). This model could act as a third model appended to the methodology of Ko, as it would provide Ko with the advantage of improving the validation of models (Gutierrez, Paragraph 13). Furthermore, the second submodel and the second state information previously taught by Ko could be used as the real data of Gutierrez for similar reasoning. It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to implement a methodology that combined the teachings of Ko and Matei with the teachings of Gutierrez. This would have provided the advantage of improving the validation of models (Gutierrez, Paragraph 13).
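The third-submodel limitation in claims 4 and 8 amounts to a validity check: score how far the second submodel's output strays from the observed second state, and gate the result data on that score. A minimal illustrative sketch; the vectors, the mean-squared-error metric, and the threshold are hypothetical choices, not taken from Gutierrez:

```python
# Hypothetical "third submodel" as a discrepancy check between the second
# submodel's predicted state and the observed second state.

def discrepancy(predicted_state, observed_state):
    """Mean squared error between predicted and observed state vectors."""
    return sum((p - o) ** 2
               for p, o in zip(predicted_state, observed_state)) / len(observed_state)

def result_available(predicted_state, observed_state, threshold=0.1):
    """Deem the result data usable only if it tracks the real transition."""
    return discrepancy(predicted_state, observed_state) <= threshold

print(result_available([0.9, 1.1], [1.0, 1.0]))  # True  (MSE = 0.01)
print(result_available([2.0, 0.0], [1.0, 1.0]))  # False (MSE = 1.0)
```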
Regarding claim 9, which depends upon claim 8: Claim 9 recites: The method according to claim 8, wherein an input of the second submodel further comprises an influence parameter, and the influence parameter comprises at least one of: attribute information of an object represented by the first state information, or a noise parameter.

Ko in view of Matei, further in view of Gutierrez, disclose the method of claim 8, upon which claim 9 depends. Furthermore, Ko discloses the limitations of claim 9: Ko teaches that a weight of the model is provided as an input influence parameter for the data to be processed by the second model (Paragraph 71). The weight of the first AI model would be attribute information of an object represented by the first state information, as the first state information is used as input into the first AI model and the weight value is calculated through the processing of this state information; the weight is therefore a representation of attributes of the object of the first state information.

Regarding claim 12, which depends upon claim 1: Claim 12 recites: The method according to claim 1, wherein the data to be processed is factual-based data, and the result data is counterfactual data.

Ko in view of Matei disclose the method of claim 1, upon which claim 12 depends. However, Ko in view of Matei do not fully disclose the limitations of claim 12. Ko discloses wherein the data to be processed is factual-based data: Ko teaches that the state information is based on the actual states that may occur, which is factual-based data (Paragraph 62). Gutierrez discloses the result data is counterfactual data: Gutierrez teaches counterfactual data as result data, as it teaches that the synthetic data (i.e., the result) may be counterfactual data (Paragraph 14). It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to implement a methodology that combined the teachings of Ko and Matei with the teachings of Gutierrez. This would have provided the advantage of improving the validation of models (Gutierrez, Paragraph 13).

Regarding claim 15, which depends upon claim 14: Claim 15 recites: The method according to claim 14, wherein the data generation model comprises a first submodel, a second submodel, and a third submodel, and wherein an input of the first submodel comprises the first state information, the action, and the second state information, an input of the second submodel comprises the first state information, the action, and the attribute information, and the third submodel is used to determine discrepancies between an output of the second submodel and the second state information.

Ko in view of Matei disclose the method of claim 14, upon which claim 15 depends. Furthermore, Ko discloses wherein the data generation model comprises a first submodel, a second submodel, and a third submodel, and wherein an input of the first submodel comprises the first state information, the action, and the second state information: Ko teaches at least two AI models in its overall method, a first and a second model that are used in conjunction with each other (Paragraphs 57-58), making them a first and second submodel. Furthermore, Ko teaches that the state, action, and next-state data, which would be analogous to the first state information, the action, and the second state information, may be used as input to the model (Paragraph 62). Ko does not teach a third submodel, which is taught by Gutierrez below.
Ko discloses an input of the second submodel comprises the first state information, the action, and the attribute information: Ko teaches that, using the previously disclosed influence parameter, which may act as attribute information (Paragraph 50), as well as the first semantic vector (the first state information) and the second semantic vector (which may be action data), the second AI model may generate a first semantic vector for a user (Paragraph 70).

Gutierrez discloses the third submodel is used to determine discrepancies between an output of the second submodel and the second state information: Gutierrez teaches a machine learning model that determines the validity of a given synthetic data generation model by inputting into it the output synthetic datasets, which would be a form of result data (Paragraph 185), and furthermore determines discrepancies between an output of the model and real data (Paragraph 186). This model could act as a third model appended to the methodology of Ko, as it would provide Ko with the advantage of improving the validation of models (Gutierrez, Paragraph 13). Furthermore, the second submodel and the second state information previously taught by Ko could be used as the real data of Gutierrez for similar reasoning. It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to implement a methodology that combined the teachings of Ko and Matei with the teachings of Gutierrez. This would have provided the advantage of improving the validation of models (Gutierrez, Paragraph 13).
Regarding claim 16, which depends upon claim 15:

Claim 16 recites: The method according to claim 15, wherein the input of the second submodel further comprises an influence parameter, and the influence parameter comprises at least one of: attribute information of an object represented by the first state information, or a noise parameter.

Ko in view of Matei, further in view of Gutierrez, discloses the method of claim 15 upon which claim 16 depends. Furthermore, Ko discloses the limitations of claim 16: Ko teaches that a weight of the model is also provided as an influence parameter for the data to be processed by the second model as an input, wherein the weight of the first AI model would be attribute information of an object represented by the first state information, as the first state information is used as input into the first AI model and the weight value is calculated through the processing of this state information, and is therefore a representation of attributes of the object represented by the first state information (Paragraph 71).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALEXANDRIA JOSEPHINE MILLER, whose telephone number is (703) 756-5684. The examiner can normally be reached Monday-Thursday, 7:30 am - 5:00 pm, and every other Friday, 7:30 am - 4:00 pm.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Mariela Reyes, can be reached at (571) 270-1006. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center.
Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/A.J.M./
Examiner, Art Unit 2142

/HAIMEI JIANG/
Primary Examiner, Art Unit 2142

Prosecution Timeline

Dec 30, 2022
Application Filed
Dec 10, 2025
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12566943
METHOD AND APPARATUS WITH NEURAL NETWORK QUANTIZATION
2y 5m to grant · Granted Mar 03, 2026
Patent 12481890
SYSTEMS AND METHODS FOR APPLYING SEMI-DISCRETE CALCULUS TO META MACHINE LEARNING
2y 5m to grant · Granted Nov 25, 2025
Study what changed to get past this examiner. Based on 2 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
18%
Grant Probability
90%
With Interview (+71.4%)
4y 5m
Median Time to Grant
Low
PTA Risk
Based on 27 resolved cases by this examiner. Grant probability derived from career allow rate.
