Prosecution Insights
Last updated: April 19, 2026
Application No. 18/325,533

Federated Learning Method and Apparatus, Device, System, and Computer-Readable Storage Medium

Non-Final OA: §101, §103
Filed: May 30, 2023
Examiner: MRABI, HASSAN
Art Unit: 2147
Tech Center: 2100 — Computer Architecture & Software
Assignee: Huawei Technologies Co., Ltd.
OA Round: 1 (Non-Final)

Grant Probability: 78% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 6m
Grant Probability With Interview: 99%

Examiner Intelligence

Grants 78% of resolved cases, above average for the Tech Center.

Career Allowance Rate: 78% (285 granted / 363 resolved), +23.5% vs TC average
Interview Lift: +32.4% on resolved cases with an interview
Typical Timeline: 2y 6m average prosecution; 19 applications currently pending
Career History: 382 total applications, across all art units

Statute-Specific Performance

§101: 16.7% (-23.3% vs TC avg)
§103: 54.4% (+14.4% vs TC avg)
§102: 9.9% (-30.1% vs TC avg)
§112: 6.2% (-33.8% vs TC avg)

Tech Center averages are estimates. Based on career data from 363 resolved cases.

Office Action

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

This Office Action is sent in response to Applicant's communication received on 05/30/2023 for application number 18/325,533. The Office hereby acknowledges receipt of the following, which have been placed of record in the file: Specification, Drawings, Abstract, Oath/Declaration, and Claims. Claims 1-14, 15-18, and 19-20 are presented for examination.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 12/17/2024 was filed prior to the mailing of this Office Action. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-14, 15-18, and 19-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.

Step 1: Claims 1-14, 15-18, and 19-20 are drawn to a method, a device, and a system, respectively, each of which is within the four statutory categories (e.g., a process, a machine).

Step 2A - Prong One: In prong one of Step 2A, the claims are analyzed to evaluate whether they recite a judicial exception. Claim 1 recites:
receiving data distribution information from a plurality of second devices participating in federated learning, wherein the data distribution information comprises at least one of first gain information or label type information, wherein the first gain information indicates a first correction degree for a first model to adapt to a current training sample of a second device of the plurality of second devices, and wherein the label type information indicates a type corresponding to a label of the current training sample;

selecting a matched federated learning policy based on the data distribution information; and

sending a parameter reporting policy corresponding to the federated learning policy to at least one second device in the plurality of second devices.

The limitation "receiving data distribution information from a plurality of second devices ..." recites a concept that can practically be performed in the human mind, or by a human using pen and paper as a physical aid. Examples of mental processes include observations, evaluations, judgments, and opinions. For example, the claimed "receiving," under its broadest reasonable interpretation when read in light of the specification, encompasses receiving training sample feature information sent by the plurality of second devices, where the training sample feature information represents label distribution or a sample quantity. Thus, the limitation is a mental process.

The limitation "sending a parameter reporting policy corresponding to the federated learning policy to at least one second device in the plurality of second devices" likewise recites a concept that can practically be performed in the human mind, or by a human using pen and paper as a physical aid. Examples of mental processes include observations, evaluations, judgments, and opinions.
For example, the claimed "sending," under its broadest reasonable interpretation when read in light of the specification, encompasses sending a reporting policy that corresponds to the federated learning policy. Thus, the limitation is a mental process.

Step 2A - Prong Two: Claim 1 recites additional elements such as "federated learning" and "selecting a matched federated learning policy...". These elements are recited at a high level of generality and merely pair a generic computer (or an equivalent) with the judicial exception, merely include instructions to implement an abstract idea on a computer, or merely use a computer as a tool to perform an abstract idea, as discussed in MPEP § 2106.05(f). The "applying" additional elements amount to merely the words "apply it" (or an equivalent), or are mere instructions to implement an abstract idea or other exception on a computer. The limitations do not integrate the judicial exception into a practical application.

Dependent claims 2-14, 16-18, and 20 fail to include any additional elements; each of the limitations recited in these dependent claims is further part of the abstract idea identified above. The Examiner has therefore determined that the elements, or combination of additional elements, do not integrate the abstract idea into a practical application. Accordingly, the claims are directed to an abstract idea.

Step 2B: The claim does not provide an inventive concept (significantly more than the abstract idea), and is therefore ineligible. The "federated learning" and "selecting a matched federated learning policy..." steps are considered insignificant extra-solution activity.
The limitations amount to mere data gathering and output, and to processing input data, using federated learning recited at a high level of generality on a generic computer. Even when considered in combination, the additional elements represent mere instructions to apply an exception and insignificant extra-solution activity, which cannot provide an inventive concept. Dependent claims 2-14, 16-18, and 20 fail to include any additional elements; their limitations are further part of the abstract idea identified above. Accordingly, the claims are directed to an abstract idea without significantly more.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office Action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-5, 8-9, and 12-20 are rejected under 35 U.S.C. 103 as being unpatentable over Prakash et al., US Patent Application Publication US 20190138934 A1 (hereinafter Prakash), in view of Pastore et al.,
US Patent Application Publication US 20220383132 A1 (hereinafter Pastore), and further in view of Hu et al., US Patent Application Publication US 20250209383 A1 (hereinafter Hu).

Regarding claim 1, Prakash teaches a method implemented by a first device, the method comprising: receiving data distribution information from a plurality of second devices participating in federated learning, wherein the data distribution information comprises at least one of first gain information or label type information (FIG. 1, Abstract, [0035-0037], [0041-0049], [0072-0074], [0131], wherein Prakash describes federated learning for distributed data in which learning takes place by a federation of client compute nodes coordinated by a central server, and the clients' computers send information to a server); and selecting a matched federated learning policy based on the data distribution information ([0045], wherein Prakash uses a load balancing policy (or multiple load balancing policies) to partition the computational load across the plurality of edge compute nodes, wherein the load balancing policy may define one or more actions and the conditions under which the actions are executed; the load balancing policy may include, for example, algorithms, weight factors for individual pieces of data, analysis techniques/functions, system rules, policy definitions, ML models to be solved or otherwise obtained, and ML algorithms to use to obtain the ML models).

Prakash does not teach wherein the first gain information indicates a first correction degree for a first model to adapt to a current training sample of a second device of the plurality of second devices, and wherein the label type information indicates a type corresponding to a label of the current training sample.
However, in the analogous art of federated learning methods, Pastore teaches wherein the first gain information indicates a first correction degree for a first model to adapt to a current training sample of a second device of the plurality of second devices, and wherein the label type information indicates a type corresponding to a label of the current training sample (Abstract, [0008], [0079], wherein Pastore teaches a deep learning model in federated learning that is tailored to the semantic meanings of different participants on different devices; the deep learning model adapts the data samples sent from devices with different labelling to correct unique semantic labels). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine Prakash with Pastore by incorporating Pastore's first gain information indicating a first correction degree, and label type information indicating a type corresponding to a label of the current training sample, into Prakash's method of receiving data distribution information from a plurality of second devices participating in federated learning, for the purpose of incorporating a deep learning model for the distributed computing devices to perform machine learning classification in federated learning (Pastore: Abstract).

Prakash does not teach sending a parameter reporting policy corresponding to the federated learning policy to at least one second device in the plurality of second devices.
However, in the analogous art of federated learning methods, Hu teaches sending a parameter reporting policy corresponding to the federated learning policy to at least one second device in the plurality of second devices (FIG. 1, [0004], [0026], [0039-0040], [0049], wherein Hu identifies the effective set of nodes, models, and parameters (or parameter groups) based on the attribute information captured in the knowledge graph, compiles policies for federated learning, and sends instructions to the computing system; the knowledge graph system provides an effective solution for tracking and enforcing the data sharing policies between devices, as illustrated in FIG. 1, for the federated learning process). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine Prakash with Hu by incorporating Hu's sending of a parameter reporting policy corresponding to the federated learning policy into Prakash's method of receiving data distribution information from a plurality of second devices participating in federated learning, for the purpose of enforcing data sharing policies for particular datasets and data type information for the data samples on the federated nodes (Hu: [0026]).
Regarding claim 2, Prakash as modified by Pastore and Hu teaches wherein selecting the matched federated learning policy based on the data distribution information comprises selecting the matched federated learning policy based on a difference between data distribution, wherein the difference between the data distribution is based on the data distribution information ([0049], wherein Hu extracts one or more model attributes capturing information related to the models and training process, wherein the model attributes capture information in the knowledge graph related to training efficiency or other model metrics, and determines that different paths have different policies that may not be consistent with each other).

Regarding claim 3, Prakash as modified by Pastore and Hu teaches wherein the data distribution information further comprises the first gain information and the label type information, and wherein prior to selecting the matched federated learning policy, the method further comprises: determining feature distribution information based on the first gain information of the plurality of second devices, wherein the feature distribution information indicates whether feature distribution of current training samples of different second devices is the same; and determining the difference between the data distribution using the feature distribution information and the label type information ([0005], [0008], [0028], wherein Pastore codifies accurate labels and names for data samples even if the parties/devices give different labels to the same sample type, and improves the federated learning system's adaptability to unique samples and private semantic labels for the samples of each participating party).
Regarding claim 4, Prakash as modified by Pastore and Hu teaches wherein selecting the matched federated learning policy comprises selecting a model average fusion as the matched federated learning policy when the feature distribution information indicates that the feature distribution of the current training samples is the same and that the label type information is the same, and wherein the model average fusion is for performing federated learning in a gain information averaging manner ([0068], [0072], [0087], wherein Pastore describes an aggregator that may receive cluster information from distributed computing devices; the cluster information may relate to identified clusters in sample data of the distributed computing devices and may include centroid information per cluster for clustering data samples based on labels).

Regarding claim 5, Prakash as modified by Pastore and Hu teaches wherein selecting the matched federated learning policy comprises selecting a model differentiated update as the matched federated learning policy when the feature distribution information indicates that the feature distribution is different and that the label type information is the same, and wherein the model differentiated update is for performing federated learning in a gain information differentiated processing manner ([0005], [0008], [0060], [0065], [0084], wherein Pastore describes different scenarios of having different or the same data samples, or different or the same names or labels for the sample data, and incorporates a semantic meaning for performing federated learning processing).
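Read together, claims 4-7 define a 2x2 mapping from whether feature distribution and label types match across clients to one of four policies. A minimal sketch of that mapping follows; the function name and policy-name strings are hypothetical labels for the claimed policies.

```python
def match_policy(same_feature_distribution: bool, same_label_types: bool) -> str:
    """Map the two distribution comparisons to a policy, per claims 4-7."""
    if same_feature_distribution and same_label_types:
        return "model_average_fusion"                 # claim 4: average gain info
    if not same_feature_distribution and same_label_types:
        return "model_differentiated_update"          # claim 5: differentiated processing
    if same_feature_distribution and not same_label_types:
        return "model_partial_update"                 # claim 6: partial averaging
    return "model_partial_differentiated_update"      # claim 7: partial differentiated
```

The four branches are mutually exclusive and exhaustive, which is why the claims can treat them as alternative dependent limitations.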
Regarding claim 8, Prakash as modified by Pastore and Hu teaches wherein the data distribution information further comprises the first gain information, and wherein prior to selecting the matched federated learning policy, the method further comprises: determining feature distribution information based on the first gain information of the plurality of second devices, wherein the feature distribution information indicates whether feature distribution of current training samples of different second devices is the same; and determining the difference between the data distribution based on the feature distribution information ([0090], wherein Pastore describes multiple terminal devices participating in the current round of federated learning, with the network device sending the first information to the multiple terminal devices respectively; as the current data transmission capabilities and/or data processing capabilities of the terminal devices participating in federated learning may be the same or different, the configuration information included in the first information generated by the network device for the multiple terminal devices may likewise be the same or different, which is not limited by the embodiment of the present application).
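Claims 3 and 8 leave open how the first device decides, from the first gain information alone, whether clients' feature distributions are the same. One plausible heuristic, offered purely as an illustrative assumption rather than anything disclosed by the application or the cited references, is to threshold the pairwise cosine similarity of the reported gain vectors:

```python
import math

def cosine(u, v):
    """Cosine similarity of two equal-length gain vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def same_feature_distribution(gain_vectors, threshold=0.9):
    """Treat feature distributions as 'the same' if all pairs of gain vectors align."""
    return all(
        cosine(gain_vectors[i], gain_vectors[j]) >= threshold
        for i in range(len(gain_vectors))
        for j in range(i + 1, len(gain_vectors))
    )
```

The intuition is that clients whose local data pull the shared model in nearly the same direction likely have similar feature distributions; the threshold value is arbitrary here.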
Regarding claim 9, Prakash as modified by Pastore and Hu teaches receiving second gain information from the plurality of second devices, wherein the second gain information is based on the parameter reporting policy and the current training sample; performing federated fusion on the second gain information based on the federated learning policy to obtain third gain information corresponding to each second device; and sending, to the second device in the at least one second device, the third gain information corresponding to the second device or a second model based on the corresponding third gain information and the first model of the second device (Abstract, [0002-0008], [0032], [0079], wherein Pastore teaches a deep learning model in federated learning that is tailored to the semantic meanings of different participants on different devices, adapting the data samples sent from devices with different labelling to correct unique semantic labels; Pastore describes an aggregator that sends a tuned deep learning model to individual devices without those devices receiving raw data from the other devices, where the deep learning model may be for the distributed computing devices to perform machine learning classification in federated learning, with a global model or neural network then sent to the first, second, and third computers).

Regarding claim 12, Prakash as modified by Pastore and Hu teaches wherein the first model comprises a second first model of the second device, wherein the first gain information comprises second gain information corresponding to the second first model, and wherein the second gain information indicates a second correction degree to the second first model to adapt to the current training sample of the second device (Abstract, [0008], [0079], wherein Pastore teaches a deep learning model in federated learning that is tailored to the semantic meanings of different participants on different devices.
The deep learning model adapts the data samples sent from devices with different labelling to correct unique semantic labels).

Regarding claim 13, Prakash as modified by Pastore and Hu teaches wherein the first model comprises a second first model of the second device and a third first model of another second device participating in federated learning, wherein the first gain information comprises second gain information corresponding to the second first model and third gain information corresponding to the third first model, and wherein the second gain information indicates a second correction degree to the second first model to adapt to the current training sample of the second device ([0112], [0162], [0262], wherein Pastore teaches that the sample data adopted for model training can be flexibly adjusted according to the training task; if the ML model is centrally optimized in a certain aspect, a better training effect can be realized; in addition, according to the different characteristics of sample data collected from the terminals, different types of sample data are selected for different terminal devices to perform model training, so that an ML model with broader representativeness can be obtained, wherein a second model is trained according to the configuration information included in the new first information and sample data stored in the terminal device).

Regarding claim 14, Prakash as modified by Pastore and Hu teaches sending, prior to receiving the data distribution information, the third first model to the second device (FIG. 1, [0004], [0026], [0039-0040], [0049], wherein Hu identifies the effective set of nodes, models, and parameters (or parameter groups) based on the attribute information captured in the knowledge graph, compiles policies for federated learning, and sends instructions to the computing system, with the knowledge graph system providing an effective solution for tracking and enforcing the data sharing policies between devices, as illustrated in FIG.
1, for the federated learning process), ([0090], wherein Pastore describes multiple terminal devices participating in the current round of federated learning, with the network device sending the first information to the multiple terminal devices respectively; as the current data transmission capabilities and/or data processing capabilities of the terminal devices may be the same or different, the configuration information included in the first information generated by the network device for the multiple terminal devices may likewise be the same or different, which is not limited by the embodiment of the present application).

Regarding claim 15, Prakash teaches a first device comprising: a memory configured to store instructions; and one or more processors coupled to the memory and configured to execute the instructions to cause the first device to perform the recited operations ([0048]). The claim is similar in scope to claim 1 and is therefore rejected under similar rationale.

Regarding claim 16, the claim is similar in scope to claim 2 and is therefore rejected under similar rationale.

Regarding claim 17, the claim is similar in scope to claim 3 and is therefore rejected under similar rationale.

Regarding claim 18, the claim is similar in scope to claim 4 and is therefore rejected under similar rationale.

Regarding claim 19, Prakash teaches a federated learning system comprising: a first device comprising a first memory configured to store first instructions; and one or more first processors coupled to the first memory and configured to execute the first instructions to cause the first device to perform the recited operations ([0048]). The claim is similar in scope to claim 1 and is therefore rejected under similar rationale.

Regarding claim 20, the claim is similar in scope to claim 3 and is therefore rejected under similar rationale.

Claims 6-7 are rejected under 35 U.S.C. 103 as being unpatentable over Prakash et al.,
US Patent Application Publication US 20190138934 A1 (hereinafter Prakash), in view of Pastore et al., US Patent Application Publication US 20220383132 A1 (hereinafter Pastore), further in view of Hu et al., US Patent Application Publication US 20250209383 A1 (hereinafter Hu), and further in view of Tsuchida, US Patent Application Publication US 20230214666 A1 (hereinafter Tsuchida).

Regarding claim 6, Prakash, Pastore, and Hu do not teach wherein selecting the matched federated learning policy comprises selecting a model partial update as the matched federated learning policy when the feature distribution information indicates that the feature distribution is the same and that the label type information is different, and wherein the model partial update is for performing federated learning in a partial gain information averaging manner.

However, in the analogous art of federated learning methods, Tsuchida teaches wherein selecting the matched federated learning policy comprises selecting a model partial update as the matched federated learning policy when the feature distribution information indicates that the feature distribution is the same and that the label type information is different, and wherein the model partial update is for performing federated learning in a partial gain information averaging manner (Abstract, claims 1 and 6-7, [0014-0016], [0029], wherein Tsuchida teaches a machine learning method in which a client, connectable to a server having a federated learning part that exchanges model update parameters including gradient information with the client by federated learning to train a target model, trains a classification model that infers a property of input data from the gradient information; the client computes the gradient information of the target model using training data, the target model, and the classification model, and transmits the gradient information to the server, wherein the property of
the input data that the classification model infers can be set for each client, and the property classification model training part trains the classification model using the target model and second training data labelled with a teacher label regarding the property of the input data). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine Prakash, Pastore, and Hu with Tsuchida by incorporating Tsuchida's model partial update, selected when the feature distribution is the same and the label type information is different and performed in a partial gain information averaging manner, into the combination's method of receiving data distribution information from a plurality of second devices participating in federated learning, for the purpose of incorporating a target model training part that computes the gradient information of the target model using training data, the target model, and the classification model, and transmits the gradient information to the server (Tsuchida: Abstract).
Regarding claim 7, Prakash as modified by Pastore, Hu, and Tsuchida teaches wherein selecting the matched federated learning policy comprises selecting a model partial differentiated update as the matched federated learning policy when the feature distribution information indicates that the feature distribution is different and that the label type information is different, and wherein the model partial differentiated update is for performing federated learning in a partial gain information differentiated processing manner (Abstract, [0008], [0079], wherein Pastore teaches a deep learning model in federated learning that is tailored to the semantic meanings of different participants on different devices, adapting the data samples sent from devices with different labelling to correct unique semantic labels), (Abstract, claims 1 and 6-7, [0014-0016], [0029], wherein Tsuchida teaches a machine learning method in which a client, connectable to a server having a federated learning part that exchanges model update parameters including gradient information with the client by federated learning to train a target model, trains a classification model that infers a property of input data from the gradient information; the client computes the gradient information of the target model using training data, the target model, and the classification model, and transmits the gradient information to the server, wherein the property of the input data that the classification model infers can be set for each client, and the property classification model training part trains the classification model using the target model and second training data labelled with a teacher label regarding the property of the input data).

Claims 10-11 are rejected under 35 U.S.C. 103 as being unpatentable over Prakash et al., US Patent Application Publication US 20190138934 A1 (hereinafter Prakash), in view of Pastore et al.,
US Patent Application Publication US 20220383132 A1 (hereinafter Pastore), further in view of Hu et al., US Patent Application Publication US 20250209383 A1 (hereinafter Hu), and further in view of Vandikas et al., International Publication WO 2021071399 A1 (hereinafter Vandikas).

Regarding claim 10, Prakash, Pastore, and Hu do not teach wherein prior to sending the parameter reporting policy, the method further comprises receiving training sample feature information from the plurality of second devices, wherein the training sample feature information represents label distribution or a sample quantity, and wherein sending the parameter reporting policy corresponding to the federated learning policy to the at least one second device comprises sending, to the second device in the at least one second device, a hyperparameter for obtaining the second gain information, and wherein the hyperparameter is based on the training sample feature information from the second device.

However, in the analogous art of federated learning methods, Vandikas teaches wherein prior to sending the parameter reporting policy, the method further comprises receiving training sample feature information from the plurality of second devices, wherein the training sample feature information represents label distribution or a sample quantity, and wherein sending the parameter reporting policy corresponding to the federated learning policy to the at least one second device comprises sending, to the second device in the at least one second device, a hyperparameter for obtaining the second gain information, and wherein the hyperparameter is based on the training sample feature information from the second device (Abstract; page 13, paragraphs 4-8; page 14, paragraphs 4-5, wherein Vandikas obtains the quantity of labels for each client, and a Gaussian mixture model of the data set distribution is obtained.
In this example, the representation of the data distribution therefore comprises the quantity of labels per category and the Gaussian mixture model; however, it will be appreciated that the representation of the data distribution may comprise any suitable parameters or descriptors. In this example, each client within a learning group receives hyperparameters from the global server, the hyperparameters being appropriate for the learning group to which the client belongs; the hyperparameters may include features which are generic to all members of all learning groups, features which are generic to all members of the learning group to which the client belongs, and/or features which are specific to the client). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine Prakash, Pastore, and Hu with Vandikas by incorporating Vandikas's receiving of training sample feature information representing label distribution or a sample quantity, and sending of a hyperparameter, based on that information, for obtaining the second gain information, into the combination's method of receiving data distribution information from a plurality of second devices participating in federated learning, for the purpose of supplementing a centralized data set used to train a machine learning model by employing
distributed machine learning techniques (Vandikas: page. 2, paragraph 1). Regarding claim 11, Prakash as modified by Pastore, Hu and Vandikas teaches wherein the training sample feature information comprises the label distribution information or the sample quantity, wherein the label distribution information comprises at least one of label proportion information or a first quantity of labels of each type, wherein the label proportion information indicates a proportion of labels of each type in labels of the current training samples, and wherein the sample quantity indicates a second quantity of samples comprised in the current training sample ([0090] wherein Pastore describes multiple terminal devices participating in the current round of federated learning, the network device sends the first information to the multiple terminal devices respectively. As the current data transmission capabilities and/or the data processing capabilities of the terminal devices participating in federated learning may be same, or may be different, therefore the configuration information included in the first information generated by the network device for the multiple terminal devices respectively may be same, or may be different, which is not limited by the embodiment of the present application), (Abstract, page. 13, paragraphs 4-8 page. 14, paragraphs 4-5, wherein Vandikas obtaining quantity of labels for each client and a Gaussian mixture model of the data set distribution is obtained. In this example, the representation of the data distribution therefore comprises the quantity of labels per category and the Gaussian mixture model. However, it will be appreciated that the representation of the data distribution may comprise any suitable parameters or descriptors. In this example, each client within a learning group receives hyper-parameters from the global server, the hyperparameters being appropriate for the learning group to which the client belongs. 
The hyperparameters may include particular features which are generic to all members of all learning groups, features which are generic to all members of the learning group to which the client belongs, and/or features which are specific to the client). Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Any inquiry concerning this communication or earlier communications from the examiner should be directed to HASSAN MRABI whose telephone number is (571)272-8875. The examiner can normally be reached on Monday-Friday, 7:30am-5pm. Alt, Friday, EST. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Viker Lamardo can be reached on 571-270-5871. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /HASSAN MRABI/Examiner, Art Unit 2144
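The claim-10/11 mechanism at issue (second devices report training sample feature information such as label counts and sample quantity; the first device returns per-device hyperparameters derived from it) can be illustrated with a minimal sketch. The function names and the sample-share weighting rule below are hypothetical, chosen only for illustration; they are not taken from the application or the cited references.

```python
# Sketch of the claimed flow: clients ("second devices") report training-sample
# feature information, and the server ("first device") derives a per-client
# hyperparameter from it. The weighting rule is a hypothetical placeholder.
from collections import Counter

def report_features(labels):
    """Client side: summarize local data as label distribution + sample quantity."""
    return {"label_counts": Counter(labels), "sample_quantity": len(labels)}

def assign_hyperparameters(reports, total_rounds=10):
    """Server side: send each client a hyperparameter based on its report."""
    total = sum(r["sample_quantity"] for r in reports.values())
    policy = {}
    for client_id, r in reports.items():
        policy[client_id] = {
            # hypothetical hyperparameter: aggregation weight = share of all samples
            "aggregation_weight": r["sample_quantity"] / total,
            "rounds": total_rounds,
        }
    return policy

reports = {
    "dev_a": report_features(["cat", "cat", "dog"]),
    "dev_b": report_features(["dog"]),
}
policy = assign_hyperparameters(reports)
print(policy["dev_a"]["aggregation_weight"])  # 0.75
```

In this sketch, a device holding three of the four total samples receives an aggregation weight of 0.75, mirroring (in simplified form) a policy where the hyperparameter depends on the reported sample quantity.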

Prosecution Timeline

May 30, 2023
Application Filed
Mar 21, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12579411
RESONATOR NETWORK BASED NEURAL NETWORK
2y 5m to grant · Granted Mar 17, 2026
Patent 12579710
Transforming Content Across Visual Mediums Using Artificial Intelligence and User Generated Media
2y 5m to grant · Granted Mar 17, 2026
Patent 12554924
Computer-Implemented Methods and Systems for Generative Text Painting
2y 5m to grant · Granted Feb 17, 2026
Patent 12547905
PROBABILISTIC ENTITY-CENTRIC KNOWLEDGE GRAPH COMPLETION
2y 5m to grant · Granted Feb 10, 2026
Patent 12536782
METHOD AND APPARATUS FOR TRAINING CLASSIFICATION TASK MODEL, DEVICE, AND STORAGE MEDIUM
2y 5m to grant · Granted Jan 27, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
78%
Grant Probability
99%
With Interview (+32.4%)
2y 6m
Median Time to Grant
Low
PTA Risk
Based on 363 resolved cases by this examiner. Grant probability derived from career allow rate.
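The headline 78% figure is consistent with a direct derivation from the examiner's career record shown above (285 granted of 363 resolved). A quick check, assuming the displayed percentage is truncated to a whole percent rather than rounded:

```python
import math

granted, resolved = 285, 363                 # career record shown above
allow_rate = granted / resolved              # ~0.7851
display_pct = math.floor(allow_rate * 100)   # truncate to a whole percent
print(display_pct)  # 78
```

Note that conventional rounding of 78.51% would display 79%, so the truncation step here is an assumption about how the page formats the figure, not a documented behavior.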
