Prosecution Insights
Last updated: April 19, 2026
Application No. 17/914,297

FEDERATED MIXTURE MODELS

Non-Final OA — §101, §103
Filed
Sep 23, 2022
Examiner
SOMERS, MARC S
Art Unit
2159
Tech Center
2100 — Computer Architecture & Software
Assignee
Qualcomm Technologies, Inc.
OA Round
3 (Non-Final)
Grant Probability: 65% (Moderate)
Expected OA Rounds: 3-4
Time to Grant: 4y 0m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 65% (364 granted / 563 resolved; +9.7% vs TC avg)
Interview Lift: +34.6% in resolved cases with interview (strong)
Avg Prosecution: 4y 0m typical timeline (36 currently pending)
Total Applications: 599 across all art units
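The headline figures in this panel can be reproduced from the stated counts. A minimal sketch follows; note that treating the interview lift as additive percentage points capped at 99% is an assumption about the dashboard's methodology, not something stated on the page.

```python
# Reproduce the examiner panel's headline numbers from its stated counts.
granted, resolved = 364, 563          # from the career-history panel

allow_rate = granted / resolved       # career allow rate
print(f"Career allow rate: {allow_rate:.0%}")    # 65%

# Assumption: the "+34.6% interview lift" is added in percentage
# points and the displayed result is capped at 99%.
with_interview = min(allow_rate + 0.346, 0.99)
print(f"With interview: {with_interview:.0%}")   # 99%
```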

Statute-Specific Performance

§101: 18.0% (-22.0% vs TC avg)
§103: 47.9% (+7.9% vs TC avg)
§102: 10.1% (-29.9% vs TC avg)
§112: 15.1% (-24.9% vs TC avg)
Tech Center averages are estimates. Based on career data from 563 resolved cases.

Office Action — §101, §103

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. The amendments were received on 12/18/2025. Claims 1-7, 9-16, and 18 are pending, where claims 1-7, 9-16, and 18 were previously presented and claims 8 and 17 were cancelled.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 1/8/2026 has been entered.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: "Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title."

Claims 1-7, 9-16, and 18 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

With regard to claim 1:

Step 2A, Prong One: The claim recites the following limitations, which are drawn to an abstract idea: "A processor-implemented method". As seen from above, the identified limitations recite concepts associated with an abstract idea; thus the respective claim recites a judicial exception (see MPEP 2106.04(a)) and requires further analysis as discussed below.

Step 2A, Prong Two: The following limitations have been identified as additional elements, as discussed below.
- "performed by one or more processors" (recites generic computer hardware to implement the judicial exception, which amounts to apply-it type limitations of using a computer to perform the abstract idea; see MPEP 2106.05(f));
- receiving a neural network model from a server, the neural network model being collaboratively trainable across multiple clients via a set of specialized neural network models, each specialized neural network model being associated with a subset of a first dataset (recites insignificant extra-solution activity of receiving information over a network; see MPEP 2106.05(g));
- generating a local dataset including one or more local examples (recites insignificant extra-solution activity of mere data gathering; see MPEP 2106.05(g)); and
- generating a personalized model by fine tuning the neural network model based on the selected one or more specialized neural network models and the local dataset (recites training/learning for a machine learning model, which relates to merely using a computer as a tool to perform the abstract idea; see MPEP 2106.05(f)).

As seen from the above discussion, the identified limitations do not integrate the judicial exception into a practical application (see MPEP 2106.04(d)). This judicial exception is not integrated into a practical application because the additional elements merely recite receiving information/data gathering and merely configuring/training a machine learning model with respect to its usage to perform the abstract idea.
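For orientation, the client-side flow these limitations describe (receiving a global model, gating-based selection of specialized "expert" models for the local examples, and fine-tuning a personalized model) can be sketched as below. The linear models, the dot-product gate, and the interpolation step are hypothetical stand-ins for illustration, not the application's disclosed architecture.

```python
# Illustrative sketch of the claimed client-side flow: receive a global
# model, gate local examples to specialized "experts", and fine-tune a
# personalized model. All models here are hypothetical stand-ins.
import numpy as np

rng = np.random.default_rng(0)
n_experts, dim = 4, 8

experts = [rng.normal(size=dim) for _ in range(n_experts)]  # specialized models
global_model = np.mean(experts, axis=0)                     # model received from server
local_examples = rng.normal(size=(16, dim))                 # client's local dataset

def gate(example: np.ndarray) -> int:
    """Gating function: pick the expert whose weights best match the example."""
    return int(np.argmax([example @ w for w in experts]))

selected = {gate(x) for x in local_examples}   # experts selected for the local data

# "Fine-tune" the received model toward the selected experts (a toy
# interpolation step standing in for gradient-based fine-tuning).
personalized = global_model.copy()
for idx in selected:
    personalized += 0.1 * (experts[idx] - personalized)
```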
Step 2B: Below is the analysis of the claims:

- "performed by one or more processors" (recites generic computer hardware to implement the judicial exception, which amounts to apply-it type limitations of using a computer to perform the abstract idea; see MPEP 2106.05(f));
- receiving a neural network model from a server, the neural network model being collaboratively trainable across multiple clients via a set of specialized neural network models, each specialized neural network model being associated with a subset of a first dataset (recites well-understood, routine, and conventional activity of receiving information over a network; see MPEP 2106.05(d));
- generating a local dataset including one or more local examples (recites well-understood, routine, and conventional activity of mere data gathering; see MPEP 2106.05(d)); and
- generating a personalized model by fine tuning the neural network model based on the selected one or more specialized neural network models and the local dataset (recites training/learning for a machine learning model, which relates to merely using a computer as a tool to perform the abstract idea; see MPEP 2106.05(f)).

As seen from above, the respective claim elements taken individually do not amount to significantly more than the judicial exception. When taken as a whole (in combination), the claim also does not amount to significantly more than the abstract idea because the additional elements merely recite receiving information/data gathering and merely configuring/training a machine learning model with respect to its usage to perform the abstract idea.

With regard to claim 2, this claim recites receiving an input; and generating an inference via the personalized model based on the input (which recites a mental process step of evaluating/inferring information based on some stimulus, i.e., an input).

With regard to claim 3, this claim recites in which the first dataset comprises non-independent and identically distributed (non-i.i.d.)
data (which recites field of use/technological environment limitations describing the meaning or differences of the data that is being used in the dataset; see MPEP 2106.05(h)).

With regard to claim 4:

Step 2A, Prong One: The claim recites the following limitations, which are drawn to an abstract idea: "A processor-implemented method". As seen from above, the identified limitations recite concepts associated with an abstract idea; thus the respective claim recites a judicial exception (see MPEP 2106.04(a)) and requires further analysis as discussed below.

Step 2A, Prong Two: The following limitations have been identified as additional elements, as discussed below.

- "performed by one or more processors" (recites generic computer hardware to implement the judicial exception, which amounts to apply-it type limitations of using a computer to perform the abstract idea; see MPEP 2106.05(f));
- receiving a local update of the neural network model from a subset of multiple users (recites insignificant extra-solution activity of receiving information over a network; see MPEP 2106.05(g)), each of the local updates being related to one or more subsets of a dataset according to a gating function that indicates the one or more subsets of the dataset to which each local update relates (recites field of use limitations describing the meaning of the data and what it represents; see MPEP 2106.05(h)); and
- transmitting the global update to the subset of the multiple users (recites insignificant extra-solution activity of transmitting information over a network; see MPEP 2106.05(g)).

As seen from the above discussion, the identified limitations do not integrate the judicial exception into a practical application (see MPEP 2106.04(d)). This judicial exception is not integrated into a practical application because the additional elements merely recite receiving/transmitting information/data.
Step 2B: Below is the analysis of the claims:

- "performed by one or more processors" (recites generic computer hardware to implement the judicial exception, which amounts to apply-it type limitations of using a computer to perform the abstract idea; see MPEP 2106.05(f));
- receiving a local update of the neural network model from a subset of multiple users (recites well-understood, routine, and conventional activity of receiving information over a network; see MPEP 2106.05(d)), each of the local updates being related to one or more subsets of a dataset according to a gating function that indicates the one or more subsets of the dataset to which each local update relates (recites field of use limitations describing the meaning of the data and what it represents; see MPEP 2106.05(h)); and
- transmitting the global update to the subset of the multiple users (recites well-understood, routine, and conventional activity of transmitting information over a network; see MPEP 2106.05(d)).

As seen from above, the respective claim elements taken individually do not amount to significantly more than the judicial exception. When taken as a whole (in combination), the claim also does not amount to significantly more than the abstract idea because the additional elements merely recite receiving/transmitting information/data.

With regard to claim 5, this claim recites in which the global update is computed by aggregating the local updates (which recites mental process steps of aggregating or summing values together via mathematical calculations).

With regard to claim 6, this claim recites in which the neural network model comprises multiple independent neural network models (recites, at a high level of generality, merely using multiple machine learning models as a tool to implement the abstract idea, similar to reciting using multiple processor cores; see MPEP 2106.05(f)).
With regard to claim 7, this claim recites in which each user of the multiple users has a different mixture of the multiple independent neural network models based on data characteristics for local data (recites field of use limitations describing at a high level the intended sorting or partitioning of data and respective models to distributed sites; see MPEP 2106.05(h)).

With regard to claim 9, this claim recites in which the dataset includes non-independent and identically distributed (non-i.i.d.) data (recites field of use limitations describing the intended particular relationship of the underlying data to be used; see MPEP 2106.05(h)).

With regard to claims 10-12, these claims are substantially similar to claims 1-3 and are rejected for the same reasons as discussed above. The main difference of claims 10-12 from claims 1-3 is that claims 10-12 recite a memory and a processor (recites usage of generic computer elements to implement the abstract idea in a computer environment; see MPEP 2106.05(f)).

With regard to claims 13-16 and 18, these claims are substantially similar to claims 4-7 and 9 and are rejected for the same reasons as discussed above. The main difference of claims 13-16 and 18 from claims 4-7 and 9 is that claims 13-16 and 18 recite a memory and a processor (recites usage of generic computer elements to implement the abstract idea in a computer environment; see MPEP 2106.05(f)).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C.
103, which forms the basis for all obviousness rejections set forth in this Office action: "A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made."

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-3 and 10-12 are rejected under 35 U.S.C. 103 as being unpatentable over Yan et al., Neural Data Server: A Large-Scale Search Engine for Transfer Learning Data (from IDS), in view of Fidler et al. [US 2021/0125077 A1] and Chu et al. [US 2021/0374617 A1].
With regard to claim 1, Yan teaches a processor-implemented method performed by one or more processors, the processor-implemented method comprising: receiving a neural network model from a server (see the first paragraph of section 3.3; the client downloads/receives a model from the server), each specialized neural network model being associated with a subset of a first dataset (see the first five paragraphs of section 3.2.1; the respective experts are associated with a subset of the dataset); generating a local dataset including one or more local examples (see the second paragraph in section 3; the client has a target/local dataset with a small set of examples); and selecting one or more of the specialized neural network models (see Figure 3 and the second-to-last paragraph in section 3; the system can determine the specialized model/expert that is most relevant or useful to the client's local dataset).

Yan teaches the concepts of transfer learning and federated learning but does not appear to explicitly teach: the neural network model being collaboratively trainable across multiple clients via a set of specialized neural network models; selecting one or more specialized neural network models based on a gating function, the gating function controlling selection of the one or more of the specialized neural network models for the one or more local examples according to a region of the first dataset; and generating a personalized model by fine tuning the neural network model based on the selected one or more specialized neural network models and the local dataset.
Fidler teaches selecting one or more specialized neural network models based on a gating function, the gating function controlling selection of the one or more of the specialized neural network models for the one or more local examples according to a region of the first dataset associated with the local dataset (see paragraphs [0053] and [0043]; the system can utilize a gating function to assign the data points to various independent/expert models); and generating a personalized model by fine tuning the neural network model based on the selected one or more specialized neural network models and the local dataset (see Figure 2, box 270; Figure 4; and paragraph [0045]; the system can personalize the model by fine tuning the model for the respective target domain/local dataset).

It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify the neural network training and learning system of Yan by allowing clients to update their model with the dataset of the most relevant expert, as taught by Fidler, in order to allow the clients to improve the accuracy of the client's local model while ensuring that the most relevant data is utilized by the client for the training/fine-tuning of the model, and while minimizing the storage space and bandwidth needed by not sending large quantities of training data, thus allowing budget-constrained devices to utilize the best datasets to train their models without overwhelming or overtaxing their device(s).

Yan in view of Fidler teach the concept of federated learning but do not appear to explicitly teach the neural network model being collaboratively trainable across multiple clients via a set of specialized neural network models.
Chu teaches the neural network model being collaboratively trainable across multiple clients via a set of specialized neural network models (see paragraphs [0047] and [0050]; the system allows the clients to train a local model that can be utilized to collaboratively update a global model for a particular task).

It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify the neural network training and learning system of Yan in view of Fidler by allowing distributed and collaborative training of the experts for particular tasks, as taught by Chu, in order to increase system performance by distributing the computational load of the various task/expert models so that a single system is not creating and training each expert, thus leveraging numerous distributed processing systems to collaboratively train local models that can be used to update/train a global/expert model for that task.

With regard to claim 2, Yan in view of Fidler and Chu teach receiving an input; and generating an inference via the personalized model based on the input (see Chu, paragraphs [0082] and [0085]; once the local model is trained, it can be used to receive new data and make an output/inference).

With regard to claim 3, Yan in view of Fidler and Chu teach in which the first dataset comprises non-independent and identically distributed (non-i.i.d.) data (see Chu, paragraph [0004]; the system can make use of non-IID datasets).

With regard to claims 10-12, these claims are substantially similar to claims 1-3 and are rejected for similar reasons as discussed above.

Claims 4-7, 9, 13-16, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Chu et al. [US 2021/0374617 A1] in view of Fidler et al. [US 2021/0125077 A1].
With regard to claim 4, Chu teaches a processor-implemented method performed by one or more processors, the processor-implemented method comprising: receiving a local update of the neural network model from a subset of multiple users, each of the local updates being related to one or more subsets of a dataset of the multiple users (see paragraphs [0077], [0046], and [0045]; the system can receive local updates from multiple users, aggregate them together to compute a global update, and provide that global update back to the clients).

Chu does not appear to explicitly teach according to a gating function that indicates the one or more subsets of the dataset to which each local update relates.

Fidler teaches according to a gating function that indicates the one or more subsets of the dataset to which each local update relates (see Fidler, paragraphs [0053], [0043], and [0064]; the system can partition data and utilize a gating function to assign the data points to the various independent/expert models).

It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify the multitask federated learning system of Chu by utilizing a gating function, as taught by Fidler, in order to allow the system to use a function that partitions the datasets into mutually exclusive subsets so that the training of each distributed model can be parallelized and performed independently on its own subset of data (see Fidler, paragraph [0056]), which allows for less data to be distributed to the various clients, thus saving client storage space and reducing network bandwidth usage while allowing localized models to achieve good performance on an appropriate set of data associated with their respective clients (see Fidler, paragraph [0035]).
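The server-side flow mapped above (receiving local updates from a subset of users, combining them into a global update, and transmitting it back) can be sketched as below. Weighting each update by its local example count is a FedAvg-style assumption added for illustration; it is not language from the claims or the cited references, and the user names and vectors are hypothetical.

```python
# Hypothetical sketch of the server-side flow: receive local updates
# from a subset of users, aggregate them into a global update, and
# transmit that update back. Weighting by local example count is a
# FedAvg-style assumption, not taken from the claims or references.
import numpy as np

# user -> (local update vector, number of local examples it was computed on)
local_updates = {
    "user_a": (np.array([0.2, -0.1]), 100),
    "user_b": (np.array([0.4,  0.3]),  50),
    "user_c": (np.array([0.0,  0.1]),  50),
}

total = sum(n for _, n in local_updates.values())
global_update = sum(n * u for u, n in local_updates.values()) / total
# global_update (close to [0.2, 0.05] here) would then be transmitted
# back to the participating users.
```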
With regard to claim 5, Chu in view of Fidler teach in which the global update is computed by aggregating the local updates (see Chu, paragraph [0045]; the global update is based on aggregating the local updates).

With regard to claim 6, Chu in view of Fidler teach in which the neural network model comprises multiple independent neural network models (see Fidler, paragraphs [0031] and [0041]; the server can incorporate multiple neural network models associated with various tasks).

With regard to claim 7, Chu in view of Fidler teach in which each user of the multiple users has a different mixture of the multiple independent neural network models based on data characteristics for local data (see Chu, paragraphs [0045], [0050], and [0053]; see Fidler, paragraph [0064]; the system allows clients to have multiple local models that can be associated with different independent/expert models).

With regard to claim 9, Chu in view of Fidler teach in which the dataset includes non-independent and identically distributed (non-i.i.d.) data (see Chu, paragraph [0004]; the system can make use of non-IID datasets).

With regard to claims 13-16 and 18, these claims are substantially similar to claims 4-7 and 9, respectively, and are rejected for similar reasons as discussed above.

Response to Arguments

Applicant's arguments (see the first paragraph on page 6 through the top of page 10) have been fully considered but they are not persuasive.
The applicant argues (a) that the independent claims integrate the judicial exception into a practical application by improving technology or a technical field via the selecting one or more models based on a gating function and generating personalized models limitations, which provide a particular way of achieving model personalization with increased model accuracy and reduced training time (see the first two paragraphs on page 7); (b) that the claims recite an improvement to the technological field of model personalization in federated learning settings (see the paragraphs on page 8); and (c) that, with regard to claim 4, the claim recites benefits of reduced convergence time and improved model accuracy.

The Examiner respectfully disagrees. With regard to arguments (a)-(c), these arguments assert that the respective claims recite an improvement to computer functionality. Per MPEP 2106.05(a), the Examiner notes that the "Claim must be evaluated to ensure the claim itself reflects the disclosed improvement in technology" and "[a]n important consideration in determining whether a claim improves technology is the extent to which the claim covers a particular solution to a problem or a particular way to achieve a desired outcome, as opposed to merely claiming the idea of a solution or outcome". Additionally, "[i]t is important to note, the judicial exception alone cannot provide the improvement".

With regard to claim 4 (argument (c)), as illustrated in the 35 USC 101 rejections above, the limitation the applicant indicates as providing the improvement was evaluated to be a computation step that is part of the judicial exception; and, as noted above, the judicial exception alone cannot provide the improvement. The claim also recites additional elements of receiving and transmitting data, albeit at a high level of generality. As such, applicant's arguments are not persuasive and the respective 35 USC 101 rejections still stand.
With regard to arguments (a) and (b) with respect to claim 1, the additional elements relate to receiving a computerized tool/program, such as a machine learning model, and then training/re-training various models at a high level of generality. As noted above, selecting a particular model based on a function or rules/criteria relates to mental process steps, which cannot be the source of the improvement. Therefore, applicant's arguments that the claims recite the purported improvement are not persuasive, since the high level of generality of the claim recitations appears to merely recite the idea of a solution.

Applicant's arguments with respect to the 35 USC 103 rejections (see the first paragraph on page 10 through the last paragraph on page 11) have been fully considered but they are not persuasive. The applicant argues that (a) the cited prior art references do not teach the selecting…based on the gating function limitation with respect to claims 1-3 and 10-12 since "the selection is not based on the region of a global dataset" and (b) the Chu reference does not teach computing a global update according to gradients aligned with the subsets of the dataset.

The Examiner respectfully disagrees. With regard to both arguments, applicant's arguments fail to comply with 37 CFR 1.111(b) because they amount to a general allegation that the claims define a patentable invention without specifically pointing out how the language of the claims patentably distinguishes them from the references. In particular, with regard to argument (a), the Examiner notes that paragraph [0069] indicates that regions relate to "regions of the input space", or regions of a data set. As illustrated in the 35 USC 103 rejections, the Fidler reference utilizes a gating function to provide a subset (i.e., region) of the data set.
With regard to argument (b), the Chu reference teaches in paragraph [0045] that the updates to the central node are based on local updates that include gradients. As such, the combination of references teaches the claim limitations as recited.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MARC S SOMERS, whose telephone number is (571) 270-3567. The examiner can normally be reached M-F 11-8 EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Ann Lo, can be reached at (571) 272-9767. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MARC S SOMERS/
Primary Examiner, Art Unit 2159
3/9/2026

Prosecution Timeline

Sep 23, 2022 — Application Filed
Jul 01, 2025 — Non-Final Rejection (§101, §103)
Sep 15, 2025 — Response Filed
Nov 04, 2025 — Final Rejection (§101, §103)
Dec 18, 2025 — Response after Non-Final Action
Jan 08, 2026 — Request for Continued Examination
Jan 14, 2026 — Response after Non-Final Action
Mar 09, 2026 — Non-Final Rejection (§101, §103) (current)

Precedent Cases

Applications granted by this examiner involving similar technology

Patent 12579099 — CONTROL LEVEL TAGGING METHOD AND SYSTEM (granted Mar 17, 2026; 2y 5m to grant)
Patent 12561288 — METHOD AND APPARATUS TO VERIFY FILE METADATA IN A DEDUPLICATION FILESYSTEM (granted Feb 24, 2026; 2y 5m to grant)
Patent 12554681 — SYSTEM AND METHOD OF UNDOING DATA BASED ON DATA FLOW MANAGEMENT (granted Feb 17, 2026; 2y 5m to grant)
Patent 12541502 — METHODS AND APPARATUSES FOR IMPROVING PROCESSING EFFICIENCY IN A DISTRIBUTED SYSTEM (granted Feb 03, 2026; 2y 5m to grant)
Patent 12530365 — SYSTEMS AND METHODS FOR A MACHINE LEARNING FRAMEWORK (granted Jan 20, 2026; 2y 5m to grant)
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 65%
With Interview: 99% (+34.6%)
Median Time to Grant: 4y 0m
PTA Risk: High

Based on 563 resolved cases by this examiner. Grant probability derived from career allow rate.
