Prosecution Insights
Last updated: April 19, 2026
Application No. 18/466,333

PERSONALIZED FEDERATED LEARNING UNDER A MIXTURE OF JOINT DISTRIBUTIONS

Status: Non-Final OA (§102)
Filed: Sep 13, 2023
Examiner: IDOWU, OLUGBENGA O
Art Unit: 2494
Tech Center: 2400 — Computer Networks
Assignee: NEC Laboratories America Inc.
OA Round: 1 (Non-Final)
Grant Probability: 71% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 1m
With Interview: 90%

Examiner Intelligence

Career Allow Rate: 71% (452 granted / 636 resolved; +13.1% vs TC avg, above average)
Interview Lift: strong, +19.1% on resolved cases with interview
Typical Timeline: 3y 1m avg prosecution; 26 applications currently pending
Career History: 662 total applications across all art units

Statute-Specific Performance

§101: 4.8% (-35.2% vs TC avg)
§103: 62.8% (+22.8% vs TC avg)
§102: 25.2% (-14.8% vs TC avg)
§112: 3.3% (-36.7% vs TC avg)
Tech Center averages are estimates; based on career data from 636 resolved cases.

Office Action

§102
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Vakhutinsky (VAK), publication number US 2024/0020716.

As per claim 1, VAK teaches a computer-implemented method for personalized federated learning comprising: receiving at a central server local models from a plurality of clients (sharing model parameters with a federated learning server, [0049], Fig. 4); aggregating a heterogeneous data distribution extracted from the local models (federated server averaging received parameters, [0049]); processing the data distribution as a linear mixture of joint distributions to provide a global learning model (linear function, [0134], further described in [0123] of the document incorporated by reference, [0163]); and transmitting the global learning model to the clients, wherein the global learning model is used to update the local model (transmitting averaged model parameters to hierarchical models for prediction, [0049]).

As per claim 2, VAK teaches wherein the receiving local models at the central server includes sending parameters from the local models (sharing model parameters, [0049]).
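The claim 1 mapping above describes a standard federated-learning round: clients send local model parameters, the server aggregates them, and the averaged global model is sent back for local updates. A minimal sketch of the server-side aggregation step, assuming size-weighted averaging in the style of FedAvg (the function name, weighting scheme, and sample values are illustrative, not taken from the office action or the VAK reference):

```python
import numpy as np

def fedavg_round(client_params, client_sizes):
    """One aggregation round: average client parameter vectors,
    weighting each client by its local dataset size."""
    total = sum(client_sizes)
    weights = [n / total for n in client_sizes]
    # Weighted sum of the clients' parameter vectors
    return sum(w * p for w, p in zip(weights, client_params))

# Three hypothetical clients, each sharing a 2-parameter local model
clients = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
sizes = [10, 10, 20]

global_model = fedavg_round(clients, sizes)
print(global_model)  # → [3.5 4.5], pulled toward the larger client
```

No private training data crosses the network in this sketch, only parameters, which is the point the examiner cites for claim 3 ("privacy concerns, [0049]").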
As per claim 3, VAK teaches wherein the receiving local models at the central server does not include private data from the plurality of clients (privacy concerns, [0049]).

As per claim 4, VAK teaches wherein the local model updated by the global learning model is used to predict an outcome from input data applied to the local model updated by the global learning model (using updated probabilities to upsell, [0049]).

As per claim 5, VAK teaches wherein an outcome predicted is used to perform a sale at a price and specification in accordance with input data that results in prediction of a successful sale (using updated probabilities to upsell, [0049]).

As per claim 6, VAK teaches wherein training of the local models uses a log-likelihood maximization as training criterion (log-odd model, [0052]).

As per claim 7, VAK teaches wherein the global learning model is a Federated Gaussian Mixture Model (Fed-GMM) that jointly models joint probability of samples in each client of the plurality of clients (generating a predictive model, [0049], [0095], [0097]-[0098], further described in the incorporated reference, [0163]).

As per claim 8, VAK teaches wherein the Fed-GMM that provides the linear mixture of joint distributions further includes weight for parameters of the model that is personalized for each client (sending new priors, [0049]).

Claims 9-15 are rejected based on claims 1-7. Claims 16-20 are rejected based on claims 1-5.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to OLUGBENGA O IDOWU, whose telephone number is (571) 270-1450. The examiner can normally be reached Monday-Friday, 8am-5pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jung Kim, can be reached at (571) 272-3804. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/OLUGBENGA O IDOWU/
Primary Examiner, Art Unit 2494
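The claim 6-8 mappings concern a log-likelihood training criterion over a linear mixture of distributions (Fed-GMM). As a rough illustration of that criterion only, here is a univariate Gaussian-mixture log-likelihood, p(x) = Σₖ πₖ N(x; μₖ, σₖ²). All names are hypothetical and the univariate form is a simplification; the claims at issue model joint distributions per client:

```python
import numpy as np

def gaussian_pdf(x, mu, var):
    """Univariate Gaussian density N(x; mu, var)."""
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

def mixture_log_likelihood(data, weights, means, variances):
    """Log-likelihood of data under a linear mixture of Gaussians:
    sum over samples of log( sum_k pi_k * N(x; mu_k, var_k) )."""
    densities = np.array([
        w * gaussian_pdf(data, m, v)
        for w, m, v in zip(weights, means, variances)
    ])  # shape (K, N): one row per mixture component
    return float(np.sum(np.log(densities.sum(axis=0))))

# Two equally weighted components; maximizing this quantity over the
# mixture parameters is the "log-likelihood maximization" criterion.
data = np.array([0.0, 1.0, 5.0])
ll = mixture_log_likelihood(data, [0.5, 0.5], [0.0, 5.0], [1.0, 1.0])
```

In an EM-style fit, the E-step would use the per-component rows of `densities` as responsibilities; the claimed personalization corresponds loosely to giving each client its own mixture weights ("sending new priors, [0049]").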

Prosecution Timeline

Sep 13, 2023
Application Filed
Mar 19, 2026
Non-Final Rejection — §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591707: Privacy Preserving Insights and Distillation of Large Language Model Backed Experiences
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12587397: MULTI DIMENSION BLOCKCHAIN
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12585753: VALIDATED MOVEMENT OF SHARED IHS HARDWARE COMPONENTS
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12562912: APPLICATION PROGRAMMING INTERFACE (API) PROVISIONING USING DECENTRALIZED IDENTITY
Granted Feb 24, 2026 (2y 5m to grant)
Patent 12556416: METHOD AND SYSTEM FOR ATOMIC, CONSISTENT AND ACCOUNTABLE CROSS-CHAIN REWRITING
Granted Feb 17, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 71%
With Interview: 90% (+19.1%)
Median Time to Grant: 3y 1m
PTA Risk: Low
Based on 636 resolved cases by this examiner. Grant probability derived from career allow rate.

Free tier: 3 strategy analyses per month