Prosecution Insights
Last updated: April 19, 2026
Application No. 17/949,787

RESTRICTED REUSE OF MACHINE LEARNING MODEL DATA FEATURES

Final Rejection §103
Filed: Sep 21, 2022
Examiner: SAX, STEVEN PAUL
Art Unit: 2146
Tech Center: 2100 — Computer Architecture & Software
Assignee: AT&T Intellectual Property I, L.P.
OA Round: 2 (Final)
Grant Probability: 70% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 4y 0m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 70% (above average; 320 granted / 460 resolved; +14.6% vs TC avg)
Interview Lift: +44.8% (strong), comparing allow rates on resolved cases with vs. without an interview
Typical Timeline: 4y 0m average prosecution; 20 applications currently pending
Career History: 480 total applications across all art units
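The interview-lift figure above is just the gap between the examiner's allow rate on resolved cases that had an interview and on those that did not. A minimal sketch of that arithmetic follows; the 160/300 with/without split is a hypothetical assumption (the page reports only the totals and the lift itself), chosen to stay consistent with the 320 / 460 career figures.

```python
# Sketch: deriving an "interview lift" figure from resolved-case counts.
# The 160/300 split of the 460 resolved cases is a hypothetical
# assumption; only the totals (320 granted / 460 resolved) come from
# the page above.

def allow_rate(granted: int, resolved: int) -> float:
    """Allow rate = granted / resolved cases."""
    return granted / resolved

def interview_lift(rate_with: float, rate_without: float) -> float:
    """Percentage-point gap between with- and without-interview allow rates."""
    return rate_with - rate_without

# Hypothetical split consistent with the page's totals (320/460 = 70%).
rate_with = allow_rate(158, 160)     # cases resolved after an interview
rate_without = allow_rate(162, 300)  # cases resolved without one

lift = interview_lift(rate_with, rate_without)
print(f"Career allow rate: {allow_rate(320, 460):.0%}")
print(f"Interview lift: {lift:+.1%}")
```

This hypothetical split yields a lift of roughly +45 percentage points, the same order as the +44.8% the panel reports.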

Statute-Specific Performance

§101: 10.4% (-29.6% vs TC avg)
§103: 62.5% (+22.5% vs TC avg)
§102: 6.7% (-33.3% vs TC avg)
§112: 5.5% (-34.5% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 460 resolved cases
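The Tech Center baseline behind each delta can be back-solved as TC average = examiner rate minus reported delta. A small sketch using the figures from the chart (the page does not define the underlying metric, so treat the labels as illustrative):

```python
# Sketch: back-solving the Tech Center baseline from the per-statute
# figures shown above. Values are copied from the chart; the metric
# they measure is not defined on the page, so this only checks the
# arithmetic of the "vs TC avg" deltas.

EXAMINER = {"101": 0.104, "103": 0.625, "102": 0.067, "112": 0.055}
DELTA_VS_TC = {"101": -0.296, "103": 0.225, "102": -0.333, "112": -0.345}

def tc_average(statute: str) -> float:
    """TC average = examiner rate minus the reported delta."""
    return EXAMINER[statute] - DELTA_VS_TC[statute]

for statute in ("101", "103", "102", "112"):
    print(f"§{statute}: examiner {EXAMINER[statute]:.1%} "
          f"vs TC avg {tc_average(statute):.1%} "
          f"({DELTA_VS_TC[statute]:+.1%})")
```

Notably, every statute backs out to the same ~40% baseline, consistent with the single "black line" Tech Center estimate in the chart.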

Office Action

§103
Detailed Action

Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

2. The amendment filed 12/10/25 has been entered. Claims 6-7 have been cancelled. Claims 1-5 and 8-22 are pending.

3. In view of applicant's remarks, the objection to claim 13 has been removed.

4. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

5. Claim(s) 1-5, 8-12, and 15-22 is/are rejected under 35 U.S.C. 103 as being unpatentable over Hu et al. "Hu" (US 2025/0209383 A1), Ma et al. "Ma" (US 2021/0392147 A1), and Nagpal et al. "Nagpal" (US 2020/0125775 A1).

6. Regarding claim 1, Hu shows a method comprising obtaining, by a processing system including at least one processor, a request from a first entity to train a machine learning model (para 39, 45; Figure 4C shows a first node requesting to train a machine learning model; para 64, 86, 87 show the processors that request and accomplish this); accessing, by the processing system, at least one feature dataset of at least a second entity (para 41, 46-47, 52-54 show accessing the feature dataset from another node); and training, by the processing system, the machine learning model on behalf of the first entity in accordance with the at least one data feature of the at least the second entity to generate a trained machine learning model, wherein the at least one data feature of the at least the second entity is a restricted data feature that is inaccessible to the first entity (para 52-54, 56 show training the machine learning model on behalf of the first node using the feature dataset from the second node that had been restricted to the first node). Hu para 27, 33, 56 show accessing the at least one data feature of the second entity/node via a data storage platform of the processing system, but Hu does not explicitly show that, for each entity of a plurality of different entities including the at least the second entity, the data storage platform stores respective data features, wherein the at least one data feature of the at least the second entity is accessible to the at least the second entity via the data storage platform and is inaccessible to others of the plurality of entities via the data storage platform.

Ma, however, does show that, for each entity of a plurality of different entities including the at least the second entity, the data storage platform stores respective data features, wherein the at least one data feature of the at least the second entity is accessible to the at least the second entity via the data storage platform and is inaccessible to others of the plurality of entities via the data storage platform (Ma para 42, 46-48 show the cloud-based storage system stores respective data features for each tenant such that a data feature of a [second] tenant is accessible to that same [second] tenant via the cloud-based storage but inaccessible to the other tenants via the cloud-based storage). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have this privacy structure, as in the cloud-based storage system of Ma, in the federated learning method of Hu, because it would allow the first node to utilize the data of the second node when training the machine learning model without the first node accessing the second node's data itself. This way the privacy of the second node/tenant is not compromised (Ma para 19, 66). Hu para 23, 36 show providing the trained machine learning model to the central server for efficient aggregation, but Hu and Ma do not explicitly show providing the trained machine learning model to the first node per se. Nagpal, however, shows a network system including internet-of-things devices that provides to a first entity a trained machine learning model using data that had been restricted from that first entity (para 18, 24, 30 show training the machine learning model with the previously restricted data and providing the resulting product to clients/devices/users from which the data had been restricted). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have provided the trained machine learning model to the first node in Hu, especially as modified by Ma, because it would allow the first node to utilize the result of training the machine learning model with the restricted feature dataset. Doing so would increase efficiency because it would update the local model before being aggregated at the central server.

7. Regarding claim 2, Hu shows obtaining, by the processing system, at least one data feature of the first entity (Hu para 31, 33, 40, 43 show obtaining feature data of the first node).

8. Regarding claim 3, the obtaining of the at least one data feature of the first entity comprises accessing the at least one data feature of the first entity via the data storage platform of the processing system (Hu para 27, 33, 56 show how the feature data may be stored and accessed through a storage platform of the system; see also Ma para 46-48).

9. Regarding claim 4, the training further comprises training the machine learning model in accordance with the at least one data feature of the first entity and the at least one data feature of the at least the second entity (Hu para 40, 50, 52, 56 show training the machine learning model with feature data of the first node as well as with feature data from the second node).

10. Regarding claim 5, Hu shows the first entity comprises a provider of a first internet-of-things ecosystem, wherein the at least the second entity comprises a provider of a second internet-of-things ecosystem, and wherein the at least one data feature of the at least the second entity and the at least one data feature of the first entity each comprises data from a respective internet-of-things device at a user premises (Hu para 20, 59, 69, 90, for example, show mobile client computing devices connected in a network over the Internet for various automated functionality; the federated learning of Hu, such as described in para 20, 22, 25, demonstrates a technique within the internet-of-things ecosystem).

11. Regarding claim 8, the at least one data feature of the at least the second entity is further accessible to the processing system via the data storage platform (Hu para 56, 64, 86, 87 show the data, including the feature data of the second node, is accessible to a processor via the storage platform).

12. Regarding claim 9, in addition to that mentioned for claim 8, the at least one data feature of the at least the second entity is accessible to the processing system in accordance with at least one consent obtained from the at least the second entity (Hu para 40-42 show the feature data of the second node is accessible to the first node processor according to permission from the second node).

13. Regarding claim 10, in addition to that mentioned for claim 1, Nagpal shows the providing of the trained machine learning model to the first entity comprises transmitting the trained machine learning model to the first entity (Nagpal para 18, 24, 30 show transmitting the resulting product of the trained machine learning model to clients/devices/users from which the feature had been restricted). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have transmitted the trained machine learning model to the first node in Hu, because it would allow the first node to utilize the result of training the machine learning model with the restricted feature dataset. Doing so would increase efficiency because it would update the local model before being aggregated at the central server.

14. Regarding claim 11, in addition to that mentioned for claim 1, Nagpal shows the providing of the trained machine learning model to the first entity comprises deploying the trained machine learning model via the processing system (Nagpal para 18, 24, 30 show deploying the resulting product of the trained machine learning model to clients/devices/users from which the feature had been restricted; Nagpal para 21, 26, and Claim 10 show the processors that send and receive the data). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have deployed the trained machine learning model to the first node via the processing system in Hu, because it would provide an efficient way to allow the first node to utilize the result of training the machine learning model with the restricted feature dataset. Furthermore, doing so would increase efficiency because it would update the local model before being aggregated at the central server.

15. Regarding claim 12, in addition to that mentioned for claims 1 and 11, Hu shows obtaining an input data set comprising new data associated with the at least one data feature of the second entity and applying the input data set to the trained machine learning model to obtain at least one output of the trained machine learning model (Hu para 52-54 and 56 show training the machine learning model with the obtained feature data of the second node that had been restricted to the first node). Hu does not explicitly show providing the trained machine learning model output to the first node per se. Nagpal, however, provides to a first entity the output of the trained machine learning model that was trained using data that had been restricted from that first entity (Nagpal para 18, 24, 30 show training the machine learning model with the previously restricted data and providing the resulting product to clients/devices/users from which the data had been restricted). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have provided the trained machine learning model output to the first node in Hu, especially as modified by Ma, because it would allow the first node to utilize the result of training the machine learning model with the restricted feature dataset. Doing so would increase efficiency because it would update the local model before being aggregated at the central server.

16. Regarding claim 15, the obtaining further comprises obtaining the machine learning model from the first entity (Hu para 20-22, 56 show the machine learning model is transmitted from the first node to the second or third node).

17. Regarding claim 16, the machine learning model is stored via the data storage platform of the processing system (Hu para 20-22, 56 show the machine learning model is transmitted to the local nodes/sites; Hu para 27, 33, 56 show how the transmitted data, which includes the machine learning model, may be stored and accessed through a storage platform on the system).

18. Regarding claim 17, the request further includes an identification of the at least one data feature of the at least the second entity for training the machine learning model (Hu para 22, 40, 43, 48 show identifying feature data for training the machine learning model).

19. Regarding claim 18, the at least one data feature of the at least the second entity is presented in a data feature catalog of features of a plurality of different entities that are available for training of machine learning models (Hu para 22, 40, 43, 48 show identifying feature data from different nodes that are available for training different machine learning models; para 50, 73, 82 identify feature data from nodes that meet criteria for training different models, and also retrieve and present the feature data).

20. Claim 19 shows the same features as claim 1 and is rejected for the same reasons. Additionally, Hu para 87, 92 show the non-transitory computer readable medium which stores instructions executed by the processing system to perform the method.

21. Claim 20 shows the same features as claim 1 and is rejected for the same reasons. Additionally, Hu para 87, 92 show the apparatus including the non-transitory computer readable medium which stores instructions executed by the processing system to perform the method.

22. Regarding claim 21, Hu shows obtaining at least one data feature of the first entity (Hu para 31, 33, 40, 43 show obtaining feature data of the first node).

23. Regarding claim 22, the obtaining of the at least one data feature of the first entity comprises accessing the at least one data feature of the first entity via the data storage platform of the processing system (Hu para 27, 33, 56 show how the feature data may be stored and accessed through a storage platform of the system; see also Ma para 46-48).

24. Claim(s) 13-14 is/are rejected under 35 U.S.C. 103 as being unpatentable over Hu, Ma, and Nagpal, and further in view of Nevatia et al. "Nevatia" (US 2020/0412726 A1).

25. Regarding claim 13, Hu para 28 does show providing feedback to the learning model, but Hu, Ma, and Nagpal do not explicitly show providing usage feedback information to the at least second entity associated with a usage of the at least one data feature of the at least second entity for training the machine learning model. Nevatia, however, shows providing usage feedback information to a second entity associated with a usage of a data feature of the second entity for training a machine learning model (Nevatia para 37, 38, 47, 102 show providing feedback as to the usage of feature data from a particular second dataset profile on a security monitoring platform, different from one in which certain feature data was restricted, for training a machine learning model). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have this in the method of Hu, especially as modified by Ma and Nagpal, and thus provide feedback to the second node regarding usage of its feature data for training the machine learning model, because it would help determine whether feature data should be shared among the nodes and used to train a model. Making this determination would help the federated learning be more efficient.

26. Regarding claim 14, in addition to that mentioned for claim 13, the usage feedback information comprises at least one of an identity of the first entity; a type of the machine learning model; a topic of the machine learning model; or at least one additional feature used to train the machine learning model (note the alternative recitation; Nevatia para 37-39 show the feedback is regarding additional features now accessible to an access rights profile and that are used to train the machine learning model; furthermore, Nevatia para 20 shows the feature may be an identifier or other types of identifying information of the access profile). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have this in the method of Hu, especially as modified by Ma and Nagpal, because it would help determine whether feature data should be shared among the nodes and available to train a model. Making this determination would help the federated learning be more efficient.

Response to Arguments

27. Applicant's arguments filed 12/10/25 have been fully considered, but they are not persuasive. Applicant argues that Hu, Nagpal, and Nevatia do not show the amended features of the independent claims, but Ma is brought in to show these. Note, though, that Hu para 27, 33, 56 do show accessing the at least one data feature of the second entity/node via a data storage platform of the processing system, as explained in the Action.

Conclusion

28. Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. (Note that the amendment is slightly narrower than the recitation of previous claims 6-7, in that the amended language now refers specifically to a data feature accessible to the second entity via the data storage platform and inaccessible to the others of the plurality of entities via the data storage platform.) Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

29. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: a) Rudden et al. (US 2021/0065178 A1) shows training a machine learning model operating in an internet-of-things ecosystem based on feedback regarding usage of the system. b) Capota et al. (EP 3757789 A1) shows providing restricted data from an internet-of-things ecosystem to train a machine learning model. c) Mudgal (US 2024/0135209 A1) shows a cloud computing configuration to maintain privacy among tenants during machine learning training.

30. Any inquiry concerning this communication or earlier communications from the examiner should be directed to STEVEN PAUL SAX, whose telephone number is (571) 272-4072. The examiner can normally be reached Monday - Friday, 9:30 - 6:00 EST. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Usmaan Saeed, can be reached at 571-27. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/STEVEN P SAX/
Primary Examiner, Art Unit 2146
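The architecture disputed in claim 1 (a data storage platform holding per-entity features, where a dataset is readable only by the entity that owns it, while the processing system may still train a model on another entity's behalf using those restricted features and return only the trained model) can be sketched as follows. This is an illustration of the claimed access pattern only, not code from the application; all names (FeatureStore, train_for, etc.) are invented for the sketch, and a trivial mean-centering "model" stands in for real training.

```python
# Illustration of the access pattern recited in claim 1: per-entity
# feature storage with owner-only reads, plus a privileged training
# path that returns only the trained model to the requester. All
# identifiers here are hypothetical; nothing is from the application.

class FeatureStore:
    """Per-entity feature storage with owner-only access."""

    def __init__(self):
        self._features = {}  # entity -> feature dataset

    def put(self, entity: str, features: list):
        self._features[entity] = features

    def get(self, requester: str, owner: str) -> list:
        # A feature dataset is accessible only to the entity that owns it;
        # it is inaccessible to the other entities via the platform.
        if requester != owner:
            raise PermissionError(f"{requester} may not read {owner}'s features")
        return self._features[owner]

    def read_privileged(self, owner: str) -> list:
        # Privileged read used only by the processing system itself,
        # never exposed to requesting entities.
        return self._features[owner]


def train_for(requester: str, store: FeatureStore, feature_owner: str):
    """Train on behalf of `requester` using `feature_owner`'s restricted
    features; return only the trained model (a trivial mean-centering
    function stands in for a real trained model)."""
    data = store.read_privileged(feature_owner)
    mean = sum(data) / len(data)
    return lambda x: x - mean


store = FeatureStore()
store.put("entity_b", [2.0, 4.0, 6.0])

# entity_a receives the trained model without ever seeing entity_b's data.
model = train_for("entity_a", store, "entity_b")
print(model(10.0))

# Direct access by a non-owner is denied.
try:
    store.get("entity_a", "entity_b")
except PermissionError as exc:
    print(exc)
```

The design point the claim turns on is visible in the split between `get` (owner-only) and `read_privileged` (processing system only): the requesting entity obtains the trained artifact, never the underlying restricted features.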

Prosecution Timeline

Sep 21, 2022
Application Filed
Sep 06, 2025
Non-Final Rejection — §103
Dec 10, 2025
Response Filed
Feb 22, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602537: METHODS FOR SERVING INTERACTIVE CONTENT TO A USER
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12596343: GRAPHICAL ELEMENT SEARCH TECHNIQUE SELECTION, FUZZY LOGIC SELECTION OF ANCHORS AND TARGETS, AND/OR HIERARCHICAL GRAPHICAL ELEMENT IDENTIFICATION FOR ROBOTIC PROCESS AUTOMATION
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12547922: BENCHMARK-DRIVEN AUTOMATION FOR TUNING QUANTUM COMPUTERS
Granted Feb 10, 2026 (2y 5m to grant)
Patent 12541708: TRUSTED AND DECENTRALIZED AGGREGATION FOR FEDERATED LEARNING
Granted Feb 03, 2026 (2y 5m to grant)
Patent 12524691: CENTRAL CONTROLLER FOR A QUANTUM SYSTEM
Granted Jan 13, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 70%
With Interview: 99% (+44.8%)
Median Time to Grant: 4y 0m
PTA Risk: Moderate
Based on 460 resolved cases by this examiner. Grant probability derived from career allow rate.
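The "Moderate" PTA risk follows from the ~4-year median pendency: under 35 U.S.C. 154(b)(1)(B), patent term adjustment generally accrues for each day an application remains pending more than three years after filing, reduced by applicant delay. The sketch below estimates only that B-delay component with a hypothetical issue date; a real PTA calculation also involves A- and C-delays and overlap rules not modeled here.

```python
# Rough B-delay sketch under 35 U.S.C. 154(b)(1)(B): days pending
# beyond three years from filing, less applicant delay, floored at
# zero. The issue date below is hypothetical; only the filing date
# (Sep 21, 2022) comes from the page.

from datetime import date

def b_delay_days(filed: date, issued: date, applicant_delay: int = 0) -> int:
    """Estimate B-delay: pendency beyond the 3-year guarantee,
    reduced by days of applicant delay (never negative)."""
    three_years = filed.replace(year=filed.year + 3)
    overrun = (issued - three_years).days
    return max(0, overrun - applicant_delay)

# Hypothetical grant exactly four years after filing -> ~1 year of B-delay.
print(b_delay_days(date(2022, 9, 21), date(2026, 9, 21)))
```

A four-year pendency thus implies roughly a year of potential B-delay before reductions, which is why the projection flags PTA exposure as moderate rather than low.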
