Prosecution Insights
Last updated: April 19, 2026
Application No. 18/681,403

METHODS, ARCHITECTURES, APPARATUSES AND SYSTEMS FOR AI/ML MODEL DISTRIBUTION

Non-Final OA: §101, §103
Filed: Feb 05, 2024
Examiner: KHAN, HASSAN ABDUR-RAHMAN
Art Unit: 2451
Tech Center: 2400 (Computer Networks)
Assignee: InterDigital CE Patent Holdings, SAS
OA Round: 2 (Non-Final)
Grant Probability: 72% (Favorable)
OA Rounds: 2-3
To Grant: 2y 7m
With Interview: 90%

Examiner Intelligence

Career Allow Rate: 72% (227 granted / 315 resolved; +14.1% vs TC avg), above average
Interview Lift: +17.4% in resolved cases with interview (strong)
Avg Prosecution: 2y 7m (27 currently pending)
Total Applications: 342 (across all art units)
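As a sanity check, the headline figures above fit together with simple arithmetic. The sketch below is our own back-of-envelope reconstruction; the additive treatment of the interview lift (in percentage points) is an assumption inferred from the displayed numbers, not a documented formula, and all variable names are ours.

```python
# Back-of-envelope check of the examiner statistics shown above.
granted, resolved = 227, 315

career_allow_rate = granted / resolved             # 0.7206... -> shown as 72%
tc_average_estimate = career_allow_rate - 0.141    # "+14.1% vs TC avg" -> ~58%

interview_lift = 0.174                             # "+17.4% interview lift"
with_interview = career_allow_rate + interview_lift  # 0.8946... -> shown as 90%

print(f"allow rate {career_allow_rate:.1%}, "
      f"TC avg estimate {tc_average_estimate:.1%}, "
      f"with interview {with_interview:.1%}")
```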

Statute-Specific Performance

§101: 18.7% (-21.3% vs TC avg)
§103: 52.4% (+12.4% vs TC avg)
§102: 7.9% (-32.1% vs TC avg)
§112: 14.9% (-25.1% vs TC avg)

Tech Center averages are estimates. Based on career data from 315 resolved cases.
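The per-statute deltas are internally consistent: adding each delta back to the examiner's rate recovers the same implied Tech Center average. A minimal check (our own reconstruction; the dashboard does not print the TC averages directly):

```python
# Recover the implied Tech Center average for each statute from the
# examiner rate and its delta (all values in percent, taken from above).
examiner_rate = {"101": 18.7, "103": 52.4, "102": 7.9, "112": 14.9}
delta_vs_tc   = {"101": -21.3, "103": 12.4, "102": -32.1, "112": -25.1}

tc_average = {s: round(examiner_rate[s] - delta_vs_tc[s], 1) for s in examiner_rate}
print(tc_average)  # {'101': 40.0, '103': 40.0, '102': 40.0, '112': 40.0}
```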

Office Action

Rejections: §101, §103
DETAILED ACTION

Claim 14 has been amended. Claims 1-19 have been examined and are pending.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 01/02/2026 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement has been considered by the examiner.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 9-16 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter.

Claim 9 is directed to a "first wireless transmit/receive unit (WTRU)" that is "configured to" perform a series of steps involving sending and receiving information, determining another WTRU, downloading an AI/ML model, and generating inference results. However, claim 9 does not recite any structural limitations or components that constitute a "machine" as required by 35 U.S.C. § 101. Instead, the claim is drafted in terms of functional results and steps to be performed, without specifying any particular physical structure or arrangement of parts.

Under 35 U.S.C. § 101, statutory subject matter includes processes, machines, manufactures, and compositions of matter. The Supreme Court has explained that a "machine" is a concrete thing, consisting of parts, or of certain devices and combination of devices. (In re Nuijten, 500 F.3d 1346, 1355 (Fed. Cir. 2007); see also In re Ferguson, 558 F.3d 1359, 1364 (Fed. Cir. 2009)). To qualify as a machine, a claim must recite physical or structural components, not merely a capability or intended use. (In re Ching, 2012 WL 1890347 (BPAI 2012)).

Here, claim 9 merely recites a "first WTRU" configured to perform a series of functional steps, such as sending, determining, receiving, downloading, and generating results, without reciting any physical or structural elements or a combination of parts that define the "machine." The claim does not specify any hardware, circuitry, or other tangible components that constitute the WTRU. As such, claim 9 is not directed to a statutory "machine" under 35 U.S.C. § 101. Accordingly, claims 9-16 are rejected under 35 U.S.C. § 101 as being directed to non-statutory subject matter.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-19 are rejected under 35 U.S.C. 103 as being unpatentable over US Patent Application Publication No. 2020/0050951 to Wang et al. (hereinafter Wang), in view of US Patent Application Publication No. 2024/0187127 to Narayanan Thangaraj et al. (hereinafter Narayanan).
Regarding Claim 1, Wang discloses distribution of machine learning across collaborative computing nodes, and further discloses: a method implemented by a first wireless transmit/receive unit (WTRU), the method comprising (Wang teaches that a model requester node generates a request for a machine learning model and distributes it to peers to identify those that can provide parameters; further, Wang discloses (¶5) computer-implemented techniques for collaborative distributed machine learning using a model requester node, which is an edge node of a network of cloud computing nodes); and sending, to a network entity, a subscription request for downloading an artificial intelligence/machine learning (AI/ML) model (Wang discloses (¶5) sending a subscription request for model portions, i.e., a model requester node that sends a specification/request for a machine learning model to a broker node (¶67) or other peers and recruits nodes based on replies).

However, Wang does not explicitly disclose: wherein the AI/ML model comprises a first model portion and one or more further model portions; determining a second WTRU storing at least the first model portion of the AI/ML model; sending, to the network entity, first information comprising an indication of the second WTRU; receiving, from the network entity, second information indicating a schedule for downloading at least the first model portion of the AI/ML model from the second WTRU; downloading, from the second WTRU via a device-to-device communication between the first WTRU and the second WTRU, at least the first model portion of the AI/ML model at a scheduled time using the second information, wherein the first model portion is a base AI/ML model of the AI/ML model; and generating inference results using the at least first model portion.

However, in an analogous art, Narayanan teaches:

wherein the AI/ML model comprises a first model portion (Narayanan discloses (¶6-¶8) that a data processing model or "first model portion" may be one of an artificial intelligence (AI) model or a machine learning (ML) model), and one or more further model portions (Narayanan teaches (¶89 and ¶126) adaptive processing of AI components with different characteristics, e.g., large AI components, small AI components, rule-based components, etc.);

determining a second WTRU storing at least the first model portion of the AI/ML model (Narayanan teaches (¶90) AI components stored in a device such as a gNB, a WTRU, etc. Further, Narayanan teaches (¶188, ¶191) availability, configurations and/or use of an AI component/model configured at the WTRU, for example, AI models with different properties/characteristics/configuration aspects);

sending, to the network entity, first information comprising an indication of the second WTRU (Narayanan teaches (¶196) that WTRU capability related to AI processing is shared between multiple processes, and the WTRU (¶199) may be configured with a base model, i.e., first information);

receiving, from the network entity, second information indicating a schedule for downloading at least the first model portion of the AI/ML model from the second WTRU (Narayanan teaches (¶73) that a WTRU may initiate a change for adaptation, i.e., indicating a schedule for downloading (¶75) an applicable AI model based on one or more criteria for the change and/or adaptation of the AI component, (¶76) when a triggering condition has been met (¶78), or based on one or more selection criteria, for example, when the WTRU monitors a change in WTRU capability or detects (e.g., upon a detection of) a change in the execution environment (or context) of the AI component at a time instant (¶229); it may change dynamically and adjust AI processing power and/or switch to a different AI model (¶230));

downloading, from the second WTRU via a device-to-device communication between the first WTRU and the second WTRU (Narayanan teaches (¶101) a WTRU configured with a first AI model and a second AI model. Narayanan teaches (¶141) adaptive processing associated with AI components, and (¶261) peer-to-peer (P2P) communications wherein the WTRU indicates to the network that an adaptation has occurred, e.g., so that the network may choose its peer AI model), at least the first model portion of the AI/ML model at a scheduled time using the second information (Narayanan teaches (¶146, ¶261) that a network node may determine (i.e., scheduled time), using one or more techniques, that a WTRU has performed such adaptation, and/or that a change in the execution or performance of an AI component may have occurred, in order to apply a suitable peer AI/ML component);

wherein the first model portion is a base AI/ML model of the AI/ML model (Narayanan teaches (¶199) that the WTRU may be configured with a base AI model and/or rules to derive a plurality of child models from the base model); and

generating inference results using the at least first model portion (Narayanan teaches (¶215) that a WTRU may adapt AI processing using adaptation triggered based on a change in context, for example, to improve AI model performance, e.g., inference accuracy).

It would have been obvious as of the effective filing date to one of ordinary skill in the art to combine the method implemented by a first wireless transmit/receive unit (WTRU), comprising sending, to a network entity, a subscription request for downloading an artificial intelligence/machine learning (AI/ML) model, as disclosed by Wang, with the remaining limitations recited above, as taught by Narayanan, for the purpose of implementing (¶3) systems, methods, and instrumentalities for modifying, adapting and/or changing the processing associated with an artificial intelligence (AI) component in a node (e.g., a wireless transmit/receive unit, WTRU).
Regarding Claim 2, Wang in view of Narayanan discloses all the elements of claim 1. Further, they teach: sending, to the network entity, information indicating a successful download of at least the first model portion (Narayanan teaches (¶268-¶269) the WTRU generating a response as an acknowledgement (ACK/NACK). The WTRU is configured to receive an explicit or implicit indication related to AI processing in a DL or a UL transmission. The WTRU may apply the indicated AI model if a successful response is received associated with the DL transmission.) The motivation to combine the references is similar to the reasons in Claim 1.

Regarding Claim 3, Wang in view of Narayanan discloses all the elements of claim 1. Further, they teach: wherein the AI/ML model has any of: (1) a greater accuracy than the base AI/ML model for a predetermined validation data set (Wang discloses (¶6) that collaboratively trained models have improved accuracy by updating the seed parameters (¶52) when the model requester node aggregates the external and internal updated parameters and estimates the learning utility of each of the plurality of other edge nodes; this discloses that model performance is measured by comparing results against validation (learning utility), which supports the greater-accuracy element), (2) a greater number of floating point operations (Wang discloses (¶6) using the set of participating edge nodes, where the model aggregates the updates provided by these edge nodes, which shows increasing model parameter size/complexity and implies greater floating point operations), and (3) a greater memory size (Narayanan discloses (¶132) resources required to process the model, e.g., processing power, memory requirements, etc.). The motivation to combine the references is similar to the reasons in Claim 1.

Regarding Claim 4, Wang in view of Narayanan discloses all the elements of claim 1. Further, they teach: wherein the second information indicates a score associated with the first model portion of the AI/ML model (Wang discloses (¶49) a learning utility score associated with the distributed machine learning model and recruiting only nodes that can contribute significantly towards updating the model with higher learning utilities), and wherein the schedule for downloading at least the first model portion of the AI/ML model is based on the score associated with the first model portion of the AI/ML model (Narayanan teaches (¶119) AI processing may apply one or more AI model architectures to perform one or more of classification, prediction, pattern recognition, dimensionality reduction, estimation, interpolation, clustering, regression, compression, recommendation, approximation of an arbitrary function, etc. Narayanan teaches (¶211) a WTRU may be configured with a base model having model weights (i.e., score), activations, and model structure/computational graph. The WTRU may be configured to determine (e.g., derive) a plurality of child model instances from the base model.) The motivation to combine the references is similar to the reasons in Claim 1.
Regarding Claim 5, Wang in view of Narayanan discloses all the elements of claim 4. Further, they teach: wherein the score associated with the first model portion of the AI/ML model is based on any of: (1) a model portion scarcity (Wang discloses (¶6) the model requester node then aggregates the external and internal updated parameters at the model requester node and estimates the learning utility of each of the plurality of other edge nodes (based on comparison of the external updated parameters to the internal updated parameters); the learning utility estimation depends on comparison of parameter sets, which maps to scarcity (how unique the parameters are) and order (external vs. internal updates)), (2) a model portion order (Wang discloses (¶7) the model requester node may determine a set of participating edge nodes for a next round of training based on the estimated learning utility values), and (3) a condition that a model portion corresponds to a base AI/ML model of the AI/ML model (Narayanan teaches (¶211) a WTRU may determine the learned parameters (e.g., weights) of the child model instance, for example, as a function of learned parameters (e.g., weights) in the base model.) The motivation to combine the references is similar to the reasons in Claim 1.

Regarding Claim 6, Wang in view of Narayanan discloses all the elements of claim 4. Further, they teach: wherein the score associated with the first model portion of the AI/ML model is based on any of: a distance between the first WTRU and the second WTRU, and a throughput between the first WTRU and the second WTRU (Narayanan teaches (¶75) AI model input/output dimensions and model weights (i.e., scores) are dependent on (¶28) varying quality of service (QoS), mobility and latency, i.e., geographic distance, (¶74) specific parameter requirements, and the differing throughput requirements for WTRUs in the RAN. The WTRU (¶75) initiates changes in execution of an applicable AI component/AI model based on (e.g., upon) a detection of a change in the execution environment (or context) of the AI component.) The motivation to combine the references is similar to the reasons in Claim 1.

Regarding Claim 7, Wang in view of Narayanan discloses all the elements of claim 1. Further, they teach: sending, to the network entity, third information comprising any of: (1) a location of the first WTRU, (2) a speed of the first WTRU, and (3) a direction of the first WTRU (Narayanan teaches (¶38) the current location of the WTRU and (¶165) the WTRU mobility state, i.e., speed of the WTRU and measured Doppler spread), and (4) a throughput between the first WTRU and the second WTRU (Narayanan teaches (¶62) throughput between the WTRUs 102a, 102b, 102c.) The motivation to combine the references is similar to the reasons in Claim 1.

Regarding Claim 8, Wang in view of Narayanan discloses all the elements of claim 1. Further, they teach: wherein the second WTRU comprises a local server storing at least the first model portion of the AI/ML model (Wang discloses (¶5) an edge node of a network of cloud computing nodes with seed parameters (i.e., the first model portion of the AI/ML model). These participating edge nodes or edge servers (¶98) perform a one-step update of the seed parameters and provide the updated parameters to the model requester node. Wang thus verbatim teaches that these peers can be an edge server (local server) and a storage/serving node for updated model parameters.) The motivation to combine the references is similar to the reasons in Claim 1.
Claims 9-16 do not teach or further define over the limitations in Claims 1-8, respectively; therefore, claims 9-16 are rejected for the same rationales of rejection as set forth for Claims 1-8. Claim 17 does not teach or further define over the limitations in Claim 1; therefore, claim 17 is rejected for the same rationale of rejection as set forth for Claim 1.

Regarding Claim 18, Wang in view of Narayanan discloses all the elements of claim 1. Further, they teach: wherein determining the second WTRU storing at least the first model portion of the AI/ML model (Wang discloses (¶5) the peer edge nodes (i.e., second WTRU) store machine learning model parameters (i.e., portions) that are provided to the requester) comprises sending a discovery request to the second WTRU, wherein the second WTRU is in a vicinity of the first WTRU (Narayanan teaches (Fig. 1) WTRUs 102a, 102b, 102c in the vicinity of one another), and receiving, from the second WTRU, fourth information indicating that the second WTRU stores the first model portion (Narayanan teaches (¶94-¶95) the WTRU may be configured with one (or more) selection criteria associated with the AI models (e.g., the first AI model and the second AI model). The selection criteria may be associated with the context of an AI model. The first context may be a first WTRU capability (e.g., one or more of memory, available processing power, etc.) within a first range, and a second context may be a second WTRU capability (e.g., one or more of memory, available processing power, etc.) within a second range. The WTRU, upon detecting (Figs. 2A and 3) a change of context, may replace the first AI model with the second AI model, and the WTRU may initiate a procedure (e.g., a transmission of an indication) that may implicitly or explicitly indicate the change in AI model to another node in the wireless network.) The motivation to combine the references is similar to the reasons in Claim 1.

Claim 19 does not teach or further define over the limitations in Claim 18; therefore, claim 19 is rejected for the same rationale of rejection as set forth for Claim 18.

Response to Arguments

Claim Rejections - 35 USC § 103: Applicant's arguments, filed on 12/17/2025 with respect to Claims 1-19, have been fully considered and they are persuasive. Hence, the previous 35 USC § 103 rejection is withdrawn. However, based on the arguments, the search has been updated and a new reference (US Patent Application Publication No. 2024/0187127 to Narayanan Thangaraj et al.) is being introduced for the 35 USC § 103 rejection.
Conclusion

Citation of Pertinent Prior Art: The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: U.S. Patent Application Publication No. 2021/0342747 to Feng et al. (Method and system for distributed deep machine learning).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to HASSAN KHAN, whose telephone number is (313) 446-6574 and fax number is (571) 483-7559. The examiner can normally be reached Monday - Thursday.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Christopher L. Parry, can be reached at (571) 272-8328. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/H. A. K./ Examiner, Art Unit 2451
/Chris Parry/ Supervisory Patent Examiner, Art Unit 2451
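For readers who want the claim-1 flow at a glance, the sketch below restates the recited steps in Python. It is a minimal, illustrative reading of the claim language as characterized in this Office Action; every class, method, and value (NetworkEntity, Schedule, "model-A", "wtru-2", etc.) is hypothetical, and nothing here comes from the application itself.

```python
# Hypothetical sketch of the method of claim 1, step by step.
# The network entity is stubbed so the example runs end to end.
from dataclasses import dataclass

@dataclass
class Schedule:                       # the claim's "second information"
    peer_id: str
    start_time: float                 # scheduled D2D download time

class NetworkEntity:
    """Stub broker that accepts subscriptions and returns a schedule."""
    def subscribe(self, model_id: str) -> None:
        print(f"subscription request for {model_id}")
    def report_peer(self, model_id: str, peer_id: str) -> None:
        print(f"first information: {peer_id} stores a portion of {model_id}")
    def get_schedule(self, model_id: str, peer_id: str) -> Schedule:
        return Schedule(peer_id=peer_id, start_time=0.0)

def acquire_and_infer(network: NetworkEntity, sample):
    # 1. Send a subscription request for the AI/ML model, which comprises
    #    a first (base) portion and one or more further portions.
    network.subscribe("model-A")
    # 2. Determine a second WTRU storing at least the base portion
    #    (claim 18 refines this into a D2D discovery request/response).
    peer_id = "wtru-2"
    # 3. Send first information (an indication of the second WTRU), and
    # 4. receive second information (the download schedule).
    network.report_peer("model-A", peer_id)
    schedule = network.get_schedule("model-A", peer_id)
    # 5. Download the base portion from the second WTRU over a
    #    device-to-device link at the scheduled time (stubbed as identity).
    print(f"D2D download from {schedule.peer_id} at t={schedule.start_time}")
    base_portion = lambda x: x
    # 6. Generate inference results using at least the base portion.
    return base_portion(sample)

print(acquire_and_infer(NetworkEntity(), sample=[0.1, 0.2]))
```

Claims 4-6 would layer a score (model portion scarcity, order, base-model status, inter-WTRU distance, throughput) onto the scheduling decision in step 4.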

Prosecution Timeline

Feb 05, 2024
Application Filed
Sep 26, 2025
Non-Final Rejection — §101, §103
Dec 17, 2025
Response Filed
Jan 22, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602038
LOGGING SUPPORT APPARATUS, LOGGING SYSTEM, METHOD FOR LOGGING SUPPORT, AND RECORDING MEDIUM
2y 5m to grant; granted Apr 14, 2026
Patent 12598142
PACKET LOAD-BALANCING
2y 5m to grant; granted Apr 07, 2026
Patent 12598112
METHOD FOR PERFORMING TRANSFER LEARNING, COMMUNICATION DEVICE, PROCESSING DEVICE, AND STORAGE MEDIUM
2y 5m to grant; granted Apr 07, 2026
Patent 12585558
Remote Online Volume Cloning Method and System
2y 5m to grant; granted Mar 24, 2026
Patent 12574297
SYSTEMS AND METHODS BUDGET-CONSTRAINED SENSOR NETWORK DESIGN FOR DISTRIBUTION NETWORKS
2y 5m to grant; granted Mar 10, 2026
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 2-3
Grant Probability: 72%
With Interview: 90% (+17.4%)
Median Time to Grant: 2y 7m
PTA Risk: Moderate

Based on 315 resolved cases by this examiner. Grant probability is derived from the career allow rate.
