Prosecution Insights
Last updated: April 19, 2026
Application No. 18/179,629

NEURAL NETWORK MODEL PARTITIONING IN A WIRELESS COMMUNICATION SYSTEM

Non-Final OA: §101, §102, §103, §112
Filed: Mar 07, 2023
Examiner: TRAN, DANIEL DUC
Art Unit: 2147
Tech Center: 2100 — Computer Architecture & Software
Assignee: Qualcomm Incorporated
OA Round: 1 (Non-Final)

Grant Probability: 0% (At Risk)
Projected OA Rounds: 1-2
Projected Time to Grant: 3y 3m
Grant Probability With Interview: 0%

Examiner Intelligence

Career Allow Rate: 0% (grants only 0% of cases; 0 granted / 1 resolved; -55.0% vs TC avg)
Interview Lift: +0.0% (minimal lift, based on resolved cases with interview)
Typical Timeline: 3y 3m avg prosecution; 35 applications currently pending
Career History: 36 total applications across all art units

Statute-Specific Performance

§101: 33.3% (-6.7% vs TC avg)
§102: 10.0% (-30.0% vs TC avg)
§103: 39.0% (-1.0% vs TC avg)
§112: 16.9% (-23.1% vs TC avg)
Tech Center averages are estimates. Based on career data from 1 resolved case.

Office Action

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application is being examined under the pre-AIA first to invent provisions.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 09/03/2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Specification

The disclosure is objected to because of the following informalities: the paragraph numbering in the specification jumps around, as seen on pages 2, 5, 6, 20, 23, 25, 26, 31, 42, and 44. Appropriate correction is required.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitations use a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitations are:

“An apparatus for wireless communication at a first device, comprising: means for obtaining” in claim 29;
“means for receiving” in claim 29; and
“means for selecting” in claim 29.

Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitations to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function so as to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 112(a)

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claim 29 is rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention. There is no mention of structure for performing the means for obtaining, receiving, and selecting.

Claim Rejections - 35 USC § 112(b)

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

The claim limitations “means for obtaining,” “means for receiving,” and “means for selecting” in claim 29 invoke 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. However, the written description fails to disclose the corresponding structure, material, or acts for performing the entire claimed function and to clearly link the structure, material, or acts to the function. No association between the structure and the function can be found in the specification. Therefore, the claim is indefinite and is rejected under 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph.

Applicant may: (a) amend the claim so that the claim limitation will no longer be interpreted as a limitation under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph; (b) amend the written description of the specification such that it expressly recites what structure, material, or acts perform the entire claimed function, without introducing any new matter (35 U.S.C. 132(a)); or (c) amend the written description of the specification such that it clearly links the structure, material, or acts disclosed therein to the function recited in the claim, without introducing any new matter (35 U.S.C. 132(a)).

If applicant is of the opinion that the written description of the specification already implicitly or inherently discloses the corresponding structure, material, or acts and clearly links them to the function so that one of ordinary skill in the art would recognize what structure, material, or acts perform the claimed function, applicant should clarify the record by either: (a) amending the written description of the specification such that it expressly recites the corresponding structure, material, or acts for performing the claimed function and clearly links or associates the structure, material, or acts to the claimed function, without introducing any new matter (35 U.S.C. 132(a)); or (b) stating on the record what corresponding structure, material, or acts, which are implicitly or inherently set forth in the written description of the specification, perform the claimed function. For more information, see 37 CFR 1.75(d) and MPEP §§ 608.01(o) and 2181.

For examination purposes, the “means for” limitations are interpreted as being performed by a generic processor.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1 – 28 and 30 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

In reference to claim 1:

Step 1 - Is the claim to a process, machine, manufacture or composition of matter? Yes, the claim is directed to a machine.

Step 2A Prong 1 - Does the claim recite an abstract idea, law of nature, or natural phenomenon? “select, based at least in part on the first performance information and the second performance information, a candidate partition layer of the different candidate partition layers for partitioning the neural network model into the first sub-neural network model on the first device and the second sub-neural network model on the second device.” This is an abstract idea because it is directed to a mental process, an observation, evaluation, judgement, or opinion. The limitation as drafted, and under a broadest reasonable interpretation, can be performed in the human mind, or by a human using a pen and paper (MPEP 2106.04(a)(2)(III)(C)). For example, a person could select a candidate partition layer based at least in part on the first performance information and the second performance information.

Step 2A Prong 2 - Does the claim recite additional elements that integrate the judicial exception into a practical application? “A first device for wireless communication, comprising: a processor; memory coupled with the processor; and instructions stored in the memory and executable by the processor to cause the first device to:” is merely reciting the words “apply it” (or an equivalent) with the judicial exception, or merely including instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea (MPEP 2106.05(f)). “obtain first performance information of the first device associated with different candidate partition layers for partitioning a neural network model into a first sub-neural network model on the first device and a second sub-neural network model on a second device;” is insignificant extra-solution activity, mere data gathering (MPEP 2106.05(g)). “receive second performance information of the second device associated with the different candidate partition layers for partitioning the neural network model; and” is insignificant extra-solution activity, mere data gathering (MPEP 2106.05(g)). The claim does not include additional elements that are integrated into a practical application.

Step 2B - Does the claim recite additional elements that amount to significantly more than the judicial exception? “A first device for wireless communication, comprising: a processor; memory coupled with the processor; and instructions stored in the memory and executable by the processor to cause the first device to:” is merely reciting the words “apply it” (or an equivalent) with the judicial exception, or merely including instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea (MPEP 2106.05(f)). “obtain first performance information of the first device associated with different candidate partition layers for partitioning a neural network model into a first sub-neural network model on the first device and a second sub-neural network model on a second device;” is well-understood, routine, conventional activity (MPEP 2106.05(d)). “receive second performance information of the second device associated with the different candidate partition layers for partitioning the neural network model; and” is well-understood, routine, conventional activity (MPEP 2106.05(d)). The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.

In reference to claim 2:

Step 1 - Is the claim to a process, machine, manufacture or composition of matter? Yes, the claim is directed to a machine.

Step 2A Prong 1 - Does the claim recite an abstract idea, law of nature, or natural phenomenon? “The first device of claim 1, wherein the instructions are further executable by the processor to cause the first device to: select, after performing a first iteration of a training session using the candidate partition layer, a second candidate partition layer for partitioning the neural network model; and” is an abstract idea because it is directed to a mental process, an observation, evaluation, judgement, or opinion. The limitation as drafted, and under a broadest reasonable interpretation, can be performed in the human mind, or by a human using a pen and paper (MPEP 2106.04(a)(2)(III)(C)). For example, a person could select a second candidate partition layer.

Step 2A Prong 2 - Does the claim recite additional elements that integrate the judicial exception into a practical application? “perform a second iteration of the training session using the second candidate partition layer.” is merely reciting the words “apply it” (or an equivalent) with the judicial exception, or merely including instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea (MPEP 2106.05(f)). The claim does not include additional elements that are integrated into a practical application.

Step 2B - Does the claim recite additional elements that amount to significantly more than the judicial exception? “perform a second iteration of the training session using the second candidate partition layer.” is merely reciting the words “apply it” (or an equivalent) with the judicial exception, or merely including instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea (MPEP 2106.05(f)). The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.

In reference to claim 3:

Step 1 - Is the claim to a process, machine, manufacture or composition of matter? Yes, the claim is directed to a machine.

Step 2A Prong 1 - Does the claim recite an abstract idea, law of nature, or natural phenomenon? “wherein the second candidate partition layer is selected based at least in part on the updated first performance information and the updated second performance information.” is an abstract idea because it is directed to a mental process, an observation, evaluation, judgement, or opinion. The limitation as drafted, and under a broadest reasonable interpretation, can be performed in the human mind, or by a human using a pen and paper (MPEP 2106.04(a)(2)(III)(C)). For example, a person could select the second candidate partition layer based at least in part on the updated first and second performance information.

Step 2A Prong 2 - Does the claim recite additional elements that integrate the judicial exception into a practical application? “The first device of claim 2, wherein the instructions are further executable by the processor to cause the first device to: obtain updated first performance information of the first device based at least in part on performing a threshold quantity of iterations of the training session;” is insignificant extra-solution activity, mere data gathering (MPEP 2106.05(g)). “receive updated second performance information of the second device based at least in part on performing the threshold quantity of iterations,” is insignificant extra-solution activity, mere data gathering (MPEP 2106.05(g)). The claim does not include additional elements that are integrated into a practical application.

Step 2B - Does the claim recite additional elements that amount to significantly more than the judicial exception? “The first device of claim 2, wherein the instructions are further executable by the processor to cause the first device to: obtain updated first performance information of the first device based at least in part on performing a threshold quantity of iterations of the training session;” is well-understood, routine, conventional activity (MPEP 2106.05(d)). “receive updated second performance information of the second device based at least in part on performing the threshold quantity of iterations,” is well-understood, routine, conventional activity (MPEP 2106.05(d)). The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.

In reference to claim 4:

Step 1 - Is the claim to a process, machine, manufacture or composition of matter? Yes, the claim is directed to a machine.

Step 2A Prong 1 - Does the claim recite an abstract idea, law of nature, or natural phenomenon? “The first device of claim 2, wherein the second candidate partition layer is selected based at least in part on a gradient for updating a weight of the second candidate partition layer being less than a threshold gradient.” is an abstract idea because it is directed to a mental process, an observation, evaluation, judgement, or opinion. The limitation as drafted, and under a broadest reasonable interpretation, can be performed in the human mind, or by a human using a pen and paper (MPEP 2106.04(a)(2)(III)(C)). For example, a person could select the second candidate partition layer based at least in part on a gradient.

Step 2A Prong 2 - Does the claim recite additional elements that integrate the judicial exception into a practical application? No.

Step 2B - Does the claim recite additional elements that amount to significantly more than the judicial exception? No.

In reference to claim 5: Claim 5 is directed to a judicial exception through the claim(s) from which it depends and does not recite additional elements that integrate the judicial exception into a practical application or amount to significantly more than the judicial exception.

In reference to claim 6: Claim 6 is directed to a judicial exception through the claim(s) from which it depends and does not recite additional elements that integrate the judicial exception into a practical application or amount to significantly more than the judicial exception.
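For context on the limitation dissected above, the following is a minimal sketch, in Python, of the kind of selection the claims recite, assuming per-layer latency and energy estimates for both devices are already in hand. All names are hypothetical; this is neither the applicant's implementation nor code from any cited reference.

    # Illustrative only: selecting a candidate partition layer from two
    # devices' per-layer performance information (latency, energy).
    def select_partition_layer(first_perf, second_perf):
        """Return the candidate layer with the lowest combined cost.

        first_perf / second_perf: dict mapping candidate layer index to a
        (latency_seconds, energy_joules) tuple for each device.
        """
        def cost(layer):
            lat1, en1 = first_perf[layer]
            lat2, en2 = second_perf[layer]
            return (lat1 + lat2) + (en1 + en2)  # naive equal weighting

        # Only layers reported by both devices are candidates.
        candidates = first_perf.keys() & second_perf.keys()
        return min(candidates, key=cost)

    # Layer 2 wins: its combined latency/energy cost is lowest.
    first = {1: (0.30, 1.2), 2: (0.18, 0.9), 3: (0.25, 1.1)}
    second = {1: (0.10, 0.4), 2: (0.12, 0.5), 3: (0.20, 0.8)}
    assert select_partition_layer(first, second) == 2

The equal weighting of latency and energy is arbitrary; any scalarization of the two devices' performance information fits the claim language.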
In reference to claim 7: Claim 7 is directed to a judicial exception through the claim(s) from which it depends and does not recite additional elements that integrate the judicial exception into a practical application or amount to significantly more than the judicial exception.

In reference to claim 8: Claim 8 is directed to a judicial exception through the claim(s) from which it depends and does not recite additional elements that integrate the judicial exception into a practical application or amount to significantly more than the judicial exception.

In reference to claim 9: Claim 9 is directed to a judicial exception through the claim(s) from which it depends and does not recite additional elements that integrate the judicial exception into a practical application or amount to significantly more than the judicial exception.

In reference to claim 10: Claim 10 is directed to a judicial exception through the claim(s) from which it depends and does not recite additional elements that integrate the judicial exception into a practical application or amount to significantly more than the judicial exception.

In reference to claim 11: Claim 11 is directed to a judicial exception through the claim(s) from which it depends and does not recite additional elements that integrate the judicial exception into a practical application or amount to significantly more than the judicial exception.

In reference to claim 12: Claim 12 is directed to a judicial exception through the claim(s) from which it depends and does not recite additional elements that integrate the judicial exception into a practical application or amount to significantly more than the judicial exception.

In reference to claim 13: Claim 13 is directed to a judicial exception through the claim(s) from which it depends and does not recite additional elements that integrate the judicial exception into a practical application or amount to significantly more than the judicial exception.

In reference to claim 14: Claim 14 is directed to a judicial exception through the claim(s) from which it depends and does not recite additional elements that integrate the judicial exception into a practical application or amount to significantly more than the judicial exception.

In reference to claim 15:

Step 1 - Is the claim to a process, machine, manufacture or composition of matter? Yes, the claim is directed to a process.

Step 2A Prong 1 - Does the claim recite an abstract idea, law of nature, or natural phenomenon? “selecting, based at least in part on the first performance information and the second performance information, a candidate partition layer of the different candidate partition layers for partitioning the neural network model into the first sub-neural network model on the first device and the second sub-neural network model on the second device.” is an abstract idea because it is directed to a mental process, an observation, evaluation, judgement, or opinion. The limitation as drafted, and under a broadest reasonable interpretation, can be performed in the human mind, or by a human using a pen and paper (MPEP 2106.04(a)(2)(III)(C)). For example, a person could select a candidate partition layer based at least in part on the first performance information and the second performance information.

Step 2A Prong 2 - Does the claim recite additional elements that integrate the judicial exception into a practical application? “obtaining first performance information of the first device associated with different candidate partition layers for partitioning a neural network model into a first sub-neural network model on the first device and a second sub-neural network model on a second device;” is insignificant extra-solution activity, mere data gathering (MPEP 2106.05(g)). “receiving second performance information of the second device associated with the different candidate partition layers for partitioning the neural network model; and” is insignificant extra-solution activity, mere data gathering (MPEP 2106.05(g)). The claim does not include additional elements that are integrated into a practical application.

Step 2B - Does the claim recite additional elements that amount to significantly more than the judicial exception? “obtaining first performance information of the first device associated with different candidate partition layers for partitioning a neural network model into a first sub-neural network model on the first device and a second sub-neural network model on a second device;” is well-understood, routine, conventional activity (MPEP 2106.05(d)). “receiving second performance information of the second device associated with the different candidate partition layers for partitioning the neural network model; and” is well-understood, routine, conventional activity (MPEP 2106.05(d)). The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.

In reference to claim 16:

Step 1 - Is the claim to a process, machine, manufacture or composition of matter? Yes, the claim is directed to a process.

Step 2A Prong 1 - Does the claim recite an abstract idea, law of nature, or natural phenomenon? “The method of claim 15, further comprising: selecting, after performing a first iteration of a training session using the candidate partition layer, a second candidate partition layer for partitioning the neural network model; and” is an abstract idea because it is directed to a mental process, an observation, evaluation, judgement, or opinion. The limitation as drafted, and under a broadest reasonable interpretation, can be performed in the human mind, or by a human using a pen and paper (MPEP 2106.04(a)(2)(III)(C)). For example, a person could select a second candidate partition layer.

Step 2A Prong 2 - Does the claim recite additional elements that integrate the judicial exception into a practical application? “performing a second iteration of the training session using the second candidate partition layer.” is merely reciting the words “apply it” (or an equivalent) with the judicial exception, or merely including instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea (MPEP 2106.05(f)). The claim does not include additional elements that are integrated into a practical application.

Step 2B - Does the claim recite additional elements that amount to significantly more than the judicial exception? “performing a second iteration of the training session using the second candidate partition layer.” is merely reciting the words “apply it” (or an equivalent) with the judicial exception, or merely including instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea (MPEP 2106.05(f)). The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
In reference to claim 17:

Step 1 - Is the claim to a process, machine, manufacture or composition of matter? Yes, the claim is directed to a process.

Step 2A Prong 1 - Does the claim recite an abstract idea, law of nature, or natural phenomenon? “wherein the second candidate partition layer is selected based at least in part on the updated first performance information and the updated second performance information.” is an abstract idea because it is directed to a mental process, an observation, evaluation, judgement, or opinion. The limitation as drafted, and under a broadest reasonable interpretation, can be performed in the human mind, or by a human using a pen and paper (MPEP 2106.04(a)(2)(III)(C)). For example, a person could select the second candidate partition layer based at least in part on the updated first and second performance information.

Step 2A Prong 2 - Does the claim recite additional elements that integrate the judicial exception into a practical application? “The method of claim 16, further comprising: obtaining updated first performance information of the first device based at least in part on performing a threshold quantity of iterations of the training session;” is insignificant extra-solution activity, mere data gathering (MPEP 2106.05(g)). “receiving updated second performance information of the second device based at least in part on performing the threshold quantity of iterations,” is insignificant extra-solution activity, mere data gathering (MPEP 2106.05(g)). The claim does not include additional elements that are integrated into a practical application.

Step 2B - Does the claim recite additional elements that amount to significantly more than the judicial exception? “The method of claim 16, further comprising: obtaining updated first performance information of the first device based at least in part on performing a threshold quantity of iterations of the training session;” is well-understood, routine, conventional activity (MPEP 2106.05(d)). “receiving updated second performance information of the second device based at least in part on performing the threshold quantity of iterations,” is well-understood, routine, conventional activity (MPEP 2106.05(d)). The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.

In reference to claim 18:

Step 1 - Is the claim to a process, machine, manufacture or composition of matter? Yes, the claim is directed to a process.

Step 2A Prong 1 - Does the claim recite an abstract idea, law of nature, or natural phenomenon? “The method of claim 16, wherein the second candidate partition layer is selected based at least in part on a gradient for updating a weight of the second candidate partition layer being less than a threshold gradient.” is an abstract idea because it is directed to a mental process, an observation, evaluation, judgement, or opinion. The limitation as drafted, and under a broadest reasonable interpretation, can be performed in the human mind, or by a human using a pen and paper (MPEP 2106.04(a)(2)(III)(C)). For example, a person could select the second candidate partition layer based at least in part on a gradient.

Step 2A Prong 2 - Does the claim recite additional elements that integrate the judicial exception into a practical application? No.

Step 2B - Does the claim recite additional elements that amount to significantly more than the judicial exception? No.

In reference to claim 19: Claim 19 is directed to a judicial exception through the claim(s) from which it depends and does not recite additional elements that integrate the judicial exception into a practical application or amount to significantly more than the judicial exception.

In reference to claim 20: Claim 20 is directed to a judicial exception through the claim(s) from which it depends and does not recite additional elements that integrate the judicial exception into a practical application or amount to significantly more than the judicial exception.

In reference to claim 21: Claim 21 is directed to a judicial exception through the claim(s) from which it depends and does not recite additional elements that integrate the judicial exception into a practical application or amount to significantly more than the judicial exception.

In reference to claim 22: Claim 22 is directed to a judicial exception through the claim(s) from which it depends and does not recite additional elements that integrate the judicial exception into a practical application or amount to significantly more than the judicial exception.

In reference to claim 23: Claim 23 is directed to a judicial exception through the claim(s) from which it depends and does not recite additional elements that integrate the judicial exception into a practical application or amount to significantly more than the judicial exception.

In reference to claim 24: Claim 24 is directed to a judicial exception through the claim(s) from which it depends and does not recite additional elements that integrate the judicial exception into a practical application or amount to significantly more than the judicial exception.

In reference to claim 25: Claim 25 is directed to a judicial exception through the claim(s) from which it depends and does not recite additional elements that integrate the judicial exception into a practical application or amount to significantly more than the judicial exception.

In reference to claim 26: Claim 26 is directed to a judicial exception through the claim(s) from which it depends and does not recite additional elements that integrate the judicial exception into a practical application or amount to significantly more than the judicial exception.

In reference to claim 27: Claim 27 is directed to a judicial exception through the claim(s) from which it depends and does not recite additional elements that integrate the judicial exception into a practical application or amount to significantly more than the judicial exception.

In reference to claim 28: Claim 28 is directed to a judicial exception through the claim(s) from which it depends and does not recite additional elements that integrate the judicial exception into a practical application or amount to significantly more than the judicial exception.

In reference to claim 30:

Step 1 - Is the claim to a process, machine, manufacture or composition of matter? Yes, the claim is directed to a manufacture.

Step 2A Prong 1 - Does the claim recite an abstract idea, law of nature, or natural phenomenon? “select, based at least in part on the first performance information and the second performance information, a candidate partition layer of the different candidate partition layers for partitioning the neural network model into the first sub-neural network model on the first device and the second sub-neural network model on the second device.” is an abstract idea because it is directed to a mental process, an observation, evaluation, judgement, or opinion. The limitation as drafted, and under a broadest reasonable interpretation, can be performed in the human mind, or by a human using a pen and paper (MPEP 2106.04(a)(2)(III)(C)). For example, a person could select a candidate partition layer based at least in part on the first performance information and the second performance information.

Step 2A Prong 2 - Does the claim recite additional elements that integrate the judicial exception into a practical application? “A non-transitory computer-readable medium storing code for wireless communication at a first device, the code comprising instructions executable by a processor to:” is merely reciting the words “apply it” (or an equivalent) with the judicial exception, or merely including instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea (MPEP 2106.05(f)). “obtain first performance information of the first device associated with different candidate partition layers for partitioning a neural network model into a first sub-neural network model on the first device and a second sub-neural network model on a second device;” is insignificant extra-solution activity, mere data gathering (MPEP 2106.05(g)). “receive second performance information of the second device associated with the different candidate partition layers for partitioning the neural network model; and” is insignificant extra-solution activity, mere data gathering (MPEP 2106.05(g)). The claim does not include additional elements that are integrated into a practical application.

Step 2B - Does the claim recite additional elements that amount to significantly more than the judicial exception? “A non-transitory computer-readable medium storing code for wireless communication at a first device, the code comprising instructions executable by a processor to:” is merely reciting the words “apply it” (or an equivalent) with the judicial exception, or merely including instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea (MPEP 2106.05(f)). “obtain first performance information of the first device associated with different candidate partition layers for partitioning a neural network model into a first sub-neural network model on the first device and a second sub-neural network model on a second device;” is well-understood, routine, conventional activity (MPEP 2106.05(d)). “receive second performance information of the second device associated with the different candidate partition layers for partitioning the neural network model; and” is well-understood, routine, conventional activity (MPEP 2106.05(d)). The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C.
102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-5, 10-19, and 24-30 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Eric Samikwa et al., “ARES: Adaptive Resource-Aware Split Learning for Internet of Things,” available online Sept 24, 2022 (hereinafter “Samikwa”).

Regarding claim 1, Samikwa anticipates: A first device for wireless communication, comprising: a processor; memory coupled with the processor; and instructions stored in the memory and executable by the processor to cause the first device to: (Samikwa Fig 5 and Table 3 show a processor (CPU) and memory coupled with the processor (RAM).)

obtain first performance information of the first device associated with different candidate partition layers for partitioning a neural network model into a first sub-neural network model on the first device and a second sub-neural network model on a second device; (Samikwa Page 4 Paragraph 5: “In order to distribute the learning task between IoT devices and the edge server, the model is split in two sub-models for each device participating in the training. During training round k on an IoT device φ, the first … layers … are executed on the IoT device φ, while the last … layers … are executed on the edge server.” Samikwa Page 7 Paragraph 1: “Both edge server and IoT devices estimate forward and back propagation time of each layer of the model through benchmarking: … For each layer of the model, the mean time needed to perform forward and backward propagation is estimated by averaging the time measured for each of the benchmarking propagations and fed to the Optimization Module.” Examiner notes that first performance information (time measured for each of the benchmarking propagations) is obtained of the first device (IoT device) associated with different candidate partition layers (first layers) for partitioning a neural network model (model) into a first sub-neural network model on the first device and a second sub-neural network model on a second device (edge server).)

receive second performance information of the second device associated with the different candidate partition layers for partitioning the neural network model; and (Examiner refers to the previous mapping to show that second performance information (time measured for each of the benchmarking propagations) of the second device (edge server) is received, associated with the different candidate partition layers (last N layers) for partitioning the neural network model.)

select, based at least in part on the first performance information and the second performance information, a candidate partition layer of the different candidate partition layers for partitioning the neural network model into the first sub-neural network model on the first device and the second sub-neural network model on the second device. (Samikwa Page 6 Paragraph 2: “We can now define the total energy consumption E_s^(k)(φ) ∈ ℝ as the energy consumed by the IoT device φ during the whole kth training round of a model split at layer L_s … we assume that each of the Φ IoT devices has its own model split point s_φ^(k) ∈ {1,…,N}, which can be different at each training round k according to the variable system context (e.g., available wireless throughput, computational load on IoT devices and edge server, etc.). We define the system split vector s_k as the vector of device-specific split points … for each IoT device φ during training round k.” Examiner notes that a candidate layer (layer to be split) is selected based at least in part on the first and second performance information (available wireless throughput, computational load on IoT devices and edge server, etc.) for partitioning the neural network model into the first sub-neural network model on the first device (IoT device) and the second sub-neural network model on the second device (edge server).)

Regarding claim 2, Samikwa anticipates: The first device of claim 1, wherein the instructions are further executable by the processor to cause the first device to: select, after performing a first iteration of a training session using the candidate partition layer, a second candidate partition layer for partitioning the neural network model; and (Samikwa Fig 4 and Page 6 Paragraph 6: “During each training round k, the Estimation Module and Network Module compute the information about the system context and then the Optimization Module computes an optimal split vector s_k that minimizes training time and energy consumption.” Examiner notes that after performing a first iteration of a training session (round k-1) using the candidate partition layer (s_(k-1)), a second candidate partition layer (layer s_k to be split) is selected/computed for partitioning the neural network model.)

perform a second iteration of the training session using the second candidate partition layer. (Examiner refers to the previous mapping to show that a second iteration of the training session (round k) is performed using the second candidate partition layer (s_k).)
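For concreteness, here is a schematic sketch, with illustrative names rather than code from the ARES paper, of the benchmarking-driven selection the examiner maps to the obtain/receive/select limitations: per-layer forward-plus-backward times reported by each side, a communication term from the boundary activation volume and the currently available throughput, and an argmin over candidate split points.

    # Schematic sketch of ARES-style split-point selection (cf. Samikwa pp. 6-7).
    # Names and the exact objective are illustrative assumptions.
    def estimate_round_time(split, device_time, server_time, act_volume, throughput):
        """Estimated duration of one training round with layers [0, split)
        on the IoT device and layers [split, N) on the edge server.

        device_time[i] / server_time[i]: benchmarked fwd+bwd time of layer i (s).
        act_volume[split]: bytes crossing the split boundary per round.
        throughput: currently available wireless throughput (bytes/s).
        """
        compute_device = sum(device_time[:split])
        compute_server = sum(server_time[split:])
        comm = 2 * act_volume[split] / throughput  # activations up, gradients down
        return compute_device + compute_server + comm

    def select_split(device_time, server_time, act_volume, throughput):
        n = len(device_time)
        return min(range(1, n), key=lambda s: estimate_round_time(
            s, device_time, server_time, act_volume, throughput))

    # Example: the slow device favors an early split, but the bulky boundary
    # after layer 1 pushes the optimum to layer 2.
    dev = [0.05, 0.04, 0.08, 0.10]    # per-layer fwd+bwd seconds on the device
    srv = [0.01, 0.01, 0.01, 0.01]    # same layers benchmarked on the server
    vol = [0, 4e5, 1e5, 2e5]          # bytes at each candidate boundary
    assert select_split(dev, srv, vol, throughput=1e6) == 2

ARES additionally folds per-layer energy estimates into its objective; the argmin structure is the same.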
Regarding claim 3, Samikwa anticipates: The first device of claim 2, wherein the instructions are further executable by the processor to cause the first device to: obtain updated first performance information of the first device based at least in part on performing a threshold quantity of iterations of the training session; and (Samikwa Page 7 Paragraph 1: “the Estimation Module performs a benchmark for IoT devices and edge server, respectively, every M_D and M_C training rounds.” Examiner notes that updated first performance information (benchmark information) of the first device (IoT device) is obtained based at least in part on performing a threshold quantity of iterations of the training session (training rounds k).)

receive updated second performance information of the second device based at least in part on performing the threshold quantity of iterations, (Examiner refers to the previous mapping to show that updated second performance information (benchmark information) of the second device (edge server) is received based at least in part on performing the threshold quantity of iterations (training rounds k).)

wherein the second candidate partition layer is selected based at least in part on the updated first performance information and the updated second performance information. (Samikwa Fig 4 and Page 6 Paragraph 6: “During each training round k, the Estimation Module and Network Module compute the information about the system context and then the Optimization Module computes an optimal split vector s_k that minimizes training time and energy consumption.” Examiner notes that the second candidate partition layer (split point/layer that is split, defined in split vector s_k) is selected based at least in part on the updated first and second performance information (benchmark information from the Estimation Module).)

Regarding claim 4, Samikwa anticipates: The first device of claim 2, wherein the second candidate partition layer is selected based at least in part on a gradient for updating a weight of the second candidate partition layer being less than a threshold gradient. (Samikwa Page 4 Paragraph 6: “During back propagation, the edge server transmits a total volume V′_s … of data generated by layer L_s … (intermediate gradients) to the IoT device φ for the mini batch of size ξ, so that it can continue updating the weights from layer L_(s_φ^(k)).” Samikwa Page 11 Paragraph 4: “Techniques such as parameter quantization and adaptive gradient threshold mechanisms may reduce the communication cost.” Examiner notes that the second candidate partition layer is selected based at least in part on a gradient (intermediate gradients) for updating a weight (weights) from the second candidate partition layer (layer L_s) being less than a threshold gradient (gradient threshold mechanisms).)

Regarding claim 5, Samikwa anticipates: The first device of claim 1, wherein the first performance information and the second performance information each comprise latency information and power consumption information. (Samikwa Page 4 Paragraph 1: “ARES takes into account application constraints to enable efficient mitigation of training optimization tradeoffs (i.e. training time, energy consumption).” Examiner notes that the first and second performance information each comprise latency information (training time) and power consumption information (energy consumption).)

Regarding claim 10, Samikwa anticipates: The first device of claim 1, wherein the instructions are further executable by the processor to cause the first device to: perform part of a training session iteration using the first sub-neural network model; and (Samikwa Fig 4 and Page 7 Paragraph 2: “For each minibatch fed to model split at layer L_s during training round k, a volume V_s of intermediate activations is transmitted by the IoT device to the edge server and a volume V′_s of intermediate activations is transmitted by the edge server to the IoT device.” Examiner notes that a part of a training session iteration (round k) is performed using the first sub-neural network model (model present in the IoT device).)

transmit an output of the candidate partition layer to the second device based at least in part on performing part of the training session iteration. (Examiner refers to the previous mapping to show that an output (volume V_s) of the candidate partition layer (layer L_s) is transmitted to the second device (edge server) based at least in part on performing part of the training session iteration (round k).)

Regarding claim 11, Samikwa anticipates: The first device of claim 10, wherein the instructions are further executable by the processor to cause the first device to: receive, from the second device based at least in part on transmitting the output, a second output of a second layer of the neural network model that is adjacent to the candidate partition layer; and (Samikwa Fig 4 and Page 7 Paragraph 2: “For each minibatch fed to model split at layer L_s during training round k, a volume V_s of intermediate activations is transmitted by the IoT device to the edge server and a volume V′_s of intermediate activations is transmitted by the edge server to the IoT device.” Examiner notes that a second output (V′_s) of a second layer of the neural network model (model split at layer L present in the edge server) that is adjacent to the candidate partition layer (where the layer is split means the layers are adjacent/order of processing) is received from the second device (edge server) based at least in part on transmitting the output (V_s).)

update one or more weights of the candidate partition layer based at least in part on the second output. (Samikwa Page 4 Paragraph 6: “During backpropagation, the edge server transmits a total volume V′_(s_φ^(k)) of data generated by layer L_(s_φ^(k)+1) (intermediate gradients) to the IoT device φ for the mini batch of size ξ, so that it can continue updating the weights from layer L_(s_φ^(k)).” Examiner notes that one or more weights (weights) of the candidate partition layer (layer L) are updated based at least in part on the second output (V′_s).)

Regarding claim 12, Samikwa anticipates: The first device of claim 1, wherein the instructions are further executable by the processor to cause the first device to: receive an output of the candidate partition layer from the second device; and (Samikwa Fig 4 and Page 7 Paragraph 2: “For each minibatch fed to model split at layer L_s during training round k, a volume V_s of intermediate activations is transmitted by the IoT device to the edge server and a volume V′_s of intermediate activations is transmitted by the edge server to the IoT device.” Examiner notes that the IoT device receives an output (V′_s) of the candidate partition layer (layer L_s) from the second device (edge server).)

perform part of a training session iteration using the first sub-neural network model based at least in part on the output of the candidate partition layer. (Examiner refers to the previous mapping to show that part of a training session iteration (round k) is performed using the first sub-neural network model (model present in the IoT device) based at least in part on the output of the candidate partition layer (V′_s is sent from the edge server to the IoT device).)

Regarding claim 13, Samikwa anticipates: The first device of claim 12, wherein the instructions are further executable by the processor to cause the first device to: transmit, to the second device, a second output, of a second layer of the neural network model that is adjacent to the candidate partition layer, for updating one or more weights of the candidate partition layer. (Samikwa Page 4 Paragraph 6: “During forward propagation, an IoT device φ transmits a total volume V_(s_φ^(k)) of data generated by layer L_(s_φ^(k)) (intermediate activations) to the edge server for the mini batch of size ξ, so that it can continue the forward propagation from layer L_(s_φ^(k)+1) ... so that it can continue updating the weights from layer L_(s_φ^(k)).” Examiner notes that a second output (V_s) of a second layer of the neural network model (layer L_(s+1) of the neural network model present on the edge server) that is adjacent (where the layer is split means the layers are adjacent/order of processing) to the candidate partition layer (layer L_s) is transmitted to the second device (edge server) for updating one or more weights of the candidate partition layer.)

Regarding claim 14, Samikwa anticipates: The first device of claim 1, wherein the instructions are further executable by the processor to cause the first device to: perform part of a task using the first sub-neural network model, wherein the first sub-neural network model includes the candidate partition layer; (Samikwa Fig 4 and Page 7 Paragraph 2: “For each minibatch fed to model split at layer L_s during training round k, a volume V_s of intermediate activations is transmitted by the IoT device to the edge server and a volume V′_s of intermediate activations is transmitted by the edge server to the IoT device.” Examiner notes that part of a task (training) is performed using the first sub-neural network model (neural network present on the IoT device), wherein the first sub-neural network model includes the candidate partition layer (layer L_s).)

and transmit an output of the candidate partition layer to the second device for use by the second sub-neural network model. (Examiner refers to the previous mapping to show that an output of the candidate partition layer (volume V_s) is transmitted to the second device (edge server) for use by the second sub-neural network model (neural network present on the edge server).)
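The forward/backward exchange cited for claims 10-14 can be sketched in a few lines. This single-process simulation (assuming PyTorch; the model and names are illustrative, not the applicant's or the reference's code) shows the first device computing up to the partition layer, handing off the boundary activations, and then applying the returned boundary gradient to update its own weights.

    # Minimal single-process sketch of the split-training exchange.
    import torch
    import torch.nn as nn

    layers = nn.ModuleList([nn.Linear(8, 8), nn.ReLU(),
                            nn.Linear(8, 4), nn.ReLU(), nn.Linear(4, 2)])
    split = 2  # candidate partition layer: first `split` modules stay local

    device_part = nn.Sequential(*layers[:split])   # first sub-neural network model
    server_part = nn.Sequential(*layers[split:])   # second sub-neural network model
    opt_dev = torch.optim.SGD(device_part.parameters(), lr=0.1)
    opt_srv = torch.optim.SGD(server_part.parameters(), lr=0.1)

    x, target = torch.randn(16, 8), torch.randint(0, 2, (16,))

    # First device: partial forward, then "transmit" the boundary activations.
    boundary = device_part(x)
    sent = boundary.detach().requires_grad_(True)  # stands in for the radio link

    # Second device: finish the forward pass, backprop down to the boundary.
    loss = nn.functional.cross_entropy(server_part(sent), target)
    opt_srv.zero_grad(); loss.backward(); opt_srv.step()

    # First device: receive the boundary gradient, update its own weights.
    opt_dev.zero_grad(); boundary.backward(sent.grad); opt_dev.step()

In a real deployment the detach/re-attach step is replaced by an actual transmission of V_s uplink and of the boundary gradient downlink, which is exactly the V_s / V′_s traffic the reference quantifies.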
Regarding claim 15, Samikwa anticipates: A method for wireless communication at a first device, comprising: obtaining first performance information of the first device associated with different candidate partition layers for partitioning a neural network model into a first sub-neural network model on the first device and a second sub-neural network model on a second device; (Samikwa Page 4 Paragraph 5: “In order to distribute the learning task between IoT devices and the edge server, the model is split in two sub-models for each device participating in the training. During training round k on an IoT device φ, the first … layers … are executed on the IoT device φ, while the last … layers … are executed on the edge server.” Samikwa Page 7 Paragraph 1: “Both edge server and IoT devices estimate forward and back propagation time of each layer of the model through benchmarking: … For each layer of the model, the mean time needed to perform forward and backward propagation is estimated by averaging the time measured for each of the benchmarking propagations and fed to the Optimization Module.” Examiner notes that first performance information (time measured for each of the benchmarking propagations) is obtained of the first device (IoT device) associated with different candidate partition layers (first layers) for partitioning a neural network model (model) into a first sub-neural network model on the first device and a second sub-neural network model on a second device (edge server).)

receiving second performance information of the second device associated with the different candidate partition layers for partitioning the neural network model; and (Examiner refers to the previous mapping to show that second performance information (time measured for each of the benchmarking propagations) of the second device (edge server) is received, associated with the different candidate partition layers (last N layers) for partitioning the neural network model.)

selecting, based at least in part on the first performance information and the second performance information, a candidate partition layer of the different candidate partition layers for partitioning the neural network model into the first sub-neural network model on the first device and the second sub-neural network model on the second device. (Samikwa Page 6 Paragraph 2: “We can now define the total energy consumption E_s^(k)(φ) ∈ ℝ as the energy consumed by the IoT device φ during the whole kth training round of a model split at layer L_s … we assume that each of the Φ IoT devices has its own model split point s_φ^(k) ∈ {1,…,N}, which can be different at each training round k according to the variable system context (e.g., available wireless throughput, computational load on IoT devices and edge server, etc.). We define the system split vector s_k as the vector of device-specific split points … for each IoT device φ during training round k.” Examiner notes that a candidate layer (layer to be split) is selected based at least in part on the first and second performance information (available wireless throughput, computational load on IoT devices and edge server, etc.) for partitioning the neural network model into the first sub-neural network model on the first device (IoT device) and the second sub-neural network model on the second device (edge server).)

Regarding claim 16, Samikwa anticipates: The method of claim 15, further comprising: selecting, after performing a first iteration of a training session using the candidate partition layer, a second candidate partition layer for partitioning the neural network model; and (Samikwa Fig 4 and Page 6 Paragraph 6: “During each training round k, the Estimation Module and Network Module compute the information about the system context and then the Optimization Module computes an optimal split vector s_k that minimizes training time and energy consumption.” Examiner notes that after performing a first iteration of a training session (round k-1) using the candidate partition layer (s_(k-1)), a second candidate partition layer (layer s_k to be split) is selected/computed for partitioning the neural network model.)

performing a second iteration of the training session using the second candidate partition layer. (Examiner refers to the previous mapping to show that a second iteration of the training session (round k) is performed using the second candidate partition layer (s_k).)
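Claims 16 and 17 add a temporal dimension: the split is re-selected each round, while the benchmarks feeding that choice are refreshed only after a threshold quantity of iterations. A compact sketch of that loop follows (hypothetical names; the compute-only objective here omits the communication term of the earlier sketch).

    # Illustrative round-by-round re-selection with periodic re-benchmarking,
    # mirroring the every-M_D-rounds behavior Samikwa describes at page 7.
    M_D = 5  # device re-benchmarking period, in training rounds (assumed value)

    def pick_split(dev_t, srv_t):
        # Compute-only argmin over candidate splits.
        return min(range(1, len(dev_t)),
                   key=lambda s: sum(dev_t[:s]) + sum(srv_t[s:]))

    def train(rounds, benchmark_device, benchmark_server, run_round):
        dev_t, srv_t = benchmark_device(), benchmark_server()
        for k in range(rounds):
            if k > 0 and k % M_D == 0:       # threshold quantity of iterations
                dev_t = benchmark_device()   # updated first performance information
                srv_t = benchmark_server()   # updated second performance information
            s_k = pick_split(dev_t, srv_t)   # second, third, ... candidate layers
            run_round(s_k)                   # iteration k trains with split s_k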
Regarding claim 17, Samikwa anticipates The method of claim 16, further comprising: obtaining updated first performance information of the first device based at least in part on performing a threshold quantity of iterations of the training session; and (Samikwa Page 7 Paragraph 1; “the Estimation Module performs a benchmark for IoT devices and edge server, respectively, every M_D and M_C training rounds” Examiner notes that updated first performance information (benchmark information) of the first device (IoT device) is obtained based at least in part on performing a threshold quantity of iterations of the training session (training rounds k))

receiving updated second performance information of the second device based at least in part on performing the threshold quantity of iterations, (Examiner refers to the previous mapping to show that updated second performance information (benchmark information) of the second device (edge server) is received based at least in part on performing the threshold quantity of iterations (training rounds k))

wherein the second candidate partition layer is selected based at least in part on the updated first performance information and the updated second performance information. (Samikwa Fig 4 and Page 6 Paragraph 6; “During each training round k, the Estimation Module and Network Module compute the information about the system context and then the Optimization Module computes an optimal split vector s_k that minimizes training time and energy consumption.” Examiner notes that the second candidate partition layer (split point/layer to be split, defined in the split vector s_k) is selected based at least in part on the updated first and second performance information (benchmark information from the Estimation Module))

Regarding claim 18, Samikwa anticipates The method of claim 16, wherein the second candidate partition layer is selected based at least in part on a gradient for updating a weight of the second candidate partition layer being less than a threshold gradient. (Samikwa Page 4 Paragraph 6; “During back propagation, the edge server transmits a total volume V′_s … of data generated by layer L_s … (intermediate gradients) to the IoT device φ for the mini-batch of size ξ, so that it can continue updating the weights from layer L_(s_φ^(k)).” Samikwa Page 11 Paragraph 4; “Techniques such as parameter quantization and adaptive gradient threshold mechanisms may reduce the communication cost” Examiner notes that the second candidate partition layer is selected based at least in part on a gradient (intermediate gradients) for updating a weight (weights) from the second candidate partition layer (layer L_s) being less than a threshold gradient (gradient threshold mechanisms))

Regarding claim 19, Samikwa anticipates The method of claim 15, wherein the first performance information and the second performance information each comprise latency information and power consumption information. (Samikwa Page 4 Paragraph 1; “ARES takes into account application constraints to enable efficient mitigation of training optimization tradeoffs (i.e. training time, energy consumption).” Examiner notes that the first and second performance information each comprise latency information (training time) and power consumption information (energy consumption))
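To make the claim 17 mapping concrete, the sketch below re-runs the benchmark and re-selects the split every fixed number of rounds (Samikwa's M_D/M_C periods). It is a hedged sketch: benchmark, select, and train_round are hypothetical callables standing in for the Estimation and Optimization Modules, and the loop structure is assumed rather than taken from the reference.

```python
# Hedged sketch of claim 17's "threshold quantity of iterations": re-benchmark
# and re-select the split point every `period` training rounds. The callables
# are hypothetical stand-ins for Samikwa's Estimation/Optimization Modules.
def training_loop(num_rounds, benchmark, select, train_round, period=5):
    split = 1
    for k in range(num_rounds):
        if k % period == 0:                              # every M_D rounds (illustrative)
            device_times, server_times = benchmark()     # updated performance information
            split = select(device_times, server_times)   # second candidate partition layer
        train_round(split)                               # one iteration at the current split
```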
Regarding claim 24, Samikwa anticipates The method of claim 15, further comprising: performing part of a training session iteration using the first sub-neural network model; and (Samikwa Fig 4 and Page 7 Paragraph 2; “For each mini-batch fed to the model split at layer L_s during training round k, a volume V_s of intermediate activations is transmitted by the IoT device to the edge server and a volume V′_s of intermediate activations is transmitted by the edge server to the IoT device.” Examiner notes that part of a training session iteration (round k) is performed using the first sub-neural network model (model present in the IoT device))

transmitting an output of the candidate partition layer to the second device based at least in part on performing part of the training session iteration. (Examiner refers to the previous mapping to show that an output (volume V_s) of the candidate partition layer (layer L_s) is transmitted to the second device (edge server) based at least in part on performing part of the training session iteration (round k))

Regarding claim 25, Samikwa anticipates The method of claim 24, further comprising: receiving, from the second device based at least in part on transmitting the output, a second output of a second layer of the neural network model that is adjacent to the candidate partition layer; and (Samikwa Fig 4 and Page 7 Paragraph 2; “For each mini-batch fed to the model split at layer L_s during training round k, a volume V_s of intermediate activations is transmitted by the IoT device to the edge server and a volume V′_s of intermediate activations is transmitted by the edge server to the IoT device.” Examiner notes that a second output (V′_s) of a second layer of the neural network model (model split at layer L_s present in the edge server) that is adjacent to the candidate partition layer (where the layer is split, the layers are adjacent in the order of processing) is received from the second device (edge server) based at least in part on transmitting the output (V_s))

updating one or more weights of the candidate partition layer based at least in part on the second output. (Samikwa Page 4 Paragraph 6; “During backpropagation, the edge server transmits a total volume V′_(s_φ^(k)) of data generated by layer L_(s_φ^(k)+1) (intermediate gradients) to the IoT device φ for the mini-batch of size ξ, so that it can continue updating the weights from layer L_(s_φ^(k)).” Examiner notes that one or more weights (weights) of the candidate partition layer (layer L) are updated based at least in part on the second output (V′_s))

Regarding claim 26, Samikwa anticipates The method of claim 15, further comprising: receiving an output of the candidate partition layer from the second device; and (Samikwa Fig 4 and Page 7 Paragraph 2; “For each mini-batch fed to the model split at layer L_s during training round k, a volume V_s of intermediate activations is transmitted by the IoT device to the edge server and a volume V′_s of intermediate activations is transmitted by the edge server to the IoT device.” Examiner notes that the IoT device receives an output (V′_s) of the candidate partition layer (layer L_s) from the second device (edge server))

performing part of a training session iteration using the first sub-neural network model based at least in part on the output of the candidate partition layer. (Examiner refers to the previous mapping to show that part of a training session iteration (round k) is performed using the first sub-neural network model (model present in the IoT device) based at least in part on the output of the candidate partition layer (V′_s is sent from the edge server to the IoT device))
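The activation/gradient exchange that claims 24 through 26 read on is the standard split-learning step. Below is a minimal single-process sketch in PyTorch in which the wireless transfers are simulated by tensor hand-offs; the layer sizes, split index, loss, and optimizer choices are illustrative assumptions, not anything specified by Samikwa.

```python
# Hedged sketch of one split-training iteration (claims 24-26): the device runs
# layers up to the split, "transmits" activations V_s, receives the gradient
# V'_s at the cut, and updates its own weights. Transfers are simulated in-process.
import torch
import torch.nn as nn

layers = [nn.Linear(16, 16), nn.ReLU(), nn.Linear(16, 16), nn.ReLU(), nn.Linear(16, 4)]
s = 2                                        # candidate partition layer (illustrative)
device_model = nn.Sequential(*layers[:s])    # first sub-neural network model (IoT device)
server_model = nn.Sequential(*layers[s:])    # second sub-neural network model (edge server)
opt_d = torch.optim.SGD(device_model.parameters(), lr=0.01)
opt_s = torch.optim.SGD(server_model.parameters(), lr=0.01)

x, y = torch.randn(8, 16), torch.randint(0, 4, (8,))

acts = device_model(x)                       # forward through layers 1..s on the device
v_s = acts.detach().requires_grad_(True)     # "transmit" V_s to the edge server
loss = nn.functional.cross_entropy(server_model(v_s), y)
loss.backward()                              # server-side backpropagation
acts.backward(v_s.grad)                      # "transmit" V'_s back; device backpropagation
opt_d.step(); opt_s.step()                   # update weights on both sides
opt_d.zero_grad(); opt_s.zero_grad()
```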
Regarding claim 27, Samikwa anticipates The method of claim 26, further comprising: transmitting, to the second device, a second output, of a second layer of the neural network model that is adjacent to the candidate partition layer, for updating one or more weights of the candidate partition layer. (Samikwa Page 4 Paragraph 6; “During forward propagation, an IoT device φ transmits a total volume V_(s_φ^(k)) of data generated by layer L_(s_φ^(k)) (intermediate activations) to the edge server for the mini-batch of size ξ, so that it can continue the forward propagation from layer L_(s_φ^(k)+1) … so that it can continue updating the weights from layer L_(s_φ^(k)).” Examiner notes that a second output (V_s) of a second layer of the neural network model (layer L_(s+1) of the neural network model present on the edge server) that is adjacent (where the layer is split, the layers are adjacent in the order of processing) to the candidate partition layer (layer L_s) is transmitted to the second device (edge server) for updating one or more weights of the candidate partition layer)

Regarding claim 28, Samikwa anticipates The method of claim 15, further comprising: performing part of a task using the first sub-neural network model, wherein the first sub-neural network model includes the candidate partition layer; (Samikwa Fig 4 and Page 7 Paragraph 2; “For each mini-batch fed to the model split at layer L_s during training round k, a volume V_s of intermediate activations is transmitted by the IoT device to the edge server and a volume V′_s of intermediate activations is transmitted by the edge server to the IoT device.” Examiner notes that part of a task (training) is performed using the first sub-neural network model (neural network present on the IoT device), wherein the first sub-neural network model includes the candidate partition layer (layer L_s))

and transmitting an output of the candidate partition layer to the second device for use by the second sub-neural network model. (Examiner refers to the previous mapping to show that an output of the candidate partition layer (volume V_s) is transmitted to the second device (edge server) for use by the second sub-neural network model (neural network present on the edge server))
Regarding claim 29, Samikwa anticipates An apparatus for wireless communication at a first device, comprising: means for obtaining first performance information of the first device associated with different candidate partition layers for partitioning a neural network model into a first sub-neural network model on the first device and a second sub-neural network model on a second device; (Samikwa Page 4 Paragraph 5; “In order to distribute the learning task between IoT devices and the edge server, the model is split in two sub-models for each device participating in the training. During training round k on an IoT device φ, the first … layers … are executed on the IoT device φ, while the last … layers … are executed on the edge server.” Samikwa Page 7 Paragraph 1; “Both edge server and IoT devices estimate forward and back propagation time of each layer of the model through benchmarking: … For each layer of the model, the mean time needed to perform forward and backward propagation is estimated by averaging the time measured for each of the benchmarking propagations and fed to the Optimization Module.” Examiner notes that first performance information (the time measured for each of the benchmarking propagations) is obtained of the first device (IoT device) associated with different candidate partition layers (first layers) for partitioning a neural network model (model) into a first sub-neural network model on the first device and a second sub-neural network model on a second device (edge server))

means for receiving second performance information of the second device associated with the different candidate partition layers for partitioning the neural network model; and (Examiner refers to the previous mapping to show that second performance information (the time measured for each of the benchmarking propagations) of the second device (edge server) is received, associated with the different candidate partition layers (last N layers) for partitioning the neural network model)

means for selecting, based at least in part on the first performance information and the second performance information, a candidate partition layer of the different candidate partition layers for partitioning the neural network model into the first sub-neural network model on the first device and the second sub-neural network model on the second device. (Samikwa Page 6 Paragraph 2; “We can now define the total energy consumption E_s^(k)(φ) ∈ ℝ as the energy consumed by the IoT device φ during the whole kth training round of a model split at layer L_s… we assume that each of the Φ IoT devices has its own model split point s_φ^(k) ∈ {1,…,N}, which can be different at each training round k according to the variable system context (e.g., available wireless throughput, computational load on IoT devices and edge server, etc.). We define the system split vector s_k as the vector of device-specific split points … for each IoT device φ during training round k.” Examiner notes that a candidate layer (the layer at which the model is split) is selected based at least in part on the first and second performance information (available wireless throughput, computational load on IoT devices and edge server, etc.) for partitioning the neural network model into the first sub-neural network model on the first device (IoT device) and the second sub-neural network model on the second device (edge server))
Regarding claim 30, Samikwa anticipates A non-transitory computer-readable medium storing code for wireless communication at a first device, the code comprising instructions executable by a processor to: (Samikwa Fig 5 and Table 3 show a processor (CPU) and memory coupled with the processor (RAM))

obtain first performance information of the first device associated with different candidate partition layers for partitioning a neural network model into a first sub-neural network model on the first device and a second sub-neural network model on a second device; (Samikwa Page 4 Paragraph 5; “In order to distribute the learning task between IoT devices and the edge server, the model is split in two sub-models for each device participating in the training. During training round k on an IoT device φ, the first … layers … are executed on the IoT device φ, while the last … layers … are executed on the edge server.” Samikwa Page 7 Paragraph 1; “Both edge server and IoT devices estimate forward and back propagation time of each layer of the model through benchmarking: … For each layer of the model, the mean time needed to perform forward and backward propagation is estimated by averaging the time measured for each of the benchmarking propagations and fed to the Optimization Module.” Examiner notes that first performance information (the time measured for each of the benchmarking propagations) is obtained of the first device (IoT device) associated with different candidate partition layers (first layers) for partitioning a neural network model (model) into a first sub-neural network model on the first device and a second sub-neural network model on a second device (edge server))

receive second performance information of the second device associated with the different candidate partition layers for partitioning the neural network model; and (Examiner refers to the previous mapping to show that second performance information (the time measured for each of the benchmarking propagations) of the second device (edge server) is received, associated with the different candidate partition layers (last N layers) for partitioning the neural network model)

select, based at least in part on the first performance information and the second performance information, a candidate partition layer of the different candidate partition layers for partitioning the neural network model into the first sub-neural network model on the first device and the second sub-neural network model on the second device. (Samikwa Page 6 Paragraph 2; “We can now define the total energy consumption E_s^(k)(φ) ∈ ℝ as the energy consumed by the IoT device φ during the whole kth training round of a model split at layer L_s… we assume that each of the Φ IoT devices has its own model split point s_φ^(k) ∈ {1,…,N}, which can be different at each training round k according to the variable system context (e.g., available wireless throughput, computational load on IoT devices and edge server, etc.). We define the system split vector s_k as the vector of device-specific split points … for each IoT device φ during training round k.” Examiner notes that a candidate layer (the layer at which the model is split) is selected based at least in part on the first and second performance information (available wireless throughput, computational load on IoT devices and edge server, etc.) for partitioning the neural network model into the first sub-neural network model on the first device (IoT device) and the second sub-neural network model on the second device (edge server))
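For reference, the Samikwa notation used throughout the mappings above can be written out explicitly. The split point, split vector, and per-round energy come from the quoted passages; the combined objective in the last line is an editorial paraphrase of “minimizes training time and energy consumption,” with the weighting λ an assumption rather than a formula quoted from Samikwa.

```latex
% Notation from the quoted Samikwa passages; the combined objective below is
% an editorial paraphrase (the weighting \lambda is assumed, not quoted).
\[
  s^{(k)}_{\phi} \in \{1,\dots,N\}, \qquad
  s_k = \bigl(s^{(k)}_{1},\dots,s^{(k)}_{\Phi}\bigr)^{\top} \in S = \{1,\dots,N\}^{\Phi},
  \qquad E^{(k)}_{s}(\phi) \in \mathbb{R},
\]
\[
  s_k^{\ast} = \operatorname*{arg\,min}_{s_k \in S}
  \Bigl[\, T^{(k)}(s_k) + \lambda\, E^{(k)}(s_k) \,\Bigr].
\]
```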
Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 6-9 and 20-23 are rejected under 35 U.S.C. 103 as being unpatentable over Eric Samikwa et al., “ARES: Adaptive Resource-Aware Split Learning for Internet of Things,” available online Sept 24, 2022 (hereinafter “Samikwa”) in view of Jindrich Zejda, US 12093806 B1, filed on Jul 1, 2019 (hereinafter “Zejda”).

Regarding claim 6, Samikwa does not teach The first device of claim 1, wherein the instructions are further executable by the processor to cause the first device to: transmit a request to partition the neural network model to the second device, wherein the second performance information is received based at least in part on transmitting the request. However, Zejda does teach The first device of claim 1, wherein the instructions are further executable by the processor to cause the first device to: transmit a request to partition the neural network model to the second device, wherein the second performance information is received based at least in part on transmitting the request. (Zejda Column 8 Line 1; “In some embodiments, the neural network may be received as part of or in response to a network request to a service interface that compiles and/or executes neural networks on behalf of clients of the service… the number of processing units to utilize may be specified in a request, the partitioning scheme to apply to the neural network may be specified in a request, and other execution parameters/features may be received as part of a request.” Examiner notes that a request to partition the neural network model (a request sending a partitioning scheme) is transmitted to the second device (client device), wherein the second performance information (execution parameters/features) is received based at least in part on transmitting the request (received as part of a request))

It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Samikwa and Zejda. Samikwa teaches a scheme for efficient model training in IoT systems. Zejda teaches using a partitioning scheme to divide the neural network. One of ordinary skill would have been motivated to combine Samikwa and Zejda to avoid compile-time complexity, the compute time to find valid block sizes, and the scheduling of an optimal re-load order, so as to execute a neural network faster: “to avoid dynamic memory allocation of weights in dedicated cache (which may involve selectively loading and reloading weight values from a memory multiple times), Static memory allocation may thus improve the performance of systems executing neural networks by avoiding compile-time complexity, compute time to find valid block sizes and scheduling optimal re-load order that would otherwise be implemented as part of dynamic memory allocation, allowing a neural network compiler to generate the instructions to execute a neural network faster.” (Zejda Column 3 Line 27).
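Since claims 6-8 and 20-22 turn on a “request to partition” that carries the partitioning scheme and capability information, a hypothetical shape for such a message is sketched below. The dataclass and all field names are illustrative assumptions; Zejda describes the request parameters only in prose (Column 8).

```python
# Hedged sketch: a hypothetical partition-request/response message pair of the
# kind the claim 6/20 mappings describe. Field names are illustrative only.
from dataclasses import dataclass, field

@dataclass
class PartitionRequest:
    num_processing_units: int          # processing capability of the requester
    partitioning_scheme: str           # e.g., "split-at-layer"
    candidate_split_layers: list[int] = field(default_factory=list)

@dataclass
class PartitionResponse:
    # "second performance information" returned based at least in part on the request
    per_layer_times_s: list[float]     # benchmarked fwd+bwd time per layer (seconds)
    energy_per_layer_j: list[float]    # per-layer energy estimate (joules)

req = PartitionRequest(num_processing_units=4,
                       partitioning_scheme="split-at-layer",
                       candidate_split_layers=[1, 2, 3])
```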
Regarding claim 7, Samikwa does not teach The first device of claim 6, wherein the request is transmitted based at least in part on a processing capability of the first device. However, Zejda does teach The first device of claim 6, wherein the request is transmitted based at least in part on a processing capability of the first device. (Zejda Column 6 Line 56; “Subgraph partitioning may determine from the received neural network (e.g., a framework or request parameter specified by a programmer/client/user) the configuration, capabilities, or number of tensor processing units to consider when partitioning, or may automatically determine the configuration, capabilities, or number of tensor processing units to consider. Subgraph partitioning 318 may receive a parameter, request, or other indication that specifies which subgraph partitioning scheme to apply or may determine automatically which partitioning scheme to apply.” Examiner notes that the request is transmitted based at least in part on a processing capability (capabilities, or number of tensor processing units) of the first device (device hosting the neural network))

It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Samikwa and Zejda. Samikwa teaches a scheme for efficient model training in IoT systems. Zejda teaches using a partitioning scheme to divide the neural network. One of ordinary skill would have been motivated to combine Samikwa and Zejda to avoid compile-time complexity, the compute time to find valid block sizes, and the scheduling of an optimal re-load order, so as to execute a neural network faster: “to avoid dynamic memory allocation of weights in dedicated cache (which may involve selectively loading and reloading weight values from a memory multiple times), Static memory allocation may thus improve the performance of systems executing neural networks by avoiding compile-time complexity, compute time to find valid block sizes and scheduling optimal re-load order that would otherwise be implemented as part of dynamic memory allocation, allowing a neural network compiler to generate the instructions to execute a neural network faster.” (Zejda Column 3 Line 27).

Regarding claim 8, Samikwa does not teach The first device of claim 1, wherein the instructions are further executable by the processor to cause the first device to: receive a request to partition the neural network model from the second device, wherein the first performance information is obtained based at least in part on receiving the request. However, Zejda does teach The first device of claim 1, wherein the instructions are further executable by the processor to cause the first device to: receive a request to partition the neural network model from the second device, wherein the first performance information is obtained based at least in part on receiving the request.
(Zejda Column 8 Line 1; “In some embodiments, the neural network may be received as part of or in response to a network request to a service interface that compiles and/or executes neural networks on behalf of clients of the service… the number of processing units to utilize may be specified in a request, the partitioning scheme to apply to the neural network may be specified in a request, and other execution parameters/features may be received as part of a request.” Examiner notes that a request to partition the neural network model (a request sending a partitioning scheme) is received from the second device (client device), wherein the first performance information (execution parameters/features) is obtained based at least in part on receiving the request (received as part of a request))

It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Samikwa and Zejda. Samikwa teaches a scheme for efficient model training in IoT systems. Zejda teaches using a partitioning scheme to divide the neural network. One of ordinary skill would have been motivated to combine Samikwa and Zejda to avoid compile-time complexity, the compute time to find valid block sizes, and the scheduling of an optimal re-load order, so as to execute a neural network faster: “to avoid dynamic memory allocation of weights in dedicated cache (which may involve selectively loading and reloading weight values from a memory multiple times), Static memory allocation may thus improve the performance of systems executing neural networks by avoiding compile-time complexity, compute time to find valid block sizes and scheduling optimal re-load order that would otherwise be implemented as part of dynamic memory allocation, allowing a neural network compiler to generate the instructions to execute a neural network faster.” (Zejda Column 3 Line 27).

Regarding claim 9, Samikwa teaches based at least in part on selecting the candidate partition layer (Samikwa Page 6 Paragraph 3; “we assume that each of the Φ IoT devices has its own model split point s_φ^(k) ∈ {1,…,N}, which can be different at each training round k according to the variable system context (e.g., available wireless throughput, computational load on IoT devices and edge server, etc.). We define the system split vector s_k as the vector of device-specific split points s_k = (s_1^(k), …, s_Φ^(k))^⊤ ∈ S = {1,…,N}^Φ for each IoT device φ during training round k.” Examiner notes the selection of a candidate partition layer (the split point/layer at which the neural network is split)) Samikwa does not teach The first device of claim 1, wherein the instructions are further executable by the processor to cause the first device to: transmit an indication of the candidate partition layer to the second device [based at least in part on selecting the candidate partition layer]. However, Zejda does teach The first device of claim 1, wherein the instructions are further executable by the processor to cause the first device to: transmit an indication of the candidate partition layer to the second device [based at least in part on selecting the candidate partition layer].
(Zejda Column 8 Line 1; “the number of processing units to utilize may be specified in a request, the partitioning scheme to apply to the neural network may be specified in a request” Examiner notes that an indication (request) of the candidate partition layer (partitioning scheme) is transmitted to the second device)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Samikwa and Zejda. Samikwa teaches a scheme for efficient model training in IoT systems. Zejda teaches using a partitioning scheme to divide the neural network. One of ordinary skill would have been motivated to combine Samikwa and Zejda to avoid compile-time complexity, the compute time to find valid block sizes, and the scheduling of an optimal re-load order, so as to execute a neural network faster: “to avoid dynamic memory allocation of weights in dedicated cache (which may involve selectively loading and reloading weight values from a memory multiple times), Static memory allocation may thus improve the performance of systems executing neural networks by avoiding compile-time complexity, compute time to find valid block sizes and scheduling optimal re-load order that would otherwise be implemented as part of dynamic memory allocation, allowing a neural network compiler to generate the instructions to execute a neural network faster.” (Zejda Column 3 Line 27).

Regarding claim 20, Samikwa does not teach The method of claim 15, further comprising: transmitting a request to partition the neural network model to the second device, wherein the second performance information is received based at least in part on transmitting the request. However, Zejda does teach The method of claim 15, further comprising: transmitting a request to partition the neural network model to the second device, wherein the second performance information is received based at least in part on transmitting the request. (Zejda Column 8 Line 1; “In some embodiments, the neural network may be received as part of or in response to a network request to a service interface that compiles and/or executes neural networks on behalf of clients of the service… the number of processing units to utilize may be specified in a request, the partitioning scheme to apply to the neural network may be specified in a request, and other execution parameters/features may be received as part of a request.” Examiner notes that a request to partition the neural network model (a request sending a partitioning scheme) is transmitted to the second device (client device), wherein the second performance information (execution parameters/features) is received based at least in part on transmitting the request (received as part of a request))

It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Samikwa and Zejda. Samikwa teaches a scheme for efficient model training in IoT systems. Zejda teaches using a partitioning scheme to divide the neural network.
One of ordinary skill would have been motivated to combine Samikwa and Zejda to avoid compile-time complexity, the compute time to find valid block sizes, and the scheduling of an optimal re-load order, so as to execute a neural network faster: “to avoid dynamic memory allocation of weights in dedicated cache (which may involve selectively loading and reloading weight values from a memory multiple times), Static memory allocation may thus improve the performance of systems executing neural networks by avoiding compile-time complexity, compute time to find valid block sizes and scheduling optimal re-load order that would otherwise be implemented as part of dynamic memory allocation, allowing a neural network compiler to generate the instructions to execute a neural network faster.” (Zejda Column 3 Line 27).

Regarding claim 21, Samikwa does not teach The method of claim 20, wherein the request is transmitted based at least in part on a processing capability of the first device. However, Zejda does teach The method of claim 20, wherein the request is transmitted based at least in part on a processing capability of the first device. (Zejda Column 6 Line 56; “Subgraph partitioning may determine from the received neural network (e.g., a framework or request parameter specified by a programmer/client/user) the configuration, capabilities, or number of tensor processing units to consider when partitioning, or may automatically determine the configuration, capabilities, or number of tensor processing units to consider. Subgraph partitioning 318 may receive a parameter, request, or other indication that specifies which subgraph partitioning scheme to apply or may determine automatically which partitioning scheme to apply.” Examiner notes that the request is transmitted based at least in part on a processing capability (capabilities, or number of tensor processing units) of the first device (device hosting the neural network))

It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Samikwa and Zejda. Samikwa teaches a scheme for efficient model training in IoT systems. Zejda teaches using a partitioning scheme to divide the neural network. One of ordinary skill would have been motivated to combine Samikwa and Zejda to avoid compile-time complexity, the compute time to find valid block sizes, and the scheduling of an optimal re-load order, so as to execute a neural network faster: “to avoid dynamic memory allocation of weights in dedicated cache (which may involve selectively loading and reloading weight values from a memory multiple times), Static memory allocation may thus improve the performance of systems executing neural networks by avoiding compile-time complexity, compute time to find valid block sizes and scheduling optimal re-load order that would otherwise be implemented as part of dynamic memory allocation, allowing a neural network compiler to generate the instructions to execute a neural network faster.” (Zejda Column 3 Line 27).

Regarding claim 22, Samikwa does not teach The method of claim 15, further comprising: receiving a request to partition the neural network model from the second device, wherein the first performance information is obtained based at least in part on receiving the request. However, Zejda does teach The method of claim 15, further comprising: receiving a request to partition the neural network model from the second device, wherein the first performance information is obtained based at least in part on receiving the request.
(Zejda Column 8 Line 1; “In some embodiments, the neural network may be received as part of or in response to a network request to a service interface that compiles and/or executes neural networks on behalf of clients of the service… the number of processing units to utilize may be specified in a request, the partitioning scheme to apply to the neural network may be specified in a request, and other execution parameters/features may be received as part of a request.” Examiner notes that a request to partition the neural network model (a request sending a partitioning scheme) is received from the second device (client device), wherein the first performance information (execution parameters/features) is obtained based at least in part on receiving the request (received as part of a request))

It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Samikwa and Zejda. Samikwa teaches a scheme for efficient model training in IoT systems. Zejda teaches using a partitioning scheme to divide the neural network. One of ordinary skill would have been motivated to combine Samikwa and Zejda to avoid compile-time complexity, the compute time to find valid block sizes, and the scheduling of an optimal re-load order, so as to execute a neural network faster: “to avoid dynamic memory allocation of weights in dedicated cache (which may involve selectively loading and reloading weight values from a memory multiple times), Static memory allocation may thus improve the performance of systems executing neural networks by avoiding compile-time complexity, compute time to find valid block sizes and scheduling optimal re-load order that would otherwise be implemented as part of dynamic memory allocation, allowing a neural network compiler to generate the instructions to execute a neural network faster.” (Zejda Column 3 Line 27).

Regarding claim 23, Samikwa teaches based at least in part on selecting the candidate partition layer (Samikwa Page 6 Paragraph 3; “we assume that each of the Φ IoT devices has its own model split point s_φ^(k) ∈ {1,…,N}, which can be different at each training round k according to the variable system context (e.g., available wireless throughput, computational load on IoT devices and edge server, etc.). We define the system split vector s_k as the vector of device-specific split points s_k = (s_1^(k), …, s_Φ^(k))^⊤ ∈ S = {1,…,N}^Φ for each IoT device φ during training round k.” Examiner notes the selection of a candidate partition layer (the split point/layer at which the neural network is split)) Samikwa does not teach The method of claim 15, further comprising: transmitting an indication of the candidate partition layer to the second device [based at least in part on selecting the candidate partition layer]. However, Zejda does teach The method of claim 15, further comprising: transmitting an indication of the candidate partition layer to the second device [based at least in part on selecting the candidate partition layer]. (Zejda Column 8 Line 1; “the number of processing units to utilize may be specified in a request, the partitioning scheme to apply to the neural network may be specified in a request” Examiner notes that an indication (request) of the candidate partition layer (partitioning scheme) is transmitted to the second device)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Samikwa and Zejda. Samikwa teaches a scheme for efficient model training in IoT systems.
Zejda teaches using a partitioning scheme to divide the neural network. One of ordinary skill would have been motivated to combine Samikwa and Zejda to avoid compile-time complexity, the compute time to find valid block sizes, and the scheduling of an optimal re-load order, so as to execute a neural network faster: “to avoid dynamic memory allocation of weights in dedicated cache (which may involve selectively loading and reloading weight values from a memory multiple times), Static memory allocation may thus improve the performance of systems executing neural networks by avoiding compile-time complexity, compute time to find valid block sizes and scheduling optimal re-load order that would otherwise be implemented as part of dynamic memory allocation, allowing a neural network compiler to generate the instructions to execute a neural network faster.” (Zejda Column 3 Line 27).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DANIEL DUC TRAN, whose telephone number is (571) 272-6870. The examiner can normally be reached Mon-Fri 8:00-5:00 EST.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Viker Lamardo, can be reached at (571) 270-5871. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/D.D.T./
Examiner, Art Unit 2147

/VIKER A LAMARDO/
Supervisory Patent Examiner, Art Unit 2147

Prosecution Timeline

Mar 07, 2023
Application Filed
Mar 03, 2026
Non-Final Rejection — §101, §102, §103 (current)


Prosecution Projections

1-2
Expected OA Rounds
0%
Grant Probability
0%
With Interview (+0.0%)
3y 3m
Median Time to Grant
Low
PTA Risk
Based on 1 resolved case by this examiner. Grant probability derived from career allow rate.
