Prosecution Insights
Last updated: April 19, 2026
Application No. 18/024,903

Source Selection based on Diversity for Machine Learning

Non-Final OA: §101, §103, §112
Filed: Mar 06, 2023
Examiner: JAYAKUMAR, CHAITANYA R
Art Unit: 2128
Tech Center: 2100 (Computer Architecture & Software)
Assignee: Telefonaktiebolaget Lm Ericsson (Publ)
OA Round: 1 (Non-Final)
Grant Probability: 26% (At Risk)
Predicted OA Rounds: 1-2
Estimated Time to Grant: 4y 6m
Grant Probability with Interview: 48%

Examiner Intelligence

Career Allow Rate: 26% (13 granted / 51 resolved; -29.5% vs TC avg)
Interview Lift: +22.5% (strong; resolved cases with interview vs. without)
Avg Prosecution: 4y 6m (typical timeline; 18 applications currently pending)
Total Applications: 69 (career history, across all art units)

Statute-Specific Performance

§101: 29.1% (-10.9% vs TC avg)
§103: 45.6% (+5.6% vs TC avg)
§102: 8.7% (-31.3% vs TC avg)
§112: 13.8% (-26.2% vs TC avg)

Tech Center averages are estimates. Based on career data from 51 resolved cases.
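As a sanity check on the figures above, the headline rates can be reproduced from the counts shown on this page (a quick sketch; the page's own numbers appear to use unrounded underlying data, so small rounding differences are expected):

```python
# Counts taken from this page: 13 granted out of 51 resolved cases.
granted = 13
resolved = 51

career_allow_rate = granted / resolved
print(f"Career allow rate: {career_allow_rate:.1%}")  # ~25.5%, displayed rounded as 26%

# Interview lift: with-interview grant probability (48%) minus baseline (26%).
# The page shows +22.5%, presumably computed from unrounded values.
baseline = 0.26
with_interview = 0.48
print(f"Interview lift: {with_interview - baseline:+.1%}")
```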

Office Action

DETAILED ACTION

This action is in response to the submission filed 06 March 2023 for application 18/024,903. Claims 1-18 are canceled. Claims 19-38 are newly added. Claims 19-38 are pending and have been examined.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Acknowledgment is made of applicant's claim for domestic priority based on provisional application 63/080,371, filed on 18 September 2020.

Information Disclosure Statement

Information disclosure statements (IDS) were submitted on 6 March 2023 and 19 March 2024. The submissions are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Drawings

The drawings are objected to as failing to comply with 37 CFR 1.84(p)(4) because reference character “300” has been used to designate both the apparatus of Fig. 3 and the server node of Fig. 6. Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.

Claim Objections

Claim 27 is objected to because of the following informality: the phrase “… the method of any of claims 24 …” does not make sense because only one claim (claim 24) is recited. Furthermore, even if multiple claims were to be recited, the phrasing would be improper.
For the purposes of examination, claim 27 is interpreted to be dependent on claim 24. Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 22 and 34 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. Claims 22 and 34 recite the limitations "the Renyi entropy; the Havrda-Charvat entropy; and the Tsallis entropy" in lines 3, 4, and 5. There is insufficient antecedent basis for these limitations in the claims.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 19-38 are rejected under 35 U.S.C. 101 because the claimed invention is directed to abstract ideas without significantly more.

Regarding claims 19-30: Under the first step (Step 1) of the 101 analysis, claims 19-30 are directed to a method (process) and fall within one of the four statutory categories (i.e., process, machine, manufacture, or composition of matter).
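For reference, the three entropies named in claims 22 and 34 are standard one-parameter generalizations of Shannon entropy. A minimal sketch (Python; the function names are illustrative and not from the application, and the Havrda-Charvat entropy is shown in its common Tsallis-equivalent form, though some formulations use a different normalization):

```python
import math

def renyi_entropy(p, alpha):
    """Renyi entropy H_a(p) = log(sum p_i^a) / (1 - a), for a > 0, a != 1."""
    return math.log(sum(pi ** alpha for pi in p)) / (1.0 - alpha)

def tsallis_entropy(p, q):
    """Tsallis (Havrda-Charvat) entropy S_q(p) = (1 - sum p_i^q) / (q - 1)."""
    return (1.0 - sum(pi ** q for pi in p)) / (q - 1.0)

def shannon_entropy(p):
    """Shannon entropy, the parameter -> 1 limit of both families."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

p = [0.5, 0.25, 0.25]
# Both generalized entropies approach the Shannon value as the parameter -> 1.
assert abs(renyi_entropy(p, 1.001) - shannon_entropy(p)) < 1e-2
assert abs(tsallis_entropy(p, 1.001) - shannon_entropy(p)) < 1e-2
```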
Regarding claim 19: In the next step (Step 2A, prong 1) of the analysis, the limitations of:

identifying a plurality of machine-learning source domain candidates; calculating, for each of the identified machine-learning source domain candidates, a diversity metric, the diversity metric representing a marginalized measure of sample diversity of the respective machine-learning source domain candidate; selecting the identified machine-learning source domain candidate having a highest diversity metric among the calculated diversity metrics;

Under the broadest reasonable interpretation, the above limitations are process steps that cover mental processes, including an observation, evaluation, judgment, or opinion that could be performed in the mind or with the aid of pencil and paper but for the recitation of a generic computer component. If a claim, under its broadest reasonable interpretation, covers a mental process but for the recitation of generic computer components, then it falls within the “Mental Process” grouping of abstract ideas.

In the next step (Step 2A, prong 2) of the analysis, the limitation

applying the selected machine-learning source domain candidate to a target domain in a new or changed execution environment

is considered to be an additional element, and it does not integrate the abstract idea into a practical application because the additional element is recited so generically (no details whatsoever are provided other than that it is a method that applies the selected machine-learning source domain candidate to a target domain in a new or changed execution environment) that it represents no more than mere instructions to apply the judicial exception on a computer. As discussed in MPEP 2106.05(f), mere instructions to implement an abstract idea on a computer as a tool to perform an abstract idea are not indicative of integration into a practical application.
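For orientation only, the claim 19 steps quoted above (identify candidates, score each by a diversity metric, select the highest) can be sketched as follows. The entropy-based metric is one plausible reading of a "marginalized measure of sample diversity," and all names and data are hypothetical:

```python
import math
from collections import Counter

def diversity_metric(labels):
    """Shannon entropy of the label distribution: one illustrative way to
    realize a 'marginalized measure of sample diversity'."""
    counts = Counter(labels)
    n = len(labels)
    return -sum((c / n) * math.log(c / n) for c in counts.values())

def select_source_domain(candidates):
    """Return the candidate name with the highest diversity metric."""
    scores = {name: diversity_metric(labels) for name, labels in candidates.items()}
    return max(scores, key=scores.get)

# Hypothetical source-domain candidates: name -> sample labels.
candidates = {
    "domain_a": ["cat"] * 9 + ["dog"],               # low label diversity
    "domain_b": ["cat", "dog", "bird", "fish"] * 3,  # high label diversity
}
assert select_source_domain(candidates) == "domain_b"
```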
In the last step (Step 2B) of the analysis, the additional element does not amount to significantly more than the judicial exception. As explained with respect to Step 2A, prong 2, the method that applies the selected machine-learning source domain candidate to a target domain in a new or changed execution environment is at best the equivalent of merely adding the words “apply it” to the judicial exception. See MPEP 2106.05(f). Mere instructions to apply an exception cannot provide an inventive concept and do not amount to significantly more than the judicial exception. The claim is not patent eligible.

Regarding claim 20: In the next step (Step 2A, prong 1) of the analysis, the limitation of:

wherein the diversity metric is calculated based on information theoretic measures

Under the broadest reasonable interpretation, the above limitation is a process step that covers mental processes, including an observation, evaluation, judgment, or opinion that could be performed in the mind or with the aid of pencil and paper but for the recitation of a generic computer component. If a claim, under its broadest reasonable interpretation, covers a mental process but for the recitation of generic computer components, then it falls within the “Mental Process” grouping of abstract ideas.

In the next step (Step 2A, prong 2) of the analysis, it does not integrate into a practical application because it does not add any additional elements that integrate the abstract idea into a practical application. In the last step (Step 2B) of the analysis, it does not add any additional elements that amount to significantly more than the abstract idea and thus fails to add an inventive concept. The claim is not patent eligible.

Regarding claim 21: In the next step (Step 2A, prong 1) of the analysis, the limitation of:

wherein the diversity metric is calculated based on a one-parameter measure of generalized entropy
Under the broadest reasonable interpretation, the above limitation is a process step that recites mathematical relationships and calculations but for the recitation of generic computer components. If a claim, under its broadest reasonable interpretation, covers mathematical concepts but for the recitation of generic computer components, then it falls within the “Mathematical Concepts” grouping of abstract ideas.

In the next step (Step 2A, prong 2) of the analysis, it does not integrate into a practical application because it does not add any additional elements that integrate the abstract idea into a practical application. In the last step (Step 2B) of the analysis, it does not add any additional elements that amount to significantly more than the abstract idea and thus fails to add an inventive concept. The claim is not patent eligible.

Regarding claim 22: In the next step (Step 2A, prong 2) of the analysis, the limitation

wherein the one-parameter measure is selected from the following: the Renyi entropy; the Havrda-Charvat entropy; and the Tsallis entropy

is considered to be an additional element, and it does not integrate the abstract idea into a practical application because the additional element is recited so generically (no details whatsoever are provided other than that it is a method wherein the one-parameter measure is selected from the following: the Renyi entropy; the Havrda-Charvat entropy; and the Tsallis entropy) that it represents no more than mere instructions to apply the judicial exception on a computer. As discussed in MPEP 2106.05(f), mere instructions to implement an abstract idea on a computer as a tool to perform an abstract idea are not indicative of integration into a practical application.

In the last step (Step 2B) of the analysis, the additional element does not amount to significantly more than the judicial exception.
As explained with respect to Step 2A, prong 2, the method wherein the one-parameter measure is selected from the following: the Renyi entropy; the Havrda-Charvat entropy; and the Tsallis entropy is at best the equivalent of merely adding the words “apply it” to the judicial exception. See MPEP 2106.05(f). Mere instructions to apply an exception cannot provide an inventive concept and do not amount to significantly more than the judicial exception. The claim is not patent eligible.

Regarding claim 23: In the next step (Step 2A, prong 1) of the analysis, the limitation of:

wherein said selecting comprises selecting a plurality of machine-learning source domain candidates having respective diversity metrics above a predetermined threshold

Under the broadest reasonable interpretation, the above limitation is a process step that covers mental processes, including an observation, evaluation, judgment, or opinion that could be performed in the mind or with the aid of pencil and paper but for the recitation of a generic computer component. If a claim, under its broadest reasonable interpretation, covers a mental process but for the recitation of generic computer components, then it falls within the “Mental Process” grouping of abstract ideas.

In the next step (Step 2A, prong 2) of the analysis, the limitation

and wherein said applying comprises applying each of the selected machine-learning source domain candidates to the target domain

is considered to be an additional element, and it does not integrate the abstract idea into a practical application because the additional element is recited so generically (no details whatsoever are provided other than that it is a method wherein said applying comprises applying each of the selected machine-learning source domain candidates to the target domain) that it represents no more than mere instructions to apply the judicial exception on a computer.
As discussed in MPEP 2106.05(f), mere instructions to implement an abstract idea on a computer as a tool to perform an abstract idea are not indicative of integration into a practical application.

In the last step (Step 2B) of the analysis, the additional element does not amount to significantly more than the judicial exception. As explained with respect to Step 2A, prong 2, the method wherein said applying comprises applying each of the selected machine-learning source domain candidates to the target domain is at best the equivalent of merely adding the words “apply it” to the judicial exception. See MPEP 2106.05(f). Mere instructions to apply an exception cannot provide an inventive concept and do not amount to significantly more than the judicial exception. The claim is not patent eligible.

Regarding claim 24: In the next step (Step 2A, prong 1) of the analysis, the limitation of:

wherein said calculating, selecting, and applying comprises transfer learning performed in response to detecting a change in the execution environment of the target domain

Under the broadest reasonable interpretation, the above limitation is a process step that covers mental processes, including an observation, evaluation, judgment, or opinion that could be performed in the mind or with the aid of pencil and paper but for the recitation of a generic computer component. If a claim, under its broadest reasonable interpretation, covers a mental process but for the recitation of generic computer components, then it falls within the “Mental Process” grouping of abstract ideas.

In the next step (Step 2A, prong 2) of the analysis, it does not integrate into a practical application because it does not add any additional elements that integrate the abstract idea into a practical application. In the last step (Step 2B) of the analysis, it does not add any additional elements that amount to significantly more than the abstract idea and thus fails to add an inventive concept.
The claim is not patent eligible.

Regarding claim 25: In the next step (Step 2A, prong 1) of the analysis, the limitation of:

wherein detecting the change in the execution environment comprises detecting a change in feature space in the target domain

Under the broadest reasonable interpretation, the above limitation is a process step that covers mental processes, including an observation, evaluation, judgment, or opinion that could be performed in the mind or with the aid of pencil and paper but for the recitation of a generic computer component. If a claim, under its broadest reasonable interpretation, covers a mental process but for the recitation of generic computer components, then it falls within the “Mental Process” grouping of abstract ideas.

In the next step (Step 2A, prong 2) of the analysis, it does not integrate into a practical application because it does not add any additional elements that integrate the abstract idea into a practical application. In the last step (Step 2B) of the analysis, it does not add any additional elements that amount to significantly more than the abstract idea and thus fails to add an inventive concept. The claim is not patent eligible.

Regarding claim 26: In the next step (Step 2A, prong 1) of the analysis, the limitation of:

wherein detecting the change in the execution environment comprises detecting a change in a machine-learning task in the target domain

Under the broadest reasonable interpretation, the above limitation is a process step that covers mental processes, including an observation, evaluation, judgment, or opinion that could be performed in the mind or with the aid of pencil and paper but for the recitation of a generic computer component. If a claim, under its broadest reasonable interpretation, covers a mental process but for the recitation of generic computer components, then it falls within the “Mental Process” grouping of abstract ideas.
In the next step (Step 2A, prong 2) of the analysis, it does not integrate into a practical application because it does not add any additional elements that integrate the abstract idea into a practical application. In the last step (Step 2B) of the analysis, it does not add any additional elements that amount to significantly more than the abstract idea and thus fails to add an inventive concept. The claim is not patent eligible.

Regarding claim 27: In the next step (Step 2A, prong 1) of the analysis, the limitation of:

wherein detecting the change in the execution environment comprises detecting a change in resources available in the execution environment

Under the broadest reasonable interpretation, the above limitation is a process step that covers mental processes, including an observation, evaluation, judgment, or opinion that could be performed in the mind or with the aid of pencil and paper but for the recitation of a generic computer component. If a claim, under its broadest reasonable interpretation, covers a mental process but for the recitation of generic computer components, then it falls within the “Mental Process” grouping of abstract ideas.

In the next step (Step 2A, prong 2) of the analysis, it does not integrate into a practical application because it does not add any additional elements that integrate the abstract idea into a practical application. In the last step (Step 2B) of the analysis, it does not add any additional elements that amount to significantly more than the abstract idea and thus fails to add an inventive concept. The claim is not patent eligible.

Regarding claim 28: In the next step (Step 2A, prong 2) of the analysis, the limitation

wherein said calculating, selecting, and applying is performed as part of inclusion in a federation for federated machine learning.
is considered to be an additional element, and it does not integrate the abstract idea into a practical application because the additional element is recited so generically (no details whatsoever are provided other than that it is a method wherein said calculating, selecting, and applying is performed as part of inclusion in a federation for federated machine learning) that it represents no more than mere instructions to apply the judicial exception on a computer. As discussed in MPEP 2106.05(f), mere instructions to implement an abstract idea on a computer as a tool to perform an abstract idea are not indicative of integration into a practical application.

In the last step (Step 2B) of the analysis, the additional element does not amount to significantly more than the judicial exception. As explained with respect to Step 2A, prong 2, the method wherein said calculating, selecting, and applying is performed as part of inclusion in a federation for federated machine learning is at best the equivalent of merely adding the words “apply it” to the judicial exception. See MPEP 2106.05(f). Mere instructions to apply an exception cannot provide an inventive concept and do not amount to significantly more than the judicial exception. The claim is not patent eligible.

Regarding claim 29: In the next step (Step 2A, prong 1) of the analysis, the limitation of:

wherein identifying the plurality of machine-learning source domain candidates comprises comparing a feature space for each machine-learning source domain candidate to a feature space of the target domain

Under the broadest reasonable interpretation, the above limitation is a process step that covers mental processes, including an observation, evaluation, judgment, or opinion that could be performed in the mind or with the aid of pencil and paper but for the recitation of a generic computer component.
If a claim, under its broadest reasonable interpretation, covers a mental process but for the recitation of generic computer components, then it falls within the “Mental Process” grouping of abstract ideas.

In the next step (Step 2A, prong 2) of the analysis, it does not integrate into a practical application because it does not add any additional elements that integrate the abstract idea into a practical application. In the last step (Step 2B) of the analysis, it does not add any additional elements that amount to significantly more than the abstract idea and thus fails to add an inventive concept. The claim is not patent eligible.

Regarding claim 30: In the next step (Step 2A, prong 2) of the analysis, the limitation

wherein the execution environment comprises one or more servers in a telecommunications network and applying the selected machine-learning source domain candidate comprises using the selected machine-learning source domain candidate for management of one or more telecommunications tasks in the telecommunications network

is considered to be an additional element, and it does not integrate the abstract idea into a practical application because the additional element is recited so generically (no details whatsoever are provided other than that it is a method wherein the execution environment comprises one or more servers in a telecommunications network and applying the selected machine-learning source domain candidate comprises using the selected machine-learning source domain candidate for management of one or more telecommunications tasks in the telecommunications network) that it represents no more than mere instructions to apply the judicial exception on a computer. As discussed in MPEP 2106.05(f), mere instructions to implement an abstract idea on a computer as a tool to perform an abstract idea are not indicative of integration into a practical application.
In the last step (Step 2B) of the analysis, the additional element does not amount to significantly more than the judicial exception. As explained with respect to Step 2A, prong 2, the method wherein the execution environment comprises one or more servers in a telecommunications network and applying the selected machine-learning source domain candidate comprises using the selected machine-learning source domain candidate for management of one or more telecommunications tasks in the telecommunications network is at best the equivalent of merely adding the words “apply it” to the judicial exception. See MPEP 2106.05(f). Mere instructions to apply an exception cannot provide an inventive concept and do not amount to significantly more than the judicial exception. The claim is not patent eligible.

Regarding claims 31-37: Under the first step (Step 1) of the 101 analysis, claims 31-37 are directed to a server node (manufacture) and fall within one of the four statutory categories (i.e., process, machine, manufacture, or composition of matter).

Regarding claim 31: In step (Step 2A, prong 2) of the analysis, the limitation of:

A server node, comprising: communication circuitry configured for communication with one or more other nodes in a network; and processing circuitry configured to:

is considered to be an additional element, and it does not integrate the abstract idea into a practical application because the additional element is recited so generically (no details whatsoever are provided other than that it is a server node, comprising: communication circuitry configured for communication with one or more other nodes in a network; and processing circuitry configured to perform something) that it represents no more than mere instructions to apply the judicial exception on a computer.
As discussed in MPEP 2106.05(f), mere instructions to implement an abstract idea on a computer as a tool to perform an abstract idea are not indicative of integration into a practical application. The rest of the limitations of claim 31 are substantially similar to those of claim 19, and claim 31 is therefore rejected on similar grounds as claim 19, as explained above.

Regarding claim 32: Claim 32 is substantially similar to claim 20 and is therefore rejected on similar grounds as claim 20.

Regarding claim 33: Claim 33 is substantially similar to claim 21 and is therefore rejected on similar grounds as claim 21.

Regarding claim 34: Claim 34 is substantially similar to claim 22 and is therefore rejected on similar grounds as claim 22.

Regarding claim 35: Claim 35 is substantially similar to claim 23 and is therefore rejected on similar grounds as claim 23.

Regarding claim 36: Claim 36 is substantially similar to claim 24 and is therefore rejected on similar grounds as claim 24.

Regarding claim 37: Claim 37 is substantially similar to claims 25-27 and is therefore rejected on similar grounds as claims 25-27.

Regarding claim 38: Under the first step (Step 1) of the 101 analysis, claim 38 is directed to a non-transitory computer-readable medium (manufacture) and falls within one of the four statutory categories (i.e., process, machine, manufacture, or composition of matter). The rest of the limitations of claim 38 are substantially similar to those of claim 19, and claim 38 is therefore rejected on similar grounds as claim 19, as explained above.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 19-21, 24-27, 29, 31-33, 36, 37, and 38 are rejected under 35 U.S.C. 103 as being unpatentable over Wu et al. (Entropy Minimization vs.
Diversity Maximization for Domain Adaptation, 2020) in view of Taylor et al. (Transfer Learning for Reinforcement Learning Domains: A Survey, 2009).

Regarding claim 19: Wu teaches: A method for machine-learning adaptation, the method comprising:

identifying a plurality of machine-learning source domain candidates ([Page 2] Figure 1. Note: Figure 1 shows a batch of source samples corresponding to source domain candidates, and the CNN (convolutional neural network) corresponds to machine learning);

calculating, for each of the identified machine-learning source domain candidates, a diversity metric, the diversity metric representing a marginalized measure of sample diversity of the respective machine-learning source domain candidate ([Page 3, Column 1, Section III] MINIMAL-ENTROPY DIVERSITY MAXIMIZATION, A. Proposed Method: As the training of the network is often implemented over batches of samples, the supervised loss for a given source batch S (for example, |S| = 32 for a batch size of 32) is accordingly modified as L_s(θ; S) = (1/|S|) Σ_{(x,y)∈S} ℓ(y, f_θ(x)). [Page 3, Column 2, Paragraph 5] The use of EMO may produce trivial solutions as shown in Figure 1. By noting that a trivial solution shown in Figure 1 often has just one category, a nontrivial domain adaptation method may resort to producing sufficient category diversity in its solution. [Page 3, Column 2, Paragraph 6] In this paper, we employ the entropy of q̂(T) = [q̂_1, q̂_2, ..., q̂_K] (6) for measuring the category diversity in a given target batch T. Formally, this category diversity over T can be measured as L_d(θ, T) ≜ H(q̂(T)) = −Σ_{k=1}^{K} q̂_k log q̂_k. (8) [Page 3, Column 2, Paragraph 7] As this diversity metric does not require any a priori information about the true category distribution q over D_t, its computation is easy to implement in practice. Note that random shuffling should be employed in training for maximizing (8).
The objective of the proposed MEDM is to minimize E_{S,T}[L_s(θ; S) + L_e(θ; T) − β·L_d(θ; T)] (9));

selecting the identified machine-learning source domain candidate having a highest diversity metric among the calculated diversity metrics ([Page 1, Column 2, Paragraph 5] In this paper, we make contributions towards close-to-perfect domain adaptation with entropy minimization. 1) We propose a minimal-entropy diversity maximization (MEDM) method for UDA. [Page 4, Column 1, Paragraph 9] With the use of diversity maximization, it may encourage making predictions evenly across the batch, since the maximum value of L_d(θ*, T) could be achieved whenever q* = [1/K, ..., 1/K]. [Page 5, Column 1, Paragraph 2] When β increases from 0, we would expect that the category diversity (8) increases correspondingly, which can help to avoid the trivial solutions. [Page 7, Column 1, Paragraph 3] although the category diversity is expected to achieve its maximum value when the inferred categories are uniformly distributed. We guess that it works well due to the collaboration in meeting both requirements, namely, the minimization of entropy and the maximization of category diversity, where the parameter β (9) is used to balance the two individual requirements. [Conclusion] In this paper, we propose to employ diversity maximization for avoiding the trivial solutions. We show there exists a tradeoff between entropy minimization and diversity maximization towards close-to-perfect domain adaptation. With the recently-proposed unsupervised model selection method, we show that the proposed MEDM outperforms state-of-the-art methods on several domain adaptation datasets, boosting by a large margin especially on the largest VisDA dataset for cross-domain object classification. Note: Diversity maximization corresponds to the highest diversity metric);

However, Wu does not explicitly disclose: and applying the selected machine-learning source domain candidate to a target domain in a new or changed execution environment.
Taylor teaches, in an analogous system: and applying the selected machine-learning source domain candidate to a target domain in a new or changed execution environment ([Page 1636, Paragraph 1] In this scenario, a total time scenario, which explicitly includes the time needed to learn the source task or tasks, would be most appropriate. On the other hand, a second reasonable goal of transfer is to effectively reuse past knowledge in a novel task. In this case, a target task time scenario, which only accounts for the time spent learning in the target task, is reasonable. Note: A novel task corresponds to a new or changed environment).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Wu to incorporate the teachings of Taylor to apply the selected machine-learning source domain candidate to a target domain in a new or changed execution environment. One would have been motivated to make this modification because doing so would give the benefit of reducing the overall time required to learn a complex task, as taught by Taylor [Page 1636, Paragraph 1].

Regarding claim 20: The system of Wu and Taylor teaches: The method of claim 19 (as shown above). Wu further teaches: wherein the diversity metric is calculated based on information theoretic measures ([Page 7, Column 2, Paragraph 3] We guess that it works well due to the collaboration in meeting both requirements, namely, the minimization of entropy and the maximization of category diversity, where the parameter β (9) is used to balance the two individual requirements).

Regarding claim 21: The system of Wu and Taylor teaches: The method of claim 20 (as shown above).
Wu further teaches: wherein the diversity metric is calculated based on a one-parameter measure of generalized entropy ([Page 7, Column 2, Paragraph 3] We guess that it works well due to the collaboration in meeting both requirements, namely, the minimization of entropy and the maximization of category diversity, where the parameter β (9) is used to balance two individual requirements. Note: Beta (β) corresponds to a one-parameter measure of generalized entropy).

Regarding claim 24: The system of Wu and Taylor teaches: The method of claim 19 (as shown above). Taylor further teaches: wherein said calculating, selecting, and applying comprises transfer learning performed in response to detecting a change in the execution environment of the target domain ([Page 1655, Last but one Paragraph] The idea of RTP is not only unique in this survey, but it is also potentially a very useful idea for transfer in general. While a number of TL methods are able to learn from a set of source tasks, no others attempt to automatically generate these source tasks. If the goal of an agent is to perform as well as possible in a novel target task, it makes sense that the agent would try to train on many source tasks, even if they are artificial. How to best generate such source tasks so that they are most likely to be useful for an arbitrary target task in the same domain is an important area of open research. Note: Novel target task corresponds to detecting a change in the execution environment of the target domain). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Wu to incorporate the teachings of Taylor wherein said calculating, selecting, and applying comprises transfer learning performed in response to detecting a change in the execution environment of the target domain.
One would have been motivated to do this modification because doing so would give the benefit of reducing the overall time required to learn a complex task as taught by Taylor [Page 1636, Paragraph 1].

Regarding claim 25: The system of Wu and Taylor teaches: The method of claim 24 (as shown above). Taylor further teaches: wherein detecting the change in the execution environment comprises detecting a change in feature space in the target domain ([Page 1661, Last Paragraph] In our opinion, agent- and problem-space are ideas that should be further explored as they will likely yield additional benefits. Particularly in the case of physical agents, it is intuitive that agent sensors and actuators will be static, allowing information to be easily reused. Task-specific items [Page 1662, Paragraph 1] such as features and actions, may change, but should be faster to learn if the agent has already learned something about its unchanging agent-space). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Wu to incorporate the teachings of Taylor wherein detecting the change in the execution environment comprises detecting a change in feature space in the target domain. One would have been motivated to do this modification because doing so would give the benefit of reducing the overall time required to learn a complex task as taught by Taylor [Page 1636, Paragraph 1].

Regarding claim 26: The system of Wu and Taylor teaches: The method of claim 24 (as shown above). Taylor further teaches: wherein detecting the change in the execution environment comprises detecting a change in a machine-learning task in the target domain ([Page 1639, Last but one Paragraph] For instance, if a target task is chosen that humans are relatively proficient at, transfer will provide them very little benefit.
If that same target task is difficult for a machine learning algorithm, it will be relatively easy to show that the TL algorithm is quite effective relative to human transfer, even if the agent's absolute performance is extremely poor. [Page 1659, Last but one Paragraph] The learner's bias is important in all machine learning settings. However, Bayesian learning makes such bias explicit. Being able to set the bias through transfer from similar tasks may prove to be a very useful heuristic—we hope that additional transfer methods will be developed to initialize Bayesian learners from past tasks). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Wu to incorporate the teachings of Taylor wherein detecting the change in the execution environment comprises detecting a change in a machine-learning task in the target domain. One would have been motivated to do this modification because doing so would give the benefit of reducing the overall time required to learn a complex task as taught by Taylor [Page 1636, Paragraph 1].

Regarding claim 27: The system of Wu and Taylor teaches: The method of any one of claims 24 (as shown above). Taylor further teaches: wherein detecting the change in the execution environment comprises detecting a change in resources available in the execution environment ([Page 1641, Last Paragraph] The reward function, R : S → R, maps each state of the environment to a single number which is the instantaneous reward achieved for reaching the state. If the task is episodic, the agent begins at a start state and executes actions in the environment until it reaches a terminal state (one or more of the states in s_final, which may be referred to as a goal state), at which point the agent is returned to a start state).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Wu to incorporate the teachings of Taylor wherein detecting the change in the execution environment comprises detecting a change in resources available in the execution environment. One would have been motivated to do this modification because doing so would give the benefit of reducing the overall time required to learn a complex task as taught by Taylor [Page 1636, Paragraph 1].

Regarding claim 29: The system of Wu and Taylor teaches: The method of claim 19 (as shown above). Taylor further teaches: wherein identifying the plurality of machine-learning source domain candidates comprises comparing a feature space for each machine-learning source domain candidate to a feature space of the target domain ([Page 1662, Paragraph 2] For instance, in experiments the learner identified the concept of a fork, a state where the player could win on the subsequent turn regardless of what move the opponent took next. After training in the source task, analyzing the source task data for such features, and then setting the value for a given feature based on the source task data, such features of the game tree were used in a variety of target tasks. This analysis focuses on the effects of actions on the game tree and thus the actions and state variables describing the source and target game can differ without requiring an inter-task mapping). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Wu to incorporate the teachings of Taylor wherein identifying the plurality of machine-learning source domain candidates comprises comparing a feature space for each machine-learning source domain candidate to a feature space of the target domain.
One would have been motivated to do this modification because doing so would give the benefit of reducing the overall time required to learn a complex task as taught by Taylor [Page 1636, Paragraph 1].

Claims 23, 31-33, and 35-38 are rejected under 35 U.S.C. 103 as being unpatentable over Wu et al (Entropy Minimization vs. Diversity Maximization for Domain Adaptation, 2020) in view of Taylor et al (Transfer Learning for Reinforcement Learning Domains: A Survey, 2009) and further in view of Tan et al (US 20180240011 A1).

Regarding claim 23: The system of Wu and Taylor teaches: The method of claim 19 (as shown above). Wu further teaches: and wherein said applying comprises applying each of the selected machine-learning source domain candidates to the target domain ([Page 5, Column 2, Section B. VisDA-2017] The Visual Domain Adaption (VisDA) challenge [43] aims to test domain adaptation methods' ability to transfer source knowledge and adapt it to novel target domains. As the largest domain-adaptation dataset, the VisDA dataset contains 280K images across 12 categories from the training, validation, and testing domains. The training domain (the source domain) is a set of synthetic 2D renderings of 3D models generated from different angles and with different lighting conditions, while the validation domain (the target domain) is a set of realistic photos. The source domain contains 152,397 synthetic images, and the target domain has 55,388 real images). However, the system of Wu and Taylor does not explicitly disclose: wherein said selecting comprises selecting a plurality of machine-learning source domain candidates having respective diversity metrics above a predetermined threshold. Tan teaches, in an analogous system: wherein said selecting comprises selecting a plurality of machine-learning source domain candidates having respective diversity metrics above a predetermined threshold ([0032] As noted, FIG.
2A illustrates a specific example in which the badge reader 212 provides the label 223. It is to be appreciated that labels may be generated in other manners and that the label does not need to be authoritative, or from a different data source. [0038] The sampler 224 forwards correctly predicted data-label pairs where the output entropy value H exceeds a predetermined threshold. Other than entropy, the forwarding decision can also be made based on alternative functions of the probability distribution, such as diversity index). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combined system of Wu and Taylor to incorporate the teachings of Tan wherein said selecting comprises selecting a plurality of machine-learning source domain candidates having respective diversity metrics above a predetermined threshold. One would have been motivated to do this modification because doing so would give the benefit of forwarding correctly predicted data-label pairs as taught by Tan [0038].

Regarding claim 31: Tan teaches: A server node, comprising: communication circuitry configured for communication with one or more other nodes in a network; and processing circuitry configured to ([0084] FIG. 13 is a block diagram of a computing device 1380 configured to implement the model merging techniques presented herein. That is, FIG. 13 illustrates one arrangement for a central learning unit (e.g., 116, 616) in accordance with examples presented herein. The computing device 1380 includes a network interface unit 1381 to enable network communications, one or more processors 1382, and memory 1383. The memory 1383 stores software modules that include local model instantiation logic 1350, a central machine learning model 1318, and a data and label generator 1325.
These software modules, when executed by the one or more processors 577, causes the one or more processors to perform the operations described herein with reference to a central machine learning unit. Note: Central machine learning model corresponds to server node and processor corresponds to processing circuitry). The rest of the limitations of claim 31 are substantially similar to claim 19 and therefore are rejected on similar grounds as claim 19 as explained above.

Regarding claim 32: Claim 32 is substantially similar to claim 20 and therefore is rejected on similar grounds as claim 20.

Regarding claim 33: Claim 33 is substantially similar to claim 21 and therefore is rejected on similar grounds as claim 21.

Regarding claim 35: Claim 35 is substantially similar to claim 23 and therefore is rejected on similar grounds as claim 23.

Regarding claim 36: Claim 36 is substantially similar to claim 24 and therefore is rejected on similar grounds as claim 24.

Regarding claim 37: Claim 37 is substantially similar to claims 25-27 and therefore is rejected on similar grounds as claims 25-27.

Regarding claim 38: Tan teaches: A non-transitory computer-readable medium comprising, stored thereupon, a computer program comprising instructions configured to cause a server executing the instructions to: ([0055] Thus, in general, the memory 578 may comprise one or more tangible (non-transitory) computer readable storage media (e.g., a memory device) encoded with software comprising computer executable instructions and when the software is executed (by the controller) it is operable to perform the operations described herein). The rest of the limitations of claim 38 are substantially similar to claim 19 and therefore are rejected on similar grounds as claim 19 as explained above.

Claim 22 is rejected under 35 U.S.C. 103 as being unpatentable over Wu et al (Entropy Minimization vs.
Diversity Maximization for Domain Adaptation, 2020) in view of Taylor et al (Transfer Learning for Reinforcement Learning Domains: A Survey, 2009) and further in view of Zhang et al (Preclinical Diagnosis of Magnetic Resonance (MR) Brain Images via Discrete Wavelet Packet Transform with Tsallis Entropy and Generalized Eigenvalue Proximal Support Vector Machine (GEPSVM), 2015).

Regarding claim 22: The system of Wu and Taylor teaches: The method of claim 21 (as shown above). However, the system of Wu and Taylor does not explicitly disclose: wherein the one-parameter measure is selected from the following: the Renyi entropy; the Havrda-Charvat entropy; and the Tsallis entropy. Zhang teaches, in an analogous system: wherein the one-parameter measure is selected from the following: the Renyi entropy; the Havrda-Charvat entropy; and the Tsallis entropy ([Abstract] Tsallis entropy (TE) were harnessed). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combined system of Wu and Taylor to incorporate the teachings of Zhang wherein the one-parameter measure is selected from the following: the Renyi entropy; the Havrda-Charvat entropy; and the Tsallis entropy. One would have been motivated to do this modification because doing so would give the benefit of harnessing Tsallis entropy to obtain entropy features as taught by Zhang [Abstract].

Claim 34 is rejected under 35 U.S.C. 103 as being unpatentable over Wu et al (Entropy Minimization vs. Diversity Maximization for Domain Adaptation, 2020) in view of Taylor et al (Transfer Learning for Reinforcement Learning Domains: A Survey, 2009) and Tan et al (US 20180240011 A1) and further in view of Zhang et al (Preclinical Diagnosis of Magnetic Resonance (MR) Brain Images via Discrete Wavelet Packet Transform with Tsallis Entropy and Generalized Eigenvalue Proximal Support Vector Machine (GEPSVM), 2015).
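For context on the claim 22 limitation, the three named one-parameter generalized entropies have standard closed forms. The sketch below is purely illustrative and is not drawn from Wu, Zhang, or the application of record; it simply evaluates the textbook definitions on a predicted category distribution, where the single parameter (α or q) plays the role of the tunable parameter discussed above:

```python
import math

def shannon(p):
    # Shannon entropy: the limit of the measures below as the parameter tends to 1
    return -sum(x * math.log(x) for x in p if x > 0)

def renyi(p, alpha):
    # Renyi entropy: H_alpha(p) = log(sum p_i^alpha) / (1 - alpha), alpha > 0, alpha != 1
    return math.log(sum(x ** alpha for x in p)) / (1 - alpha)

def tsallis(p, q):
    # Tsallis entropy (the Havrda-Charvat entropy has the same one-parameter
    # functional form up to a normalizing constant): S_q(p) = (1 - sum p_i^q) / (q - 1)
    return (1 - sum(x ** q for x in p)) / (q - 1)

# A uniform distribution over K categories maximizes each measure, matching
# the diversity-maximization intuition quoted from Wu above.
K = 4
uniform = [1 / K] * K
peaked = [0.97, 0.01, 0.01, 0.01]
assert tsallis(uniform, q=2.0) > tsallis(peaked, q=2.0)
assert renyi(uniform, alpha=2.0) > renyi(peaked, alpha=2.0)
```

All three measures reduce toward Shannon entropy as the parameter approaches 1, which is why a single tunable parameter suffices to interpolate across this family of diversity metrics.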
Regarding claim 34: Claim 34 is substantially similar to claim 22 and therefore is rejected on similar grounds as claim 22.

Claim 28 is rejected under 35 U.S.C. 103 as being unpatentable over Wu et al (Entropy Minimization vs. Diversity Maximization for Domain Adaptation, 2020) in view of Taylor et al (Transfer Learning for Reinforcement Learning Domains: A Survey, 2009) and further in view of Milton (US 20200017117 A1).

Regarding claim 28: The system of Wu and Taylor teaches: The method of claim 24 (as shown above). However, the system of Wu and Taylor does not explicitly disclose: wherein said calculating, selecting, and applying is performed as part of inclusion in a federation for federated machine learning. Milton teaches, in an analogous system: wherein said calculating, selecting, and applying is performed as part of inclusion in a federation for federated machine learning ([0099] Some embodiments may implement transfer learning using a federated framework, which can be labeled as a federated transfer learning (FTL) method). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combined system of Wu and Taylor to incorporate the teachings of Milton wherein said calculating, selecting, and applying is performed as part of inclusion in a federation for federated machine learning. One would have been motivated to do this modification because doing so would give the benefit of an architecture that supports active learning to infer things about vehicles, drivers, and places based on relatively high-bandwidth on-board and road-side sensor feeds as taught by Milton [0017].

Claim 30 is rejected under 35 U.S.C. 103 as being unpatentable over Wu et al (Entropy Minimization vs. Diversity Maximization for Domain Adaptation, 2020) in view of Taylor et al (Transfer Learning for Reinforcement Learning Domains: A Survey, 2009) and further in view of Deasy et al (WO 2020028382 A1).
Regarding claim 30: The system of Wu and Taylor teaches: The method of claim 19 (as shown above). However, the system of Wu and Taylor does not explicitly disclose: wherein the execution environment comprises one or more servers in a telecommunications network and applying the selected machine-learning source domain candidate comprises using the selected machine-learning source domain candidate for management of one or more telecommunications tasks in the telecommunications network. Deasy teaches, in an analogous system: wherein the execution environment comprises one or more servers in a telecommunications network and applying the selected machine-learning source domain candidate comprises using the selected machine-learning source domain candidate for management of one or more telecommunications tasks in the telecommunications network ([0099] Some embodiments may implement transfer learning using a federated framework, which can be labeled as a federated transfer learning (FTL) method). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combined system of Wu and Taylor to incorporate the teachings of Deasy wherein the execution environment comprises one or more servers in a telecommunications network and applying the selected machine-learning source domain candidate comprises using the selected machine-learning source domain candidate for management of one or more telecommunications tasks in the telecommunications network. One would have been motivated to do this modification because doing so would give the benefit of allowing more efficient use of server resources as taught by Deasy [00299].

Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Weiss et al (A survey of transfer learning, 2016) discloses transfer learning, presents information on current solutions, and reviews applications applied to transfer learning. Lastly, there is information listed on software downloads for various transfer learning solutions and a discussion of possible future research work. The transfer learning solutions surveyed are independent of data size and can be applied to big data environments.

Torrey et al (Transfer learning, 2009) provides an introduction to the goals, formulations, and challenges of transfer learning. It surveys current research in this area, giving an overview of the state of the art and outlining the open problems. The survey covers transfer in both inductive learning and reinforcement learning, and discusses the issues of negative transfer and task mapping in depth.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHAITANYA RAMESH JAYAKUMAR whose telephone number is (571)272-3369. The examiner can normally be reached Mon-Fri 9am-1pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Omar Fernandez Rivas, can be reached at (571)272-2589. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/C.R.J./
Examiner, Art Unit 2128

/OMAR F FERNANDEZ RIVAS/
Supervisory Patent Examiner, Art Unit 2128

Prosecution Timeline

Mar 06, 2023
Application Filed
Feb 18, 2026
Non-Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12293260
GENERATING AND DEPLOYING PACKAGES FOR MACHINE LEARNING AT EDGE DEVICES
2y 5m to grant Granted May 06, 2025
Patent 12147915
SYSTEMS AND METHODS FOR MODELLING PREDICTION ERRORS IN PATH-LEARNING OF AN AUTONOMOUS LEARNING AGENT
2y 5m to grant Granted Nov 19, 2024
Patent 11770571
Matrix Completion and Recommendation Provision with Deep Learning
2y 5m to grant Granted Sep 26, 2023
Patent 11769074
COLLECTING OBSERVATIONS FOR MACHINE LEARNING
2y 5m to grant Granted Sep 26, 2023
Patent 11741693
SYSTEM AND METHOD FOR SEMI-SUPERVISED CONDITIONAL GENERATIVE MODELING USING ADVERSARIAL NETWORKS
2y 5m to grant Granted Aug 29, 2023
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
26%
Grant Probability
48%
With Interview (+22.5%)
4y 6m
Median Time to Grant
Low
PTA Risk
Based on 51 resolved cases by this examiner. Grant probability derived from career allow rate.
