Prosecution Insights
Last updated: April 19, 2026
Application No. 18/066,637

META INPUT METHOD AND SYSTEM AND USER-CENTERED INFERENCE METHOD AND SYSTEM VIA META INPUT FOR RECYCLING OF PRETRAINED DEEP LEARNING MODEL

Final Rejection: §101, §103
Filed: Dec 15, 2022
Examiner: DEVORE, CHRISTOPHER DILLON
Art Unit: 2129
Tech Center: 2100 — Computer Architecture & Software
Assignee: Korea Advanced Institute Of Science And Technology
OA Round: 2 (Final)
Grant Probability: 50% (Moderate)
OA Rounds: 3-4
To Grant: 4y 1m
With Interview: 92%

Examiner Intelligence

Career Allow Rate: 50% (grants 50% of resolved cases: 5 granted / 10 resolved; -5.0% vs TC avg)
Interview Lift: +41.7% (strong; allowance among resolved cases with vs. without an interview)
Avg Prosecution: 4y 1m (typical timeline)
Currently Pending: 33
Total Applications: 43 (career history, across all art units)
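
These cards are simple ratios over the examiner's resolved cases. A minimal sketch of the arithmetic (Python; the counts are taken from the cards above, and the with-interview rate is our back-calculation from the stated lift rather than a number the cards report directly):

    granted, resolved = 5, 10
    career_allow_rate = granted / resolved      # 0.50 -> "Career Allow Rate: 50%"
    tc_avg = career_allow_rate + 0.050          # 0.55, implied by "-5.0% vs TC avg"

    # "Interview Lift" reads as percentage points of allowance added by an
    # examiner interview: the 50% baseline plus 41.7 points matches the 92%
    # "With Interview" grant probability shown above (91.7% before rounding).
    with_interview_rate = career_allow_rate + 0.417   # ~0.917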

Statute-Specific Performance

§101: 30.1% (-9.9% vs TC avg)
§103: 39.0% (-1.0% vs TC avg)
§102: 7.7% (-32.3% vs TC avg)
§112: 21.4% (-18.6% vs TC avg)

Tech Center averages are estimates. Based on career data from 10 resolved cases.

Office Action

Grounds of rejection: §101, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

Response to Arguments

Remarks page 8, Applicant contends: the supposed abstract idea in claim 1 involving "optimizing a meta input by considering…" is overcome by the amendments to the claim.

Response: Applicant's arguments with respect to claim 1 have been considered but are moot because the new ground of rejection contains elements that have not been previously examined and does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Remarks pages 8-9, Applicant contends: the supposed abstract idea in claim 1 involving "adding the optimized meta input to testing data in a user environment to transform distribution of the testing data into distribution of training data used to build the deep learning model" is overcome by the amendments to the claim.

Response: Applicant's arguments with respect to claim 1 have been considered but are moot for the same reason: the new ground of rejection contains elements that have not been previously examined and does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Remarks pages 10-11, Applicant contends: the claims recite additional elements that integrate any judicial exception into a practical application.

Response: The remarks identify possible improvements the invention can offer; however, whether the invention described in the specification is an improvement is not the issue in the §101 rejections. The §101 rejections turn on whether the improvement is apparent from the claims. MPEP 2106.05(a): "After the examiner has consulted the specification and determined that the disclosed invention improves technology, the claim must be evaluated to ensure the claim itself reflects the disclosed improvement in technology. Intellectual Ventures I LLC v. Symantec Corp., 838 F.3d 1307, 1316, 120 USPQ2d 1353, 1359 (Fed. Cir. 2016) (patent owner argued that the claimed email filtering system improved technology by shrinking the protection gap and mooting the volume problem, but the court disagreed because the claims themselves did not have any limitations that addressed these issues). That is, the claim must include the components or steps of the invention that provide the improvement described in the specification. However, the claim itself does not need to explicitly recite the improvement described in the specification (e.g., "thereby increasing the bandwidth of the channel"). The full scope of the claim under the BRI should be considered to determine if the claim reflects an improvement in technology (e.g., the improvement described in the specification). In making this determination, it is critical that examiners look at the claim "as a whole," in other words, the claim should be evaluated "as an ordered combination, without ignoring the requirements of the individual steps." When performing this evaluation, examiners should be "careful to avoid oversimplifying the claims" by looking at them generally and failing to account for the specific requirements of the claims. McRO, 837 F.3d at 1313, 120 USPQ2d at 1100."
Numerous claims recite elements that may relate to the improvements the remarks describe as flowing from the invention (such as applying an existing classifier to new-domain data, avoiding retraining of the classifier, or broadening the utility of the existing classifier; remarks pages 10-11). Claim 1, as amended, recites "training a meta input" by considering some form of relation between input data and output prediction. While the meta input is likely relevant to the invention, and thus possibly related to the improvement argued in the remarks, the limitation is stated so broadly and generically that it amounts to "apply it". MPEP 2106.05(f): "The recitation of claim limitations that attempt to cover any solution to an identified problem with no restriction on how the result is accomplished and no description of the mechanism for accomplishing the result, does not integrate a judicial exception into a practical application or provide significantly more because this type of recitation is equivalent to the words "apply it". See Electric Power Group, LLC v. Alstom, S.A., 830 F.3d 1350, 1356, 119 USPQ2d 1739, 1743-44 (Fed. Cir. 2016); Intellectual Ventures I v. Symantec, 838 F.3d 1307, 1327, 120 USPQ2d 1353, 1366 (Fed. Cir. 2016); Internet Patents Corp. v. Active Network, Inc., 790 F.3d 1343, 1348, 115 USPQ2d 1414, 1417 (Fed. Cir. 2015). In contrast, claiming a particular solution to a problem or a particular way to achieve a desired outcome may integrate the judicial exception into a practical application or provide significantly more. See Electric Power, 830 F.3d at 1356, 119 USPQ2d at 1743."

Thus the current claims are not seen as integrating the abstract ideas into practical applications under §101, but they contain elements that, if amended to recite a particular way to achieve the desired outcome, are likely to satisfy §101.

Remarks pages 11-12, Applicant contends: the amendments to the claims distinguish the invention over the prior art.

Response: Applicant's arguments with respect to claims 1-18 have been considered but are moot because the new ground of rejection contains elements that have not been previously examined and does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 2 and 8-18 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Analysis is also given for claims that are not rejected under §101 to clarify claim interpretation, especially for claim 1: claim 1 may not itself recite an abstract idea, but a dependent claim can, and it should be understood whether the elements of claim 1 integrate that dependent claim's abstract idea.

In regards to Claim 1:

Step 1: Is the claim directed to a process, machine, manufacture, or composition of matter? Yes; the claim is directed to a method, and thus a process.

Step 2A Prong 1: Does the claim recite a law of nature, a natural phenomenon, or an abstract idea? No, the claim does not recite an abstract idea.

Step 2A Prong 2: Does the claim recite additional elements that integrate the exception into a practical application? No. Claim 1 recites the following additional elements:

"a meta input method for recycling of a pretrained deep learning model performed by a computer device, the meta input method comprising". At a high level of generality, this is an "apply it" use of a computer device (see MPEP 2106.05(f)).

"training a meta input by considering a relation between input data and output prediction of the pretrained deep learning model, that is based on both the input data and the meta input". At a high level of generality, this is an "apply it" use of a relation between input data and output prediction (see MPEP 2106.05(f)).

"transforming testing data in a user environment using the trained meta input to transform distribution of the testing data into distribution of training data used to build the deep learning model". At a high level of generality, this is an "apply it" use of the trained meta input (see MPEP 2106.05(f)).

Step 2B: Does the claim as a whole amount to significantly more than the judicial exception? No. All elements of the claim, viewed individually or holistically, do not provide an inventive concept or otherwise significantly more than the abstract idea itself. At the same high level of generality: the computer device is an implementation of the abstract idea on a computer, i.e., merely using a computer as a tool to perform it; the generic recitation of "training a meta input" using a relation between input data and output prediction is a variation of "apply it", and additional information on what the relation is or how it is used would give the limitation a more particular interpretation; and the generic recitation of "transforming testing data" using the trained meta input is likewise a variation of "apply it", and additional information on what the transformation is or how it is performed would give the limitation a more particular interpretation.
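
For orientation, the following is a minimal sketch of what "training a meta input by considering a relation between input data and output prediction" of a frozen pretrained model could look like in practice. It is PyTorch-style Python written for this page, consistent with the claim wording and the specification's mention of gradient-based optimization through backpropagation (spec paragraph 24, quoted in the §103 section below); it assumes a classification model and labeled samples from the user environment, and it is an illustration, not the applicant's actual implementation.

    import torch
    import torch.nn.functional as F

    def train_meta_input(model, test_loader, epochs=10, lr=1e-2):
        # The pretrained model is "recycled": its parameters stay frozen.
        model.eval()
        for p in model.parameters():
            p.requires_grad_(False)

        # The meta input is a single learnable perturbation shared by all samples.
        x0, _ = next(iter(test_loader))
        meta = torch.zeros_like(x0[0], requires_grad=True)
        opt = torch.optim.SGD([meta], lr=lr)

        for _ in range(epochs):
            for x, y in test_loader:    # labeled samples from the user environment
                pred = model(x + meta)  # relation between input data and output prediction
                loss = F.cross_entropy(pred, y)
                opt.zero_grad()
                loss.backward()         # backpropagation reaches only the meta input
                opt.step()
        return meta.detach()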
In regards to Claim 2:

Step 2A Prong 1: Does the claim recite a law of nature, a natural phenomenon, or an abstract idea? Yes. Claim 2 recites the following abstract idea: "optimizing the meta input using a gradient-based training algorithm through backpropagation". This limitation is directed to a mathematical concept (see MPEP 2106.04(a)(2) subsection 1).

In regards to Claim 3:

Step 2A Prong 2: Does the claim recite additional elements that integrate the exception into a practical application? No. Claim 3 recites the following additional element: "shifting or aligning the distribution of the testing data in the user environment to suit the distribution of the training data by adding the trained meta input to the testing data". At a high level of generality, this is a continuation of the "apply it" activity of transforming testing data (see MPEP 2106.05(f)).

Step 2B: Does the claim as a whole amount to significantly more than the judicial exception? No. All elements of the claim, viewed individually or holistically, do not provide an inventive concept or otherwise significantly more than the abstract idea itself. A generic recitation of "transforming testing data", even as a shift or alignment of a distribution, is a variation of "apply it": shifting or aligning the distribution places few limits on the claim under the BRI, and additional information on what the transformation is or how it is performed would give the limitation a more particular interpretation.
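
Continuing the hypothetical sketch above, the inference-time use recited in claim 3 (and in claims 8 and 17 below) reduces to one addition and one forward pass; a minimal continuation of the same illustrative code:

    def infer_with_meta(model, x_test, meta):
        # Shift the test input toward the training distribution (claim 3),
        # then reuse the frozen model's parameters as they are (claim 18).
        return model(x_test + meta).argmax(dim=-1)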
In regards to Claim 4:

Step 2A Prong 2: Does the claim recite additional elements that integrate the exception into a practical application? No. Claim 4 recites the following additional element: "matching the distribution of the testing data in the user environment to the distribution of the training data through the trained meta input, such that knowledge a pretrained black box deep neural network (DNN) already learned is able to be utilized even under an environment different from a training environment under which the DNN was trained". At a high level of generality, this is an "apply it" use of the trained meta input (see MPEP 2106.05(f)).

Step 2B: Does the claim as a whole amount to significantly more than the judicial exception? No. All elements of the claim, viewed individually or holistically, do not provide an inventive concept or otherwise significantly more than the abstract idea itself. Using the trained meta input to match the distribution of the testing data is a variation of "apply it", as the use of the trained meta input is recited generically.

In regards to Claim 5:

Step 2A Prong 2: Does the claim recite additional elements that integrate the exception into a practical application? No. Claim 5 recites the following additional element: "generating the meta input in the distribution of the testing data in the user environment, when there is the pretrained deep learning model, before training the meta input". At a high level of generality, this is an "apply it" activity of generating the meta input (see MPEP 2106.05(f)).

Step 2B: Does the claim as a whole amount to significantly more than the judicial exception? No. All elements of the claim, viewed individually or holistically, do not provide an inventive concept or otherwise significantly more than the abstract idea itself. Generating the meta input in a distribution is a variation of "apply it": the generating is recited generically, with no indication of how the meta input is generated or what generates it, and too broad a generation limitation can even support interpreting the limitation itself as an abstract idea.

In regards to Claim 6:

Step 2A Prong 2: Does the claim recite additional elements that integrate the exception into a practical application? No. Claim 6 recites the following additional element: "generating the meta input through ground truth of a sample of the testing data in the user environment". At a high level of generality, this is an "apply it" activity of generating the meta input using a ground truth (see MPEP 2106.05(f)).

Step 2B: Does the claim as a whole amount to significantly more than the judicial exception? No.
All elements of the claim, viewed individually or holistically, do not provide an inventive concept or otherwise significantly more than the abstract idea itself. Generating the meta input using a ground truth is a variation of "apply it", as the use of the ground truth for generating the meta input is recited generically.

In regards to Claim 7:

Step 2A Prong 2: Does the claim recite additional elements that integrate the exception into a practical application? No. Claim 7 recites the following additional elements:

"sampling the testing data in the user environment". This limitation is directed to the insignificant extra-solution activity of mere data gathering (see MPEP § 2106.05(g)).

"generating the meta input using the output prediction of the deep learning model and the sampled testing data in the user environment". At a high level of generality, this is an "apply it" activity of generating the meta input (see MPEP 2106.05(f)).

Step 2B: Does the claim as a whole amount to significantly more than the judicial exception? No. All elements of the claim, viewed individually or holistically, do not provide an inventive concept or otherwise significantly more than the abstract idea itself. The sampling is a well-understood, routine, conventional activity of transmitting data (see MPEP 2106.05(d), example i in computer functions), and generating the meta input using the output prediction and the sampled testing data is a variation of "apply it", as that use is recited generically.

In regards to Claim 8:

Step 2A Prong 1: Does the claim recite a law of nature, a natural phenomenon, or an abstract idea? Yes. Claim 8 recites the following abstract idea: "inferring an input obtained by adding the trained meta input to the testing data in the user environment". This limitation is directed to a mental process, a concept performed in the human mind including observation, evaluation, judgment, or opinion (see MPEP 2106.04(a)(2) subsection 3); here, evaluation.
The amendments to claims 8 and 1 moved the interpretation away from math, but "inferring an input" leaves the claim appearing to be directed to the abstract idea of a mental process.

Step 2A Prong 2: Does the claim recite additional elements that integrate the exception into a practical application? No. Claim 8 recites the following additional element: "inputting the inferred input to the deep learning model". This limitation is directed to the insignificant extra-solution activity of mere data gathering (see MPEP § 2106.05(g)).

Step 2B: Does the claim as a whole amount to significantly more than the judicial exception? No. All elements of the claim, viewed individually or holistically, do not provide an inventive concept or otherwise significantly more than the abstract idea itself. Inputting the inferred input to the deep learning model is a well-understood, routine, conventional activity of transmitting data (see MPEP 2106.05(d), example i in computer functions).

In regards to Claim 9:

Step 1: Is the claim directed to a process, machine, manufacture, or composition of matter? Yes; the claim is directed to a method, and thus a process.

Step 2A Prong 1: Does the claim recite a law of nature, a natural phenomenon, or an abstract idea? Yes. Claim 9 recites the following abstract idea: "inferring an input obtained by adding the trained meta input to the testing data in the user environment". This limitation is directed to a mental process (see MPEP 2106.04(a)(2) subsection 3); here, evaluation. The amendments to claim 9 moved the interpretation away from math, as the claim now relates more to a transformation than to an "adding" in the mathematical sense, but "inferring an input" leaves the claim appearing to be directed to a mental process.

Step 2A Prong 2: Does the claim recite additional elements that integrate the exception into a practical application? No. Claim 9 recites the following additional elements:

"A user-centered inference method via a meta input for recycling of a pretrained deep learning model performed by a computer device, the user-centered inference method comprising". At a high level of generality, this is an "apply it" use of a computer device (see MPEP 2106.05(f)).

"generating a meta input in distribution of testing data in a user environment, when there is the pretrained deep learning model". At a high level of generality, this is an "apply it" activity of generating the meta input (see MPEP 2106.05(f)).
"the generating including training the meta input by considering a relation between input data and output prediction of the pretrained deep learning model, that is based on both the input data and the meta input". At a high level of generality, this is an "apply it" use of a relation between input data and output prediction (see MPEP 2106.05(f)).

"transforming the testing data in the user environment using the trained meta input to transform the distribution of the testing data into distribution of training data used to build the deep learning model". At a high level of generality, this is an "apply it" use of the trained meta input (see MPEP 2106.05(f)).

"inputting the inferred input to the deep learning model". This limitation is directed to the insignificant extra-solution activity of mere data gathering (see MPEP § 2106.05(g)).

Step 2B: Does the claim as a whole amount to significantly more than the judicial exception? No. All elements of the claim, viewed individually or holistically, do not provide an inventive concept or otherwise significantly more than the abstract idea itself. As with claim 1: the computer device merely uses a computer as a tool to perform the abstract idea; generating the meta input in a distribution is recited generically, with no indication of how the meta input is generated or what generates it, and too broad a generation limitation can even support interpreting the limitation itself as an abstract idea; and the generic recitation of "training the meta input" using a relation between input data and output prediction is a variation of "apply it", where additional information on what the relation is or how it is used would give the limitation a more particular interpretation.
The generic recitation of "transforming the testing data" using the trained meta input is likewise a variation of "apply it", where additional information on what the transformation is or how it is performed would give the limitation a more particular interpretation, and inputting the inferred input to the deep learning model is a well-understood, routine, conventional activity of transmitting data (see MPEP 2106.05(d), example i in computer functions).

In regards to Claim 10: Step 2A Prong 1: Yes; claim 10 recites the same abstract ideas as claim 2.

In regards to Claim 11: Step 2A Prong 2: No, and Step 2B: No; claim 11 recites the same additional elements as claim 3, with the same analysis.

In regards to Claim 12: Step 2A Prong 2: No, and Step 2B: No; claim 12 recites the same additional elements as claim 4, with the same analysis.

In regards to Claim 13:

Step 1: Is the claim directed to a process, machine, manufacture, or composition of matter? No; the claim is directed to software per se. The recited "user-centered inference system via a meta input, the user-centered inference system comprising" indicates no hardware, so the system can exist solely as software. This is supported by the current specification ([Current Specification 0142]: "The foregoing devices may be realized by hardware elements, software elements and/or combinations thereof.
For example, the described systems, devices components illustrated in the exemplary embodiments of the inventive concept may be implemented in one or more general-use computers or special-purpose computers, such as a processor, a controller, an arithmetic logic unit (ALU), a digital signal processor, a microcomputer, a field programmable array (FPGA), a programmable logic unit (PLU), a microprocessor or any device which may execute instructions and respond.").

Step 2A Prong 1: Does the claim recite a law of nature, a natural phenomenon, or an abstract idea? Yes. Claim 13 recites the following abstract idea: "an inference unit configured to infer an input obtained by adding the generated meta input to the testing data in the user environment". This limitation is directed to a mathematical concept (see MPEP 2106.04(a)(2) subsection 1); it is still interpreted as math because no amendment altered the recited "adding".

Step 2A Prong 2: Does the claim recite additional elements that integrate the exception into a practical application? No. Claim 13 recites the following additional elements:

"input the inferred input to the pretrained deep learning model". This limitation is directed to the insignificant extra-solution activity of mere data gathering (see MPEP § 2106.05(g)).

"a generator configured to generate a meta input in distribution of testing data in a user environment, when there is a pretrained deep learning model". At a high level of generality, this is an "apply it" activity of generating the meta input (see MPEP 2106.05(f)).

"including training the meta input by considering a relation between input data and output prediction, of the pretrained deep learning model, that is based on both the input data and the meta input". At a high level of generality, this is an "apply it" use of a relation between input data and output prediction (see MPEP 2106.05(f)).

Step 2B: Does the claim as a whole amount to significantly more than the judicial exception? No. All elements of the claim, viewed individually or holistically, do not provide an inventive concept or otherwise significantly more than the abstract idea itself. Inputting the inferred input is a well-understood, routine, conventional activity of transmitting data (see MPEP 2106.05(d), example i in computer functions). Generating the meta input in a distribution is recited generically, with no indication of how the meta input is generated or what generates it.
Too broad a generation limitation can even support interpreting the limitation itself as an abstract idea. The generic recitation of "training the meta input" using a relation between input data and output prediction is likewise a variation of "apply it"; additional information on what the relation is or how it is used would give the limitation a more particular interpretation.

In regards to Claim 14:

Step 2A Prong 2: Does the claim recite additional elements that integrate the exception into a practical application? No. Claim 14 recites the following additional element: "wherein the generator generates the meta input through ground truth of a sample of the testing data in the user environment". At a high level of generality, this is an "apply it" activity of generating the meta input using a ground truth (see MPEP 2106.05(f)).

Step 2B: Does the claim as a whole amount to significantly more than the judicial exception? No. All elements of the claim, viewed individually or holistically, do not provide an inventive concept or otherwise significantly more than the abstract idea itself. Generating the meta input using a ground truth is a variation of "apply it", as the use of the ground truth is recited generically.

In regards to Claim 15:

Step 2A Prong 2: Does the claim recite additional elements that integrate the exception into a practical application? No. Claim 15 recites the following additional elements:

"wherein the generator samples the testing data in the user environment". This limitation is directed to the insignificant extra-solution activity of mere data gathering (see MPEP § 2106.05(g)).

"generates the meta input using the output prediction of the pretrained deep learning model and the sampled testing data in the user environment". At a high level of generality, this is an "apply it" activity of generating the meta input (see MPEP 2106.05(f)).

Step 2B: Does the claim as a whole amount to significantly more than the judicial exception? No.
All elements of the claim, viewed individually or holistically, do not provide an inventive concept or otherwise significantly more than the abstract idea itself. The sampling is a well-understood, routine, conventional activity of transmitting data (see MPEP 2106.05(d), example i in computer functions), and generating the meta input using the output prediction and the sampled testing data is a variation of "apply it", as that use is recited generically.

In regards to Claim 16:

Step 2A Prong 2: Does the claim recite additional elements that integrate the exception into a practical application? No. Claim 16 recites the following additional element: "wherein the generator minimizes a loss function for the meta input to optimize the meta input, in generating the meta input". At a high level of generality, this is an "apply it" use of the meta input and a loss function (see MPEP 2106.05(f)).

Step 2B: Does the claim as a whole amount to significantly more than the judicial exception? No. All elements of the claim, viewed individually or holistically, do not provide an inventive concept or otherwise significantly more than the abstract idea itself. A generic recitation of "minimizes a loss function" for the meta input is a variation of "apply it".
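
On its face, claim 16's "minimizes a loss function for the meta input" can be written as a single objective over the meta input alone (our notation, not the applicant's): with the pretrained model f held fixed, user-environment samples x_i with labels y_i, and a per-sample loss l,

    phi* = argmin_phi (1/N) * sum_{i=1..N} l(f(x_i + phi), y_i)

Only phi, the meta input, is optimized; the parameters of f are untouched, which is the reading claim 18 later makes explicit.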
In regards to Claim 17:

Step 2A Prong 1: Does the claim recite a law of nature, a natural phenomenon, or an abstract idea? Yes. Claim 17 recites the following abstract idea: "inference unit adds the trained meta input to the testing data in the user environment". This limitation is directed to a mathematical concept (see MPEP 2106.04(a)(2) subsection 1).

Step 2A Prong 2: Does the claim recite additional elements that integrate the exception into a practical application? No. Claim 17 recites the following additional element: "inputs the input, in which the testing data in the user environment and the trained meta input are combined with each other, to the pretrained deep learning model". This limitation is directed to the insignificant extra-solution activity of mere data gathering (see MPEP § 2106.05(g)).

Step 2B: Does the claim as a whole amount to significantly more than the judicial exception? No. All elements of the claim, viewed individually or holistically, do not provide an inventive concept or otherwise significantly more than the abstract idea itself. Inputting the combined input to the pretrained deep learning model is a well-understood, routine, conventional activity of transmitting data (see MPEP 2106.05(d), example i in computer functions).

In regards to Claim 18:

Step 2A Prong 2: Does the claim recite additional elements that integrate the exception into a practical application? No. Claim 18 recites the following additional element: "wherein the inference unit maintains performance in the distribution of the testing data in the user environment while using parameters of the pretrained deep learning model as they are". At a high level of generality, this is an "apply it" activity of maintaining performance (see MPEP 2106.05(f)).

Step 2B: Does the claim as a whole amount to significantly more than the judicial exception? No. All elements of the claim, viewed individually or holistically, do not provide an inventive concept or otherwise significantly more than the abstract idea itself. A generic recitation of the inference unit maintaining performance in a distribution while the model's parameters are left as they are is a variation of "apply it".

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-18 are rejected under 35 U.S.C. 103 as being unpatentable over Saenko et al. ("Adapting Visual Category Models to New Domains"), referred to as Saenko, in view of Krishnan et al. (US 2021/0110306 A1), referred to as Krishnan, and further in view of Clinchant et al. (US 2017/0161633 A1), referred to as Clinchant.

Regarding Claim 1:

Saenko teaches "a meta input method for recycling of a pretrained deep learning model" ([Saenko, Introduction, page 1]: "In this paper, we explore the issue of domain shift in the context of object recognition, and present a novel method that adapts existing classifiers [a meta input method for recycling of a pretrained deep learning model] to new domains where labeled data is scarce.").

Saenko also teaches "the meta input method comprising: training a meta input" and "transforming testing data in a user environment using the trained meta input to transform distribution of the testing data into distribution of training data used to build the deep learning model" ([Saenko, Introduction, page 2]: "In this paper, we introduce a novel domain adaptation technique based on crossdomain transformations. The key idea, illustrated in Figure 2, is to learn [training a meta input, as optimizing is read as training per paragraph 24 of the current specification ("The optimizing of the meta input may include optimizing the meta input using a gradient-based training algorithm through backpropagation.")] a regularized non-linear transformation that maps points in the source domain (green) closer to those in the target domain (blue), using supervised data from both domains. The input consists of labeled pairs of inter-domain examples that are known to be either similar (black lines) or dissimilar (red lines). The output is the learned transformation, which can be applied to previously unseen test data points [transforming testing data in a user environment using the trained meta input to transform distribution of the testing data into distribution of training data used to build the deep learning model]. One of the key advantages of our transform-based approach is that it can be applied over novel test samples from categories seen at training time, and can also generalize to new categories which were not present at training time."). This passage is relied on again for several dependent claims below.
Saenko does not explicitly teach "performed by a computer device" or "training a meta input by considering a relation between input data and output prediction, of the pretrained deep learning model, that is based on both the input data and the meta input".

Krishnan teaches "performed by a computer device" ([Krishnan 0037]: "Both the server 20 and the user devices 42 may include hardware components of typical computing devices, including a processor, input devices (e.g., keyboard, pointing device, microphone for voice commands, buttons, touchscreen, etc.), and output devices (e.g., a display device, speakers, and the like). The server 20 and user devices 42 [performed by a computer device] may include computer-readable media, e.g., memory and storage devices (e.g., flash memory, hard drive, optical disk drive, magnetic disk drive, and the like) containing computer instructions that implement the functionality disclosed herein when executed by the processor. The server 20 and the user devices 42 may further include wired or wireless network communication interfaces for communication.").

One of ordinary skill in the art, prior to the effective filing date, would have been motivated to combine Saenko and Krishnan: both are in the same field of endeavor, machine learning, and incorporating computer elements such as user devices, processors, and memory allows the invention to be implemented in a device or product ([Krishnan 0037], quoted above).

Clinchant teaches "training a meta input by considering a relation between input data and output prediction, of the pretrained deep learning model, that is based on both the input data and the meta input" ([Clinchant 0013]: "At least a second of the plurality of iterations includes repeating the generating of a set of corrupted samples, learning a transformation, and generating adapted class label predictions. The set of corrupted samples for this iteration are generated from augmented representations that are based on adapted class label predictions [training a meta input by considering a relation between input data and output prediction, of the pretrained deep learning model, that is based on both the input data and the meta input] from a preceding iteration. Information based on the adapted class label predictions of one of the plurality of iterations is output."). Beyond paragraph 13, figure 2 of Clinchant supports relating this function to the claimed subject matter.

One of ordinary skill in the art, prior to the effective filing date, would have been motivated to combine Saenko and Clinchant: both are in the same field of endeavor, machine learning.
The combination provides a method of training the meta input that is less reliant on source data or knowledge of the model ([Clinchant 0006]: "In reality, the assumption of available source instances rarely holds. The source instances may become unavailable for technical reasons, or are disallowed to store for legal and privacy reasons. More realistic are situations where the source domain instances cannot be accessed but the source decision making procedures are available. These procedures are often presented in the form of classification services, which were trained on source data, available for a direct deployment and later reuse."). The idea of the classifier being a black box is supported by [Clinchant 0026]: "The first prediction component 50 inputs the target samples 40 to the source classifier 32 to generate input class label predictions 62 for the target samples. This may include sending the target samples 40 to the remote source device for classification with the source classifier or using the source classifier 32 locally, as a black box."

Regarding Claim 2:

The method of claim 1 is taught by Saenko, Krishnan, and Clinchant. Primary reference Saenko does not explicitly teach that the training involves gradient-based training through backpropagation; Krishnan is relied on to fill this gap.

Krishnan teaches "optimizing the meta input using a gradient-based training algorithm through backpropagation" ([Krishnan 0097]: "To achieve this organization of the embeddings, the meta-model may backpropagate [through backpropagation] the extracted multi-linear context embeddings c.sup.n into the user embedding space and create context conditioned clusters of users for item ranking"; [Krishnan 0022]: "In recent times, gradient based [gradient-based] meta-learning has been proposed as a framework to few-shot adapt (e.g., with a small number of samples) a single base learner to multiple semantically similar tasks.").

One of ordinary skill in the art, prior to the effective filing date, would have been motivated to combine Saenko and Krishnan: both are in the same field of endeavor, machine learning, and incorporating backpropagation and gradient-based algorithms helps train or adapt a learner ([Krishnan 0022], quoted above).
Regarding Claim 3:

The method of claim 1 is taught by Saenko, Krishnan, and Clinchant. Saenko teaches "shifting or aligning the distribution of the testing data in the user environment to suit the distribution of the training data by adding the trained meta input to the testing data" (the Saenko passage quoted for claim 1: "The output is the learned transformation, which can be applied to previously unseen test data points [shifting or aligning the distribution of the testing data in the user environment to suit the distribution of the training data by adding the trained meta input to the testing data].").

Regarding Claim 4:

The method of claim 1 is taught by Saenko, Krishnan, and Clinchant. Saenko teaches "matching the distribution of the testing data in the user environment to the distribution of the training data through the trained meta input, such that knowledge a pretrained black box deep neural network (DNN) already learned is able to be utilized even under an environment different from a training environment under which the DNN was trained" ([Saenko, Introduction, page 1]: "Supervised classification methods, such as kernel-based and nearest-neighbor classifiers, have been shown to perform very well on standard object recognition tasks (e.g. [4], [17], [3]). However, many such methods expect the test images to come from the same distribution as the training images, and often fail when presented with a novel visual domain. While the problem of domain adaptation has received significant recent attention in the natural language processing community, it has been largely overlooked in the object recognition field. In this paper, we explore the issue of domain shift in the context of object recognition, and present a novel method that adapts existing classifiers [matching the distribution of the testing data in the user environment to the distribution of the training data through the trained meta input] to new domains where labeled data is scarce.").

Regarding Claim 5:

The method of claim 1 is taught by Saenko, Krishnan, and Clinchant. Saenko teaches "generating the meta input in the distribution of the testing data in the user environment, when there is the pretrained deep learning model, before training the meta input" (the Saenko passage quoted for claim 1: "The key idea, illustrated in Figure 2, is to learn a regularized non-linear transformation [generating the meta input in the distribution of the testing data in the user environment, when there is the pretrained deep learning model: the meta input is read as the transformation, and training the transformation implies the transformation existed before training, so it can be considered generated before optimizing/training] that maps points in the source domain (green) closer to those in the target domain (blue), using supervised data from both domains. ... The output is the learned transformation, which can be applied to previously unseen test data points [before training the meta input].").
One of the key advantages of our transform-based approach is that it can be applied over novel test samples from categories seen at training time, and can also generalize to new categories which were not present at training time." The idea that the mapping covers the meta input being within the distribution of the training data is supported by the interpretation that the meta input being optimized was already trained (at least to some extent) to fit a distribution. In that case, the above quote from Saenko would teach the initial "generating the meta input in the distribution of the testing data in the user environment, when there is the pretrained deep learning model," and the "before optimizing the meta input" would be the training step for unlabeled data taught in claim 1 ("considering a relation between input data and output prediction of the pretrained deep learning model") by Clinchant, as any further training of the meta input/transformation can be considered optimization of the meta input. The motivation for this interpretation to combine with Clinchant would be the same motivation provided in claim 1 to combine with Clinchant.

Regarding Claim 6: The method of claim 5 is taught by Saenko, Krishnan, and Clinchant. Saenko teaches: generating the meta input through ground truth of a sample of the testing data in the user environment [Saenko Introduction page 2]: "In this paper, we introduce a novel domain adaptation technique based on crossdomain transformations. The key idea, illustrated in Figure 2, is to learn a regularized non-linear transformation that maps points in the source domain (green) closer to those in the target domain (blue), using supervised data from both domains. The input consists of labeled pairs [generating the meta input through ground truth of a sample of the testing data in the user environment, as the label would be considered the ground truth] of inter-domain examples that are known to be either similar (black lines) or dissimilar (red lines). The output is the learned transformation, which can be applied to previously unseen test data points. One of the key advantages of our transform-based approach is that it can be applied over novel test samples from categories seen at training time, and can also generalize to new categories which were not present at training time." In support of the combination from claim 1, also using Clinchant for part of the optimization, Clinchant also teaches the use of ground truth on the data ([Clinchant 0059]: "As an illustrative example, suppose that the task is to extract sentiments (positive/negative) from product reviews. The source samples, on which the source classifier 32 is trained, may be reviews of books, while the target samples may be reviews of movies, for example on DVDs. The book reviews may include one or more text sequences such as "I really liked the characters in this book," which may be labeled [ground truth] with a "positive" opinion, or "This book is a waste of money," which may be associated with a "negative" opinion label. The aim is to be able to label the target reviews, which may include text sequences such as "The acting was very good," or "This movie was a waste of money." The book and movie opinions may be represented by bag-of-words representations 36, 40 for example. The system 10 has no access to the source reviews, their bag-of-word representations 36, or their labels on which the source classifier 32 was trained.
Nor, it is assumed, are there any labels for the samples 40 generated from the movie opinions in the target domain. It may also be assumed that the system 10 has no access to the parameters of the classifier 32. The classifier 32 can be of any type, such as linear regression, logistic regression, etc."). The combination is supported to show that, with the same motivation to combine as in claim 1, it still makes sense even in the context of claim 6.

Regarding Claim 7: The method of claim 6 is taught by Saenko, Krishnan, and Clinchant. Saenko teaches: sampling the testing data in the user environment [Saenko Introduction page 2]: "In this paper, we introduce a novel domain adaptation technique based on crossdomain transformations. The key idea, illustrated in Figure 2, is to learn a regularized non-linear transformation that maps points in the source domain (green) closer to those in the target domain (blue), using supervised data from both domains. The input consists of labeled pairs [sampling the testing data in the user environment] of inter-domain examples that are known to be either similar (black lines) or dissimilar (red lines). The output is the learned transformation, which can be applied to previously unseen test data points. One of the key advantages of our transform-based approach is that it can be applied over novel test samples from categories seen at training time, and can also generalize to new categories which were not present at training time." Clinchant teaches: and generating the meta input using the output prediction of the deep learning model and the sampled testing data in the user environment [Clinchant 0013]: "At least a second of the plurality of iterations includes repeating the generating of a set of corrupted samples, learning a transformation, and generating adapted class label predictions. The set of corrupted samples for this iteration are generated from augmented representations that are based on adapted class label predictions [and generating the meta input using the output prediction of the deep learning model and the sampled testing data in the user environment] from a preceding iteration. Information based on the adapted class label predictions of one of the plurality of iterations is output." and the sampled testing data in the user environment [Clinchant 0059]: "As an illustrative example, suppose that the task is to extract sentiments (positive/negative) from product reviews. The source samples, on which the source classifier 32 is trained, may be reviews of books, while the target samples [and the sampled testing data in the user environment; thus Clinchant also notes sampling testing data, as it notes the samples in a target environment] may be reviews of movies, for example on DVDs." The motivation to combine with Clinchant is the same motivation to combine with Clinchant in claim 1.

Regarding Claim 8: The method of claim 1 is taught by Saenko, Krishnan, and Clinchant. Saenko teaches: inferring an input obtained by adding the trained meta input to the testing data in the user environment, and inputting the inferred input to the deep learning model [Saenko Introduction page 2]: "In this paper, we introduce a novel domain adaptation technique based on crossdomain transformations. The key idea, illustrated in Figure 2, is to learn a regularized non-linear transformation that maps points in the source domain (green) closer to those in the target domain (blue), using supervised data from both domains.
The input consists of labeled pairs of inter-domain examples that are known to be either similar (black lines) or dissimilar (red lines). The output is the learned transformation, which can be applied to previously unseen test data points [inferring an input obtained by adding the trained meta input to the testing data in the user environment]. One of the key advantages of our transform-based approach is that it can be applied over novel test samples [and inputting the inferred input to the deep learning model] from categories seen at training time, and can also generalize to new categories which were not present at training time."

Regarding Claim 9: Saenko teaches: A user-centered inference method via a meta input for recycling of a pretrained deep learning model performed by a computer device, the user-centered inference method comprising [Saenko Introduction page 1]: "In this paper, we explore the issue of domain shift in the context of object recognition, and present a novel method that adapts existing classifiers [A user-centered inference method via a meta input for recycling of a pretrained deep learning model performed by a computer device, the user-centered inference method comprising] to new domains where labeled data is scarce." generating a meta input in distribution of testing data in a user environment, when there is the pretrained deep learning model; [Saenko Introduction page 2]: "In this paper, we introduce a novel domain adaptation technique based on crossdomain transformations. The key idea, illustrated in Figure 2, is to learn a regularized non-linear transformation [generating the meta input in the distribution of the testing data in the user environment, when there is the pretrained deep learning model, as the meta input is considered the transformation, and the training of the transformation implies the transformation existed before training and thus can be considered generated before optimizing/training; further interpretation is discussed for the similar limitation in claim 5] that maps points in the source domain (green) closer to those in the target domain (blue), using supervised data from both domains. The input consists of labeled pairs of inter-domain examples that are known to be either similar (black lines) or dissimilar (red lines). The output is the learned transformation, which can be applied to previously unseen test data points. One of the key advantages of our transform-based approach is that it can be applied over novel test samples from categories seen at training time, and can also generalize to new categories which were not present at training time." Including inferring an input obtained by adding the trained meta input to the testing data in the user environment, and inputting the inferred input to the deep learning model [Saenko Introduction page 2]: "In this paper, we introduce a novel domain adaptation technique based on crossdomain transformations. The key idea, illustrated in Figure 2, is to learn a regularized non-linear transformation that maps points in the source domain (green) closer to those in the target domain (blue), using supervised data from both domains. The input consists of labeled pairs of inter-domain examples that are known to be either similar (black lines) or dissimilar (red lines). The output is the learned transformation, which can be applied to previously unseen test data points [Including inferring an input obtained by adding the trained meta input to the testing data in the user environment].
One of the key advantages of our transform-based approach is that it can be applied over novel test samples [and inputting the inferred input to the deep learning model] from categories seen at training time, and can also generalize to new categories which were not present at training time." The generating including training the meta input by considering a relation between input data and output prediction, of the pretrained deep learning model, that is based on both the input data and the meta input [Saenko Introduction page 2]: "In this paper, we introduce a novel domain adaptation technique based on crossdomain transformations. The key idea, illustrated in Figure 2, is to learn [The generating including training the meta input, as optimizing is seen as referring to training according to paragraph 24 of the current specification ("The optimizing of the meta input may include optimizing the meta input using a gradient-based training algorithm through backpropagation.")] a regularized non-linear transformation that maps points in the source domain (green) closer to those in the target domain (blue), using supervised data from both domains. The input consists of labeled pairs of inter-domain examples that are known to be either similar (black lines) or dissimilar (red lines). The output is the learned transformation, which can be applied to previously unseen test data points [that is based on both the input data and the meta input, as it notes the transformation (meta input) can be applied to unseen data (input data)]. One of the key advantages of our transform-based approach is that it can be applied over novel test samples from categories seen at training time, and can also generalize to new categories which were not present at training time." transforming the testing data in the user environment using the trained meta input to transform the distribution of the testing data into distribution of training data used to build the deep learning model [Saenko Introduction page 2]: "In this paper, we introduce a novel domain adaptation technique based on crossdomain transformations. The key idea, illustrated in Figure 2, is to learn a regularized non-linear transformation that maps points in the source domain (green) closer to those in the target domain (blue), using supervised data from both domains. The input consists of labeled pairs of inter-domain examples that are known to be either similar (black lines) or dissimilar (red lines). The output is the learned transformation, which can be applied to previously unseen test data points [transforming the testing data in the user environment using the trained meta input to transform the distribution of the testing data into distribution of training data used to build the deep learning model]. One of the key advantages of our transform-based approach is that it can be applied over novel test samples from categories seen at training time, and can also generalize to new categories which were not present at training time."
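For orientation, the technique at issue across claims 1 and 9 can be made concrete with a short sketch: a learnable additive "meta input" is trained by gradient descent, through backpropagation, so that a frozen pretrained model behaves on (test data + meta input) as it would on its original training distribution. The sketch below is illustrative only, not code from the application or any cited reference; the entropy objective is one common unsupervised choice (the claims do not mandate a specific loss), and the `model` and `test_loader` names, the input shape, and all hyperparameters are assumptions.

```python
import torch
import torch.nn.functional as F

def train_meta_input(model, test_loader, input_shape, steps=500, lr=1e-2):
    # Freeze the pretrained model; only the meta input is trainable.
    model.eval()
    for p in model.parameters():
        p.requires_grad_(False)
    delta = torch.zeros(input_shape, requires_grad=True)  # the meta input
    opt = torch.optim.Adam([delta], lr=lr)                # gradient-based optimizer
    batches = iter(test_loader)
    for _ in range(steps):
        try:
            x = next(batches)
        except StopIteration:
            batches = iter(test_loader)
            x = next(batches)
        logits = model(x + delta)          # add the meta input to the test batch
        probs = F.softmax(logits, dim=-1)
        # Score the relation between input and output prediction as
        # prediction entropy (an assumed, common unsupervised objective).
        loss = -(probs * probs.clamp_min(1e-8).log()).sum(dim=-1).mean()
        opt.zero_grad()
        loss.backward()                    # backpropagation updates only delta
        opt.step()
    return delta.detach()

# At inference time, the trained meta input is simply added to each test sample:
#     prediction = model(x_test + delta)
```

Note that the backpropagation step assumes gradients can flow through the pretrained model; a strictly black-box deployment of the kind Clinchant describes would instead require gradient-free estimation. Returning to the examiner's analysis of claim 9: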
Saenko does not explicitly teach:
A user-centered inference method via a meta input for recycling of a pretrained deep learning model performed by a computer device, the user-centered inference method comprising
The generating including training the meta input by considering a relation between input data and output prediction, of the pretrained deep learning model, that is based on both the input data and the meta input
Krishnan teaches: A user-centered inference method via a meta input for recycling of a pretrained deep learning model performed by a computer device, the user-centered inference method comprising [Krishnan 0037]: "Both the server 20 and the user devices 42 may include hardware components of typical computing devices, including a processor, input devices (e.g., keyboard, pointing device, microphone for voice commands, buttons, touchscreen, etc.), and output devices (e.g., a display device, speakers, and the like). The server 20 and user devices 42 [performed by a computer device] may include computer-readable media, e.g., memory and storage devices (e.g., flash memory, hard drive, optical disk drive, magnetic disk drive, and the like) containing computer instructions that implement the functionality disclosed herein when executed by the processor. The server 20 and the user devices 42 may further include wired or wireless network communication interfaces for communication." The motivation to combine with Krishnan to incorporate computer elements is the same as the motivation to combine with Krishnan for computer elements in claim 1. Clinchant teaches: The generating including training the meta input by considering a relation between input data and output prediction, of the pretrained deep learning model, that is based on both the input data and the meta input [Clinchant 0013]: "At least a second of the plurality of iterations includes repeating the generating of a set of corrupted samples, learning a transformation, and generating adapted class label predictions. The set of corrupted samples for this iteration are generated from augmented representations that are based on adapted class label predictions [The generating including training the meta input by considering a relation between input data and output prediction, of the pretrained deep learning model, that is based on both the input data and the meta input] from a preceding iteration. Information based on the adapted class label predictions of one of the plurality of iterations is output." Aside from paragraph 13, figure 2 of Clinchant further supports that the cited functionality of Clinchant relates to the subject matter. The motivation to combine with Clinchant is the same motivation to combine with Clinchant in claim 1.

Regarding Claim 10: The method of claim 9 is taught by Saenko, Krishnan, and Clinchant. This claim is analogous to claim 2.

Regarding Claim 11: The method of claim 9 is taught by Saenko, Krishnan, and Clinchant. This claim is analogous to claim 3.

Regarding Claim 12: The method of claim 9 is taught by Saenko, Krishnan, and Clinchant. This claim is analogous to claim 4.

Regarding Claim 13: Saenko teaches: a generator configured to generate a meta input in distribution of testing data in a user environment, when there is a pretrained deep learning model [Saenko Introduction page 2]: "In this paper, we introduce a novel domain adaptation technique based on crossdomain transformations.
The key idea, illustrated in Figure 2, is to learn a regularized non-linear transformation [a generator configured to generate a meta input in distribution of testing data in a user environment, when there is a pretrained deep learning model; more details on interpretation are provided in the claim 5 mapping of a similar limitation, and this limitation, being broader, is seen as covered/taught by Saenko] that maps points in the source domain (green) closer to those in the target domain (blue), using supervised data from both domains. The input consists of labeled pairs of inter-domain examples that are known to be either similar (black lines) or dissimilar (red lines). The output is the learned transformation, which can be applied to previously unseen test data points. One of the key advantages of our transform-based approach is that it can be applied over novel test samples from categories seen at training time, and can also generalize to new categories which were not present at training time." and an inference unit configured to infer an input obtained by adding the generated meta input to the testing data in the user environment, and input the inferred input to the pretrained deep learning model [Saenko Introduction page 2]: "In this paper, we introduce a novel domain adaptation technique based on crossdomain transformations. The key idea, illustrated in Figure 2, is to learn a regularized non-linear transformation that maps points in the source domain (green) closer to those in the target domain (blue), using supervised data from both domains. The input consists of labeled pairs of inter-domain examples that are known to be either similar (black lines) or dissimilar (red lines). The output is the learned transformation, which can be applied to previously unseen test data points [and an inference unit configured to infer an input obtained by adding the generated meta input to the testing data in the user environment]. One of the key advantages of our transform-based approach is that it can be applied over novel test samples [and input the inferred input to the pretrained deep learning model] from categories seen at training time, and can also generalize to new categories which were not present at training time." Saenko does not explicitly teach:
A user-centered inference system via a meta input, the user-centered inference system comprising
Including training the meta input by considering a relation between input data and output prediction, of the pretrained deep learning model, that is based on both the input data and the meta input
Krishnan teaches: A user-centered inference system via a meta input, the user-centered inference system comprising [Krishnan 0037]: "Both the server 20 and the user devices 42 may include hardware components of typical computing devices, including a processor, input devices (e.g., keyboard, pointing device, microphone for voice commands, buttons, touchscreen, etc.), and output devices (e.g., a display device, speakers, and the like). The server 20 and user devices 42 [A user-centered inference system via a meta input, the user-centered inference system comprising] may include computer-readable media, e.g., memory and storage devices (e.g., flash memory, hard drive, optical disk drive, magnetic disk drive, and the like) containing computer instructions that implement the functionality disclosed herein when executed by the processor.
The server 20 and the user devices 42 may further include wired or wireless network communication interfaces for communication." The motivation to combine with Krishnan to incorporate computer elements is the same as the motivation to combine with Krishnan for computer elements in claim 1. Clinchant teaches: Including training the meta input by considering a relation between input data and output prediction, of the pretrained deep learning model, that is based on both the input data and the meta input [Clinchant 0013]: "At least a second of the plurality of iterations includes repeating the generating of a set of corrupted samples, learning a transformation, and generating adapted class label predictions. The set of corrupted samples for this iteration are generated from augmented representations that are based on adapted class label predictions [Including training the meta input by considering a relation between input data and output prediction, of the pretrained deep learning model, that is based on both the input data and the meta input] from a preceding iteration. Information based on the adapted class label predictions of one of the plurality of iterations is output." The motivation to combine with Clinchant is the same motivation to combine with Clinchant in claim 1.

Regarding Claim 14: The system of claim 13 is taught by Saenko, Krishnan, and Clinchant. The limitations in claim 14 are synonymous with the teachings of claim 6, as claim 6's limitation is taught by Saenko.

Regarding Claim 15: The system of claim 14 is taught by Saenko, Krishnan, and Clinchant. The limitations in claim 15 are synonymous with the teachings of claim 7, as claim 7's limitation is taught by Saenko.

Regarding Claim 16: The system of claim 15 is taught by Saenko, Krishnan, and Clinchant. Saenko teaches: wherein the generator minimizes a loss function for the meta input to optimize the meta input, in generating the meta input [Saenko, Section 3 (Domain Adaptation Using Regularized Cross-Domain Transforms), page 5]: "We will discuss the exact form of supervision we propose for domain adaptation problems in Section 3.1, but for now assume that it is a function of the learned similarity values sim_W(x, y) (i.e., a function of the matrix X^T W Y), so a general optimization problem would seek to minimize [wherein the generator minimizes a loss function for the meta input to optimize the meta input, in generating the meta input] the regularizer subject to supervision constraints given by functions c"

Regarding Claim 17: The system of claim 16 is taught by Saenko, Krishnan, and Clinchant. Saenko teaches: wherein the inference unit adds the trained meta input to the testing data in the user environment and inputs the input, in which the testing data in the user environment and the trained meta input are combined with each other, to the pretrained deep learning model [Saenko Introduction page 2]: "In this paper, we introduce a novel domain adaptation technique based on crossdomain transformations. The key idea, illustrated in Figure 2, is to learn a regularized non-linear transformation that maps points in the source domain (green) closer to those in the target domain (blue), using supervised data from both domains. The input consists of labeled pairs of inter-domain examples that are known to be either similar (black lines) or dissimilar (red lines).
The output is the learned transformation, which can be applied to previously unseen test data points [wherein the inference unit adds the trained meta input to the testing data in the user environment]. One of the key advantages of our transform-based approach is that it can be applied over novel test samples [and inputs the input, in which the testing data in the user environment and the trained meta input are combined with each other, to the pretrained deep learning model] from categories seen at training time, and can also generalize to new categories which were not present at training time."

Regarding Claim 18: The system of claim 17 is taught by Saenko, Krishnan, and Clinchant. Saenko teaches: wherein the inference unit maintains performance in the distribution of the testing data in the user environment while using parameters of the pretrained deep learning model as they are [Saenko Introduction page 1]: "Supervised classification methods, such as kernel-based and nearest-neighbor classifiers, have been shown to perform very well on standard object recognition tasks (e.g. [4], [17], [3]). However, many such methods expect the test images to come from the same distribution as the training images, and often fail when presented with a novel visual domain. While the problem of domain adaptation has received significant recent attention in the natural language processing community, it has been largely overlooked in the object recognition field. In this paper, we explore the issue of domain shift in the context of object recognition, and present a novel method that adapts existing classifiers [wherein the inference unit maintains performance in the distribution of the testing data in the user environment while using parameters of the pretrained deep learning model as they are] to new domains where labeled data is scarce."

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. US 20200050825 A1 by Olga Mendoza-Schrock is relevant art, as the reference teaches image processing to aid in recognition involving transforming image data. Zhu et al. ("Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks") is relevant art, as the reference teaches the altering of images using machine learning to change the distribution of an image. Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHRISTOPHER D DEVORE whose telephone number is (703) 756-1234. The examiner can normally be reached Monday-Friday 7:30 am - 5 pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Michael J Huntley, can be reached at (303) 297-4307. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/C.D.D./
Examiner, Art Unit 2129

/MICHAEL J HUNTLEY/
Supervisory Patent Examiner, Art Unit 2129
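The black-box reuse pattern that the rejection draws from Clinchant also admits a compact sketch: only the source classifier's predictions are consumed, and its parameters and training data are never accessed. The following toy stand-in is ours, under stated assumptions (the `query_source_classifier` callable, the noise level, the ridge term, and the iteration count are all hypothetical), and it simplifies the iterative corrupted-sample scheme of Clinchant [0013] to a ridge-regularized denoising map.

```python
import numpy as np

def adapt_black_box(target_X, query_source_classifier, n_iters=3, noise=0.1, seed=0):
    """Adapt target-domain samples using only black-box predictions.

    target_X: (n_samples, n_features) array of target-domain data.
    query_source_classifier: callable returning predictions for an array;
    it stands in for a remote classification service (a black box).
    """
    rng = np.random.default_rng(seed)
    X = target_X.astype(float)
    preds = query_source_classifier(X)          # black-box query only
    for _ in range(n_iters):
        # Generate a set of corrupted samples from the current representation.
        corrupted = X + noise * rng.standard_normal(X.shape)
        # Learn a linear transformation that denoises corrupted -> clean
        # (ridge-regularized least squares).
        A = corrupted.T @ corrupted + 1e-3 * np.eye(X.shape[1])
        W = np.linalg.solve(A, corrupted.T @ X)
        X = corrupted @ W                        # adapted representation
        preds = query_source_classifier(X)       # refreshed class label predictions
    return X, preds
```

Clinchant's actual method additionally augments the representations with the predicted labels before denoising; the sketch keeps only the query, corrupt, denoise, and re-query loop that the examiner's mapping relies on.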

Prosecution Timeline

Dec 15, 2022
Application Filed
Oct 02, 2025
Non-Final Rejection — §101, §103
Nov 06, 2025
Examiner Interview Summary
Nov 06, 2025
Applicant Interview (Telephonic)
Jan 07, 2026
Response Filed
Feb 11, 2026
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12530603
OBTAINING AND UTILIZING FEEDBACK FOR AGENT-ASSIST SYSTEMS
2y 5m to grant; granted Jan 20, 2026
Patent 12505355
GENERAL FORM OF THE TREE ALTERNATING OPTIMIZATION (TAO) FOR LEARNING DECISION TREES
2y 5m to grant; granted Dec 23, 2025
Patent 12468978
Reinforcement Learning In A Processing Element Method And System Thereof
2y 5m to grant; granted Nov 11, 2025
Patent 12412069
COOKIE SPACE DOMAIN ADAPTATION FOR DEVICE ATTRIBUTE PREDICTION
2y 5m to grant; granted Sep 09, 2025
Based on this examiner's 4 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 50%
With Interview: 92% (+41.7%)
Median Time to Grant: 4y 1m
PTA Risk: Moderate
Based on 10 resolved cases by this examiner. Grant probability derived from career allow rate.
