Prosecution Insights
Last updated: April 19, 2026
Application No. 18/262,717

OUT-OF-DISTRIBUTION DETECTION FOR PERSONALIZING NEURAL NETWORK MODELS

Non-Final OA: §101, §102, §103

Filed: Jul 24, 2023
Examiner: MCINTOSH, ANDREW T
Art Unit: 2144
Tech Center: 2100 — Computer Architecture & Software
Assignee: Qualcomm Incorporated
OA Round: 1 (Non-Final)

Grant Probability: 77% (Favorable)
Expected OA Rounds: 1-2
Expected Time to Grant: 3y 0m
Grant Probability With Interview: 95%

Examiner Intelligence

Career Allow Rate: 77% (above average); 393 granted / 511 resolved; +21.9% vs Tech Center average
Interview Lift: +18.0% higher allow rate for resolved cases with interview
Typical Timeline: 3y 0m average prosecution; 27 applications currently pending
Career History: 538 total applications across all art units

Statute-Specific Performance

§101: 14.1% (-25.9% vs TC avg)
§102: 13.5% (-26.5% vs TC avg)
§103: 56.7% (+16.7% vs TC avg)
§112: 7.5% (-32.5% vs TC avg)

Based on career data from 511 resolved cases; the Tech Center average is an estimate.
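The headline figures above follow directly from the raw counts. A minimal sketch of the arithmetic (the Tech Center baseline here is back-calculated from the stated +21.9% delta, an assumption for illustration rather than sourced data):

```python
# Reproduce the examiner's headline statistics from the raw counts shown above.
granted = 393
resolved = 511

career_allow_rate = granted / resolved             # fraction of resolved cases allowed
print(f"Career allow rate: {career_allow_rate:.1%}")    # ~76.9%, displayed as 77%

# Implied Tech Center baseline, inferred from "+21.9% vs TC avg" (hypothetical).
tc_avg_allow_rate = career_allow_rate - 0.219
print(f"Implied TC average: {tc_avg_allow_rate:.1%}")   # ~55.0%
```

The statute-specific percentages and their deltas against the TC average would be computed the same way from per-statute counts, which are not broken out on this page.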

Office Action

§101 §102 §103
DETAILED ACTION

This action is responsive to communications filed on July 24, 2023. This action is made Non-Final. Claims 1-26 are pending in the case. Claims 1, 9, 15, and 21 are independent claims. Claims 1-26 are rejected.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statements (IDSs) submitted on 07/24/2023 and 12/30/2025 are in compliance with the provisions of 37 C.F.R. 1.97. Accordingly, the IDSs are being considered by the examiner.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. That presumption is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word “means” (or “step”) in a claim creates the opposite rebuttable presumption, which is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations that do not use the word “means” (or “step”) are not being so interpreted, except as otherwise indicated in an Office action. If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitations to avoid such interpretation (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-26 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.

Step 1: Independent claims 1, 9, 15, and 21 are directed towards a method, apparatus, means, and non-transitory medium, respectively. Therefore, these claims, as well as their dependent claims, are directed towards one of the four statutory categories (process, machine (i.e., apparatus), manufacture, or composition of matter).

With respect to claim 1:

2A Prong 1: Claim 1 recites the following judicial exceptions:

- processing the input to extract a set of intermediate features (mental process: can be performed in the human mind, or by a human using pen and paper (e.g.
a person may analyze and determine particular features or characteristics);
- determining if the input is out-of-distribution relative to a dataset for training the first artificial neural network (mental process: e.g., a person may decide whether a particular input is out-of-distribution compared to a dataset);
- based at least in part on the out-of-distribution determination (mental process: e.g., a person may decide whether a particular input is out-of-distribution compared to a dataset and perform further actions based on the decision).

2A Prong 2: The additional elements recited in the claim do not integrate the judicial exception into a practical application:

- receiving an input: mere instructions to apply the exception or implement it on a computer (e.g., using a computer and sending and receiving data; see MPEP § 2106.05(f));
- at a first artificial intelligence neural network: generally linking the use of a judicial exception to a particular technological environment or field of use (e.g., using neural networks as part of a machine learning framework; see MPEP § 2106.05(h));
- providing the intermediate features corresponding to the input: mere instructions to apply the exception or implement it on a computer (see MPEP § 2106.05(f));
- to a second artificial neural network: generally linking the exception to a particular technological environment or field of use (see MPEP § 2106.05(h)).

2B: Revisiting the additional elements, they do not amount to significantly more than the judicial exception; they are recited at a high level of generality and correspond to storing and retrieving information in memory and performing calculations.

With respect to claim 2: 2A Prong 2: The additional element in which the second artificial neural network is trained on a mobile device based at least in part on the intermediate features generally links the exception to a particular technological environment or field of use (see MPEP § 2106.05(h)) and does not integrate the abstract idea into a practical application. 2B: The additional elements do not amount to significantly more than the judicial exception.

With respect to claim 3: 2A Prong 1: Claim 3 recites the judicial exception determines a classification based on the intermediate features (mental process: a person may determine a classification based on observed features). 2A Prong 2: The additional element in which the second artificial neural network generally links the exception to a particular technological environment (see MPEP § 2106.05(h)) and does not integrate it into a practical application. 2B: The additional elements do not amount to significantly more.

With respect to claim 4: 2A Prong 2: The additional element in which the intermediate features are supplied to a server based at least in part on the out-of-distribution determination amounts to mere instructions to apply the exception on a computer (see MPEP § 2106.05(f)) and does not integrate it into a practical application. 2B: The additional elements do not amount to significantly more.

With respect to claim 5: 2A Prong 1: Claim 5 recites the judicial exception in which resources for performing the training and inference tasks of the first artificial neural network and the second artificial neural network are allocated according to a computational complexity of the training and inference tasks and a power consumption of the resources (mental process: a person may assign training and inference tasks to available resources based on complexity and availability).

With respect to claim 6: 2A Prong 2: The additional element in which the first artificial neural network is a user-independent classifier and the second artificial neural network is a user-dependent classifier generally links the exception to a particular technological environment (see MPEP § 2106.05(h)) and does not integrate it into a practical application. 2B: The additional elements do not amount to significantly more.

With respect to claim 7: 2A Prong 1: Claim 7 recites the judicial exceptions determining if the second artificial neural network has been trained based on the out-of-distribution input; receiving a label for the out-of-distribution input if the second artificial network has not been trained based on the out-of-distribution input ... if the second artificial neural network has been trained based on the out-of-distribution input (mental process: a person may decide whether particular input is out-of-distribution and perform further actions based on the decision). 2A Prong 2: The additional element operating the second artificial neural network to generate an inference generally links the exception to a particular technological environment (see MPEP § 2106.05(h)). 2B: The additional elements do not amount to significantly more.

With respect to claim 8: 2A Prong 1: Claim 8 recites the judicial exceptions comparing an extreme-value signature of the input to a class prototype; and detecting that the input is out-of-distribution if the extreme-value signature has greater activations in a different set of dimensions than the class prototype (mental process: a person may compare characteristics of the input to class characteristics and make determinations based thereon).

Claims 9-26: Claims 9, 15, and 21 correspond to claim 1; claims 10, 16, and 22 correspond to claim 2; claims 11, 17, and 23 correspond to claim 5; claims 12, 18, and 24 correspond to claim 6; claims 13, 19, and 25 correspond to claim 7; and claims 14, 20, and 26 correspond to claim 8. Each is rejected using the same rationale as its corresponding claim.
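Claim 8's extreme-value-signature comparison can be pictured concretely. The sketch below is only an illustration of that claim language, not the application's disclosed method: the signature construction (top-k activation indices) and the dimension count are assumptions.

```python
def top_k_dims(activations, k=3):
    """Indices of the k largest activations: a simple 'extreme-value signature'
    (hypothetical construction for illustration)."""
    return set(sorted(range(len(activations)), key=activations.__getitem__)[-k:])

def is_out_of_distribution(features, prototype, k=3):
    """Per the claim language: flag OOD when the input's greatest activations
    fall in a different set of dimensions than the class prototype's."""
    return top_k_dims(features, k) != top_k_dims(prototype, k)

prototype = [0.9, 0.8, 0.7, 0.1, 0.1, 0.1]   # strongest in dims {0, 1, 2}
in_dist   = [0.8, 0.9, 0.6, 0.2, 0.1, 0.1]   # strongest in dims {0, 1, 2}
ood       = [0.1, 0.2, 0.1, 0.9, 0.8, 0.7]   # strongest in dims {3, 4, 5}

print(is_out_of_distribution(in_dist, prototype))  # False
print(is_out_of_distribution(ood, prototype))      # True
```

Framing the limitation this way also makes the examiner's mental-process characterization easy to test in argument: the comparison is simple enough to do by hand, but the signature itself comes from network activations.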
2B continued: After considering all claim elements individually and as an ordered combination, it is determined that the claims do not include any additional elements that are sufficient to amount to significantly more than the judicial exception.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1, 3, 7, 9, 12, 13, 15, 19, 21, and 25 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Stimpson et al., US Publication 2022/0111874 (“Stimpson”).

Claim 1: Stimpson discloses a method for generating a personalized artificial neural network (ANN) model, comprising: receiving an input at a first artificial intelligence neural network (see Fig. 7, 8; para. 0003 - processing the plurality of images using a neural network to obtain a plurality of feature vectors; para. 0081 - input from the set of feature vectors resulting from a neural network model, such as a convolutional neural network (CNN) model, and creates K clusters; para. 0090 - a first filter 835 configured to obtain the feature vectors 830 from the CNN; para. 0091 - k-means clustering algorithm may use input from the set of feature vectors 830 resulting from a neural network model, such as CNN 825); processing the input to extract a set of intermediate features (see Fig. 7, 8; para. 0003 - processing the plurality of images using a neural network to obtain a plurality of feature vectors; para.
0081 - input from the set of feature vectors resulting from a neural network model, such as a convolutional neural network (CNN) model, and creates K clusters; para. 0090 - a first filter 835 configured to obtain the feature vectors 830 from the CNN; para. 0091 - k-means clustering algorithm may use input from the set of feature vectors 830 resulting from a neural network model, such as CNN 825); determining if the input is out-of-distribution relative to a dataset for training the first artificial neural network (see Fig. 7, 8; para. 0079 - method 700 for out-of-distribution detection; para. 0087 - OOD data includes data that was not included in the training distribution. In other words, the OOD data may include data from a different distribution than the training distribution; para. 0090 - a first filter 835 configured to obtain the feature vectors 830 from the CNN; para. 0091 - k-means clustering algorithm may use input from the set of feature vectors 830 resulting from a neural network model, such as CNN 825; para. 0093 - Using a suitable Euclidian distance as a threshold may reduce the acceptance of false positive results. determination 840 is made as to whether the feature vector of the image has a Euclidian distance value that is above a threshold. The threshold may be referred to as an OOD threshold); providing the intermediate features corresponding to the input to a second artificial neural network based at least in part on the out-of-distribution determination (see Fig. 7, 8; para. 0091 - k-means clustering algorithm may use input from the set of feature vectors 830 resulting from a neural network model, such as CNN 825; para. 0093 - Using a suitable Euclidian distance as a threshold may reduce the acceptance of false positive results. determination 840 is made as to whether the feature vector of the image has a Euclidian distance value that is above a threshold. The threshold may be referred to as an OOD threshold; para.
0094 - determination that the Euclidian distance value is below a threshold indicates that further analysis is required. the subset of images that are determined to have a Euclidian distance below the threshold are filtered using a second filter. classification model may be based on the OOD data and the non-OOD data. The second filter 850 may be a supervised learning algorithm. the supervised learning algorithm may be a multi-layer perceptron that is used to train a model to generate a prediction. In an example, the supervised learning algorithm may be a backpropagation (BP) algorithm.).

Claims 9, 15, and 21: These claims correspond to claim 1, and thus Stimpson discloses the limitations of claims 9, 15, and 21 as well.

Claim 3: Stimpson further teaches or suggests in which the second artificial neural network determines classification based on the intermediate features (see Fig. 7, 8; para. 0091 - k-means clustering algorithm may use input from the set of feature vectors 830 resulting from a neural network model, such as CNN 825; para. 0093 - Using a suitable Euclidian distance as a threshold may reduce the acceptance of false positive results. determination 840 is made as to whether the feature vector of the image has a Euclidian distance value that is above a threshold. The threshold may be referred to as an OOD threshold; para. 0094 - determination that the Euclidian distance value is below a threshold indicates that further analysis is required. the subset of images that are determined to have a Euclidian distance below the threshold are filtered using a second filter. subset of the images may be determined based on a Euclidian distance value of the feature vector of the image. The second filter 850 may determine 855 a classification model. classification model may be based on the OOD data and the non-OOD data. The second filter 850 may be a supervised learning algorithm.
the supervised learning algorithm may be a multi-layer perceptron that is used to train a model to generate a prediction. In an example, the supervised learning algorithm may be a backpropagation (BP) algorithm.).

Claim 7: Stimpson further discloses determining if the second artificial neural network has been trained based on the out-of-distribution input; receiving a label for the out-of-distribution input if the second artificial network has not been trained based on the out-of-distribution input; and operating the second artificial neural network to generate an inference, if the second artificial neural network has been trained based on the out-of-distribution input (see Fig. 7, 8; para. 0079 - method 700 may be implemented using a hybrid architecture that includes both unsupervised and supervised portions to manage distributional uncertainty; para. 0081 - first filter may be an unsupervised learning algorithm, and may filter any number of feature vectors to obtain any number of clusters; para. 0084 - second filter may be a supervised learning algorithm. In an example, the supervised learning algorithm may be a multi-layer perceptron that is used to train a model to generate a prediction; para. 0087 - architecture 800 may be a hybrid architecture that includes both unsupervised and supervised portions to manage distributional uncertainty. OOD data includes data that was not included in the training distribution. In other words, the OOD data may include data from a different distribution than the training distribution; para. 0091 - first filter 835 may be an unsupervised learning algorithm, and may filter any number of feature vectors to obtain any number of clusters. k-means clustering algorithm may use input from the set of feature vectors 830 resulting from a neural network model, such as CNN 825; para. 0093 - Using a suitable Euclidian distance as a threshold may reduce the acceptance of false positive results.
determination 840 is made as to whether the feature vector of the image has a Euclidian distance value that is above a threshold. The threshold may be referred to as an OOD threshold; para. 0094 - determination that the Euclidian distance value is below a threshold indicates that further analysis is required. the subset of images that are determined to have a Euclidian distance below the threshold are filtered using a second filter. subset of the images may be determined based on a Euclidian distance value of the feature vector of the image. The second filter 850 may determine 855 a classification model. classification model may be based on the OOD data and the non-OOD data. The second filter 850 may be a supervised learning algorithm. the supervised learning algorithm may be a multi-layer perceptron that is used to train a model to generate a prediction. In an example, the supervised learning algorithm may be a backpropagation (BP) algorithm.).

Claims 13, 19, and 25: These claims correspond to claim 7, and thus Stimpson discloses the limitations of claims 13, 19, and 25 as well.

Claim 12: Stimpson further teaches or suggests in which the first artificial neural network is a user-independent classifier and the second artificial neural network is a user-dependent classifier (see para. 0079 - method 700 may be implemented using a hybrid architecture that includes both unsupervised and supervised portions to manage distributional uncertainty; para. 0081 - first filter may be an unsupervised learning algorithm, and may filter any number of feature vectors to obtain any number of clusters; para. 0084 - second filter may be a supervised learning algorithm. In an example, the supervised learning algorithm may be a multi-layer perceptron that is used to train a model to generate a prediction; para.
0087 - architecture 800 may be a hybrid architecture that includes both unsupervised and supervised portions to manage distributional uncertainty; para. 0091 - first filter 835 may be an unsupervised learning algorithm, and may filter any number of feature vectors to obtain any number of clusters; para. 0094 - the supervised learning algorithm may be a multi-layer perceptron that is used to train a model to generate a prediction. In an example, the supervised learning algorithm may be a backpropagation (BP) algorithm.).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 2, 4, 10, 16, and 22 are rejected under 35 U.S.C. 103 as being unpatentable over Stimpson, and further in view of Gu et al., US Publication 2020/0082272 (“Gu”).

Claim 2: Stimpson further teaches or suggests the second artificial neural network is trained ... based at least in part on the intermediate features (see Fig. 7, 8; para. 0091 - k-means clustering algorithm may use input from the set of feature vectors 830 resulting from a neural network model, such as CNN 825; para. 0093 - Using a suitable Euclidian distance as a threshold may reduce the acceptance of false positive results. determination 840 is made as to whether the feature vector of the image has a Euclidian distance value that is above a threshold.
The threshold may be referred to as an OOD threshold; para. 0094 - determination that the Euclidian distance value is below a threshold indicates that further analysis is required. the subset of images that are determined to have a Euclidian distance below the threshold are filtered using a second filter. data resulting from an analysis of images using the determined classification model may be used to refine the second filter. classification model may be based on the OOD data and the non-OOD data. The second filter 850 may be a supervised learning algorithm. the supervised learning algorithm may be a multi-layer perceptron that is used to train a model to generate a prediction. In an example, the supervised learning algorithm may be a backpropagation (BP) algorithm.).

Stimpson does not explicitly disclose on a mobile device. Gu teaches or suggests on a mobile device (see Fig. 1; para. 0035 - transmission of data between the client side 110 and the DL service side 150 is facilitated by one or more data communication networks (not shown) which may be wired, wireless, or any combination of wired and wireless data communication networks; para. 0037 - the client side autoencoder 120 is a neural network, as shown, executing on one or more client-side computing devices (not shown). The autoencoder 120 is trained using raw input data 105 at the client-side computing devices; para. 0038 - raw input data X 105, e.g., raw training data, is input to the autoencoder 120 whose characteristics are such that the autoencoder 120 neural network will minimize the difference between the input X 105 and output X' 132 through a forward propagation, backpropagation and weight updating based training process; para. 0041 - generated by the client side trained autoencoder 120 based on the input data 105 input to the trained autoencoder 120, and a ground truth or correct label (class or category) for the corresponding IRs; para.
0044 - is maintained secure according to their own client side security mechanisms. the raw input data is not exposed outside the client side 110, and the training of the autoencoder 120 is not exposed outside the client side 110, the ability of an attacker to deduce the raw input data is greatly diminished; para. 0046 - achieve these benefits by utilizing a client side autoencoder 120 whose training is maintained at the client side and is not exposed outside the client side computing device(s).).

Accordingly, it would have been obvious to one having ordinary skill before the effective filing date of the claimed invention to modify the system and method taught in Stimpson to include on a mobile device for the purpose of efficiently implementing neural network training at the client side and keeping the client-side data private, improving neural network training and security, as taught by Gu (paras. 0035, 0041, and 0046).

Claims 10, 16, and 22: These claims correspond to claim 2, and thus Stimpson and Gu teach or suggest the limitations of claims 10, 16, and 22 as well.

Claim 4: Stimpson further teaches or suggests in which the intermediate features are supplied ... based at least in part on the out-of-distribution determination (see Fig. 7, 8; para. 0091 - k-means clustering algorithm may use input from the set of feature vectors 830 resulting from a neural network model, such as CNN 825; para. 0093 - Using a suitable Euclidian distance as a threshold may reduce the acceptance of false positive results. determination 840 is made as to whether the feature vector of the image has a Euclidian distance value that is above a threshold. The threshold may be referred to as an OOD threshold; para. 0094 - determination that the Euclidian distance value is below a threshold indicates that further analysis is required. the subset of images that are determined to have a Euclidian distance below the threshold are filtered using a second filter.
The subset of the images may be determined based on a Euclidian distance value of the feature vector of the image. The second filter 850 may determine 855 a classification model. The classification model may be based on the OOD data and the non-OOD data. The second filter 850 may be a supervised learning algorithm. The supervised learning algorithm may be a multi-layer perceptron that is used to train a model to generate a prediction. In an example, the supervised learning algorithm may be a backpropagation (BP) algorithm.).

Gu further teaches or suggests to a server (see Fig. 1; para. 0035 - transmission of data between the client side 110 and the DL service side 150 is facilitated by one or more data communication networks (not shown), which may be wired, wireless, or any combination of wired and wireless data communication networks; para. 0039 - trained autoencoder 120 is then used to generate IRs 142 for input data 105 (input training data during the training phase), which are used to train the remotely located DL model 160 for the user. A selected IR, such as the IR generated by the intermediate layer 126 just prior to the decoding intermediate layer 128, i.e., the last encoding intermediate layer, is transmitted as training data input to the remotely located DL; para. 0040 - DL cloud service provider computing system, which again may comprise one or more server computing devices facilitating the training and inference operation of the DL model 160, is trained using the selected IR as the input training data for the DL model 160.).
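Gu's client-server split can be sketched in a few lines: an autoencoder runs on the device, and only the activation of the last encoding layer (the intermediate representation, or IR) would be shipped to the server; the raw input never leaves the client. This is a minimal illustration only; the layer sizes, weights, and function names below are invented for the sketch, not taken from Gu.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Hypothetical client-side autoencoder: 32 -> 8 (last encoding layer) -> 32.
W_enc = rng.normal(scale=0.1, size=(32, 8))
W_dec = rng.normal(scale=0.1, size=(8, 32))

def encode(x):
    """Return the intermediate representation (IR) from the last encoding layer."""
    return relu(x @ W_enc)

def reconstruct(ir):
    """Decoder half; used only on the client during autoencoder training."""
    return ir @ W_dec

raw_input = rng.normal(size=(4, 32))  # raw data stays on the client
ir = encode(raw_input)                # only this IR would be sent to the server

print(ir.shape)  # (4, 8): compressed representation, not the raw 32-dim input
```

In Gu's arrangement the server-side DL model is then trained on such IRs, which is why the reference frames this as both a bandwidth and a privacy benefit.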
Accordingly, it would have been obvious to one having ordinary skill before the effective filing date of the claimed invention to modify the system and method taught in Stimpson to include to a server, for the purpose of efficiently training a server model and keeping the client-side data private by using intermediate representations instead of raw client data, improving server model training and data security, as taught by Gu (paras. 0040 and 0041).

Claims 5, 6, 11, 17, 18, 23, and 24 are rejected under 35 U.S.C. 103 as being unpatentable over Stimpson, and further in view of Srinivasan et al., US Publication 2018/0300653 ("Srinivasan").

Claim 5: Stimpson teaches or suggests performing the training and inference tasks of the first artificial neural network and the second artificial neural network (see para. 0081 - first filter may be an unsupervised learning algorithm, and may filter any number of feature vectors to obtain any number of clusters. Input from the set of feature vectors resulting from a neural network model, such as a convolutional neural network (CNN) model, creates K clusters; para. 0087 - OOD data includes data that was not included in the training distribution. In other words, the OOD data may include data from a different distribution than the training distribution; para. 0089 - CNN 825 is configured to map raw images to a feature vector space of fixed dimensionality to obtain feature vectors 830. The best performance may be achieved on determining a classification model when a large amount of data is used for training; para. 0094 - classification model may be based on the OOD data and the non-OOD data. The second filter 850 may be a supervised learning algorithm. The supervised learning algorithm may be a multi-layer perceptron that is used to train a model to generate a prediction. In an example, the supervised learning algorithm may be a backpropagation (BP) algorithm.).
Srinivasan further teaches or suggests in which resources for performing training and inference tasks ... are allocated according to a computational complexity of the training and inference tasks and power consumption of the resources (see para. 0026 - assigns training and/or prediction tasks to various resources of the DML system 200. For instance, some training tasks may require a graphical processing unit (GPU), while other training tasks may have higher priority than other tasks. The scheduler maintains a queue of tasks and also monitors the availability of the computing resources of the DML system 200. The scheduler assigns tasks to available resources based on the priority of a respective task, the minimum requirements of the respective task, and the available resources at the time; para. 0031 - scheduler of the DML system 200 may assign the serving container to one or more computing resources based on the priority of the task, the available resources, and other suitable factors; para. 0053 - priority queue dictates the order by which a container is executed relative to other containers. Furthermore, each task may indicate the minimum required resources for the task. For instance, a training task to train an image classifier may require one or more GPUs 214, while a prediction task relating to a document classifier may only require a single CPU 212. The scheduler orders the priority queue based on a number of factors, including the order in which a task is received, the resources required by the task, the urgency of a task, or other considerations; para. 0055 - a training job that operates on images may require at least one GPU 214. Furthermore, if the number of convolution layers in the model, as defined by the hyperparameters in this example, is relatively high (e.g., greater than 20), the scheduler 216 may determine that the minimum number of GPUs 214 is two.).
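In the spirit of Srinivasan's scheduler description (paras. 0026, 0053, 0055), a priority queue can order tasks while minimum resource requirements gate what actually runs. The task names, priorities, and GPU counts below are hypothetical examples mirroring the reference's image-classifier vs. document-classifier illustration.

```python
import heapq

# Hypothetical task queue: (priority, name, min_gpus); lower number = more urgent.
tasks = [
    (1, "train_image_classifier", 2),  # deep CNN: needs at least two GPUs
    (2, "train_doc_classifier", 0),    # CPU-only training job
    (3, "predict_doc_class", 0),       # inference; a single CPU is enough
]
heapq.heapify(tasks)

available_gpus = 1  # fewer GPUs than the top task requires
schedule, deferred = [], []

while tasks:
    priority, name, min_gpus = heapq.heappop(tasks)
    if min_gpus <= available_gpus:
        available_gpus -= min_gpus
        schedule.append(name)
    else:
        deferred.append(name)  # wait until enough GPUs free up

# The 2-GPU training job is deferred; both CPU-only jobs are scheduled.
print(schedule, deferred)
```

This matches the cited behavior in outline only: the highest-priority task is considered first, but a task whose minimum requirements cannot be met is held back rather than blocking the queue.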
Accordingly, it would have been obvious to one having ordinary skill before the effective filing date of the claimed invention to modify the system and method taught in Stimpson to include in which resources for performing training and inference tasks ... are allocated according to a computational complexity of the training and inference tasks and power consumption of the resources, for the purpose of efficiently assigning training and prediction tasks based on task characteristics and available resources, improving model performance, as taught by Srinivasan (paras. 0026, 0053, and 0055).

Claims 11, 17, and 23: Claims 11, 17, and 23 correspond to Claim 5, and thus Stimpson and Srinivasan teach or suggest the limitations of claims 11, 17, and 23 as well.

Claim 6: Stimpson further teaches or suggests in which the first artificial neural network is a user-independent classifier and the second artificial neural network is a user-dependent classifier (see para. 0079 - method 700 may be implemented using a hybrid architecture that includes both unsupervised and supervised portions to manage distributional uncertainty; para. 0081 - first filter may be an unsupervised learning algorithm, and may filter any number of feature vectors to obtain any number of clusters; para. 0084 - second filter may be a supervised learning algorithm. In an example, the supervised learning algorithm may be a multi-layer perceptron that is used to train a model to generate a prediction; para. 0087 - architecture 800 may be a hybrid architecture that includes both unsupervised and supervised portions to manage distributional uncertainty; para. 0091 - first filter 835 may be an unsupervised learning algorithm, and may filter any number of feature vectors to obtain any number of clusters; para. 0094 - the supervised learning algorithm may be a multi-layer perceptron that is used to train a model to generate a prediction.
In an example, the supervised learning algorithm may be a backpropagation (BP) algorithm.).

Claims 18 and 24: Claims 18 and 24 correspond to Claim 6, and thus Stimpson and Srinivasan teach or suggest the limitations of claims 18 and 24 as well.

Allowable Subject Matter

Claims 8, 14, 20, and 26 are objected to as being dependent upon a rejected base claim, but would be allowable in view of the prior art if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Andrew T McIntosh, whose telephone number is (571) 270-7790. The examiner can normally be reached M-Th 8:00am-5:30pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Tamara Kyle, can be reached at 571-272-4241. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ANDREW T MCINTOSH/
Primary Examiner, Art Unit 2144

Prosecution Timeline

Jul 24, 2023
Application Filed
Mar 05, 2026
Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602534
Method and System to Display Content from a PDF Document on a Small Screen
2y 5m to grant; granted Apr 14, 2026
Patent 12596757
NATIVE INTEGRATION OF ARBITRARY DATA SOURCES
2y 5m to grant; granted Apr 07, 2026
Patent 12572617
SYSTEM AND METHOD FOR THE GENERATION AND EDITING OF TEXT CONTENT IN WEBSITE BUILDING SYSTEMS
2y 5m to grant; granted Mar 10, 2026
Patent 12561191
TRAINING METHOD AND APPARATUS FOR FAULT RECOGNITION MODEL, FAULT RECOGNITION METHOD AND APPARATUS, AND ELECTRONIC DEVICE
2y 5m to grant; granted Feb 24, 2026
Patent 12547874
DEPLOYING PARALLELIZABLE DEEP LEARNING MODELS BY ADAPTING TO THE COMPUTING DEVICES
2y 5m to grant; granted Feb 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
77%
Grant Probability
95%
With Interview (+18.0%)
3y 0m
Median Time to Grant
Low
PTA Risk
Based on 511 resolved cases by this examiner. Grant probability derived from career allow rate.
