Prosecution Insights
Last updated: April 19, 2026
Application No. 17/469,590

METHOD AND DEVICE WITH CLASSIFICATION VERIFICATION

Non-Final OA: §103, §Other
Filed: Sep 08, 2021
Examiner: JIANG, HAIMEI
Art Unit: 2142
Tech Center: 2100 — Computer Architecture & Software
Assignee: Samsung Electronics Co., Ltd.
OA Round: 3 (Non-Final)
Grant Probability: 51% (Moderate)
Expected OA Rounds: 3-4
Expected Time to Grant: 4y 3m
Grant Probability With Interview: 82%

Examiner Intelligence

Career Allow Rate: 51% (210 granted / 415 resolved; -4.4% vs TC avg)
Interview Lift: +31.9% (strong; resolved cases with interview vs without)
Avg Prosecution (typical timeline): 4y 3m
Currently Pending: 30
Total Applications: 445 (across all art units)

Statute-Specific Performance

§101: 16.4% (-23.6% vs TC avg)
§103: 57.4% (+17.4% vs TC avg)
§102: 12.9% (-27.1% vs TC avg)
§112: 7.8% (-32.2% vs TC avg)

Black line = Tech Center average estimate • Based on career data from 415 resolved cases
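As a sanity check on how the dashboard figures relate to one another, a short script can reproduce the headline allow rate from the raw counts and derive the implied Tech Center average from the displayed delta. The arithmetic here is an inference from the numbers shown above, not from any documented formula for this dashboard:

```python
# Reproduce the dashboard arithmetic from the raw counts shown above.
granted = 210
resolved = 415

# Career allow rate: granted / resolved, displayed rounded to 51%.
allow_rate = 100 * granted / resolved
print(f"career allow rate: {allow_rate:.1f}%")  # 50.6%, shown as 51%

# The card shows -4.4% vs the TC average, so the implied TC average is:
tc_delta = -4.4
tc_average = round(allow_rate) - tc_delta
print(f"implied TC average: {tc_average:.1f}%")  # 51 - (-4.4) = 55.4%

# Interview lift: with-interview grant probability minus the baseline.
with_interview = 82
baseline = round(allow_rate)
print(f"interview lift: {with_interview - baseline:+d}%")
# +31 from the rounded figures, close to the +31.9% shown
# (the dashboard presumably computes from unrounded inputs).
```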

Office Action

§103 §Other
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

Applicant's arguments filed on 12/29/2025 with respect to the Final rejection have been fully considered and are persuasive. The Final rejection of 10/28/2025 has been withdrawn. This action is responsive to the Amendment filed on 12/29/2025. Claims 1, 3, 23, 27, 29, 33 and 35 have been amended. Claims 4, 30 and 36 have been canceled. Claims 1-3, 5-29, 31-35 and 37-43 are pending in the case. Claims 1, 14, 23, 25, 27 and 33 are independent claims.

Response to Arguments

Applicant's arguments filed 12/29/2025 have been fully considered. The §101 abstract-idea argument is persuasive, and the rejection is withdrawn. In regard to the §103 arguments, Banakar is used to disclose the architecture of the claimed subject matter: as shown in Fig. 4 of Banakar, the training data is input into "train ANN 401"/the intermediate layer, which outputs classification results at step 408 that are then input into the verification NN at 406, which also takes input from the intermediate layer 401; see also [0019] of Banakar: "The ANN training module 201 may build and train an ANN with a training dataset for a target application. The target application may include, but may not be limited to, computer vision, image recognition, natural language processing, speech recognition, and decision making. The ANN training module 201 may then generate a weight matrix of all the neural nodes in a given layer, for each layer of the ANN. Thus, once the ANN has been trained for the training dataset, the weight matrix may include values of all the neural nodes in the given layer for each layer of the ANN." Therefore, Banakar in combination with Smith and Souche discloses the claimed subject matter.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-43 are rejected under 35 U.S.C. 103 as being unpatentable over Smith et al. (US 20180322327 A1) in view of Souche et al. (US 20180012110 A1) and further in view of Subhaschandra Banakar et al. (US 20200202197 A1), hereinafter "Banakar".

Referring to claim 1, Smith discloses a processor-implemented method, the method comprising: implementing a classification neural network to generate a classification result of data input to the classification neural network by: generating, with respect to the input data, intermediate hidden values of one or more hidden layers of the classification neural network (as shown in Figs. 7-8 and [0077]-[0079] of Smith, the input data/image is input into multiple layers, such as a convolutional layer, activation layer, and pooling layer, which generate intermediate hidden values); and generating the classification result of the input data based on the generated intermediate hidden values (per the same Figs. 7-8 and [0077]-[0079], the classification result is generated after the intermediate hidden layers; further, per Fig. 9 and [0080]-[0085] of Smith, the final NN classifier is the verification neural network that determines the reliability of the classification output from the initial classification NN).

Smith does not specifically disclose "generating a determination of a reliability of the classification result by implementing a verification neural network, input the intermediate hidden values; and selectively controlling performance of operations of a computing device based on whether the classification result is verified". However, Banakar discloses generating a determination of a reliability of the classification result by implementing a verification neural network, input the intermediate hidden values, to generate the determination of the reliability (as shown in Fig. 4 of Banakar, the training data is input into "train ANN 401"/the intermediate layer, which outputs classification results at step 408 that are then input into the verification NN at 406, which also takes input from the intermediate layer 401; see also [0019] of Banakar, quoted above). Further, Banakar discloses selectively controlling performance of operations of a computing device based on whether the classification result is verified ([0015] of Banakar: "The ANN validation engine may analyze and verify the decisions made by the ANN (i.e., the classifications performed by the ANN) so as to minimize false positives and to improve performance. In particular, the system 100 may include an ANN validation device (for example, server, desktop, laptop, notebook, netbook, tablet, smartphone, mobile phone, or any other computing device) that may implement the ANN validation engine. It should be noted that, in some embodiments, the ANN validation engine may help in understanding the reason for the decisions taken by the ANN and, therefore, improve its performance by reducing the number of false positives in the outcome.").

Smith and Banakar are analogous art because both references concern verification of CNNs. Accordingly, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Smith's verifying of classification NN data with inputting the intermediate layer into a verification NN as taught by Banakar. The motivation for doing so would have been to help accurately classify different types of images for pattern recognition purposes. Although Smith in view of Banakar discloses determining the reliability of the classification, they do not specifically disclose input values "to generate the determination of the reliability". However, Souche discloses inputting images through a CNN builder and then determining whether the values meet a validation threshold ([0037] of Souche).
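The combination described above, a classification network whose intermediate hidden values are also fed to a separate verification network that scores the reliability of the classification result and gates downstream operation, can be sketched as follows. This is a minimal illustrative NumPy model: the layer sizes, random weights, and 0.5 threshold are invented for the sketch and are not taken from Smith, Souche, or Banakar:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Classification network: two hidden layers, softmax output.
# Illustrative shapes: 8-dim input, hidden sizes 16 and 12, 3 classes.
W1, b1 = rng.normal(size=(8, 16)), np.zeros(16)
W2, b2 = rng.normal(size=(16, 12)), np.zeros(12)
W3, b3 = rng.normal(size=(12, 3)), np.zeros(3)

def classify(x):
    """Forward pass that also returns the intermediate hidden values."""
    h1 = relu(x @ W1 + b1)    # hidden layer 1 activations
    h2 = relu(h1 @ W2 + b2)   # hidden layer 2 activations
    logits = h2 @ W3 + b3
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return probs, (h1, h2)

# Verification network: takes the concatenated intermediate hidden
# values and emits a reliability score in (0, 1).
Wv, bv = rng.normal(size=(16 + 12, 1)), np.zeros(1)

def verify(hidden_values):
    z = np.concatenate(hidden_values)
    return (sigmoid(z @ Wv + bv)).item()

x = rng.normal(size=8)            # stand-in input datum
probs, hidden = classify(x)
reliability = verify(hidden)

# Selectively control the downstream operation based on whether the
# classification is verified, using an arbitrary 0.5 threshold.
THRESHOLD = 0.5
if reliability >= THRESHOLD:
    action = f"accept class {int(np.argmax(probs))}"
else:
    action = "reject classification"
print(action, f"(reliability={reliability:.3f})")
```

In a trained system the verification weights would be learned against known-reliable classification results rather than drawn at random; the sketch only shows the data flow the rejection attributes to the combination.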
Smith, Banakar, and Souche are analogous art because all three references concern verification of CNNs. Accordingly, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Smith's verifying of classification NN data with inputting the intermediate layer into a verification NN as taught by Banakar, and with the CNN validator as taught by Souche. The motivation for doing so would have been to help accurately classify different types of images for pattern recognition purposes ([0002] of Souche).

Referring to claim 2, Smith in view of Souche and Banakar discloses the method of claim 1, wherein the intermediate hidden values include hidden values of two or more hidden layers, as respective outputs of the two or more hidden layers, among a plurality of hidden layers of the classification neural network (Figs. 7-8 and [0077]-[0079] of Smith, and Fig. 9 and [0080]-[0085] of Smith, as discussed under claim 1).

Referring to claim 3, Smith in view of Souche and Banakar discloses the method of claim 1, further comprising: performing the verifying of the classification result of the input data when the generated determination of the reliability of the classification result meets a predetermined verification threshold ([0037] of Souche: the CNN validator determines whether to flag the intermediate CNN as meeting a designated validation threshold).

Referring to claim 5, Smith in view of Souche and Banakar discloses the method of claim 1, wherein the verification neural network comprises at least five hidden layers (as shown in Figs. 7-8 and [0077]-[0079] of Smith, the input data/image is input into multiple layers, at least five per Figs. 7-8, such as convolutional, activation, and pooling layers, which generate intermediate hidden values, and the classification result is generated after the intermediate hidden layers; further, per Fig. 9 and [0080]-[0085] of Smith, the final NN classifier is the verification neural network that determines the reliability of the classification output from the initial classification NN).

Referring to claim 6, Smith in view of Souche and Banakar discloses the method of claim 1, wherein the classification neural network comprises at least five hidden layers (Figs. 7-8 and [0077]-[0079] of Smith, and Fig. 9 and [0080]-[0085] of Smith, as discussed under claim 5).

Referring to claim 7, Smith in view of Souche and Banakar discloses the method of claim 1, further comprising training a temporary verification neural network, to become the verification neural network, by inputting to the temporary verification neural network hidden values of the classification neural network corresponding to a training data and a known reliable classification result of the classification neural network corresponding to the training data, and adjusting parameters of the temporary verification neural network toward generating an accurate verification of the known reliable classification result ([0045] of Souche: "At 503, the target image 150 and the meta data is applied to the image extractor CNN 120 to extract an object from the target image 150 if the target image 150 includes an object in a particular class of objects that the image extractor CNN 120 was trained to identify. FIG. 3A-B describe an example of the image extractor CNN 120 identifying an object from the target image 150 and generating the extracted image 151 which includes the object. At 504, the extracted image 151 is applied to the image attribute CNN 121 to determine attributes for the extracted image 151. For example, a binarized vector of image features are determined by the image attribute CNN 121 for the extracted image 151. At 505, the attributes of the extracted image 151 are compared to attributes of other images. For example, the binarized vector of image features for the extracted image 151 is compared to binarized vector of image features for other images to find images that are similar to the extracted image 151. At 506, visually similar images are identified based on the comparisons performed at 505. Visual similarity may be based on similarity of features of images visible to the human eye.").

Referring to claim 8, Smith in view of Souche and Banakar discloses the method of claim 7, further comprising generating sample data from the training data, wherein the temporary verification neural network is trained based on attribute information of the training data and a distance between the training data and the sample data using a reliability model ([0045] of Souche: "Similarity may be determined based on a mathematical comparison of attributes (e.g., visual features), such as based on calculated Hamming distances. For example, 'similar' images may include images that are a closest match when comparing the attributes of the extracted image 151 and the attributes of the stored images. If Hamming distance is used for the comparisons, then images with the smallest Hamming distances may be considered similar images. A Hamming distance threshold may be set such that if an image has a Hamming distance greater than the threshold, then the image is not considered a similar image to the extracted image 151.").

Referring to claim 9, Smith in view of Souche and Banakar discloses the method of claim 8, wherein, in the reliability model, a reliability decreases when a distance between a central point corresponding to the training data and a sample point corresponding to sample data increases ([0045] of Souche, quoted above under claim 8).

Referring to claim 10, Smith in view of Souche and Banakar discloses the method of claim 7, wherein the training of the temporary verification neural network further comprises using either: a first reliability model that determines a reliability of a sample point corresponding to sample data with an attribute similar to the training data based on a distance between the sample point and a first central point corresponding to the training data ([0045] of Souche, quoted above under claim 8); or a second reliability model that determines a reliability of the sample point corresponding to the sample data with another attribute similar to the training data based on a distance between the sample point and a second central point corresponding to the training data and based on a gradient direction of the central point that is attribute information of the training data.

Referring to claim 11, Smith in view of Souche and Banakar discloses the method of claim 7, further comprising obtaining a score of a central point corresponding to the hidden values of the classification neural network corresponding to the training data, determining a score of a sample point corresponding to sample data randomly generated around the central point, and performing the training of the temporary verification neural network based on the score of the central point and the score of the sample point ([0045] of Souche, quoted above under claim 8).

Referring to claim 12, Smith in view of Souche and Banakar discloses the method of claim 1, further comprising training a temporary classification neural network, with respect to a training input, to become the classification neural network, and training a temporary verification neural network, to become the verification neural network, wherein the training of the temporary verification neural network includes inputting intermediate hidden values of the temporary classification neural network, with respect to the training input, to the temporary verification neural network with respect to a training classification result of the temporary classification neural network for the training input (Figs. 7-8 and [0077]-[0079] of Smith, and Fig. 9 and [0080]-[0085] of Smith, as discussed under claim 1).

Referring to claim 13, Smith in view of Souche and Banakar discloses a non-transitory computer readable medium comprising instructions, which when executed by at least one processor, configure the at least one processor to perform the method of claim 1.
([0084] of Smith, processors.)

Referring to claim 14, Smith in view of Souche and Banakar discloses a processor-implemented method, the method comprising: generating, with respect to training data input to a classification neural network, intermediate hidden values of one or more hidden layers of the classification neural network; and training a verification neural network, based on a reliability model and the intermediate hidden values input to the verification neural network, to indicate a reliability of a classification result of the classification neural network (see citations of claim 1).

Referring to claim 15, Smith in view of Souche and Banakar discloses the method of claim 14, further comprising: implementing the trained classification neural network to generate a corresponding classification result of input data through respective processes of the one or more hidden layers of the trained classification neural network; and implementing the trained verification neural network, input corresponding hidden values of the respective processes of the one or more hidden layers, to generate a determination of a reliability of the corresponding classification result (Figs. 7-8 and [0077]-[0079] of Smith, and Fig. 9 and [0080]-[0085] of Smith, as discussed under claim 1).

Referring to claim 16, Smith in view of Souche and Banakar discloses the method of claim 14, wherein the training of the verification neural network is based on attribute information of the training data and a distance between the training data and sample data generated, from the training data, using the reliability model ([0045] of Souche, quoted above under claim 8).

Referring to claim 17, Smith in view of Souche and Banakar discloses the method of claim 14, further comprising obtaining a score of a central point corresponding to the intermediate hidden values, determining a score of a sample point corresponding to sample data randomly generated around the central point, and performing the training of the verification neural network based on the score of the central point and the score of the sample point ([0045] of Souche, quoted above under claim 8).

Referring to claim 18, Smith in view of Souche and Banakar discloses the method of claim 14, wherein, in the reliability model, a reliability decreases when a distance between a central point corresponding to the training data and a sample point corresponding to sample data increases ([0045] of Souche, quoted above under claim 8).

Referring to claim 19, Smith in view of Souche and Banakar discloses the method of claim 14, wherein the training of the verification neural network further comprises using either: a first reliability model that determines a reliability of a sample point corresponding to sample data with an attribute similar to the training data based on a distance between the sample point and a first central point corresponding to the training data ([0045] of Souche, quoted above under claim 8); or a second reliability model that determines a reliability of a sample point corresponding to the sample data with another attribute similar to the training data based on a distance between the sample point and a second central point corresponding to the training data and based on a gradient direction of the central point that is attribute information of the training data.

Referring to claim 20, Smith in view of Souche and Banakar discloses the method of claim 14, wherein the verification neural network comprises at least five hidden layers (Figs. 7-8 and [0077]-[0079] of Smith, and Fig. 9 and [0080]-[0085] of Smith, as discussed under claim 5).

Referring to claim 21, Smith in view of Souche and Banakar discloses the method of claim 14, wherein the classification neural network comprises at least five hidden layers (Figs. 7-8 and [0077]-[0079] of Smith, and Fig. 9 and [0080]-[0085] of Smith, as discussed under claim 5).

Referring to claim 22, Smith in view of Souche and Banakar discloses a non-transitory computer readable medium comprising instructions, which when executed by at least one processor, configure the at least one processor to perform the method of claim 14 ([0084] of Smith, processors).

Referring to claim 23, Smith in view of Souche and Banakar discloses a non-transitory computer readable medium comprising: instructions, which when executed by at least one processor, control the at least one processor to implement a classification neural network and a verification neural network, and selectively control performance of operations of a computing device based on whether a classification result is verified; the classification neural network configured to generate the classification result of data input to the classification neural network; and the verification neural network configured to generate a reliability determination of the classification result based on intermediate hidden values, of one or more hidden layers of the classification neural network, generated within the classification neural network by the generation of the classification result and input to the verification neural network.
(see citations of claim 1) Referring to claim 24, Smith in view of Souche and Banakar disclose the medium of claim 23, further comprising another verification neural network configured to generate another reliability determination of the classification result based on hidden values of at least one hidden layer of the classification neural network generated within the classification neural network by the generation of the classification result, and wherein the instructions further include instructions, which when executed by the at least one processor, control the at least one processor to implement the classification neural network, implement the verification neural network and the other verification neural network to determine respective reliabilities of the classification result, and determine a final reliability of the classification result based on the determined respective reliabilities. ([0045] of Souche, “Similarity may be determined based on a mathematical comparison of attributes (e.g., visual features), such as based on calculated Hamming distances. For example. “similar” images may include images that are a closest match when comparing the attributes of the extracted image 151 and the attributes of the stored images. If Hamming distance is used for the comparisons, then images with the smallest Hamming distances may be considered similar images. 
A Hamming distance threshold may be set such that if an image has a Hamming distance greater than the threshold, then the image is not considered a similar image to the extracted image 151.”) Referring to claim 25, Smith in view of Souche and Banakar disclose a computing device, comprising: at least one processor; and a memory comprising: a classification neural network configured to generate a classification result of data input to the classification neural network; and a verification neural network configured to generate a reliability determination of the classification result based on intermediate hidden values, of one or more hidden layers of the classification neural network, generated within the classification neural network in the generation of the classification result and input to the verification neural network, wherein, through execution of instruction stored in the computing device, the at least one processor is configured to selectively control operations of the computing device based on whether a result, of an implementation of the classification neural network for the input data and implementation of the verification neural network with respect to the classification result, verified the classification result. (see citations of claim 1 and further [0037] of Souche, “The image pre-processor may crop and enhance particular content in the images from the training set to input into intermediate CNN builder 215. The intermediate CNN builder 215 may select various architectures and parameters to train an intermediate CNN 225. The intermediate CNN 225 may be then be evaluated on digital images 232 in a validation set. CNN validator 235 may determine whether to flag the intermediate CNN 225 as meeting a designated validation threshold. If the intermediate CNN 225 does not meet the validation threshold, the intermediate CNN 225 is not flagged and continues to be trained on the digital images 202 from the training set by the intermediate CNN builder 215. 
Shared weights of the CNN may be adjusted in an iterative process until the validation threshold is met. When the intermediate CNN 225 meets the validation threshold, the intermediate CNN 225 may be selected as the classifier 255. The classifier 255 may be used to classify digital images, such as digital image 252, into a class or category at 275. The classification may be a prediction of whether the digital image belongs to the class or category. The prediction may be accompanied by a confidence value that indicates accuracy of the classification.”)

Referring to claim 26, Smith in view of Souche and Banakar disclose the device of claim 25, wherein the memory further comprises another verification neural network configured to generate another reliability determination of the classification result based on hidden values of at least one hidden layer of the classification neural network generated within the classification neural network by the generation of the classification result, and wherein the selective control of the operations of the computing device is based on whether combined results, of respective implementations of the verification neural network and the other verification neural network with respect to the classification result, verified the classification result. ([0045] of Souche, “Similarity may be determined based on a mathematical comparison of attributes (e.g., visual features), such as based on calculated Hamming distances. For example, “similar” images may include images that are a closest match when comparing the attributes of the extracted image 151 and the attributes of the stored images. If Hamming distance is used for the comparisons, then images with the smallest Hamming distances may be considered similar images. A Hamming distance threshold may be set such that if an image has a Hamming distance greater than the threshold, then the image is not considered a similar image to the extracted image 151.”)

Referring to claim 27, Smith in view of Souche and Banakar disclose a computing device, the computing device comprising: at least one processor; and memory comprising one or more non-transitory storage media storing instructions that, when executed by the at least one processor, cause the device to: implement a classification neural network to generate a classification result of data input to the classification neural network by: generation of, with respect to the input data, intermediate hidden values of one or more hidden layers of the classification neural network; and generation of the classification result of the input data based on the generated intermediate hidden values; and generate a determination of a reliability of the classification result by implementing a verification neural network, with the intermediate hidden values as input, to generate the determination of the reliability, wherein, through the execution of the instructions, the at least one processor is further configured to selectively control performance of operations of the device based on whether the classification result is verified. (see citations of claim 1)

Referring to claim 28, Smith in view of Souche and Banakar disclose the device of claim 27, wherein the intermediate hidden values include hidden values of two or more hidden layers, as respective outputs of the two or more hidden layers, among a plurality of hidden layers of the classification neural network. (as shown in Figs. 7-8 and [0077]-[0079] of Smith, the input data/image is inputted into multiple layers (at least 5 layers, as shown in Figs. 7-8), such as a convolutional layer, activation layer, and pooling layer, to generate intermediate hidden values, and the classification result is generated after the intermediate hidden layers; further, as shown in Fig.
9 and [0080]-[0085] of Smith, the final NN classifier is the verification neural network that determines the reliability of the classification outputted from the initial classification NN)

Referring to claim 29, Smith in view of Souche and Banakar disclose the device of claim 27, wherein, through the execution of the instructions, the at least one processor is further configured to perform the verifying of the classification result of the input data when the generated determination of the reliability of the classification result meets a predetermined verification threshold. ([0037] of Souche, the CNN validator determines whether to flag the intermediate CNN as meeting a designated validation threshold)

Referring to claim 31, Smith in view of Souche and Banakar disclose the device of claim 27, wherein the verification neural network comprises at least five hidden layers. (as shown in Figs. 7-8 and [0077]-[0079] of Smith, the input data/image is inputted into multiple layers (at least 5 layers, as shown in Figs. 7-8), such as a convolutional layer, activation layer, and pooling layer, to generate intermediate hidden values, and the classification result is generated after the intermediate hidden layers; further, as shown in Fig. 9 and [0080]-[0085] of Smith, the final NN classifier is the verification neural network that determines the reliability of the classification outputted from the initial classification NN)

Referring to claim 32, Smith in view of Souche and Banakar disclose the device of claim 27, wherein the classification neural network comprises at least five hidden layers. (as shown in Figs. 7-8 and [0077]-[0079] of Smith, the input data/image is inputted into multiple layers (at least 5 layers, as shown in Figs. 7-8), such as a convolutional layer, activation layer, and pooling layer, to generate intermediate hidden values, and the classification result is generated after the intermediate hidden layers; further, as shown in Fig. 9 and [0080]-[0085] of Smith, the final NN classifier is the verification neural network that determines the reliability of the classification outputted from the initial classification NN)

Referring to claim 33, Smith in view of Souche and Banakar disclose a computing device comprising: a classification neural network configured to generate a classification result of data input to the classification neural network; a verification neural network configured to, after the generation of the classification result, generate a reliability determination of the classification result using intermediate hidden values, of one or more hidden layers of the classification neural network, generated within the classification neural network by the generation of the classification result; and at least one processor configured, through execution of instructions stored in the computing device, to implement the classification neural network and the verification neural network, wherein, through the execution of the instructions, the at least one processor is further configured to selectively control performance of operations of the device based on whether the classification result is verified. (see citations of claim 1; here, as shown in Fig.
4 of Banakar, the training data is inputted into “train ANN 401”/intermediate layer, which then outputs classification results at step 408 that are in turn inputted into the verification NN at 406, which also takes input from the intermediate layer 401; hence, verification happens AFTER the classification NN has produced classification results by passing the input through the intermediate layer)

Referring to claim 34, Smith in view of Souche and Banakar disclose the device of claim 33, further comprising another verification neural network configured to generate another reliability determination of the classification result based on hidden values of at least one hidden layer of the classification neural network, generated within the classification neural network by the generation of the classification result, wherein, through the execution of the instructions, the at least one processor is further configured to implement the other verification neural network, and to verify the classification result of the input data based on the reliability determination and the other reliability determination. (as shown in Figs. 7-8 and [0077]-[0079] of Smith, the input data/image is inputted into multiple layers, such as a convolutional layer, activation layer, and pooling layer, to generate intermediate hidden values, and the classification result is generated after the intermediate hidden layers; further, as shown in Fig. 9 and [0080]-[0085] of Smith, the final NN classifier is the verification neural network that determines the reliability of the classification outputted from the initial classification NN; and [0045] of Souche, “Similarity may be determined based on a mathematical comparison of attributes (e.g., visual features), such as based on calculated Hamming distances. For example, “similar” images may include images that are a closest match when comparing the attributes of the extracted image 151 and the attributes of the stored images. If Hamming distance is used for the comparisons, then images with the smallest Hamming distances may be considered similar images. A Hamming distance threshold may be set such that if an image has a Hamming distance greater than the threshold, then the image is not considered a similar image to the extracted image 151.”)

Referring to claim 35, Smith in view of Souche and Banakar disclose the device of claim 33, wherein, through the execution of the instructions, the at least one processor is configured to perform the verifying of the classification result of the input data when the reliability determination meets a predetermined verification threshold. ([0037] of Souche, the CNN validator determines whether to flag the intermediate CNN as meeting a designated validation threshold)

Referring to claim 37, Smith in view of Souche and Banakar disclose the computing device of claim 33, wherein the classification neural network is a convolutional neural network (CNN). (as shown in Figs. 7-8 and [0077]-[0079] of Smith, the input data/image is inputted into multiple layers (at least 5 layers, as shown in Figs. 7-8), such as a convolutional layer, activation layer, and pooling layer, to generate intermediate hidden values, and the classification result is generated after the intermediate hidden layers; further, as shown in Fig. 9 and [0080]-[0085] of Smith, the final NN classifier is the verification neural network that determines the reliability of the classification outputted from the initial classification NN)

Referring to claim 38, Smith in view of Souche and Banakar disclose the computing device of claim 37, wherein the classification neural network comprises at least five hidden layers. (as shown in Figs. 7-8 and [0077]-[0079] of Smith, input data/image is inputted into multiple layers and as shown in Figs.
7-8 of at least 5 layers, such as a convolutional layer, activation layer, and pooling layer, to generate intermediate hidden values, and the classification result is generated after the intermediate hidden layers; further, as shown in Fig. 9 and [0080]-[0085] of Smith, the final NN classifier is the verification neural network that determines the reliability of the classification outputted from the initial classification NN)

Referring to claim 39, Smith in view of Souche and Banakar disclose the computing device of claim 33, wherein the verification neural network is a CNN. (as shown in Figs. 7-8 and [0077]-[0079] of Smith, the input data/image is inputted into multiple layers (at least 5 layers, as shown in Figs. 7-8), such as a convolutional layer, activation layer, and pooling layer, to generate intermediate hidden values, and the classification result is generated after the intermediate hidden layers; further, as shown in Fig. 9 and [0080]-[0085] of Smith, the final NN classifier is the verification neural network that determines the reliability of the classification outputted from the initial classification NN)

Referring to claim 40, Smith in view of Souche and Banakar disclose the computing device of claim 39, wherein the verification neural network comprises at least five hidden layers. (as shown in Figs. 7-8 and [0077]-[0079] of Smith, the input data/image is inputted into multiple layers (at least 5 layers, as shown in Figs. 7-8), such as a convolutional layer, activation layer, and pooling layer, to generate intermediate hidden values, and the classification result is generated after the intermediate hidden layers; further, as shown in Fig. 9 and [0080]-[0085] of Smith, the final NN classifier is the verification neural network that determines the reliability of the classification outputted from the initial classification NN)

Referring to claim 41, Smith in view of Souche and Banakar disclose the computing device of claim 33, wherein the input data is image data. (as shown in Figs. 7-8 of Smith, the input data is an image)

Referring to claim 42, Smith in view of Souche and Banakar disclose the computing device of claim 33, wherein the input data is audio data. ([0025] of Souche, video input, which contains audio)

Referring to claim 43, Smith in view of Souche and Banakar disclose the computing device of claim 33, wherein the classification neural network comprises an input layer, an output layer, and a plurality of hidden layers, and the intermediate hidden values include outputs of a hidden layer closer to the output layer than the input layer. (as shown in Figs. 7-8 and [0077]-[0079] of Smith, the input data/image is inputted into multiple layers (at least 5 layers, as shown in Figs. 7-8), such as a convolutional layer, activation layer, and pooling layer, to generate intermediate hidden values, and the classification result is generated after the intermediate hidden layers; further, as shown in Fig. 9 and [0080]-[0085] of Smith, the final NN classifier is the verification neural network that determines the reliability of the classification outputted from the initial classification NN)

The prior art made of record and not relied upon is considered pertinent to Applicant's disclosure:
KR 2017-0039783 A
“Multi-Head Multi-Layer Attention to Deep Language Representations for Grammatical Error Detection”, Kaneko et al., 4/15/2019
JP 2020-35103 A

Applicant is required under 37 C.F.R. § 1.111(c) to consider these references fully when responding to this action. It is noted that any citation to specific pages, columns, lines, or figures in the prior art references and any interpretation of the references should not be considered to be limiting in any way. A reference is relevant for all it contains and may be relied upon for all that it would have reasonably suggested to one having ordinary skill in the art. In re Heck, 699 F.2d 1331, 1332-33, 216 U.S.P.Q. 1038, 1039 (Fed. Cir. 1983) (quoting In re Lemelson, 397 F.2d 1006, 1009, 158 U.S.P.Q. 275, 277 (C.C.P.A.
1968)).

In the interests of compact prosecution, Applicant is invited to contact the examiner via electronic media pursuant to USPTO policy outlined in MPEP § 502.03. All electronic communication must be authorized in writing. Applicant may wish to file an Internet Communications Authorization Form PTO/SB/439. Applicant may wish to request an interview using the Interview Practice website: http://www.uspto.gov/patent/laws-and-regulations/interview-practice. Applicant is reminded that Internet e-mail may not be used for communication for matters under 35 U.S.C. § 132 or which otherwise require a signature. A reply to an Office action may NOT be communicated by Applicant to the USPTO via Internet e-mail. If such a reply is submitted by Applicant via Internet e-mail, a paper copy will be placed in the appropriate patent application file with an indication that the reply is NOT ENTERED. See MPEP § 502.03(II).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to HAIMEI JIANG, whose telephone number is (571) 270-1590. The examiner can normally be reached M-F, 9 a.m.-5 p.m. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Mariela D Reyes, can be reached at 571-270-1006. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /HAIMEI JIANG/Primary Examiner, Art Unit 2142

Prosecution Timeline

Sep 08, 2021
Application Filed
May 31, 2025
Non-Final Rejection — §103, §Other
Aug 27, 2025
Response Filed
Oct 26, 2025
Final Rejection — §103, §Other
Dec 29, 2025
Response after Non-Final Action
Feb 06, 2026
Non-Final Rejection — §103, §Other (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12587552
TIME SERIES ANOMALY DETECTION METHOD USING GRU-BASED MODEL
2y 5m to grant · Granted Mar 24, 2026
Patent 12579209
Devices, Methods, and Graphical User Interfaces for Interacting with a Web-Browser
2y 5m to grant · Granted Mar 17, 2026
Patent 12541991
AUTOMATICALLY CLASSIFYING HETEROGENOUS DOCUMENTS USING MACHINE LEARNING TECHNIQUES
2y 5m to grant · Granted Feb 03, 2026
Patent 12511563
QUANTUM COMPUTING TASK PROCESSING METHOD AND SYSTEM AND COMPUTER DEVICE
2y 5m to grant · Granted Dec 30, 2025
Patent 12468880
METHODS AND SYSTEMS FOR PRESENTING DROP-DOWN, POP-UP OR OTHER PRESENTATION OF A MULTI-VALUE DATA SET IN A SPREADSHEET CELL
2y 5m to grant · Granted Nov 11, 2025
Based on the examiner's 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
51%
Grant Probability
82%
With Interview (+31.9%)
4y 3m
Median Time to Grant
High
PTA Risk
Based on 415 resolved cases by this examiner. Grant probability derived from career allow rate.
