Prosecution Insights
Last updated: April 19, 2026
Application No. 18/105,729

METHOD FOR CLASSIFIER LEARNING FROM A STREAM OF DATA ON A RESOURCE-CONSTRAINED DEVICE

Non-Final OA (§102, §103)
Filed: Feb 03, 2023
Examiner: KIM, DAVID
Art Unit: 2141
Tech Center: 2100 — Computer Architecture & Software
Assignee: STMicroelectronics
OA Round: 1 (Non-Final)
Grant Probability: Favorable
Expected OA Rounds: 1-2
Time to Grant: 3y 3m

Examiner Intelligence

Career Allow Rate: 0% (0 granted / 0 resolved; -55.0% vs TC avg)
Interview Lift: +0.0% (minimal lift across resolved cases with interview)
Avg Prosecution: 3y 3m (typical timeline)
Career History: 9 total applications across all art units; 9 currently pending

Statute-Specific Performance

§101: 29.2% (-10.8% vs TC avg)
§103: 54.2% (+14.2% vs TC avg)
§102: 12.5% (-27.5% vs TC avg)
§112: 4.2% (-35.8% vs TC avg)
Tech Center averages are estimates • Based on career data from 0 resolved cases

Office Action

§102 §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 2/3/2023 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1, 4, 10, 11, 14, 20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Cho (VLSI Implementation of Restricted Coulomb Energy Neural Network with Improved Learning Scheme).
Regarding claim 1, Cho discloses “sampling a sensor data stream to generate a plurality of sensor data samples;” (See [Section 3.2, paragraph 1]; a sensor data stream is sampled from gas detection sensors and hand motion-capture sensors) “extracting, via a feature extractor, a plurality of extracted features from the sensor data samples;” (See [Section 5, paragraph 2]; a feature extraction unit is used to extract features from sensor data samples) “determining, via a classifier and based [on] the extracted features, a detection of a new feature of the one or more of the extracted features, wherein the classifier comprises at least an input layer of input neurons, a hidden layer of hidden neurons, and an output layer of output neurons, wherein each hidden neuron is associated with one output neuron, wherein each output neuron is associated with one class;” (See [Section 1, paragraph 2], [Section 2, paragraph 1]; RCE-NN is used as a classifier that consists of an input, hidden, and output layer. The output layer is associated with the hidden layer’s response and outputs a label that matches the input.)
“training the classifier based on the plurality of extracted features comprising:” (See [Section 2, paragraph 5]; the RCE-NN classifier incorrectly recognizes feature vectors if a neuron with a different label than the input feature is activated, which drives training) “adding a new hidden neuron, wherein the new hidden neuron is associated with the new feature;” (See [Section 2, paragraph 4]; a new neuron is generated for the new feature) “determining an age for each of the hidden neurons;” (See [Section 2, paragraph 6]; an algorithm is used to determine the activation count (age) for each of the neurons by counting the number of activations for each neuron during a period to estimate the reliability of a neuron) “removing one or more hidden neurons based on the age of each of the hidden neurons.” (See [Section 1, paragraph 2], [Section 1, paragraph 6], [Section 3.1, paragraph 2]; neurons are removed based on their activation count (age) by comparing them to a threshold value and removing neurons from a set of neurons if they are below that threshold value).

Regarding claim 4, Cho discloses “determining an age for each of the hidden neurons is based on the activation of each of the hidden neurons, wherein an activation is based on a distance associated with one or more extracted features from one or more of the plurality of hidden neurons being less than a radius of the one or more of the plurality of hidden neurons.” (See [Section 1, paragraph 2], [Section 2, paragraph 6]; an algorithm is used to determine the activation count (age) for each of the neurons by counting the number of activations for each neuron during a period to estimate the reliability of a neuron. Generating the ages also depends on a distance that can be computed between the input feature and the stored hypersphere center by determining if the distance is less than a neuron’s radius, which activates a neuron).
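The aging-and-pruning mechanism mapped above (a hidden neuron activates when an input falls inside its hypersphere, a new neuron is added for a newly seen feature, and low-age neurons are removed) can be sketched in a few lines of Python. This is a minimal illustration of the general RCE-style scheme, not code from Cho or from the application; the names (`Neuron`, `train_step`, `prune_by_age`) and the concrete radius and threshold values are illustrative assumptions.

```python
import math

class Neuron:
    """A hidden neuron: a hypersphere with a center, a radius, a class label,
    and an age (activation count used as a reliability estimate)."""
    def __init__(self, center, radius, label):
        self.center = center
        self.radius = radius
        self.label = label
        self.age = 0

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify(neurons, feature):
    """A neuron activates when the feature lies inside its hypersphere
    (distance to center less than radius); each activation ages the neuron."""
    activated = [n for n in neurons if distance(n.center, feature) < n.radius]
    for n in activated:
        n.age += 1
    return activated

def train_step(neurons, feature, label, init_radius=1.0):
    """If no neuron of the correct class covers the feature, add a new hidden
    neuron for it (a 'new feature' in the claim's sense)."""
    activated = classify(neurons, feature)
    if not any(n.label == label for n in activated):
        neurons.append(Neuron(feature, init_radius, label))

def prune_by_age(neurons, threshold):
    """Remove hidden neurons whose activation count is below the threshold."""
    return [n for n in neurons if n.age >= threshold]
```

Feeding the network two nearby samples of one class and one distant sample of another yields two neurons, only one of which survives a prune at threshold 1, since the distant neuron was never re-activated.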
Regarding claim 10, Cho discloses “operating, after training the classifier, the classifier on one or more features extracted from a second sensor data stream.” (See [Section 2, paragraph 5]; the RCE-NN classifier is trained and then operated on the features that are extracted from a second data stream during the recognition process.)

Regarding Claim 11, this claim is similar in scope to claim 1. Regarding Claim 14, this claim is similar in scope to claim 4. Regarding Claim 20, this claim is similar in scope to claim 10.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 2, 3, 8, 12, 13, 18 are rejected under 35 U.S.C. 103 as being unpatentable over Cho (VLSI Implementation of Restricted Coulomb Energy Neural Network with Improved Learning Scheme), in view of Yu (NISP).
Regarding claim 2, Cho discloses “wherein each sensor data sample of the sensor data samples includes a first plurality of dimensions;” (See [Section 3.2, paragraph 1]; the sensor dataset with samples includes a plurality of dimensions) “wherein extracting a plurality of extracted features from the sensor data samples includes extracting a plurality of extracted features that includes a second plurality of dimensions;” (See [Section 5, paragraph 2]; data is extracted from each sensor and converted into feature data). Cho fails to explicitly disclose “wherein the second plurality of dimensions is reduced from the first plurality of dimensions.”

Yu teaches “wherein the second plurality of dimensions is reduced from the first plurality of dimensions.” (See [Section 1, paragraph 1]; features from the original set are reduced by pruning unimportant neurons from the original dataset (first plurality of dimensions) to generate a new dataset (second plurality of dimensions)).

Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having Cho and Yu before them, to modify Cho to reduce the extracted feature data from the first plurality of dimensions to a second, smaller plurality of dimensions. One would be motivated to do so in order to reduce the first plurality of dimensions to a simpler plurality that is free of redundant information, see e.g., [Section 1, paragraph 1], where Yu describes pruning the unimportant neurons in a network to cut down on redundancy.

Regarding claim 3, Cho fails to explicitly disclose “removing one or more hidden neurons based on the age of each of the hidden neurons includes removing at least one hidden neuron for each class.”
Yu teaches “removing one or more hidden neurons based on the age of each of the hidden neurons includes removing at least one hidden neuron for each class.” (See [Section 3.3, paragraph 1]; neurons are removed based on their importance score (age), and this can be done by examining each neuron in each layer (class) to remove neurons). Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having Cho and Yu before them, to modify Cho to remove neurons based on a neuron’s age/importance score. One would be motivated to do so in order to remove irrelevant or unimportant neurons, see e.g., [Section 3.3, paragraph 1], where Yu prunes neurons that have an importance score of 0, which indicates that they are unimportant neurons that can be removed.

Regarding claim 8, Cho discloses “the feature extractor” (See [Section 5, paragraph 2]; the feature extractor hardware unit uses RCE-NN as its neural network). Cho fails to explicitly disclose that “the feature extractor” comprises a “convolutional neural network.”

Yu teaches “the feature extractor comprises a convolutional neural network” (See [Abstract], [Section 1, paragraph 4]; a CNN (Convolutional Neural Network) is used for its NISP algorithm, and NISP is not hardware specific). Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having Cho and Yu before them, to modify Cho to use a convolutional neural network. One would be motivated to do so because convolutional neural networks process datasets with large amounts of information faster than some other neural network models, see e.g., [Section 1, paragraph 4], where Yu explains that using a convolutional neural network offers greater acceleration and compression with a minimal decrease in accuracy.

Regarding Claim 12, this claim is similar in scope to claim 2.
Regarding Claim 13, this claim is similar in scope to claim 3. Regarding Claim 18, this claim is similar in scope to claim 8.

Claim Rejections - 35 USC § 103

Claims 5, 15 are rejected under 35 U.S.C. 103 as being unpatentable over Cho (VLSI Implementation of Restricted Coulomb Energy Neural Network with Improved Learning Scheme), in view of Van Der Made (US 20200143229 A1).

Regarding claim 5, Cho discloses “determining an age for each of the hidden neurons comprises:” (See [Section 2, paragraph 6]; an algorithm is used to determine the activation count (age) for each of the neurons by counting the number of activations for each neuron during a period to estimate the reliability of a neuron) “determining, for each feature extracted, one or more activated hidden neurons;” (See [Section 1, paragraph 2]; the activation of hidden neurons is determined by calculating the distance between the input feature and the stored hypersphere center; if the calculated distance is less than the neuron’s radius, the neuron is activated) “the age of each of the hidden neurons activated that are associated with an output neuron of an incorrect class;” (See [Section 2, paragraph 6]; the activation count (age) is changed when a neuron is activated) “incrementing the age of each of the hidden neurons activated that are associated with an output neuron of a correct class.” (See [Section 2, paragraph 6]; the activation count (age) is incremented when a neuron is activated). Cho fails to explicitly disclose “decrementing the age” of each of the hidden neurons activated that are associated with an output neuron of an incorrect class.

Van Der Made teaches “decrementing the age” (See [0232]; decrementing a counter is disclosed when a certain condition is met by a weight).
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having Cho and Van Der Made before them, to modify Cho to decrement the age instead of incrementing it. One would be motivated to do so in order to decrement ages upon encountering a hidden neuron that is associated with an output neuron of an incorrect class, the opposite of a correct class.

Regarding Claim 15, this claim is similar in scope to claim 5.

Claim Rejections - 35 USC § 103

Claims 6, 16 are rejected under 35 U.S.C. 103 as being unpatentable over Cho (VLSI Implementation of Restricted Coulomb Energy Neural Network with Improved Learning Scheme), in view of Lee (US 20230169357 A1).

Regarding claim 6, Cho discloses “removing one or more hidden neurons is further based on a total number of hidden neurons exceeding a threshold.” (See [Section 1, paragraph 2], [Section 1, paragraph 6]; if a neuron is below a threshold, it should be removed from a set of neurons). Cho fails to explicitly disclose “requesting, prior to removing one or more hidden neurons, a threshold from a user via a user interface;”. Cho fails to further explicitly disclose “receiving, via a user interface, the threshold;”.

Lee teaches “requesting, prior to removing one or more hidden neurons, a threshold from a user via a user interface;” (See [0011]; a threshold is provided by a user in response to a request, with the user inputting the value through a user interface). Lee further teaches “receiving, via a user interface, the threshold;” (See [0011]; the threshold is received after the user inputs it through the interface).
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having Cho and Lee before them, to modify Cho to request and receive a threshold value via a user interface that takes in a user’s input. One would be motivated to do so in order to let the user set a higher or lower threshold for determining whether a neuron should be removed, for the purpose of fine-tuning the model.

Regarding Claim 16, this claim is similar in scope to claim 6.

Claim Rejections - 35 USC § 103

Claims 9, 19 are rejected under 35 U.S.C. 103 as being unpatentable over Cho (VLSI Implementation of Restricted Coulomb Energy Neural Network with Improved Learning Scheme), in view of Burke (US 20240045928 A1).

Regarding claim 9, Cho fails to explicitly disclose “a plurality of coefficients of the convolutional neural network of the feature extractor is randomly initialized”. Burke teaches “a plurality of coefficients of the convolutional neural network of the feature extractor is randomly initialized.” (See [0257]; Burke discloses randomly assigning the weights (coefficients) of the convolutional neural network of the feature extractor when the feature extractor is initialized). Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having Cho and Burke before them, to modify Cho to randomly initialize the coefficients/weights of the convolutional neural network. One would be motivated to do so because randomly initializing a plurality of coefficients/weights in the convolutional neural network of a feature extractor is a standard practice in deep learning and serves as the starting point for training, allowing the model to learn specific features from data rather than relying on pre-trained representations.

Regarding Claim 19, this claim is similar in scope to claim 9.
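The random-initialization practice this rejection relies on can be sketched as follows: a convolution weight tensor is filled with small random coefficients before any training occurs. This is a generic illustration assuming nothing about Burke's implementation; the function name `random_conv_weights` and the uniform ±`scale` range are illustrative choices.

```python
import random

def random_conv_weights(out_channels, in_channels, k, scale=0.05, seed=0):
    """Randomly initialize convolution coefficients (out x in x k x k) so that
    training starts from a random point rather than pre-trained values."""
    rng = random.Random(seed)  # seeded only to keep the example reproducible
    return [[[[rng.uniform(-scale, scale) for _ in range(k)]
              for _ in range(k)]
             for _ in range(in_channels)]
            for _ in range(out_channels)]
```

Each of the 2 × 3 × 3 × 3 coefficients in `random_conv_weights(2, 3, 3)` lands in [-0.05, 0.05], giving the untrained feature extractor no built-in preference for any feature.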
Claim Rejections - 35 USC § 103

Claims 7, 17 are rejected under 35 U.S.C. 103 as being unpatentable over Cho (VLSI Implementation of Restricted Coulomb Energy Neural Network with Improved Learning Scheme), in view of Burke (US 20240045928 A1), and further in view of Mishra (Training, validation, and test datasets).

Regarding claim 7, Cho discloses “sampling the sensor data stream for a plurality of sensor data samples comprises:” (See [Section 3.2, paragraph 1]; a sensor data stream is sampled by clustering feature vectors of known malware and clean samples). Cho fails to explicitly disclose “generating training data samples from the sensor data stream, wherein the training data samples are a first portion of the sensor data stream for a first period of time, and wherein each of the training data samples are associated with a classification label;”. Cho fails to further explicitly disclose “generating testing data samples from the sensor data stream, wherein the testing data samples are a second portion of the sensor data stream for the first period of time;”. Cho fails to further explicitly disclose “the plurality of sensor data samples is comprised of sensor data from the training data samples.”
Mishra teaches “generating training data samples from the sensor data stream, wherein the training data samples are a first portion of the sensor data stream for a first period of time” (See [Paragraph 1]; training data samples are generated from the sensor data stream by splitting the data stream into parts so that one part is used for training data) “generating testing data samples from the sensor data stream, wherein the testing data samples are a second portion of the sensor data stream for the first period of time;” (See [Paragraph 1]; testing data samples are generated from the sensor data stream by using the remaining part of the split data stream after one part is used for the training data) “the plurality of sensor data samples is comprised of sensor data from the training data samples.” (See [Picking the size of the validation and test datasets]; a commonly used split allocates most of the data as training data, with 60% for training given as an example). Mishra fails to explicitly disclose that “each of the training data samples are associated with a classification label;”.

Burke teaches “each of the training data samples are associated with a classification label” (See [0018]; the training data samples that are disclosed are all labeled with classification labels).

Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having Cho and Mishra before them, to modify Cho to generate training data and testing data from the results of sampling the sensor data stream.
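The split Mishra is cited for (one time window of the stream divided into a labeled training portion and a held-out testing portion) can be sketched in Python. The 60/40 ratio echoes the example ratio the rejection quotes; the function name `split_stream` and the sample/label pairing are illustrative assumptions, not code from any cited reference.

```python
def split_stream(samples, labels, train_fraction=0.6):
    """Divide time-ordered samples from a single period of the sensor stream
    into a training portion (each sample paired with its classification label)
    and a testing portion drawn from the remainder of the same period."""
    cut = int(len(samples) * train_fraction)
    train = list(zip(samples[:cut], labels[:cut]))  # labeled training samples
    test = samples[cut:]                            # held-out testing samples
    return train, test
```

With ten samples and the default fraction, the first six become labeled training pairs and the last four are reserved for testing.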
One would be motivated to do so in order to generate training data and testing data for the model using a collected dataset, such as the sensor data stream’s data samples, see e.g., [paragraph 1], where Mishra explains that it is common practice to split a data set into parts, with one part being training data and another part testing data.

Additionally, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having Mishra and Burke before them, to modify Mishra to use training data samples that are labeled with classifications. One would be motivated to do so to properly split a dataset without a majority of one type of data in the training set, which could cause the model to exhibit a bias towards a certain type of data. A training set with samples that are all labeled with a classification can help prevent one type of classification from dominating a training data set.

Regarding Claim 17, this claim is similar in scope to claim 7.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DAVID KIM, whose telephone number is (571) 272-4331. The examiner can normally be reached 7:30 AM - 4:30 PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Matthew Ell, can be reached at (571) 270-3264. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users.
To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/D.K./
Examiner, Art Unit 2141

/MATTHEW ELL/
Supervisory Patent Examiner, Art Unit 2141

Prosecution Timeline

Feb 03, 2023: Application Filed
Feb 05, 2026: Non-Final Rejection — §102, §103 (current)


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: Favorable
Median Time to Grant: 3y 3m
PTA Risk: Low
Based on 0 resolved cases by this examiner. Grant probability derived from career allow rate.
