DETAILED ACTION
This office action is in response to amendments filed on 12/09/2025.
Claims 1, 3, 6-7, 9-10, 12, 15-16, and 18-19 have been amended. Claims 5, 8, 14, 17, and 20 have been canceled. Claims 21-24 have been added. Claims 1-4, 6-7, 9-13, 15-16, 18-19, and 21-24 are pending.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Rejections Under 35 U.S.C. § 101:
Applicant's arguments regarding the rejections under 35 U.S.C. § 101 (pg. 9-11) have been fully considered but they are not persuasive. Applicant argues that the claimed invention is directed to a technical improvement in the field of machine learning because the claimed method of model compression reduces the size of a machine learning model, resulting in enhanced computational efficiency.
In response, examiner first points to MPEP 2106.05(a), paragraph 5, which reads: “An important consideration in determining whether a claim improves technology is the extent to which the claim covers a particular solution to a problem or a particular way to achieve a desired outcome, as opposed to merely claiming the idea of a solution or outcome.” In regard to the asserted improvement, the independent claim recites a high-level model compression process of determining model compression parameters and identifying a subset of features based on the model compression parameters, with no detail as to how the model compression parameters are determined or how they are used to identify the feature subset. At such a high level of generality, this amounts to merely claiming the idea of a solution or outcome, rather than a particular way to achieve the outcome.
Examiner additionally points to MPEP 2106.05(a), paragraph 6, which reads: “It is important to note, the judicial exception alone cannot provide the improvement. The improvement can be provided by one or more additional elements… In addition, the improvement can be provided by the additional element(s) in combination with the recited judicial exception.” As can be seen below in the rejection under 35 U.S.C. § 101, the limitations which provide the steps for model compression (determining model compression parameters and identifying a feature subset, i.e. the limitations which provide the asserted improvement) are considered mental processes (the judicial exception), not additional elements.
Finally, examiner points to MPEP 2106.05(a), paragraph 3, which reads: “An indication that the claimed invention provides an improvement can include a discussion in the specification that identifies a technical problem and explains the details of an unconventional technical solution expressed in the claim…” The claimed technical solution of model compression via feature selection is not unconventional, but rather well-understood, routine, and conventional in the field, per Marcílio: “While offering great opportunities to discover patterns and tendencies, dealing with high-dimensional data can be complicated due to the so-called curse of dimensionality… Other approaches to deal with high dimensionality is to use feature selection algorithms, which select a subset of variables that can describe the input data while proving good results in prediction” (Marcílio, “From explanations to feature selection: assessing SHAP values as feature selection mechanism”, pg. 340, section I).
The rejections under 35 U.S.C. § 101 are maintained, and have been updated to include the amended limitations and to clarify the reasoning given for the limitations that were not amended.
Prior Art Rejections:
Applicant's arguments regarding the prior art rejections (pg. 12-14) have been fully considered but they are not persuasive.
Applicant argues that the cited references fail to teach the amended independent claim limitation directed to “determine, for each feature of the set of features based on the set of feature tracking data, a respective model compression parameter of a set of model compression parameters; and compress a machine learning model to obtain a compressed model by identifying a subset of the set of features based on the set of model compression parameters”, because while Vaishnav teaches generating feature tracking data, and Marcílio teaches model compression via feature selection using SHAP values as model compression parameters, neither Vaishnav nor Marcílio (nor any of the other cited references) teach or suggest determining model compression parameters based on feature tracking data. However, examiner respectfully notes that Vaishnav’s feature tracking data, once provided to the classifier, functions no differently than any other classifier input feature data. Marcílio’s methods for feature selection and model compression are model agnostic, and therefore applicable to the input features of any classification or regression model. For this reason, examiner is not aware of any reason, nor has applicant provided a reason, why Marcílio’s feature selection methodology would not be straightforwardly applicable to Vaishnav’s feature tracking data. One of ordinary skill in the art would have been motivated to modify Vaishnav’s human activity classification framework with Marcílio’s feature selection mechanism in order to reduce the dimensionality of the input data and thus avoid complications arising from the “curse of dimensionality” (Marcílio, pg. 340, section I).
The prior art rejections have been updated to include the amended limitations and to clarify the reasoning given for the limitations that were not amended.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-4, 6-7, 9-13, 15-16, 18-19, and 21-24 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Claim 1:
Step 1: The claim is directed to a system, which falls within the statutory category of a machine/manufacture.
Step 2A Prong 1: The claim is directed to an abstract idea. Specifically, the claim recites:
extract a set of features using the input signal and a feature extractor […], wherein the set of features comprises a set of confidence features and a set of uncertainty features; (Abstract idea – mental process. Extracting confidence and uncertainty features from input data can practically be performed in the human mind or with the aid of pen and paper, for example, by viewing images on a display and mentally identifying the positions of objects, along with a degree of uncertainty associated with the identification. The courts have recognized that claims can recite a mental process even if they are claimed as being performed on a computer. See MPEP 2106.04(a)(2)(III).)
perform classification gating to generate a classification gating output; (Abstract idea – mental process. Classification gating can practically be performed in the human mind or with the aid of pen and paper (i.e. mental process), for example, by mentally comparing uncertainty to a threshold and only tracking feature data with uncertainty which is below the threshold. See MPEP 2106.04(a)(2)(III).)
generate a set of feature tracking data by recursively tracking the set of features and associated uncertainty based on the classification gating output, wherein the set of feature tracking data comprises a set of confidence feature tracking data and a set of uncertainty feature tracking data; (Abstract idea – mental process. Generating feature tracking data based on confidence and uncertainty features can practically be performed in the human mind or with the aid of pen and paper, for example, by recording successive confidence and uncertainty feature values by hand on a sheet of paper. See MPEP 2106.04(a)(2)(III).)
determine, for each feature of the set of features based on the set of feature tracking data, a respective model compression parameter of a set of model compression parameters; (Abstract idea – mental process. Determining a model compression parameter for each feature based on feature tracking data can practically be performed in the human mind or with the aid of pen and paper, for example, by mentally assigning a value to each feature indicating an estimation of the degree to which its tracking data influences the classifier output. See MPEP 2106.04(a)(2)(III).)
compress a machine learning model to obtain a compressed model by identifying a subset of the set of features based on the set of model compression parameters; and (Abstract idea – mental process. Identifying a subset of features based on model compression parameters can practically be performed in the human mind or with the aid of pen and paper, for example, by viewing the features and model compression parameters on a sheet of paper and mentally identifying the features associated with model compression parameters above a threshold. See MPEP 2106.04(a)(2)(III).)
Step 2A Prong 2: The additional elements recited in the claim do not integrate the abstract idea into a practical application, individually or in combination. Specifically, the claim recites the additional elements:
A system comprising: memory; and a processing device, operatively coupled to the memory to: (This limitation is interpreted as a generic computing environment, and thus amounts to adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea – see MPEP 2106.05(f).)
obtain an input signal corresponding to data obtained from a data source; (Obtaining data from a data source amounts to adding insignificant extra-solution activity (necessary data gathering) to the judicial exception – see MPEP 2106.05(g).)
wherein the feature extractor is a triplet-loss based-feature extractor or a quadruplet-loss based feature extractor, (Performing feature extraction using triplet-loss or quadruplet-loss amounts to adding insignificant extra-solution activity to the judicial exception – see MPEP 2106.05(g).)
use the compressed model to make an activity prediction associated with an object. (Using a generic machine learning model to make a prediction is standard in the field of machine learning, and thus amounts to adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea – see MPEP 2106.05(f). Limiting the prediction to an object activity prediction amounts to generally linking the use of the judicial exception to a particular technological environment or field of use – see MPEP 2106.05(h).)
Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Specifically, the claim recites the additional elements:
A system comprising: memory; and a processing device, operatively coupled to the memory to: (This limitation is interpreted as a generic computing environment, and thus amounts to adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea – see MPEP 2106.05(f).)
obtain an input signal corresponding to data obtained from a data source; (Obtaining data from a data source amounts to adding insignificant extra-solution activity (necessary data gathering) to the judicial exception – see MPEP 2106.05(g). Further, the limitation is directed to receiving or transmitting data over a network, which the courts have found to be well-understood, routine, and conventional in the computer arts – see MPEP 2106.05(d).)
wherein the feature extractor is a triplet-loss based-feature extractor or a quadruplet-loss based feature extractor, (Performing feature extraction using triplet-loss or quadruplet-loss amounts to adding insignificant extra-solution activity to the judicial exception – see MPEP 2106.05(g). Further, feature extraction using triplet-loss and quadruplet-loss is well-understood, routine, and conventional in the field of machine learning, per Khaertdinov: “Deep Metric Learning (DML), also known as similarity learning, is a paradigm of learning deep feature embeddings which are extensively used in various problems, mostly coming from the Computer Vision domain… This approach requires specific loss functions which are based on distances between certain data points such as triplet loss [15], quadruplet loss [16] or contrastive loss [17]” (Khaertdinov et al., “Deep Triplet Networks with Attention for Sensor-based Human Activity Recognition”, pg. 1, section I) – see MPEP 2106.05(d).)
use the compressed model to make an activity prediction associated with an object. (Using a generic machine learning model to make a prediction is standard in the field of machine learning, and thus amounts to adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea – see MPEP 2106.05(f). Limiting the prediction to an object activity prediction amounts to generally linking the use of the judicial exception to a particular technological environment or field of use – see MPEP 2106.05(h).)
Claims 2-4, 6-7, 9-13, 15-16, 18-19, and 21-24:
Claim 2 recites The system of claim 1, wherein, to obtain the input signal, the processing device is to: receive raw data from the data source; and generate the input signal from the raw data. Receiving raw data from the data source amounts to adding insignificant extra-solution activity (necessary data gathering) to the judicial exception – see MPEP 2106.05(g). Further, the limitation is directed to receiving or transmitting data over a network, which the courts have found to be well-understood, routine, and conventional in the computer arts – see MPEP 2106.05(d). Generating the input signal from the raw data can practically be performed in the human mind or with the aid of pen and paper (i.e. mental process), for example, by viewing raw image data on a display and mentally deciding which frames to process. See MPEP 2106.04(a)(2)(III). Therefore, the claim merges with the abstract idea recited in claim 1, and does not recite additional elements that are sufficient to amount to significantly more than the abstract idea.
Claim 3 recites The system of claim 1, wherein the data source comprises a sensor device comprising one or more sensors. Limiting the data source to a sensor device amounts to generally linking the use of the judicial exception to a particular technological environment or field of use – see MPEP 2106.05(h). Therefore, the claim does not recite additional elements that are sufficient to amount to significantly more than the abstract idea.
Claim 4 recites The system of claim 1, wherein: the set of confidence features comprises a set of mean-based features; the set of uncertainty features comprises a set of variance-based features; the set of confidence feature tracking data comprises a set of mean-based feature tracking data; and the set of uncertainty feature tracking data comprises a set of variance-based feature tracking data. This claim qualifies confidence features as mean-based and uncertainty features as variance-based. Mean-based and variance-based features can still be extracted and tracked mentally. Therefore, the claim merges with the abstract idea recited in claim 1, and does not recite additional elements that are sufficient to amount to significantly more than the abstract idea.
Claim 6 recites The system of claim 1, wherein, to use the compressed model to make the activity prediction, the processing device is to train the compressed model during a training stage to obtain a trained model. Training a generic machine learning model during a training stage is standard in the field of machine learning, and thus amounts to adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea – see MPEP 2106.05(f). Therefore, the claim does not recite additional elements that are sufficient to amount to significantly more than the abstract idea.
Claim 7 recites The system of claim 1, wherein, to use the compressed model to make the activity prediction, the processing device is to make the activity prediction during an inference stage. Using a generic machine learning model to make a prediction during an inference stage is standard in the field of machine learning, and thus amounts to adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea – see MPEP 2106.05(f). Therefore, the claim does not recite additional elements that are sufficient to amount to significantly more than the abstract idea.
Claim 9 recites The system of claim 1, wherein the set of model compression parameters comprises a set of Shapley values, and wherein, to identify the subset of the set of features, the processing device is to generate the subset of the set of features by: determining, for each feature of the set of features, whether a respective Shapley value for the feature satisfies a threshold condition; and in response to determining that the respective Shapley value for the feature satisfies the threshold condition, adding the feature to the subset of the set of features. Generating a subset of features based on Shapley values can practically be performed in the human mind or with the aid of pen and paper (i.e. mental process), for example, by viewing the features and associated Shapley values on a display, mentally determining whether to include each feature in the subset based on whether its Shapley value meets a threshold, and then adding each selected feature to a list of selected features by hand. See MPEP 2106.04(a)(2)(III). Therefore, the claim merges with the abstract idea recited in claim 1, and does not recite additional elements that are sufficient to amount to significantly more than the abstract idea.
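For illustration of the simplicity of the recited selection step, the threshold condition of claim 9 can be sketched in a few lines of Python. The feature names, values, and threshold below are hypothetical and do not come from the claims or the cited references; this is merely a sketch of the generic operation.

```python
# Hypothetical sketch of threshold-based feature selection using Shapley values.
# Feature names, values, and the threshold are illustrative only.
def select_features(shapley_values, threshold):
    """Keep each feature whose Shapley value satisfies the threshold condition."""
    subset = []
    for feature, value in shapley_values.items():
        if value >= threshold:  # the recited threshold condition
            subset.append(feature)
    return subset

scores = {"mean_velocity": 0.42, "variance_doppler": 0.07, "mean_range": 0.31}
print(select_features(scores, threshold=0.1))  # → ['mean_velocity', 'mean_range']
```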
Claims 10-13, 15-16, and 18 are method claims containing substantially the same elements as system claims 1-4, 6-7, and 9, respectively, and are rejected on the same grounds under 35 U.S.C. 101 as claims 1-4, 6-7, and 9, respectively, mutatis mutandis.
Claims 19 and 21-24 are product claims containing substantially the same elements as system claims 1-4 and 9, respectively, and are rejected on the same grounds under 35 U.S.C. 101 as claims 1-4 and 9, respectively, mutatis mutandis. The additional components of A non-transitory computer-readable storage medium comprising instructions that, when executed by a processing device, cause the processing device to: are interpreted as a general-purpose computer and mere instructions to apply the judicial exception on the computer. Therefore, the claims do not recite additional elements that are sufficient to amount to significantly more than the abstract idea.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-4, 6-7, 9-13, 15-16, 18-19, and 21-24 are rejected under 35 U.S.C. 103 as being unpatentable over
Akbari et al. (hereinafter Akbari), “A Deep Learning Assisted Method for Measuring Uncertainty in Activity Recognition with Wearable Sensors” in view of
Khaertdinov et al. (hereinafter Khaertdinov), “Deep Triplet Networks with Attention for Sensor-based Human Activity Recognition”,
Vaishnav et al. (hereinafter Vaishnav), “Continuous Human Activity Classification With Unscented Kalman Filter Tracking Using FMCW Radar”, and
Marcílio et al. (hereinafter Marcílio), “From explanations to feature selection: assessing SHAP values as feature selection mechanism”.
Regarding Claim 1,
Akbari teaches A system comprising:
obtain an input signal corresponding to data obtained from a data source; (Pg. 4, section V.A: “We used 3D acceleration and gyroscope sensors that results in 18 axis of data and segmented the data into windows of length 100 (one second as the sampling rate of the sensors is 100Hz) with 50% overlap.” Windows of data (i.e. an input signal) correspond to data obtained from 3D acceleration and gyroscope sensors (i.e. a data source).)
extract a set of features using the input signal and a feature extractor, […] wherein the set of features comprises a set of confidence features and a set of uncertainty features; (Pg. 5, section IV.B: “In Figure 2, the encoder, which serves as feature extractor, estimates the mean and standard deviation of a Gaussian distribution that is the approximation of the posterior of the features given data…” Mean and standard deviation features are extracted from each input using a feature extractor to obtain a set of mean (i.e. confidence) and standard deviation (i.e. uncertainty) features.)
make an activity prediction associated with an object. (Pg. 2, section I: “We design a unified framework for automatic feature extraction, classification, and estimation of uncertainty of the classifier for human activity recognition.” The framework is used to classify human activity (i.e. predict an activity associated with an object).)
Akbari does not appear to explicitly disclose wherein the feature extractor is a triplet-loss based-feature extractor or a quadruplet-loss based feature extractor,
However, Khaertdinov teaches wherein the feature extractor is a triplet-loss based-feature extractor or a quadruplet-loss based feature extractor, (Pg. 1, section I: “Deep Metric Learning (DML), also known as similarity learning, is a paradigm of learning deep feature embeddings which are extensively used in various problems, mostly coming from the Computer Vision domain… This approach requires specific loss functions which are based on distances between certain data points such as triplet loss [15], quadruplet loss [16] or contrastive loss [17]. In this paper, we are focused on the triplet loss function and its variations. This study aims to apply the DML concept to sensor-based HAR [human activity recognition]. The main motivation for exploiting DML is its powerful property of extracting robust deep feature embeddings.” Feature extraction is performed using triplet loss.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Akbari and Khaertdinov. Akbari teaches measuring uncertainty in human activity recognition via a variational autoencoder which extracts a Gaussian distribution representing latent features of sensor data. Khaertdinov teaches human activity recognition where feature extraction is performed using triplet loss. One of ordinary skill would have motivation to combine Akbari and Khaertdinov because “triplet networks not only improve the quality of [human activity] recognition but also are capable of constructing robust feature representations less affected by subject heterogeneity and inter-class similarities” (Khaertdinov, pg. 9, section V).
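As background on the combination, the triplet loss referenced by Khaertdinov is a standard distance-based objective. A minimal sketch follows; the embedding vectors and margin are hypothetical values chosen for illustration, not Khaertdinov's network or data.

```python
import math

# Illustrative sketch of the standard triplet loss; inputs are hypothetical
# embedding vectors, not values from Khaertdinov's network.
def triplet_loss(anchor, positive, negative, margin=1.0):
    """Pull the anchor toward the positive and push it from the negative."""
    d = math.dist  # Euclidean distance between embeddings
    return max(d(anchor, positive) - d(anchor, negative) + margin, 0.0)

# Loss is zero when the negative is farther than the positive by >= margin.
print(triplet_loss([0, 0], [0, 1], [0, 3]))  # → 0.0
```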
Akbari and Khaertdinov do not appear to explicitly disclose
perform classification gating to generate a classification gating output;
generate a set of feature tracking data by recursively tracking the set of features and associated uncertainty based on the classification gating output, wherein the set of feature tracking data comprises a set of confidence feature tracking data and a set of uncertainty feature tracking data;
However, Vaishnav teaches perform classification gating to generate a classification gating output; (Pg. 3, section III.E: “Gating is used to remove noisy outlier data from being associated to the states of the tracker.” The data that is not removed by the classification gating is the classification gating output.)
generate a set of feature tracking data by recursively tracking the set of features and associated uncertainty based on the classification gating output, wherein the set of feature tracking data comprises a set of confidence feature tracking data and a set of uncertainty feature tracking data; (Pg. 1, section I: “The classification output is fed into the tracker through classification gating, where the activity class probabilities are updated.” Pg. 2, section III.A: “The UKF [unscented Kalman filter] assumes a Gaussian random variable for the distribution of the state vector. Thus, the integration of the classifier output into the tracker facilitates to obtain not only the value of the current state of the classification but also the uncertainty associated with the state.” Pg. 2, section III.C: “The UKF is based on unscented transformation that tries to approximate the distribution of a random variable that undergoes a nonlinear transformation. Considering a Gaussian random variable η with mean μ and covariance Ω, on performing a nonlinear transformation ψ = ϕ(η) also leads to another Gaussian distribution.” Tracking is performed on the data that is fed into the tracker through classification gating (i.e. based on the classification gating output). As shown by the cyclical arrows between the blocks labeled ‘Tracker’, ‘Feature Extraction’, ‘Classifier’, and ‘Classification Gating’ in figure 1(b), this process is recursive (pg. 1). The UKF tracker generates feature tracking data based on the state vector, which is defined by a Gaussian random variable and thus necessarily includes mean (i.e. confidence) and variance (i.e. uncertainty) features.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Akbari, Khaertdinov, and Vaishnav. Akbari teaches measuring uncertainty in human activity recognition via a variational autoencoder which extracts a Gaussian distribution representing latent features of sensor data. Khaertdinov teaches human activity recognition where feature extraction is performed using triplet loss. Vaishnav teaches measuring uncertainty in human activity classification by tracking the distribution of a Gaussian random variable using an unscented Kalman filter. One of ordinary skill would have motivation to combine Akbari, Khaertdinov, and Vaishnav because “The proposed integration of classifier and tracker improves the classification accuracy by smoothening several misclassifications arising due to the mentioned artifacts. Furthermore, the UKF provides the state estimation along with its associated uncertainty, thus providing a simple mechanism for Bayesian classification in terms of estimating the uncertainty associated with a predicted class probabilities. Furthermore, the integration of classification probabilities into the tracker enables rejection of ghost targets from nonhuman Doppler sources and better target association” (Vaishnav, pg. 1, section I).
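To illustrate the general pattern of gated recursive tracking of a mean (confidence) and variance (uncertainty), consider the following simplified scalar sketch. This toy filter is not Vaishnav's unscented Kalman filter; the gate width and noise constants are hypothetical, and the example serves only to show how gating rejects outliers while the tracker recursively updates both statistics.

```python
# Simplified scalar analogue of gated recursive tracking (illustrative only;
# Vaishnav uses a full unscented Kalman filter, not this toy filter).
def track(measurements, gate=2.0, q=0.01, r=0.25):
    mean, var = measurements[0], r          # initialize state from first measurement
    history = [(mean, var)]
    for z in measurements[1:]:
        var += q                            # predict: uncertainty grows over time
        if abs(z - mean) > gate * (var + r) ** 0.5:
            history.append((mean, var))     # gated out: outlier rejected, no update
            continue
        k = var / (var + r)                 # Kalman gain
        mean += k * (z - mean)              # update mean (confidence) estimate
        var *= (1 - k)                      # update variance (uncertainty) estimate
        history.append((mean, var))
    return history                          # tracking data: (mean, variance) pairs
```

In this sketch, the outlier measurement 10.0 in `track([1.0, 1.1, 10.0, 0.9])` falls outside the gate and leaves the tracked mean unchanged, while in-gate measurements refine both the mean and the variance.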
Akbari, Khaertdinov, and Vaishnav do not appear to explicitly disclose
memory; and a processing device, operatively coupled to the memory, to:
determine, for each feature of the set of features based on the set of feature tracking data, a respective model compression parameter of a set of model compression parameters;
compress a machine learning model to obtain a compressed model by identifying a subset of the set of features based on the set of model compression parameters; and
use the compressed model to make an activity prediction associated with an object.
However, Marcílio teaches
memory; and a processing device, operatively coupled to the memory, to: (Pg. 343, section IV: “The experiments were performed in a computer with the following configuration: Intel(R) Core(TM) i7-8700 CPU @ 3.20GHz, 32GB RAM, Windows 10 64 bits.”)
determine, for each feature of the set of features based on the set of feature tracking data, a respective model compression parameter of a set of model compression parameters; (Pg. 340, section I: “The approach assigns SHAP values, which are contribution values for a model’s output, for each feature of each data point. These SHAP values encode the importance that a model gives for a feature, so that, we use the contribution information of each feature to order the features based on its importance.” Each feature is assigned a SHAP value (i.e. a model compression parameter) representing its contribution to model output.)
compress a machine learning model to obtain a compressed model by identifying a subset of the set of features based on the set of model compression parameters; and (Pg. 340, section I: “In this case, selecting a subset of d features based on SHAP values means to select the first d features after ordering them based on the feature contributions to the model’s prediction.” The subset of features is selected based on their contributions to the model’s prediction, as measured by SHAP values (i.e. model compression parameters).)
use the compressed model to make an [activity] prediction [associated with an object]. (Pg. 342, section IV: “The algorithms were evaluated upon eight publicly available datasets, described in Table I, and based on the Keep Absolute metric [33], which computes a model score on varying number of features kept for classification/regression.” The model with varying number of kept features (i.e. the compressed model) is evaluated on classification and regression tasks (i.e. makes a prediction). Making an activity prediction associated with an object is taught by Akbari, as shown above.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Akbari, Khaertdinov, Vaishnav, and Marcílio. Akbari teaches measuring uncertainty in human activity recognition via a variational autoencoder which extracts a Gaussian distribution representing latent features of sensor data. Khaertdinov teaches human activity recognition where feature extraction is performed using triplet loss. Vaishnav teaches measuring uncertainty in human activity classification by tracking the distribution of a Gaussian random variable using an unscented Kalman filter. Marcílio teaches reducing the dimensionality of a dataset via feature selection based on explanatory SHAP values. One of ordinary skill would have motivation to combine Akbari, Khaertdinov, Vaishnav, and Marcílio because “dealing with high-dimensional data can be complicated due to the so-called curse of dimensionality… Other approaches to deal with high dimensionality is to use feature selection algorithms,” but “One problem with traditional feature selection algorithms is related to their explainability issues” (Marcílio, pg. 340, section I). Marcílio solves these problems by providing “a methodology and assessment for feature selection based on model agnostic explanations” (Marcílio, pg. 340, section I) which “demonstrated to be superior to other common feature selection mechanisms” (Marcílio, pg. 346, section VI).
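The ordering-based selection described by Marcílio (keep the first d features after ordering by contribution) can be sketched as follows. The feature names and SHAP matrix below are hypothetical, and the use of the mean absolute SHAP value as the per-feature importance is an assumption for illustration, not Marcílio's exact implementation.

```python
import numpy as np

# Illustrative sketch of Marcílio-style selection: order features by mean
# absolute SHAP value and keep the first d (names and values are hypothetical).
def top_d_features(shap_matrix, names, d):
    importance = np.abs(shap_matrix).mean(axis=0)   # per-feature contribution
    order = np.argsort(importance)[::-1]            # most important first
    return [names[i] for i in order[:d]]

shap_matrix = np.array([[0.5, -0.1,  0.2],
                        [0.4,  0.0, -0.3]])
print(top_d_features(shap_matrix, ["f1", "f2", "f3"], d=2))  # → ['f1', 'f3']
```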
Regarding Claim 2, Akbari, Khaertdinov, Vaishnav, and Marcílio teach The system of claim 1, as shown above.
Akbari also teaches wherein, to obtain the input signal, the processing device is to:
receive raw data from the data source; and generate the input signal from the raw data. (Pg. 2, section IV: “The network receives raw signal x as input and maps it to a latent variable z.” Pg. 4, section V.A: “We used 3D acceleration and gyroscope sensors that results in 18 axis of data and segmented the data into windows of length 100 (one second as the sampling rate of the sensors is 100Hz) with 50% overlap.” Raw data is received from the 3D acceleration and gyroscope sensors (i.e. data source) and segmented into windows (i.e. the input signal is generated).)
Regarding Claim 3, Akbari, Khaertdinov, Vaishnav, and Marcílio teach The system of claim 1, as shown above.
Akbari also teaches wherein the data source comprises a sensor device comprising one or more sensors. (Pg. 4, section V.A: “We used 3D acceleration and gyroscope sensors that results in 18 axis of data and segmented the data into windows of length 100 (one second as the sampling rate of the sensors is 100Hz) with 50% overlap.”)
Regarding Claim 4, Akbari, Khaertdinov, Vaishnav, and Marcílio teach The system of claim 1, as shown above.
Akbari also teaches wherein:
the set of confidence features comprises a set of mean-based features; the set of uncertainty features comprises a set of variance-based features; (Pg. 5, section IV.B: “In Figure 2, the encoder, which serves as feature extractor, estimates the mean and standard deviation of a Gaussian distribution that is the approximation of the posterior of the features given data…” Mean and standard deviation features are extracted from each input to obtain a set of mean (i.e. mean-based) and standard deviation (i.e. variance-based) features.)
Vaishnav also teaches wherein:
the set of confidence feature tracking data comprises a set of mean-based feature tracking data; and the set of uncertainty feature tracking data comprises a set of variance-based feature tracking data. (Pg. 2, section III.A: “The UKF [unscented Kalman filter] assumes a Gaussian random variable for the distribution of the state vector. Thus, the integration of the classifier output into the tracker facilitates to obtain not only the value of the current state of the classification but also the uncertainty associated with the state.” Pg. 2, section III.C: “The UKF is based on unscented transformation that tries to approximate the distribution of a random variable that undergoes a nonlinear transformation. Considering a Gaussian random variable η with mean μ and covariance Ω, on performing a nonlinear transformation ψ = ϕ(η) also leads to another Gaussian distribution.” The UKF tracker generates feature tracking data based on the state vector, which is defined by a Gaussian random variable and thus necessarily includes mean (i.e. mean-based) and variance/covariance (i.e. variance-based) features.)
Regarding Claim 6, Akbari, Khaertdinov, Vaishnav, and Marcílio teach The system of claim 1, as shown above.
Marcílio also teaches wherein, to use the compressed model to make the activity prediction, the processing device is to train the compressed model during a training stage to obtain a trained model. (Pg. 343, figure 3: “To evaluate how well a feature selection technique can select important features, the model is retrained with d features kept for classification and m - d features masked, where d is the number of features to select and m is the dimensionality of the dataset.” Before classification (i.e. during the training stage), the model with d features kept (i.e. the compressed model) is retrained (i.e. trained).)
Regarding Claim 7, Akbari, Khaertdinov, Vaishnav, and Marcílio teach The system of claim 1, as shown above.
Marcílio also teaches wherein, to use the compressed model to make the activity prediction, the processing device is to make the activity prediction during an inference stage. (Pg. 342, section IV: “The algorithms were evaluated upon eight publicly available datasets, described in Table I, and based on the Keep Absolute metric [33], which computes a model score on varying number of features kept for classification/regression.” The model with varying number of kept features (i.e. the compressed model) is evaluated on classification and regression tasks (i.e. makes predictions during the inference stage).)
Regarding Claim 9, Akbari, Khaertdinov, Vaishnav, and Marcílio teach The system of claim 1, as shown above.
Marcílio also teaches wherein the set of model compression parameters comprises a set of Shapley values, and (Pg. 342, section III.A: “SHAP values [1] is a model additive explanation approach, in which each prediction is explained by the contribution of the features of the dataset to the model’s output. More specifically, SHAP approximate Shapley values…”)
wherein, to identify the subset of the set of features, the processing device is to generate the subset of the set of features by:
determining, for each feature of the set of features, whether a respective Shapley value for the feature satisfies a threshold condition; and (Pg. 340, section I: “In this case, selecting a subset of d features based on SHAP values means to select the first d features after ordering them based on the feature contributions to the model’s prediction.” For each feature, it is determined whether that feature’s SHAP value falls within the top d SHAP values (i.e. satisfies a threshold condition).)
in response to determining that the respective Shapley value for the feature satisfies the threshold condition, adding the feature to the subset of the set of features. (Pg. 340, section I: “In this case, selecting a subset of d features based on SHAP values means to select the first d features after ordering them based on the feature contributions to the model’s prediction.” For each feature, if it is determined that its SHAP value falls within the top d SHAP values (i.e. it satisfies the threshold condition), it is selected for (i.e. added to) the subset of features.)
Claims 10-13, 15-16, and 18 are method claims containing substantially the same elements as system claims 1-4, 6-7, and 9, respectively. Akbari, Khaertdinov, Vaishnav, and Marcílio teach the elements of claims 1-4, 6-7, and 9, as shown above.
Claims 19 and 21-24 are product claims containing substantially the same elements as system claims 1-4 and 9, respectively. Akbari, Khaertdinov, Vaishnav, and Marcílio teach the elements of claims 1-4 and 9, as shown above.
Marcílio also teaches A non-transitory computer-readable storage medium comprising instructions that, when executed by a processing device, cause the processing device to: (Examiner notes that this limitation is interpreted as implementation of the disclosed process in a generic computing environment. Pg. 343, section IV: “The experiments were performed in a computer with the following configuration: Intel(R) Core(TM) i7-8700 CPU @ 3.20GHz, 32GB RAM, Windows 10 64 bits.”)
Conclusion
Claims 1-4, 6-7, 9-13, 15-16, 18-19, and 21-24 are rejected.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to BENJAMIN M ROHD whose telephone number is (571)272-6445. The examiner can normally be reached Mon-Thurs 8:00-6:00 EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Viker Lamardo can be reached at (571) 270-5871. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/B.M.R./Examiner, Art Unit 2147 /VIKER A LAMARDO/Supervisory Patent Examiner, Art Unit 2147