Prosecution Insights
Last updated: April 19, 2026
Application No. 17/744,565

Systems and Methods for Human Activity Recognition Using Analog Neuromorphic Computing Hardware

Status: Non-Final OA, §103 (OA Round 3)
Filed: May 13, 2022
Examiner: TRAN, TAN H
Art Unit: 2141
Tech Center: 2100 (Computer Architecture & Software)
Assignee: Polyn Technology Limited

Grant Probability: 60% (Moderate)
OA Rounds: 3-4
To Grant: 3y 6m
With Interview: 92%

Examiner Intelligence

Career Allow Rate: 60% (grants 60% of resolved cases; 184 granted / 307 resolved; +4.9% vs TC avg)
Interview Lift: strong, +31.8% (resolved cases with interview)
Avg Prosecution: 3y 6m (typical timeline); 60 applications currently pending
Career History: 367 total applications, across all art units

Statute-Specific Performance

§101: 14.4% (-25.6% vs TC avg)
§103: 55.3% (+15.3% vs TC avg)
§102: 19.2% (-20.8% vs TC avg)
§112: 6.1% (-33.9% vs TC avg)

Comparisons are against a Tech Center average estimate; based on career data from 307 resolved cases.

Office Action

§103
Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

Continued Examination Under 37 CFR 1.114

2. A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 8/27/2025 has been entered. Claims 1, 9, and 14 have been amended. Claims 23 and 24 have been added. Claims 1-14, 17-20, and 22-24 remain pending in the application.

Response to Arguments

3. Applicant's arguments with respect to the claims have been considered but are moot in view of the new grounds of rejection. See the rejections below for details.

Claim Rejections – 35 USC § 103

4. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

5. Claims 1, 6, 8-10, 12, 14, 20, and 22-23 are rejected under 35 U.S.C. 103 as being unpatentable over Reisman et al. (U.S. Patent Application Pub. No. US 20200310541 A1) in view of Hu et al. (U.S. Patent Application Pub. No. US 20180364785 A1).
Claim 1: Reisman teaches a method of recognizing human activities, the method comprising: using one or more sensors (i.e. one IMU and a plurality of neuromuscular sensors, the IMU(s) and neuromuscular sensors may be arranged to detect movement of different parts of the human body. For example, the IMU(s) may be arranged to detect movements of one or more body segments proximal to the torso (e.g., an upper arm), whereas the neuromuscular sensors may be arranged to detect movements of one or more body segments distal to the torso (e.g., a forearm or wrist); para. [0158]) including at least one wearable sensor (i.e. an IMU sensor and a plurality of EMG sensors are arranged on an armband system configured to be worn around the lower arm or wrist of a user; para. [0158]), to track activity of a user (i.e. the IMU sensor may be configured to track movement information (e.g., positioning and/or orientation over time) associated with one or more arm segments, to determine, for example, whether the user has raised or lowered their arm, whereas the EMG sensors may be configured to determine movement information associated with wrist or hand segments to determine, for example, whether the user is holding an open or closed hand; para. [0158]), including obtaining a plurality of electrical signals from the one or more sensors (i.e. Neuromuscular sensors may include one or more electromyography (EMG) sensors, one or more mechanomyography (MMG) sensors, one or more sonomyography (SMG) sensors, one or more electrical impedance tomography (EIT) sensors, a combination of two or more types of EMG sensors, MMG sensors, SMG, and EIT sensors, and/or one or more sensors of any suitable type that are configured to detect signals derived from neuromuscular activity; para. [0156]); forming a feature vector by extracting a plurality of features from the plurality of electrical signals (i.e. a "feature space" can comprise one or more vectors or data points that represent one or more parameters or metrics associated with neuromuscular signals such as electromyography ("EMG") signals. As an example, an EMG signal possesses certain temporal, spatial, and temporospatial characteristics, as well as other characteristics such as frequency, duration, and amplitude, for example. A feature space can be generated based on one or more of such characteristics or parameters; para. [0038, 0042, 0163]), wherein the features correspond to inputs for a neural network model trained to generate a plurality of descriptors for a plurality of predefined human activities (i.e. a neural network can be trained to discriminate a finite number of poses of the hand (e.g., seven different poses of the hand). In this embodiment, the latent representation can be constrained to a lower-dimensional space (e.g., a two-dimensional space) before generating the actual classification of the data set. Any suitable loss function may be associated with the neural network, provided that the loss function remains constant across the various mappings in the latent space and classifications of processed neuromuscular input during any given user session. In one embodiment, the network used to generate the latent space and latent vectors is implemented using an autoencoder comprising a neural network; para. [0071]); applying a neurocomputing hardware device (i.e. System 2900 also includes one or more computer processors 2904 programmed to communicate with sensors 2902. For example, signals recorded by one or more of the sensors may be provided to the processor(s), which may be programmed to process signals output by the sensors 2902 to train one or more inference models 2906, the trained (or retrained) inference model(s) 2906 may be stored for later use in identifying/classifying gestures and generating control/command signals; para. [0164, 0192-0194]) to the feature vector to generate an embedding vector that specifies a descriptor (i.e. The generalized model can comprise a generated feature space model including multiple vectors representing processed neuromuscular signal data. Such neuromuscular signal data can be acquired from users using a wrist/armband with EMG sensors as described herein. The vectors can be represented as latent vectors in a latent space model … The discrete classifications in the latent space can be defined and represented by the system in various ways. The latent vectors can correspond to various parameters, including discrete poses or gestures (e.g., fist, open hand), finite events (e.g., snapping or tapping a finger), and/or continuous gestures performed with varying levels of force (e.g., loose fist versus tight fist); para. [0069, 0071]), wherein the neurocomputing hardware device implements the trained neural network model (i.e. the network used to generate the latent space and latent vectors is implemented using an autoencoder comprising a neural network and has a network architecture comprising a user embedding layer followed by a temporal convolution, followed by a multi-layer perceptron in order to reach the two-dimensional latent space; para. [0071]); and applying a trained machine learning classifier to the embedding vector to classify the activity of the user as one of the predefined human activities (i.e. an events model that has been trained across multiple users (e.g., a generalized model) can be implemented to process and classify neuromuscular signal data (e.g., sEMG data) from a user into discrete events. The various latent vectors can be mapped within latent classification regions in the lower-dimensional space, and the latent vectors can be associated with discrete classifications or classification identifiers; para. [0069, 0070]).

Reisman does not explicitly teach that the neurocomputing hardware device is an analog neurocomputing hardware device.
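For orientation, the claim 1 pipeline that the cited passages are mapped against (sensor signals, to a feature vector, to an embedding produced by a trained model, to a classified activity) can be sketched in a few lines. This is a minimal illustration only: the features, weight matrix, centroids, and activity names are invented, and a fixed linear map stands in for the trained network; none of it is taken from Reisman or Hu.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_features(signal: np.ndarray) -> np.ndarray:
    """Form a small feature vector from a 1-D sensor signal:
    mean, standard deviation, peak amplitude, zero-cross count."""
    zero_crossings = np.count_nonzero(np.diff(np.sign(signal)))
    return np.array([signal.mean(), signal.std(),
                     np.abs(signal).max(), float(zero_crossings)])

# Stand-in for the trained embedding network: a fixed linear map
# from the 4 features to a 2-D "latent" space.
W = rng.normal(size=(2, 4))

def embed(features: np.ndarray) -> np.ndarray:
    return W @ features

# Stand-in classifier: nearest predefined-activity centroid in the
# latent space (centroids are invented).
centroids = {"walking": np.array([1.0, 0.0]),
             "resting": np.array([-1.0, 0.0])}

def classify(embedding: np.ndarray) -> str:
    return min(centroids,
               key=lambda a: np.linalg.norm(embedding - centroids[a]))

signal = rng.normal(size=256)   # fake EMG/IMU samples
label = classify(embed(extract_features(signal)))
print(label)                    # one of the predefined activities
```

The matrix-vector product at the embedding step (`W @ features`) is, per the quoted passages, the operation Hu's crossbar array performs in the analog domain: input voltages applied across a matrix of memristor conductances yield outputs based on the dot product of the input vector and that matrix.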
However, Hu teaches an analog neurocomputing hardware device (i.e. the crossbar array comprises a passive analog processor that functions as an efficient neural network to recognize patterns in analog sensor data. For example, one or more sensors may generate analog sensor data, e.g., analog voltage signals, that are fed to a crossbar array as an input vector. The crossbar array may receive an input vector of a first set of analog voltage signals. The crossbar array may generate an output vector comprising a second set of analog voltage signals that is based upon a dot product of the input vector and a matrix comprising resistance values of the plurality of memristors; para. [0011, 0012, 0041, 0042]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Reisman to include the feature of Hu. One would have been motivated to make this modification because it reduces power consumption and processing load in a wearable setting by implementing the trained neural network on analog memristor crossbar hardware.

Claim 6: Reisman and Hu teach the method of claim 1. Reisman further teaches wherein the one or more sensors include one or more of IMUs, cameras, microphones, and biofeedback devices (i.e. Sensors 2902 may include one or more Inertial Measurement Units (IMUs); para. [0157, 0161, 0181]).

Claim 8: Reisman and Hu teach the method of claim 1. Reisman further teaches wherein the trained machine learning classifier is implemented using one or more digital components (i.e. at least one physical processor and a physical memory comprising computer-executable instructions that, when executed by the physical processor, cause the physical processor to; para. [0153, 0164]) and the trained machine learning classifier can be retrained (i.e. System 2900 also includes one or more computer processors 2904 programmed to communicate with sensors 2902. For example, signals recorded by one or more of the sensors may be provided to the processor(s), which may be programmed to process signals output by the sensors 2902 to train one or more inference models 2906, the trained (or retrained) inference model(s) 2906 may be stored for later use in identifying/classifying gestures and generating control/command signals; para. [0164]) for new users (i.e. This embedding layer can be determined via one or more personalized training procedures, which can tailor a generalized model by adjusting one or more of its weights based on processed EMG data as collected from the user during the performance of certain activities; para. [0071-0073]).

Claim 9: Reisman teaches a method of recognizing human activities, the method comprising: obtaining a sequence of electrical signals from one or more sensors (i.e. neuromuscular signal data can be acquired from users using a wrist/armband with EMG sensors; para. [0069], claims 11, 16), including at least one wearable sensor (i.e. an IMU sensor and a plurality of EMG sensors are arranged on an armband system configured to be worn around the lower arm or wrist of a user; para. [0158]), that track activity of a user (i.e. the IMU sensor may be configured to track movement information (e.g., positioning and/or orientation over time) associated with one or more arm segments, to determine, for example, whether the user has raised or lowered their arm, whereas the EMG sensors may be configured to determine movement information associated with wrist or hand segments to determine, for example, whether the user is holding an open or closed hand; para. [0158]); forming a plurality of feature vectors by extracting features from the sequence of electrical signals (i.e. the neuromuscular signal data inputs from a user can be processed into their corresponding latent vectors, and the latent vectors can be presented in a lower-dimensional space.
The various latent vectors can be mapped within latent classification regions in the lower-dimensional space, and the latent vectors can be associated with discrete classifications or classification identifiers; para. [0069, 0070]), wherein the features correspond to inputs for a neural network model trained to generate a plurality of descriptors for a plurality of predefined human activities (i.e. a neural network can be trained to discriminate a finite number of poses of the hand (e.g., seven different poses of the hand). In this embodiment, the latent representation can be constrained to a lower-dimensional space (e.g., a two-dimensional space) before generating the actual classification of the data set. Any suitable loss function may be associated with the neural network, provided that the loss function remains constant across the various mappings in the latent space and classifications of processed neuromuscular input during any given user session. In one embodiment, the network used to generate the latent space and latent vectors is implemented using an autoencoder comprising a neural network; para. [0071]); applying the neurocomputing hardware device (i.e. System 2900 also includes one or more computer processors 2904 programmed to communicate with sensors 2902. For example, signals recorded by one or more of the sensors may be provided to the processor(s), which may be programmed to process signals output by the sensors 2902 to train one or more inference models 2906, the trained (or retrained) inference model(s) 2906 may be stored for later use in identifying/classifying gestures and generating control/command signals; para. [0164, 0192-0194]) to the plurality of feature vectors to generate a plurality of embedding vectors that each specify a corresponding descriptor (i.e. The generalized model can comprise a generated feature space model including multiple vectors representing processed neuromuscular signal data. Such neuromuscular signal data can be acquired from users using a wrist/armband with EMG sensors as described herein. The vectors can be represented as latent vectors in a latent space model … The discrete classifications in the latent space can be defined and represented by the system in various ways. The latent vectors can correspond to various parameters, including discrete poses or gestures (e.g., fist, open hand), finite events (e.g., snapping or tapping a finger), and/or continuous gestures performed with varying levels of force (e.g., loose fist versus tight fist); para. [0069, 0071]); and using the plurality of embedding vectors for classifying the activity of the user as one of the predefined human activities (i.e. an events model that has been trained across multiple users (e.g., a generalized model) can be implemented to process and classify neuromuscular signal data (e.g., sEMG data) from a user into discrete events. The various latent vectors can be mapped within latent classification regions in the lower-dimensional space, and the latent vectors can be associated with discrete classifications or classification identifiers; para. [0069, 0070]).

Reisman does not explicitly teach that the neurocomputing hardware device is the analog neurocomputing hardware device. However, Hu teaches the analog neurocomputing hardware device (i.e. the crossbar array comprises a passive analog processor that functions as an efficient neural network to recognize patterns in analog sensor data. For example, one or more sensors may generate analog sensor data, e.g., analog voltage signals, that are fed to a crossbar array as an input vector. The crossbar array may receive an input vector of a first set of analog voltage signals. The crossbar array may generate an output vector comprising a second set of analog voltage signals that is based upon a dot product of the input vector and a matrix comprising resistance values of the plurality of memristors; para. [0011, 0012, 0041, 0042]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Reisman to include the feature of Hu. One would have been motivated to make this modification because it reduces power consumption and processing load in a wearable setting by implementing the trained neural network on analog memristor crossbar hardware.

Claim 10: Reisman and Hu teach the method of claim 9. Reisman further teaches comprising: receiving, from the user, a set of descriptors that describes specific physical activities (i.e. Each of the 6 subjects performed one of seven hand poses sequentially, namely: (1) a resting hand (the active null state); (2) a closed fist; (3) an open hand; (4) an index finger to thumb pinch ("index pinch"); (5) a middle finger to thumb pinch ("middle pinch"); (6) a ring finger to thumb pinch ("ring pinch"); and (7) a pinky finger to thumb pinch ("pinky pinch"). The EMG signal data associated with those hand poses was collected, processed using a generalized model trained from data acquired from multiple users, and associated latent vectors were displayed onto a 2D representational latent space; para. [0072]); and using the set of descriptors and the plurality of embedding vectors to classify the activity of the user as one of the specific physical activities (i.e. The EMG signal data associated with those hand poses was collected, processed using a generalized model trained from data acquired from multiple users, and associated latent vectors were displayed onto a 2D representational latent space as shown in the top rows of FIG. 6A and FIG. 6B. Each of the seven classifications of poses can be seen based on different coloring in the 7 latent spaces; para. [0072]).

Claim 12: Reisman and Hu teach the method of claim 9. Reisman further teaches comprising: storing (i.e. 
the trained (or retrained) inference model(s) 2906 may be stored for later use in identifying/classifying gestures and generating control/command signals; para. [0164]), for the user, the plurality of embedding vectors as describing a specific activity (i.e. The latent vectors can correspond to various parameters, including discrete poses or gestures (e.g., fist, open hand), finite events (e.g., snapping or tapping a finger), and/or continuous gestures performed with varying levels of force (e.g., loose fist versus tight fist); para. [0070]); and using the plurality of embedding vectors for classifying subsequent activities of the user as the specific activity (i.e. updating the visualization of the lower-dimensional latent space in real-time as new signal data is received by plotting the new signal data as one or more latent vectors within the lower-dimensional latent space; claims 16, 20).

Claim 14: Reisman teaches a human activity recognition device, comprising: an integrated circuit for human activity recognition (i.e. the IMU sensor may be configured to track movement information (e.g., positioning and/or orientation over time) associated with one or more arm segments, to determine, for example, whether the user has raised or lowered their arm, whereas the EMG sensors may be configured to determine movement information associated with wrist or hand segments to determine, for example, whether the user is holding an open or closed hand; para. [0158]), the integrated circuit comprising a network of components configured to implement a trained neural network model that is trained to generate a plurality of descriptors for a plurality of predefined human activities (i.e. a neural network can be trained to discriminate a finite number of poses of the hand (e.g., seven different poses of the hand). In this embodiment, the latent representation can be constrained to a lower-dimensional space (e.g., a two-dimensional space) before generating the actual classification of the data set. Any suitable loss function may be associated with the neural network, provided that the loss function remains constant across the various mappings in the latent space and classifications of processed neuromuscular input during any given user session. In one embodiment, the network used to generate the latent space and latent vectors is implemented using an autoencoder comprising a neural network; para. [0071]) based on a plurality of features extracted from a plurality of electrical signals from one or more sensors (i.e. Neuromuscular sensors may include one or more electromyography (EMG) sensors, one or more mechanomyography (MMG) sensors, one or more sonomyography (SMG) sensors, one or more electrical impedance tomography (EIT) sensors, a combination of two or more types of EMG sensors, MMG sensors, SMG, and EIT sensors, and/or one or more sensors of any suitable type that are configured to detect signals derived from neuromuscular activity; para. [0156]), the one or more sensors including at least one wearable sensor (i.e. an IMU sensor and a plurality of EMG sensors are arranged on an armband system configured to be worn around the lower arm or wrist of a user; para. [0158]); and one or more digital components (i.e. at least one physical processor and a physical memory comprising computer-executable instructions that, when executed by the physical processor, cause the physical processor to; para. [0153, 0164]) configured to classify human activity as one of the plurality of predefined human activities according to the plurality of descriptors generated by the integrated circuit (i.e. an events model that has been trained across multiple users (e.g., a generalized model) can be implemented to process and classify neuromuscular signal data (e.g., sEMG data) from a user into discrete events. The various latent vectors can be mapped within latent classification regions in the lower-dimensional space, and the latent vectors can be associated with discrete classifications or classification identifiers; para. [0069, 0070, 0071]).

Reisman does not explicitly teach the integrated circuit comprising an analog network of analog components. However, Hu teaches the integrated circuit comprising an analog network of analog components (i.e. the crossbar array comprises a passive analog processor that functions as an efficient neural network to recognize patterns in analog sensor data. For example, one or more sensors may generate analog sensor data, e.g., analog voltage signals, that are fed to a crossbar array as an input vector. The crossbar array may receive an input vector of a first set of analog voltage signals. The crossbar array may generate an output vector comprising a second set of analog voltage signals that is based upon a dot product of the input vector and a matrix comprising resistance values of the plurality of memristors; para. [0011, 0012, 0041, 0042]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Reisman to include the feature of Hu. One would have been motivated to make this modification because it reduces power consumption and processing load in a wearable setting by implementing the trained neural network on analog memristor crossbar hardware.

Claim 20 is similar in scope to Claim 6 and is rejected under a similar rationale.

Claim 22: Reisman and Hu teach the method of claim 1. Reisman further teaches wherein the embedding vector is an encoded representation of features (i.e. 
The latent space can be generated such that any higher dimensioned data space can be visualized in a lower-dimensional space, e.g., by using any suitable encoder appropriate to the machine learning problem at hand; para. [0071]) of the user's movement (i.e. The generalized model can comprise a generated feature space model including multiple vectors representing processed neuromuscular signal data. Such neuromuscular signal data can be acquired from users using a wrist/armband with EMG sensors as described herein. The vectors can be represented as latent vectors in a latent space model … The discrete classifications in the latent space can be defined and represented by the system in various ways. The latent vectors can correspond to various parameters, including discrete poses or gestures (e.g., fist, open hand), finite events (e.g., snapping or tapping a finger), and/or continuous gestures performed with varying levels of force (e.g., loose fist versus tight fist); para. [0069, 0071]).

Claim 23: Reisman and Hu teach the method of claim 1. Reisman further teaches wherein: the one or more sensors include an inertial measurement unit (i.e. Sensors 2902 may include one or more Inertial Measurement Units (IMUs); para. [0157]); and the plurality of analog electrical signals includes accelerometer data (i.e. IMU device (e.g., comprising an accelerometer, gyroscope, magnetometer, etc.); para. [0040]).

6. Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Reisman in view of Hu and further in view of Rahimi et al. (U.S. Patent Application Pub. No. US 20180293736 A1).

Claim 2: Reisman and Hu teach the method of claim 1. Reisman further teaches wherein the trained neural network model is an autoencoder (i.e. the network used to generate the latent space and latent vectors is implemented using an autoencoder comprising a neural network and has a network architecture; para. [0071]) that includes an encoder (i.e. the encoder(s) can be derived from a classification problem (e.g., classifying specific hand gestures) and a neural network can be trained to discriminate a finite number of poses of the hand (e.g., seven different poses of the hand); para. [0071]). Reisman does not explicitly teach a decoder. However, Rahimi teaches wherein the trained neural network model is an autoencoder that includes an encoder and a decoder (i.e. the convolutional neural network comprises an encoder part of a first team autoencoder and a decoder part of a second team autoencoder; para. [0017]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Reisman and Hu to include the feature of Rahimi. One would have been motivated to make this modification because autoencoders effectively reduce the dimensionality of the feature vector without significant information loss. This is particularly useful for managing large sets of sensor data.

7. Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Reisman in view of Hu and further in view of Limonad et al. (U.S. Patent Application Pub. No. US 20170193395 A1).

Claim 3: Reisman and Hu teach the method of claim 1. Reisman further teaches wherein the trained machine learning classifier is a classifier (i.e. the systems and methods described herein may detect transitions from one subregion to another by using a binary classifier, a multinomial classifier, a regressor (to estimate distance between user inputs and subregions), and/or support vector machines; para. [0048]). Reisman does not explicitly teach a KNN (K-Nearest Neighbors) classifier. However, Limonad teaches wherein the trained machine learning classifier is a KNN (K-Nearest Neighbors) classifier (i.e. Numerous machine learning techniques, such as decision trees, support vector machines, k-Nearest Neighbors (k-NN), neural networks, Bayesian networks and the like, may be used at this final stage. Their output may provide classification of the considered interval into one of two or more activity classes; para. [0015]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Reisman and Hu to include the feature of Limonad. One would have been motivated to make this modification because KNN is simple and intuitive to understand and implement, making it a popular choice for activity recognition tasks.

8. Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Reisman in view of Hu, Limonad, and further in view of Canavan et al. (U.S. Patent Application Pub. No. US 20230363703 A1).

Claim 4: Reisman, Hu, and Limonad teach the method of claim 3. Reisman does not explicitly teach wherein a number of neighbors for the KNN classifier equals five. However, Canavan teaches wherein a number of neighbors for the KNN classifier equals five (i.e. In the case of KNN, the number of neighbor parameter was set to 5; para. [0068]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Reisman, Hu, and Limonad to include the feature of Canavan. One would have been motivated to make this modification because it provides more stable and accurate predictions compared to using a very small or very large number of neighbors.

9. Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Reisman in view of Hu and further in view of Schwab et al. (U.S. Patent Application Pub. No. US 20240404053 A1).

Claim 5: Reisman and Hu teach the method of claim 1.
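(As an aside on the classifier at issue in the claim 3 and claim 4 rejections above: a k-nearest-neighbors vote with k = 5 takes only a few lines. The 2-D embedding points and activity labels below are invented for illustration and are not drawn from any cited reference.)

```python
from collections import Counter
import math

def knn_classify(train, query, k=5):
    """Label `query` by majority vote among its k nearest training
    points. `train` is a list of (vector, label) pairs."""
    nearest = sorted(train, key=lambda p: math.dist(p[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Invented 2-D embedding vectors for two activities.
train = [((0.0, 0.1), "rest"), ((0.2, 0.0), "rest"), ((0.1, 0.2), "rest"),
         ((0.9, 1.0), "walk"), ((1.1, 0.9), "walk"), ((1.0, 1.1), "walk"),
         ((0.8, 0.8), "walk")]

print(knn_classify(train, (1.0, 1.0)))  # -> "walk" (4 of 5 nearest are walk)
```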
Reisman does not explicitly teach wherein the trained machine learning classifier is trained separately for each of the predefined human activities using binary classification. However, Schwab teaches wherein the trained machine learning classifier is trained separately for each of the predefined human activities using binary classification (i.e. Each classifier trainer 202 separately trains a corresponding binary classifier 204 (e.g., binary classifiers 204-1, 204-2, 204-3) to map an input video frame to a binary probability that represents a likelihood of the UC severity depicted in that input video frame being greater than the configured threshold for that classifier 204; para. [0026]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Reisman and Hu to include the feature of Schwab. One would have been motivated to make this modification because it improves prediction accuracy.

10. Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Reisman in view of Hu, and further in view of Park (U.S. Patent Application Pub. No. US 20180338223 A1).

Claim 7: Reisman and Hu teach the method of claim 1. Reisman does not explicitly teach smoothing an output of the trained machine learning classifier to obtain a basic class of activity. However, Park teaches smoothing an output of the trained machine learning classifier to obtain a basic class of activity (i.e. The output of the transportation mode segment classifier, a sequence 22 of transportation mode prediction labels 24 (each label identifying a transportation mode) and probabilities 26 (each representing the probability that a given label is correct), is smoothed using a temporal smoothing model 28 based on a Hidden Markov Model (HMM). Since individual segment classification results can be noisy and erroneous, we smooth the series of transportation mode labels using a temporal model that captures continuity of transportation modes and switching behavior between modes; para. [0037, 0079]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Reisman and Hu to include the feature of Park. One would have been motivated to make this modification because smoothing improves the reliability of activity classification.

11. Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Reisman in view of Hu, and further in view of Molettiere et al. (U.S. Patent Application Pub. No. US 20140164611 A1).

Claim 11: Reisman and Hu teach the method of claim 10. Reisman does not explicitly teach generating statistics of personal daily routines of the user based on classifying the activity of the user as one of the specific physical activities. However, Molettiere teaches generating statistics of personal daily routines of the user (i.e. Other totals of data are selectively provided to the user in a similar manner including, but not limited to calories burned, distance of walks and/or runs, floors climbed, hours asleep, hours in bed, hours of a specific activity or combination of activities such as walking, biking, running, and/or swimming; para. [0070, 0109, 0151]) based on classifying the activity of the user as one of the specific physical activities (i.e. FIG. 9 is a diagram illustrating data that could be collected during a person's daily routine. The one or more tracking devices may capture different types of data and identify different states of the user. For example, the system may identify when the user is sleeping, commuting, working, outdoors, running, at the gym, etc; para. [0088]).
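(The daily-routine statistics Molettiere describes amount to aggregating the classifier's per-interval labels over time. A minimal sketch, with invented labels and an assumed one-minute classification interval:)

```python
from collections import Counter

# Invented sequence of per-minute activity classifications for one day.
labels = (["sleeping"] * 420 + ["commuting"] * 40 + ["working"] * 480
          + ["running"] * 30 + ["resting"] * 470)

def daily_minutes(labels):
    """Total minutes spent in each classified activity."""
    return dict(Counter(labels))

stats = daily_minutes(labels)
print(stats["running"])   # -> 30 (minutes classified as running)
```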
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Reisman and Hu to include the feature of Molettiere. One would have been motivated to make this modification because it provides routine analytics/feedback to the user.

12. Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Reisman in view of Hu, and further in view of Nejezchleb et al. (U.S. Patent Application Pub. No. US 20180161623 A1). Claim 13: Reisman and Hu teach the method of claim 9. Reisman does not explicitly teach receiving, from a trainer distinct from the user, a set of descriptors that describes a specific activity; and providing feedback to the user if the activity matches the specific activity. However, Nejezchleb teaches receiving, from a trainer distinct from the user (i.e. A remote secondary sensor that is in communication with the belt may be attached to a conveyance associated with the physical activity, or may be attached to another belt worn by an instructor or trainer of the physical activity; para. [0043]), a set of descriptors that describes a specific activity (i.e. The belt is constructed and programmed to compare the actual position and orientation of the hips with an expectation of the hip position and orientation when a comparison is performed against an ideal or perfect hip position and orientation, or when the comparison is performed against the position of the secondary sensor associated with a conveyance or associated with a trainer, instructor, or other expert; para. [0045]); and providing feedback to the user if the activity matches the specific activity (i.e. The controller can detect deviations from expected hip position and orientation and, if the deviations are significant enough, provide immediate feedback to the user so that the user can make a timely correction; para. [0045, 0048]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Reisman and Hu to include the feature of Nejezchleb. One would have been motivated to make this modification because it provides real-time feedback based on comparison to that reference.

13. Claim 17 is rejected under 35 U.S.C. 103 as being unpatentable over Reisman in view of Hu, and further in view of Papel et al. (U.S. Patent Application Pub. No. US 20230148326 A1). Claim 17: Reisman and Hu teach the human activity recognition device of claim 14. Reisman further teaches wherein the one or more digital components implement a trained machine learning classifier that is a classifier (i.e. the systems and methods described herein may detect transitions from one subregion to another by using a binary classifier, a multinomial classifier, a regressor (to estimate distance between user inputs and subregions), and/or support vector machines; para. [0048]) which can be retrained (i.e. System 2900 also includes one or more computer processors 2904 programmed to communicate with sensors 2902. For example, signals recorded by one or more of the sensors may be provided to the processor(s), which may be programmed to process signals output by the sensors 2902 to train one or more inference models 2906, the trained (or retrained) inference model(s) 2906 may be stored for later use in identifying/classifying gestures and generating control/command signals; para. [0164]). Reisman does not explicitly teach a KNN (K-Nearest Neighbors) classifier. However, Papel teaches wherein the one or more digital components implement a trained machine learning classifier that is a KNN (K-Nearest Neighbors) classifier which can be retrained (i.e.
the token matching data can be used as features to train a machine learning model (e.g., a classification algorithm such as a decision tree, naive Bayes classifier, artificial neural network, or k-nearest neighbor algorithm). The machine learning model can be trained to determine the combination of token pairs and/or weight parameters that yields the most accurate patient match prediction. In some cases, the output of the machine learning model can be assessed for accuracy and the results can be used to re-train one or more models based on these results; para. [0071]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Reisman and Hu to include the feature of Papel. One would have been motivated to make this modification because it provides active learning techniques to enable the output of each trained model to inform and improve the training of future iterations of a corresponding model. Accordingly, the models employed by the disclosed system can improve over time based on feedback from the training itself.

14. Claim 18 is rejected under 35 U.S.C. 103 as being unpatentable over Reisman in view of Hu, Papel, and further in view of Schwab. Claim 18: Reisman, Hu, and Papel teach the human activity recognition device of claim 17. Reisman does not explicitly teach wherein the trained machine learning classifier is trained separately for each of the predefined human activities using binary classification. However, Schwab teaches wherein the trained machine learning classifier is trained separately for each of the predefined human activities using binary classification (i.e.
Each classifier trainer 202 separately trains a corresponding binary classifier 204 (e.g., binary classifiers 204-1, 204-2, 204-3) to map an input video frame to a binary probability that represents a likelihood of the UC severity depicted in that input video frame being greater than the configured threshold for that classifier 204; para. [0026]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Reisman, Hu, and Papel to include the feature of Schwab. One would have been motivated to make this modification because it improves prediction.

15. Claim 19 is rejected under 35 U.S.C. 103 as being unpatentable over Reisman in view of Hu, Papel, and further in view of Park. Claim 19: Reisman, Hu, and Papel teach the human activity recognition device of claim 17. Reisman does not explicitly teach smoothing the output of the trained machine learning classifier to obtain a basic class of activity. However, Park teaches smoothing the output of the trained machine learning classifier to obtain a basic class of activity (i.e. The output of the transportation mode segment classifier, a sequence 22 of transportation mode prediction labels 24 (each label identifying a transportation mode) and probabilities 26 (each representing the probability that a given label is correct), is smoothed using a temporal smoothing model 28 based on a Hidden Markov Model (HMM). Since individual segment classification results can be noisy and erroneous, we smooth the series of transportation mode labels using a temporal model that captures continuity of transportation modes and switching behavior between modes; para. [0037, 0079]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Reisman, Hu, and Papel to include the feature of Park.
One would have been motivated to make this modification because smoothing improves the reliability of activity classification.

16. Claim 24 is rejected under 35 U.S.C. 103 as being unpatentable over Reisman in view of Hu, and further in view of Rossi et al. (U.S. Patent Application Pub. No. US 20170340292 A1). Claim 24: Reisman and Hu teach the method of claim 1. Reisman further teaches wherein: the one or more sensors include an inertial measurement unit (i.e. Sensors 2902 may include one or more Inertial Measurement Units (IMUs); para. [0157]); the plurality of analog electrical signals includes accelerometer data (i.e. IMU device (e.g., comprising an accelerometer, gyroscope, magnetometer, etc.); para. [0040]) and heart rate data (i.e. sensors such as a heart-rate monitor; para. [0161]). Reisman does not explicitly teach an electrocardiogram sensor; and to detect abnormal patterns of human activity based on the accelerometer data and the heart rate data. However, Rossi teaches an electrocardiogram sensor; and to detect abnormal patterns of human activity based on the accelerometer data and the heart rate data (i.e. a system comprises: a sensor configured to generate heartbeat signals; an accelerometer configured to generate one or more acceleration signals; and signal processing circuitry configured to: detect a beat of a test heartbeat signal; associate a heart rate and an energy of acceleration with the detected beat of the test heartbeat signal; selectively include the detected beat of the test heartbeat signal in a set of test beats based on the heart rate and energy of acceleration associated with the detected beat of the test heartbeat signal; and detect anomalous beats in the set of test beats using a dictionary of a sparse approximation model; para. [0007]).
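Rossi's cited approach relies on a dictionary-based sparse approximation model. As a much simpler hypothetical illustration of the underlying idea of combining heart rate with acceleration energy, the sketch below flags samples where heart rate is elevated without corresponding motion. The function name and threshold values are invented for illustration and do not appear in Rossi or any other cited reference.

```python
# Hypothetical illustration (NOT Rossi's sparse-approximation method) of
# flagging heartbeats whose rate is inconsistent with concurrent motion:
# a high heart rate paired with low acceleration energy is marked anomalous.
def flag_anomalies(samples, hr_limit=110.0, accel_floor=1.5):
    """samples: list of (heart_rate_bpm, accel_energy) pairs.

    Returns indices where heart rate exceeds hr_limit while motion
    energy stays below accel_floor (both thresholds are illustrative).
    """
    return [i for i, (hr, energy) in enumerate(samples)
            if hr > hr_limit and energy < accel_floor]
```

For example, a reading of 125 bpm with near-zero acceleration energy would be flagged, while the same heart rate during vigorous motion would not, reflecting the "movement context" rationale in the motivation statement below.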
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Reisman and Hu to include the feature of Rossi. One would have been motivated to make this modification because it enables reliable detection of anomalies while accounting for the user’s movement context.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure. In Cai et al. (Pub. No. US 11126897 B2), the techniques include a transformation of features extracted from sensor data on a first device platform to align more closely, statistically, with features extracted from sensor data on a second device platform. It is noted that any citation to specific pages, columns, lines, or figures in the prior art references and any interpretation of the references should not be considered to be limiting in any way. A reference is relevant for all it contains and may be relied upon for all that it would have reasonably suggested to one having ordinary skill in the art. In re Heck, 699 F.2d 1331, 1332-33, 216 U.S.P.Q. 1038, 1039 (Fed. Cir. 1983) (quoting In re Lemelson, 397 F.2d 1006, 1009, 158 U.S.P.Q. 275, 277 (C.C.P.A. 1968)).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to TAN TRAN whose telephone number is (303)297-4266. The examiner can normally be reached Monday - Thursday, 8:00 am - 5:00 pm MT. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Matt Ell, can be reached on 571-270-3264. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /TAN H TRAN/Primary Examiner, Art Unit 2141

Prosecution Timeline

May 13, 2022
Application Filed
Jan 31, 2025
Non-Final Rejection — §103
Apr 29, 2025
Applicant Interview (Telephonic)
May 01, 2025
Examiner Interview Summary
May 05, 2025
Response Filed
Jul 14, 2025
Final Rejection — §103
Aug 12, 2025
Applicant Interview (Telephonic)
Aug 12, 2025
Examiner Interview Summary
Aug 12, 2025
Response after Non-Final Action
Aug 27, 2025
Request for Continued Examination
Sep 04, 2025
Response after Non-Final Action
Mar 06, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12594668
BRAIN-LIKE DECISION-MAKING AND MOTION CONTROL SYSTEM
2y 5m to grant Granted Apr 07, 2026
Patent 12579420
Analog Hardware Realization of Trained Neural Networks
2y 5m to grant Granted Mar 17, 2026
Patent 12579421
Analog Hardware Realization of Trained Neural Networks
2y 5m to grant Granted Mar 17, 2026
Patent 12572850
METHOD FOR IMPLEMENTING MODEL UPDATE AND DEVICE THEREOF
2y 5m to grant Granted Mar 10, 2026
Patent 12572326
DIGITAL ASSISTANT FOR MOVING AND COPYING GRAPHICAL ELEMENTS
2y 5m to grant Granted Mar 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
60%
Grant Probability
92%
With Interview (+31.8%)
3y 6m
Median Time to Grant
High
PTA Risk
Based on 307 resolved cases by this examiner. Grant probability derived from career allow rate.
