Prosecution Insights
Last updated: April 19, 2026
Application No. 18/328,500

APPARATUS AND METHOD FOR DETECTING AND RECOGNIZING HUMAN ACTIVITIES AND MEASURING ATTENTION LEVEL

Non-Final OA: §101, §103, §112
Filed
Jun 02, 2023
Examiner
HODGE, LAURA NICOLE
Art Unit
3792
Tech Center
3700 — Mechanical Engineering & Manufacturing
Assignee
AI Mnemonic Limited
OA Round
1 (Non-Final)
Grant Probability: 42% (Moderate)
OA Rounds: 1-2
To Grant: 3y 8m
With Interview: 86%

Examiner Intelligence

Career Allow Rate: 42% (40 granted / 95 resolved), -27.9% vs TC avg
Interview Lift: +43.7% (strong), based on resolved cases with vs. without interview
Typical Timeline: 3y 8m avg prosecution; 58 currently pending
Career History: 153 total applications across all art units
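The headline figures above reduce to simple ratios of the career counts. A quick sketch of the arithmetic, assuming the displayed percentages are derived from the counts shown (the analytics tool's exact formulas are not disclosed):

```python
# Rough check of the examiner-statistics arithmetic shown above.
# Assumption: percentages are derived from the displayed counts.
granted, resolved = 40, 95

allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")   # 42.1%, displayed as 42%

# "-27.9% vs TC avg" implies the Tech Center average is roughly:
tc_avg = allow_rate + 0.279
print(f"Implied TC average: {tc_avg:.1%}")      # 70.0%
```

The interview lift cannot be reconstructed the same way, since the with- and without-interview base counts are not shown.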

Statute-Specific Performance

§101: 24.0% (-16.0% vs TC avg)
§103: 32.3% (-7.7% vs TC avg)
§102: 11.7% (-28.3% vs TC avg)
§112: 27.1% (-12.9% vs TC avg)
Tech Center averages are estimates. Based on career data from 95 resolved cases.

Office Action

Rejections: §101, §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 6/5/23 and 9/9/24 are being considered by the examiner.

Claim Objections

Claims 1-2 and 8 are objected to because of the following informalities: the claims recite “multi-model,” however the specification discloses “multi-modal.” Appropriate correction is required.

Claim 6 is objected to because of the following informality: the 5th-to-last line of the claim recites “the e” and instead should recite --the--. Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 3-7 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement.
The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention. The following limitations are computer-implemented functional limitations because they recite ML prediction models:

- in claim 3, “a ML prediction model to predict the activity type of the participating activity from the representative pattern of the brainwave plot”;
- in claim 3, “a ML prediction model based on frequency analysis on the representative pattern of the brainwave plot to predict the attention level”;
- in claim 4, “a ML prediction model to predict the activity type of the participating activity from the motion artifact information”;
- in claim 4, “a first ML prediction model based on pulse frequency and heart rate variability analysis to predict the attention level from the PPG signal generated and received through only a single channel of the PPG sensors”;
- in claim 4, “a second ML prediction model based on functional near-infrared spectroscopy (fNIRS) analysis to predict the attention level from the PPG signal generated and received through multiple channels of the PPG sensors”;
- in claim 5, “ML-based objection detection using a trained neural network to detect objects in the image/video signal”;
- in claim 5, “a ML prediction model to predict the activity type of the participating activity from the selected-detected objects”;
- in claim 5, “a ML prediction model based on analysis of image characteristics, the selected-detected objects, and frame-to-frame changes to predict the attention level from the image/video signal”;
- in claim 6, “a ML prediction model to predict the activity type of the participating activity from the context, intents, and entities of the speech contents”;
- in claim 6, “a ML prediction model to predict the activity type of the participating activity from the extracted features of the audio signal”;
- in claim 6, “a ML prediction model to predict the attention level from the degree of relevance of the subject’s dialogue and the response speed of the subject”;
- in claim 6, “a ML prediction model to predict the attention level from the extracted features of the audio signal”; and
- in claim 7, “a ML prediction model to predict the activity type of the participating activity from the inertial measurement signal”.

¶22, ¶24, ¶28, ¶30, ¶33, ¶35, ¶38, ¶41, ¶48, ¶49, ¶51, and ¶57 of the specification fail to disclose how these machine learning algorithms are used to recognize a participating activity and compute an attention level of a subject from multi-modal signals. MPEP 2161.01(I) states the following:

“Claims may lack written description when the claims define the invention in functional language specifying a desired result but the specification does not sufficiently describe how the function is performed or the result is achieved. For software, this can occur when the algorithm or steps/procedure for performing the computer function are not explained at all or are not explained in sufficient detail (simply restating the function recited in the claim is not necessarily sufficient). In other words, the algorithm or steps/procedure taken to perform the function must be described with sufficient detail so that one of ordinary skill in the art would understand how the inventor intended the function to be performed. See MPEP §§ 2163.02 and 2181, subsection IV.”

Applicant has claimed black-box algorithms without any clear description of what is inside the boxes to determine activity type, determine attention level, and detect objects in the image/video signal.
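For context on the level of detail MPEP 2161.01(I) contemplates, a purely hypothetical example (not drawn from the application) of a concretely described "ML prediction model" could pin down fixed input features, a training rule, and how weights are determined, e.g.:

```python
# Hypothetical illustration only: the kind of algorithm-level disclosure the
# rejection says is missing. Feature names, training rule, and data are invented.
# Input vector, fixed order: [theta/beta band-power ratio, alpha power, blink rate]
import math

def train_logistic(samples, labels, lr=0.1, epochs=200):
    """Weights are determined by stochastic gradient descent on logistic loss."""
    w = [0.0] * (len(samples[0]) + 1)            # last entry is the bias term
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            z = w[-1] + sum(wi * xi for wi, xi in zip(w, x))
            p = 1.0 / (1.0 + math.exp(-z))       # predicted attention probability
            for i, xi in enumerate(x):
                w[i] += lr * (y - p) * xi        # gradient step per input feature
            w[-1] += lr * (y - p)
    return w

# Toy training set: (feature vector, attentive? 1/0); high theta/beta => inattentive
X = [[2.0, 0.3, 0.1], [0.5, 0.9, 0.6], [1.8, 0.4, 0.2], [0.4, 1.0, 0.7]]
y = [0, 1, 0, 1]
w = train_logistic(X, y)
z = w[-1] + sum(wi * xi for wi, xi in zip(w, X[1]))
print(1.0 / (1.0 + math.exp(-z)) > 0.5)  # classifies the second sample as attentive
```

The point is not this particular model, but that inputs, their order, the loss, and the weight-update procedure are each stated, which is what the questions below ask for.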
These are functional results, and the description lacks information: for instance, how are these models trained, what are the inputs to each model, what is the order of the inputs into each model, and how are the weights determined for each model? Therefore, the claims with the limitations of “ML prediction model” are rejected under 112(a) for failing to meet the written description requirement.

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-8 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

In claims 1, 2, and 8, the limitation of “the multi-modal signals” is unclear. Claim 1 only requires at least one data type for the multi-modal signals. In this scenario of one data type, how would the signals be multi-modal?

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-8 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception, specifically an abstract idea.
Step 1

The claimed invention in claims 1-8 is directed to statutory subject matter, as the claims recite a method for recognizing a participating activity and computing an attention level of a subject.

Step 2A, Prong One

Regarding claim 1, the recited steps are directed to a mental process of performing concepts in the human mind or by a human using pen and paper (see MPEP 2106.04(a)(2), subsection (III)). The limitations of “predict an activity type of a participating activity being performed by the subject using one or more of the multi-modal signals; and predict the subject’s attention level in performing the participating activity using one or more of the multi-modal signals” recite a process that, as drafted, covers performance in the human mind (including an observation, evaluation, judgment, or opinion) under the broadest reasonable interpretation. For example, these limitations are nothing more than a medical professional receiving printouts of one or more multi-modal signals and making a judgment on the activity type and attention level of the subject.

Step 2A, Prong Two

For claim 1, the judicial exception is not integrated into a practical application. In particular, claim 1 recites “one or more of one or more EEG electrodes, one or more PPG sensors, an optical sensor, an audio receiver, an inertial measurement unit (IMU), and a signal receiving and processing device.” The one or more EEG electrodes, one or more PPG sensors, optical sensor, audio receiver, and IMU amount to nothing more than pre-solution activity of data gathering. The signal receiving and processing device is recited at a high level of generality and amounts to nothing more than parts of a generic computer. Merely including instructions to implement an abstract idea on a computer does not integrate a judicial exception into a practical application.
Step 2B

The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of the one or more EEG electrodes, one or more PPG sensors, optical sensor, audio receiver, and IMU amount to nothing more than mere pre-solution activity of data gathering, which does not amount to an inventive concept. Moreover, these elements are recited at a high level of generality and are well-understood, routine, and conventional structures as evidenced by: US 20220134000 (¶23-electrodes of a conventional EEG); US 20170055860 (¶10-conventional PPG sensors); US 20180138329 (¶26-conventional optical sensor); US 20060247811 (¶19-conventional digital audio receiver); and US 20160054355 (¶2-conventional inertial measurement units (IMUs)). Further, the claims simply append well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception, e.g., a claim to an abstract idea requiring no more than a generic computer to perform generic computer functions that are well-understood, routine, and conventional activities previously known to the industry, as discussed in Alice Corp., 573 U.S. at 225, 110 USPQ2d at 1984 (see MPEP § 2106.05(d)).

Regarding dependent claims 2-8, these claims further define the limitations of claim 1 already indicated as being directed to the abstract idea.
Regarding claim 2, the limitations of “pre-processing the multi-modal signals before the executions of the activity recognition and the attention level computation, the pre-processing comprising: discarding one or more signal segments in the multi-modal signals having amplitudes below a minimum signal amplitude threshold or having continuous active durations shorter than a minimum signal active duration threshold; reducing AC electrical frequency interferences in the EEG signal and the PPG signal using one or more notch filters; discarding one or more of the PPG signal segments in the multi-modal signals generated and received when physical movement of the PPG sensor exceeds a maximum change of movement threshold; discarding one or more of the image/video signal segments in the multi-modal signals generated and received when physical movement on the optical sensor exceeds a maximum change of movement threshold; and filtering out background ambient noise of the audio signal” recite a process that, as drafted, covers performance in the human mind (including an observation, evaluation, judgment, or opinion) under the broadest reasonable interpretation. For example, these limitations are nothing more than a medical professional discarding one or more signal segments in the multi-modal signals and PPG signal segments based on a simple comparison for each and filtering the noise from the data on paper.

Regarding claim 3, Applicant includes multiple ML prediction models which are nothing more than the computer implementation/automation of an abstract mental process of analyzing the activity type from a brainwave plot and further analyzing the brainwave plot to predict the attention level.
Regarding claim 4, Applicant includes multiple ML prediction models which are nothing more than the computer implementation/automation of an abstract mental process of analyzing activity type from the activity data and artifact information, analyzing pulse frequency and HRV data to predict the attention level, and analyzing fNIRS data to predict the attention level.

Regarding claim 5, Applicant includes multiple ML prediction models which are nothing more than the computer implementation/automation of an abstract mental process of analyzing activity type from image/video data and analyzing attention level from image/video data.

Regarding claim 6, Applicant includes multiple ML prediction models which are nothing more than the computer implementation/automation of an abstract mental process of analyzing activity type from speech contents and audio data, as well as analyzing the attention level from audio data.

Regarding claim 7, Applicant includes a ML prediction model which is nothing more than the computer implementation/automation of an abstract mental process of analyzing the activity type from inertial measurement data and analyzing the activity type to evaluate the attention level.

Regarding claim 8, the limitations of “fusing all prediction results of activity recognitions from the multi-modal signals under decision fusion strategy; and fusing all prediction results of attention level computations from the multi-modal signals under decision fusion strategy” recite a process that, as drafted, covers performance in the human mind (including an observation, evaluation, judgment, or opinion) under the broadest reasonable interpretation. For example, these limitations are nothing more than a medical professional receiving printouts of the multi-modal signals, making judgments about activity recognitions and attention levels, combining the judgments about activity recognitions, and combining the judgments about attention levels.
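The "decision fusion strategy" the examiner characterizes as mentally performable can, in one common instance, be a confidence-weighted vote over per-modality predictions. A minimal sketch (my own illustration, not the application's disclosed method):

```python
# Minimal sketch of a decision-fusion strategy (illustrative only; the
# application's actual fusion rule is not described in this Office Action).
from collections import defaultdict

def fuse_decisions(predictions):
    """Confidence-weighted vote over per-modality (label, confidence) pairs."""
    scores = defaultdict(float)
    for label, confidence in predictions:
        scores[label] += confidence
    return max(scores, key=scores.get)

# Per-modality activity predictions: (predicted activity, confidence)
per_modality = [("reading", 0.9), ("writing", 0.4), ("reading", 0.7)]
print(fuse_decisions(per_modality))  # "reading" wins the weighted vote
```

The same function would fuse per-modality attention-level predictions if the labels were discretized attention levels.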
Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 5, and 8 are rejected under 35 U.S.C. 103 as being unpatentable over Siddharth (NPL “On Assessing Driver Awareness of Situational Criticalities: Multi-modal Bio-Sensing and Vision-Based Analysis, Evaluations, and Insights,” published in 2020) in view of Chua (US 20190092337, filed on 7/11/18).
Regarding claim 1, Siddharth teaches a method for recognizing a participating activity and computing an attention level of a subject from multi-modal signals, comprising: receiving the multi-modal signals comprising one or more of an electroencephalogram (EEG) signal generated and received through one or more EEG electrodes (page 4, ¶4-EEG electrodes) and a photoplethysmography (PPG) signal generated and received through one or more PPG sensors (page 6, ¶2-the PPG signal was recorded using an armband (Biovotion) that measures PPG); an activity recognition to predict an activity type of a participating activity being performed by the subject using one or more of the multi-modal signals (page 2, last ¶-test if the modalities with low-temporal resolution (but easily wearable), namely PPG and GSR, can work as well as EEG and vision modality for assessing driver’s attention; page 12, ¶2-annotated by two annotators for low/high driver attention); and an attention level computation to predict the subject’s attention level in performing the participating activity using one or more of the multi-modal signals (page 2, last ¶-test if (and when) the fusion of features from different sensor modalities boosts the classification performance over using each modality independently for attention).

While Siddharth recites signal processing (page 2, last ¶), Siddharth does not explicitly recite executing, by the signal receiving and processing device. Chua relates to vehicle-based operator monitoring systems, methods, and apparatuses; in particular, systems, methods, and apparatuses that capture information regarding the operator's physical and/or physiological characteristics, analyze the information, determine a level of operator fatigue or health state, and/or provide warnings based at least in part on the information (¶2). Chua teaches executing, by the signal receiving and processing device (¶90-the data is combined and time synchronized by the core processor 102).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Siddharth to include executing, by the signal receiving and processing device, as taught by Chua, in order to determine a level of operator fatigue or health state and/or provide warnings based at least in part on the information (Chua, ¶2).

Regarding claim 5, the combination of Siddharth and Chua teaches the method of claim 1, wherein the activity recognition comprises an image/video activity recognition (Siddharth, page 4, right col., 2nd-to-last ¶-extract the face region from the frontal body image of the person captured by the camera for each frame) and the attention level computation comprises an image/video attention level computation (Siddharth, Fig. 9 on page 7-single modality attention classification, face); wherein the image/video activity recognition comprising: performing one of feature-based object detection, attribute-based object detection, and ML-based objection detection using a trained neural network to detect objects in the image/video signal (Siddharth, page 5, ¶1-these face localized points are then used to calculate 30 different features based on the distances, such as between center of the eyebrow to the midpoint of the eye, between the midpoint of nose and corners of the lower lip, between the midpoints of two eyebrows, etc., and angles between such line segments); selecting the detected objects using an object detection confidence system (Siddharth, page 4, right col., 2nd-to-last ¶-Viola-Jones object detector with Haar-like features [44] to detect the most likely face candidate); and employing a ML prediction model to predict the activity type of the participating activity from the selected-detected objects (Siddharth, page 5, ¶1-map the variation in these features across a trial (which may directly correspond to driver’s attention and driving condition)); wherein the image/video attention level computation comprising: for static activity type, employing a ML prediction model based on analysis of image characteristics, the selected-detected objects, and frame-to-frame changes to predict the attention level from the image/video signal (Siddharth, page 5, left col., ¶1-code the facial expressions and map them to different emotional states [47]; our goal was to use face localized points similar to the ones used in FACS without identifying the facial expression such as anger, happiness, etc., since they are not highly relevant in the driving domain and short time intervals); and for dynamic activity type, comparing the image/video signal to an image scene model for the activity type of the participating activity to estimate the attention level (Siddharth, page 5, ¶1-map the variation in these features across a trial (which may directly correspond to driver’s attention and driving condition)).
Regarding claim 8, the combination of Siddharth and Chua teaches the method of claim 1, further comprising: fusing all prediction results of activity recognitions from the multi-modal signals under decision fusion strategy (Siddharth, page 8, ¶4-extract relevant features from the faces for driver attention and hazardous conditions detection; page 2, last ¶-test if (and when) the fusion of features from different sensor modalities boosts the classification performance over using each modality independently for attention and hazardous/non-hazardous event classification); and fusing all prediction results of attention level computations from the multi-modal signals under decision fusion strategy (Siddharth, page 2, last ¶-test if (and when) the fusion of features from different sensor modalities boosts the classification performance over using each modality independently for attention and hazardous/non-hazardous event classification).

Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Siddharth in view of Chua as applied to claim 1 above, and further in view of Adams (US 20160317097, filed on 12/17/15), Murray (NPL “Adaptive Filtering Methods for Identifying Cross-Frequency Couplings in Human EEG,” published in 2013), and Hübner (US 20210153756, filed on 5/17/18).

Regarding claim 2, the combination of Siddharth and Chua teaches the method of claim 1, further comprising: pre-processing the multi-modal signals before the executions of the activity recognition and the attention level computation (Siddharth, page 2, right col., Research Methods section-pre-process the data and extract features from each of the modalities used in this study).
However, the combination of Siddharth and Chua does not teach the pre-processing comprising: discarding one or more signal segments in the multi-modal signals having amplitudes below a minimum signal amplitude threshold or having continuous active durations shorter than a minimum signal active duration threshold; reducing AC electrical frequency interferences in the EEG signal and the PPG signal using one or more notch filters; discarding one or more of the PPG signal segments in the multi-modal signals generated and received when physical movement of the PPG sensor exceeds a maximum change of movement threshold; discarding one or more of the image/video signal segments in the multi-modal signals generated and received when physical movement on the optical sensor exceeds a maximum change of movement threshold; and filtering out background ambient noise of the audio signal.

Adams teaches reducing AC electrical frequency interferences in the PPG signal using one or more notch filters (¶70-the ALP filter 700 may be used as an adaptive notch filter for filtering the optical signal detected by an optical sensor, such as the sensor 104 described above, from artifacts caused by the motion, which motion is detected by an accelerometer, e.g. the accelerometer 110 described above); discarding one or more of the PPG signal segments in the multi-modal signals generated and received when physical movement of the PPG sensor exceeds a maximum change of movement threshold (¶124-the signal quality is very poor, or where the analog-to-digital converter (ADC) may be saturated, or any other condition that might indicate low confidence in the heart-rate estimate, where it might be desirable to freeze the heart-rate display or indicate an error; ¶39-discard data or freeze the heart-rate readout when the accelerometer 110 senses too much motion); and filtering out background ambient noise of the audio signal (¶91-while the examples herein are described with one or more input signals provided by one or more optical sensors, it is envisioned that the method can be used to filter the input signals generated by other types of sensors, including but not limited to…audio sensor). Adams relates to the field of digital signal processing, in particular to digital signal processing for tracking a heartbeat frequency in a noisy environment (¶2).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Siddharth to include reducing AC electrical frequency interferences in the PPG signal using one or more notch filters; discarding one or more of the PPG signal segments in the multi-modal signals generated and received when physical movement of the PPG sensor exceeds a maximum change of movement threshold; and filtering out background ambient noise of the audio signal, as taught by Adams, in order to obtain more accurate heart rate measurements (Adams, ¶5).
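The discard steps recited in claim 2 and mapped to Adams above reduce to per-segment threshold comparisons. A sketch of that logic, with threshold values and the segment representation being my own assumptions:

```python
# Sketch of claim 2's segment-discard pre-processing (illustrative;
# threshold values and the segment fields are assumed, not from the claim).
MIN_AMPLITUDE = 0.05   # minimum signal amplitude threshold
MIN_ACTIVE_S = 0.5     # minimum continuous active duration, seconds
MAX_MOTION = 1.5       # maximum change-of-movement threshold (g)

def keep_segment(seg):
    """Return True unless the segment fails any discard condition."""
    if seg["amplitude"] < MIN_AMPLITUDE:
        return False                       # amplitude too weak
    if seg["active_s"] < MIN_ACTIVE_S:
        return False                       # active duration too short
    if seg.get("motion", 0.0) > MAX_MOTION:
        return False                       # sensor moved too much (PPG/optical)
    return True

segments = [
    {"amplitude": 0.20, "active_s": 2.0, "motion": 0.2},
    {"amplitude": 0.01, "active_s": 2.0},                 # below amplitude threshold
    {"amplitude": 0.30, "active_s": 1.0, "motion": 3.0},  # excessive motion
]
kept = [s for s in segments if keep_segment(s)]
print(len(kept))  # 1
```

This is the "simple comparison for each" characterization from the §101 analysis, made executable.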
While the combination of Siddharth, Chua, and Adams teaches pre-processing the data for EEG-based feature extraction (Siddharth, page 2, right col., Research Methods and EEG-based Feature Extraction sections), the combination fails to teach the pre-processing comprising: discarding one or more signal segments in the multi-modal signals having amplitudes below a minimum signal amplitude threshold or having continuous active durations shorter than a minimum signal active duration threshold; and reducing AC electrical frequency interferences in the EEG signal using one or more notch filters.

Murray teaches the pre-processing comprising: discarding one or more signal segments in the multi-modal signals having amplitudes below a minimum signal amplitude threshold or having continuous active durations shorter than a minimum signal active duration threshold (page 3, left col., ¶1-a threshold of 680 mV for artifact rejection was used); and reducing AC electrical frequency interferences in the EEG signal using one or more notch filters (page 3, left col., ¶2-then, signals from these electrodes were re-sampled from 500 Hz to 250 Hz and the power line interference at 60 Hz was canceled with a narrow notch filter). Murray relates to adaptive frequency tracking, which appears to improve the measurement of cross-frequency couplings through precise extraction of neuronal oscillations (Abstract).
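Murray's 60 Hz power-line notch is a standard biquad. A minimal sketch, using the widely known RBJ "cookbook" notch coefficients, with the sampling rate (250 Hz, matching Murray's resampled rate) and Q factor being assumptions:

```python
# Minimal IIR notch-filter sketch for canceling 60 Hz power-line interference
# in an EEG-like signal. Q and the test signal are illustrative assumptions.
import math

def notch_coeffs(f0, fs, q):
    """RBJ-style biquad notch coefficients, normalized so a[0] == 1."""
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    a0 = 1 + alpha
    b = [1 / a0, -2 * math.cos(w0) / a0, 1 / a0]
    a = [1.0, -2 * math.cos(w0) / a0, (1 - alpha) / a0]
    return b, a

def filt(b, a, x):
    """Direct-form-I difference equation."""
    y = []
    for n, xn in enumerate(x):
        acc = b[0] * xn
        if n >= 1:
            acc += b[1] * x[n - 1] - a[1] * y[n - 1]
        if n >= 2:
            acc += b[2] * x[n - 2] - a[2] * y[n - 2]
        y.append(acc)
    return y

FS = 250
b, a = notch_coeffs(60, FS, q=5)
t = [n / FS for n in range(2 * FS)]
eeg = [math.sin(2 * math.pi * 10 * ti) for ti in t]            # 10 Hz "brain" rhythm
noisy = [e + math.sin(2 * math.pi * 60 * ti) for e, ti in zip(eeg, t)]
clean = filt(b, a, noisy)

# After the filter transient, the 60 Hz component is strongly attenuated
# while the 10 Hz rhythm passes nearly unchanged.
residual = sum((c - e) ** 2 for c, e in zip(clean[FS:], eeg[FS:])) / FS
print(residual < 0.05)  # True
```

Murray's actual implementation may differ; this only shows the kind of narrow notch the citation describes.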
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Siddharth to include the pre-processing comprising: discarding one or more signal segments in the multi-modal signals having amplitudes below a minimum signal amplitude threshold or having continuous active durations shorter than a minimum signal active duration threshold; and reducing AC electrical frequency interferences in the EEG signal using one or more notch filters, as taught by Murray, in order to obtain more reliable analysis (Murray, page 2, left col., ¶2).

While the combination of Siddharth, Chua, Adams, and Murray teaches collecting videos for the dataset (Siddharth, page 5, Dataset Description section), the combination fails to teach discarding one or more of the image/video signal segments in the multi-modal signals generated and received when physical movement on the optical sensor exceeds a maximum change of movement threshold.

Hübner teaches discarding one or more of the image/video signal segments in the multi-modal signals generated and received when physical movement on the optical sensor exceeds a maximum change of movement threshold (¶79-completion of the recording/transmitting can be indicated to the subject, for example, by an acoustic and/or optical signal emitted by an audio and/or video component of device 500; ¶82-discarding the pulse wave signal for a respective time period, if the reliability signal indicative of the reliability of the pulse wave signal for the respective time period is not within a predetermined range (or below a predetermined threshold value); the pulse wave signal being discarded would, thus, indicate that the pulse wave signal is not regarded as reliable and/or indicate that there is a high probability of the pulse wave signal containing artifacts or otherwise being inaccurate; ¶54-if the value of the acceleration exceeds the predetermined threshold value, a respective reliability value for the time point corresponding to the time stamp of the accelerometer data exceeding the predetermined value is set to a value associated with the status “unreliable” (e.g. a numerical value, such as “0”)). Hübner relates to reliable acquisition of photoplethysmographic data representative of vital signals of a subject; the processing includes determining whether a recorded pulse wave fulfills pre-determined quality requirements, and based on subsequent pulse waveform analysis, data pertaining to, for example, the heart rhythm, heart rate, respiratory rate, and/or blood pressure of a human subject can be determined and processed (¶1).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Siddharth to include discarding one or more of the image/video signal segments in the multi-modal signals generated and received when physical movement on the optical sensor exceeds a maximum change of movement threshold, as taught by Hübner, in order to filter out outliers or measurement artifacts (Hübner, ¶55).

Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Siddharth in view of Chua as applied to claim 1 above, and further in view of Newlon (WO 2020037332, filed on 10/1/19).
Regarding claim 3, the combination of Siddharth and Chua teaches the method of claim 1, wherein the activity recognition comprises an EEG activity recognition (Siddharth, page 2, right col., last ¶-cognitive processes pertaining to attention and mental load, such as while driving; processed EEG data; we employed two distinct and novel methods to extract EEG features that capture the interplay between various brain regions to map human cognition) and the attention level computation comprises an EEG attention level computation (Siddharth, page 8, left col., ¶1-EEG performs the best among the four modalities for driver attention classification).

However, the combination of Siddharth and Chua does not teach wherein the EEG activity recognition comprising: converting the EEG signal to a brainwave plot; identifying a representative pattern of the brainwave plot; and employing one of a trained neural network, a Support Vector Machine (SVM), a Random Forest classifier, and a ML prediction model to predict the activity type of the participating activity from the representative pattern of the brainwave plot; and wherein the EEG attention level computation comprising: employing a ML prediction model based on frequency analysis on the representative pattern of the brainwave plot to predict the attention level.
Newlon teaches wherein the EEG activity recognition comprising: converting the EEG signal to a brainwave plot (¶11-the brainwave signal may be indicative of an electrical activity of a brain of the learner, and may comprise an electroencephalography (EEG) signal); identifying a representative pattern of the brainwave plot (¶14-analyzing the at least one characteristic of the brainwave signal comprises analyzing one of a waveform, a frequency, a frequency distribution, an amplitude, and a periodicity of the brainwave signal); and employing one of a trained neural network, a Support Vector Machine (SVM), a Random Forest classifier, and a ML prediction model (¶677-the algorithm may be based on or driven by advanced machine learning techniques or artificial intelligence-based techniques) to predict the activity type of the participating activity from the representative pattern of the brainwave plot (¶79-analyzing one or more outputs of EEG algorithms that measure different cognitive states such as focus or relaxation, such that the neurofeedback training reinforces one or more of these states. The algorithm(s) to measure these states may be developed by generating machine learning based models of EEG signals that predict the likelihood that a user is in one of these states); and wherein the EEG attention level computation comprising: employing a ML prediction model based on frequency analysis on the representative pattern of the brainwave plot to predict the attention level (¶79-generating machine learning based models of EEG signals that predict the likelihood that a user is in one of these states; ¶78-analyzing one or more frequency band(s) of the brainwaves. For example, the lower frequency bands may be associated with relaxation and daydreaming, the middle frequency bands may be associated with focused thinking and problem solving, and the higher frequency bands may be indicative of anxiety, hyper vigilance, and agitation).
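The frequency-band analysis Newlon describes in ¶78 (lower bands for relaxation, middle bands for focused thinking, higher bands for agitation) can be illustrated with a minimal sketch. The band edges, the FFT-based power estimate, and the beta-to-(theta+alpha) focus ratio are common EEG conventions assumed here for illustration, not details from Newlon:

```python
import numpy as np

def band_powers(eeg, fs=256.0):
    """Estimate power in canonical EEG frequency bands from a 1-D
    signal via the FFT, and derive a simple focus score from the
    relative power in the middle (beta) band.
    """
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(eeg)) ** 2
    bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
    power = {name: psd[(freqs >= lo) & (freqs < hi)].sum()
             for name, (lo, hi) in bands.items()}
    # Heuristic: beta (focused thinking) relative to theta+alpha
    # (relaxation/daydreaming); higher => more "focused"
    focus_score = power["beta"] / (power["theta"] + power["alpha"] + 1e-12)
    return power, focus_score
```

A trained model of the kind Newlon contemplates would consume band-power features like these (possibly per channel and per time window) rather than a single fixed ratio.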
Newlon relates generally to a brain-machine interface, and more particularly, to neuro-feedback training systems and methods for a personalized learning and teaching experience using biometric data of a user (¶2). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Siddharth to include wherein the EEG activity recognition comprising: converting the EEG signal to a brainwave plot; identifying a representative pattern of the brainwave plot; and employing one of a trained neural network, a Support Vector Machine (SVM), a Random Forest classifier, and a ML prediction model to predict the activity type of the participating activity from the representative pattern of the brainwave plot; and wherein the EEG attention level computation comprising: employing a ML prediction model based on frequency analysis on the representative pattern of the brainwave plot to predict the attention level of Newlon in order to provide a personalized learning and teaching experience using biometric data of a user (Newlon, ¶2).

Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Siddharth in view of Chua as applied to claim 1 above, and further in view of Li (CN 111345800 published on 6/30/20) and Olivier (US 20170238875 filed on 8/6/15).

Regarding claim 4, the combination of Siddharth and Chua teaches the method of claim 1, wherein the activity recognition comprises a PPG activity recognition (Siddharth, page 4, left col., ¶2- heart-rate variability (HRV) has shown to be a good measure for classifying cognitive states such as emotional valence and stress, the PPG data so obtained was then scaled between 0 and 1 and then a peak-detection algorithm [38] was applied to find the inter-beat intervals (RR) for the calculation of HRV) and the attention level computation comprises a PPG attention level computation (Siddharth, page 7, Fig.
9-single modality classification performance, PPG); wherein the PPG activity recognition comprising: extracting motion artifact information from the PPG signal (Siddharth, page 4, ¶2-a moving-average filter with a window length of 0.25 seconds for filtering the noise in the PPG data). However, the combination of Siddharth and Chua does not teach employing a ML prediction model to predict the activity type of the participating activity from the motion artifact information; and wherein the PPG attention level computation comprising: employing a first ML prediction model based on pulse frequency and heart rate variability analysis to predict the attention level from the PPG signal generated and received through only a single channel of the PPG sensors; or employing a second ML prediction model based on functional near-infrared spectroscopy (fNIRS) analysis to predict the attention level from the PPG signal generated and received through multiple channels of the PPG sensors. Li teaches wherein the PPG attention level computation comprising: employing a first ML prediction model based on pulse frequency and heart rate variability analysis to predict the attention level from the PPG signal generated and received through only a single channel of the PPG sensors (page 4, ¶3-use the input feature matrix and output feature matrix of multiple PPG signal sample sequences to build a sample set of random forest decision tree; build a random forest decision tree model, the input of the random forest decision tree model is the input feature matrix, and the output of the random forest decision tree model To predict the value of attention, use the sample set for machine learning to obtain a trained decision tree model; page 2, ¶10-preprocess the PPG signal sequence to obtain multiple PPG signal subsequences, use the time domain, frequency, and nonlinear characteristics of the PPG signal subsequence to construct the feature vector of the PPG signal subsequence; page 5, ¶5-the current 
ECG signal measurement attention is mainly based on heart rate variability (HRV), which considers the changes in the heartbeat cycle, while the PPG signal is formed by the heartbeat (vibration) propagating along the arteries and blood flow to the outer periphery. When the blood pressure is relatively normal, the PPG fluctuations and ECG fluctuations have different waveforms, but the frequencies are close. Therefore, the time domain characteristics of the PPG signal can also collect such characteristics); or employing a second ML prediction model based on functional near-infrared spectroscopy (fNIRS) analysis to predict the attention level from the PPG signal generated and received through multiple channels of the PPG sensors.

Li relates to the field of learning attention detection, and specifically relates to a learning attention detection method and system in a MOOC environment (page 1, ¶2). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Siddharth to include wherein the PPG attention level computation comprising: employing a first ML prediction model based on pulse frequency and heart rate variability analysis to predict the attention level from the PPG signal generated and received through only a single channel of the PPG sensors; or employing a second ML prediction model based on functional near-infrared spectroscopy (fNIRS) analysis to predict the attention level from the PPG signal generated and received through multiple channels of the PPG sensors of Li in order to reflect the change of attention through the change of similar continuous pulse period intervals (Li, page 3, ¶6).
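The single-channel pipeline Li describes (and that Siddharth also applies: scale the PPG to [0, 1], detect peaks, form inter-beat intervals, derive HRV features for a machine-learned attention model) can be sketched as follows. The specific feature set (mean RR, SDNN, RMSSD) and all parameter values are illustrative assumptions, not taken from either reference:

```python
import numpy as np
from scipy.signal import find_peaks

def hrv_features(ppg, fs=100.0):
    """Extract simple time-domain HRV features from a single-channel
    PPG signal: rescale to [0, 1], detect pulse peaks, form inter-beat
    (RR) intervals, and compute mean RR, SDNN, and RMSSD. In Li's
    scheme, a feature vector like this would be fed to a trained
    random-forest model whose output is the predicted attention value.
    """
    ppg = (ppg - ppg.min()) / (ppg.max() - ppg.min())   # scale to [0, 1]
    # Require peaks above mid-amplitude and at least 0.4 s apart
    peaks, _ = find_peaks(ppg, height=0.5, distance=int(0.4 * fs))
    rr = np.diff(peaks) / fs                            # RR intervals (s)
    return {
        "mean_rr": rr.mean(),
        "sdnn": rr.std(),                               # overall variability
        "rmssd": np.sqrt(np.mean(np.diff(rr) ** 2)),    # beat-to-beat variability
    }
```

Frequency-domain and nonlinear features of the RR series, which Li also uses, would be appended to this dictionary before training the random-forest model.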
While the combination of Siddharth, Chua, and Li teaches a PPG activity recognition (Siddharth, page 4, left col., ¶2- heart-rate variability (HRV) has shown to be a good measure for classifying cognitive states such as emotional valence and stress, the PPG data so obtained was then scaled between 0 and 1 and then a peak-detection algorithm [38] was applied to find the inter-beat intervals (RR) for the calculation of HRV), the combination fails to teach employing a ML prediction model to predict the activity type of the participating activity from the motion artifact information.

Olivier teaches employing a ML prediction model to predict the activity type of the participating activity from the motion artifact information (¶14-a model that predicts HR changes based on an inferred activity level (typically from an accelerometer channel) to predict a likely HR trajectory under conditions where the HR signal can not be accurately separated from the motion artifact signal, allowing for a smooth crossing of the predicted HR and motion frequencies during exercise). Olivier relates to the field of non-invasive monitoring of physiological parameters. More specifically, a system and method is introduced by which the accuracy of a heart rate prediction from sensor data can be improved under conditions where movement distorts the signal (¶1). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Siddharth to include employing a ML prediction model to predict the activity type of the participating activity from the motion artifact information of Olivier in order to provide more accurate heart rate predictions and to infer the physiological load for different exercise or rest states (Olivier, ¶43).

Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Siddharth in view of Chua as applied to claim 1 above, and further in view of Bi (CN 112869743 published 6/1/21).

Regarding claim 7, the combination of Siddharth and Chua teaches the method of claim 1, wherein the activity recognition comprises an inertial measurement activity recognition (Chua, ¶47- motion tracking of arm movements is also used as indicators of operator activity). However, the combination of Siddharth and Chua does not teach the attention level computation comprises an inertial measurement attention level computation; wherein the inertial measurement activity recognition comprising: employing a ML prediction model to predict the activity type of the participating activity from the inertial measurement signal; wherein the inertial measurement attention level computation comprising: comparing the inertial measurement signal to a movement model for the activity type of the participating activity to estimate the attention level.

Bi teaches the attention level computation comprises an inertial measurement attention level computation (page 2, ¶1-motion intent analysis models in the two attention states to predict whether there is motion intent in the two attention states. Recognizing the state of attention during the exercise task through the EEG signals can obtain real-time feedback of the neurological attention state); wherein the inertial measurement activity recognition comprising: employing a ML prediction model to predict the activity type of the participating activity from the inertial measurement signal (page 9, ¶3-a new adaptive system model is proposed, which firstly judges the current concentration state of the person before estimating the initial intention of the movement); wherein the inertial measurement attention level computation comprising: comparing the inertial measurement signal to a movement model for the activity type of the participating activity to estimate the attention level (page 2, ¶4-S1.
Recognize the attention state during the motor task, and judge whether there is cognitive distraction in the attention state). Bi relates to the technical field of neuroscience, and particularly relates to a method for neural analysis of the initiation intention of a movement considering cognitive distraction (page 1, ¶2). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Siddharth to include the attention level computation comprises an inertial measurement attention level computation; wherein the inertial measurement activity recognition comprising: employing a ML prediction model to predict the activity type of the participating activity from the inertial measurement signal; wherein the inertial measurement attention level computation comprising: comparing the inertial measurement signal to a movement model for the activity type of the participating activity to estimate the attention level of Bi in order to help understand the neural activity of the human body during exercise (Bi, page 7, ¶4).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to LAURA HODGE whose telephone number is (571) 272-7101. The examiner can normally be reached M-F: 8:00 am-5:00 pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, UNSU JUNG, can be reached at (571) 272-8506. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center.
Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/L.N.H./Examiner, Art Unit 3792
/AMANDA L STEINBERG/Examiner, Art Unit 3792

Prosecution Timeline

Jun 02, 2023: Application Filed
Dec 22, 2025: Non-Final Rejection (§101, §103, §112)
Feb 17, 2026: Interview Requested
Mar 09, 2026: Applicant Interview (Telephonic)
Mar 11, 2026: Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12599336: Wearable Apparatus For Continuous Monitoring Of Physiological Parameters (2y 5m to grant; granted Apr 14, 2026)
Patent 12594422: SYSTEMS AND DEVICES FOR TREATING EQUILIBRIUM DISORDERS AND IMPROVING GAIT AND BALANCE (2y 5m to grant; granted Apr 07, 2026)
Patent 12594414: HEART SUPPORT AND MASSAGE MACHINE (2y 5m to grant; granted Apr 07, 2026)
Patent 12582822: INTRA-ORAL APPLIANCES AND SYSTEMS (2y 5m to grant; granted Mar 24, 2026)
Patent 12576263: DEVICE FOR ATTACHING A HEART SUPPORT SYSTEM TO AN INSERTION DEVICE, AND METHOD FOR PRODUCING SAME (2y 5m to grant; granted Mar 17, 2026)
Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 42%
With Interview (+43.7%): 86%
Median Time to Grant: 3y 8m
PTA Risk: Low

Based on 95 resolved cases by this examiner. Grant probability derived from career allow rate.
