Prosecution Insights
Last updated: April 19, 2026
Application No. 17/821,328

MOVEMENT HEALTH TRACKER USING A WEARABLE DEVICE

Final Rejection: §101, §103
Filed: Aug 22, 2022
Examiner: TC 3600
Art Unit: 3600
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Google LLC
OA Round: 4 (Final)
Grant Probability: 4% (At Risk)
OA Rounds: 5-6
To Grant: 1y 1m
With Interview: 5%

Examiner Intelligence

Career Allow Rate: 4% (grants only 4% of cases; 5 granted / 142 resolved; -48.5% vs TC avg)
Interview Lift: +1.5% (minimal lift; based on resolved cases with vs. without interview)
Avg Prosecution: 1y 1m (fast prosecutor)
Total Applications: 348 career (206 currently pending, across all art units)
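The headline figures above can be reproduced from the card's own numbers. Below is a minimal sketch, assuming only the values shown (5 grants out of 142 resolved, a +1.5 point interview lift, and the -48.5 point gap to the Tech Center average); all names are illustrative and not part of any analytics API.

```python
# Illustrative arithmetic behind the Examiner Intelligence card.
# Inputs are the figures displayed above; variable names are hypothetical.
granted = 5
resolved = 142

career_allow_rate = granted / resolved          # ~0.035, displays as 4% after rounding
interview_lift = 0.015                          # +1.5 points with an interview
gap_vs_tc = -0.485                              # -48.5 points vs Tech Center average

implied_tc_avg = career_allow_rate - gap_vs_tc       # implied TC allowance rate
with_interview = career_allow_rate + interview_lift  # matches the 5% header card

print(f"Career allow rate:  {career_allow_rate:.1%}")  # 3.5%
print(f"Implied TC average: {implied_tc_avg:.1%}")     # 52.0%
print(f"With interview:     {with_interview:.1%}")     # 5.0%
```

Note that 5/142 ≈ 3.5% rounds up to the displayed 4%, and adding the +1.5 point lift reproduces the 5% "With Interview" figure in the header.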

Statute-Specific Performance

Statute   Rate    vs TC Avg
§101      36.1%   -3.9%
§103      34.6%   -5.4%
§102      13.9%   -26.1%
§112      10.9%   -29.1%

Tech Center average is an estimate • Based on career data from 142 resolved cases
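Each "vs TC avg" delta implies the Tech Center baseline directly (examiner rate minus delta). A small sketch, again using only the figures in the table above:

```python
# Recover the implied Tech Center baseline for each statute:
# baseline = examiner rate - displayed delta.
rates  = {"§101": 0.361, "§103": 0.346, "§102": 0.139, "§112": 0.109}
deltas = {"§101": -0.039, "§103": -0.054, "§102": -0.261, "§112": -0.291}

for statute, rate in rates.items():
    baseline = rate - deltas[statute]
    print(f"{statute}: examiner {rate:.1%}, implied TC average {baseline:.1%}")
```

All four rows back out the same 40.0% baseline, which is consistent with the legend's single Tech Center average estimate.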

Office Action

§101, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application is being examined under the pre-AIA first to invent provisions.

Status of Application

This communication is a Final Office Action in response to the Amendments, Arguments, and Remarks filed on the 12th day of November, 2025. Claims 1-8, 10-11, 14-25, and 27-30 are currently pending. Claims 9, 12-13, and 26 are cancelled. No claims are allowed.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-8, 10-11, 14-25, and 27-30 are rejected under 35 U.S.C. § 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without integration into a practical application and without significantly more.

Under MPEP 2106, when considering subject matter eligibility under 35 U.S.C. § 101, it must be determined whether the claim is directed to one of the four statutory categories of invention, i.e., process, machine, manufacture, or composition of matter (Step 1). If the claim does fall within one of the statutory categories, it must then be determined whether the claim is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) (Step 2A Prong 1), and if so, it must additionally be determined whether the judicial exception is integrated into a practical application (Step 2A Prong 2). If an abstract idea is present in the claim without integration into a practical application, any element or combination of elements in the claim must be sufficient to ensure that the claim amounts to significantly more than the abstract idea itself (Step 2B).

In the instant case, claims 1-8, 10-11, 14-25, and 27-30 are directed to a system, a method, and non-transitory computer-readable media. Thus, each of the claims falls within one of the four statutory categories (Step 1). However, the claims also fall within the judicial exception of an abstract idea (Step 2A). While claims 1, 16, and 27 are directed to different categories, their language and scope are substantially the same, and they are addressed together below.

Under Step 2A Prong 1, the test is to identify whether the claims are “directed to” a judicial exception. Examiner notes that the claimed invention is directed to an abstract idea in that the instant application is directed to certain methods of organizing human activity, specifically managing personal behavior and/or interactions between people (see MPEP 2106.04(a)(2)(II)); mental processes (see MPEP 2106.04(a)(2)(III)); and mathematical concepts, such as mathematical equations (see MPEP 2106.04(a)(2)(I)).
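Read as a decision procedure, the MPEP 2106 framework set out above reduces to the sketch below. This is only a schematic of the recited steps; each boolean stands in for a legal determination, not anything computable from claim text.

```python
# Schematic of the MPEP 2106 eligibility flow recited in this Office Action.
# Each boolean input represents a legal conclusion, supplied hypothetically.

def eligible_under_101(statutory_category: bool,
                       recites_judicial_exception: bool,
                       integrated_into_practical_application: bool,
                       significantly_more: bool) -> bool:
    if not statutory_category:                  # Step 1: process/machine/manufacture/composition
        return False
    if not recites_judicial_exception:          # Step 2A Prong 1: abstract idea, etc.
        return True
    if integrated_into_practical_application:   # Step 2A Prong 2
        return True
    return significantly_more                   # Step 2B: inventive concept

# This rejection effectively answers: True, True, False, False -> ineligible.
print(eligible_under_101(True, True, False, False))  # False
```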
Examiner notes that claims 1-8, 10-11, 14-25, and 27-30 recite a method, a device, and a non-transitory computer-readable medium comprising: detecting, by one or more sensors of a wearable device worn by a user, user movements of the user, the user movements including one or more gestures; translating, by the one or more sensors, the user movements including the one or more gestures into signal data; separating and inputting the signal data into separate models; processing the signal data by the separate models using separate channels to process the signal data and combining an output of the separate channels to identify a feature set, the feature set having a time component and an amplitude component; converting the feature set, by a feature analytics engine processing the time component and the amplitude component, into a score that corresponds to a condition of the user; determining the condition of the user by the separate models and the feature analytics engine from the user movements detected by the wearable device; and transmitting at least the score or the condition to a computing device for remote monitoring of the score or the condition, which are concepts that can be performed mentally and are a product of human mental work. The limitations suggest receiving information in the form of movements, applying algorithms to analyze the information, and generating the result of the analysis in the form of a medical condition, and the steps involve human judgments, observations, and evaluations that can be practically or reasonably performed in the human mind; accordingly, the claims recite an abstract idea consistent with the “mental processes” grouping set forth in MPEP 2106.04(a)(2)(III).

Alternatively, Examiner notes that claims 1-8, 10-11, 14-25, and 27-30 recite a method, a device, and a non-transitory computer-readable medium comprising: detecting, by one or more sensors of a wearable device worn by a user, user movements of the user, the user movements including one or more hand gestures; translating, by the one or more sensors, the user movements including the one or more hand gestures into signal data; separating and inputting the signal data into separate models, the separate models trained based on user movement data from a population of users using a training algorithm; processing the signal data by the separate models using separate channels to process the signal data and combining an output of the separate channels to identify a feature set, the feature set having a time component and an amplitude component; converting the feature set, by a feature analytics engine processing the time component and the amplitude component, into a score that corresponds to a condition of the user; and determining the condition of the user by the separate models and the feature analytics engine from the user movements detected by the wearable device, which is similar to the abstract idea identified in grouping II of MPEP 2106.04(a)(2) in that the claims recite certain methods of organizing human activity, such as managing personal behavior based on tracking movements of the user. These limitations are merely further embellishments of the abstract idea and do not further limit the claimed invention so as to render the claims patent-eligible subject matter. The limitations, substantially comprising the body of the claim, recite processes found in standard practice in medical care, in which a user is tested for various diseases using equipment meant for measuring specific medical features of users.
This is common practice: a doctor indicates a potential sign of disease, a test is administered, and the result of the test is presented to the tester or doctor. Because the limitations above closely follow the steps standard in managing personal behavior, such as tracking movement of a user to determine a medical condition, and the steps of the claims involve organizing human activity, the claims recite an abstract idea consistent with the “organizing human activity” grouping set forth in MPEP 2106.04(a)(2)(II). The conclusion that the claims recite an abstract idea within the groupings of MPEP 2106.04(a)(2) remains grounded in the broadest reasonable interpretation consistent with the description of the invention in the specification; for example, the specification describes a “movement health tracker using a wearable device” (App. Spec. ¶ 1). Accordingly, the Examiner submits that claims 1-8, 10-11, 14-25, and 27-30 recite an abstract idea based on the language identified in claims 1, 16, and 27, and the abstract ideas previously identified based on that language remain consistent with the groupings of Step 2A Prong 1 (MPEP 2106.04(a)(2)).

If the claims are directed toward the judicial exception of an abstract idea, it must then be determined under Step 2A Prong 2 whether the judicial exception is integrated into a practical application. Examiner notes that the considerations under Step 2A Prong 2 comprise most of the considerations previously evaluated in the context of Step 2B. The Examiner submits that the considerations that previously determined the claims do not recite “significantly more” at Step 2B would be evaluated the same under Step 2A Prong 2 and result in the determination that the claims do not integrate the abstract idea into a practical application. The instant application fails to integrate the judicial exception into a practical application because it merely recites the words “apply it” (or an equivalent) with the judicial exception, or merely includes instructions to implement an abstract idea. The instant application is directed to a method instructing the reader to implement the identified method of organizing human activity of managing personal behavior (i.e., a mental process that a neurologist should follow when testing a patient for nervous system malfunctions, In re Meyer, 688 F.2d 789, 791-93, 215 USPQ 193, 194-96 (CCPA 1982)) on generically claimed computer structure. For instance, the additional elements, or combination of elements, other than the abstract idea itself include elements such as a “processor” or “memory” recited at a high level of generality. These elements do not themselves amount to an improvement to the interface or computer, to a technology, or to another technical field. This is consistent with Applicant’s disclosure, which states that the computing device comprises “a computing system that includes a back end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front end component (e.g., a client computer having a graphical user interface or a Web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back end, middleware, or front end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network).
Examples of communication networks include a local area network (“LAN”), a wide area network (“WAN”), and the Internet”. (App. Spec. ¶ 105; see also ¶ 102: “Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, specially designed ASICs (application specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various implementations can include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, coupled to receive data and instructions from, and to transmit data and instructions to, a storage system, at least one input device, and at least one output device.”).

Accordingly, the claimed “system,” read in light of the specification, employs any of a wide range of possible devices comprising a number of components that are “well-known” and included in an indiscriminate “computer”, “processor”, “remote computing device”, “sensor”, “database”, or “memory” (e.g., processing device, modules). Thus, the claimed structure amounts to appending generic computer elements to the abstract idea comprising the body of the claim. The computing elements are involved only at a general, high level and have no particular role within any of the functions other than to make the method a computer-implemented method using a generically claimed “computer”, “processor”, “sensor”, “database”, “remote computing device”, and “memory”; even basic, generic recitations that imply use of the computer, such as storing information via servers, would add little if anything to the abstract idea. Similarly, reciting the abstract idea as software functions used to program a generic computer is not significant or meaningful: generic computers are programmed with software to perform various functions every day. A programmed generic computer is not a particular machine and by itself does not amount to an inventive concept because, as discussed in MPEP 2106.05(a), adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, as discussed in Alice, 134 S. Ct. at 2360, 110 USPQ2d at 1984 (see MPEP § 2106.05(f)), is not enough to integrate the exception into a practical application.

Further, it is not relevant that a human may perform a task differently from a computer; it is necessarily true that a human might apply an abstract idea in a different manner from a computer. What matters is the application: “stating an abstract idea while adding the words ‘apply it with a computer’” will not render an abstract idea non-abstract. Tranxition v. Lenovo, Nos. 2015-1907, -1941, -1958 (Fed. Cir. Nov. 16, 2016), slip op. at 7-8. Here, the instructions entirely comprise the abstract idea, leaving little if any aspects of the claim for further consideration under Step 2A Prong 2. In short, the role of the generic computing elements recited in claims 1-8, 10-11, 14-25, and 27-30 is the same as the role of the computer in the claims considered by the Supreme Court in Alice, and each claim as a whole amounts merely to an instruction to apply the abstract idea on a generic computerized system. Therefore, the claims fail to integrate the abstract idea into a practical application (MPEP 2106.04(d)). Under MPEP 2106.05, this supports the conclusion that the claims are directed to an abstract idea, and the analysis proceeds to Step 2B.
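Before turning to Step 2B, a concrete picture of what the recited limitations describe may help: separate models over separate channels, a combined feature set with time and amplitude components, and a feature analytics engine converting that set into a score. The sketch below is a hypothetical illustration only; every name, formula, and threshold is an assumption, not the applicant's implementation or any cited reference's code.

```python
# Hypothetical sketch of the claim-1 pipeline as recited in this action.
# Wearable sensor signal data is split across separate models/channels, the
# channel outputs are combined into a (time, amplitude) feature set, and a
# feature analytics engine converts that feature set into a condition score.
from dataclasses import dataclass

@dataclass
class FeatureSet:
    time_component: float       # e.g., movement duration statistics (assumed)
    amplitude_component: float  # e.g., movement magnitude statistics (assumed)

class ChannelModel:
    """Stand-in for one trained per-channel movement model."""
    def __init__(self, weight: float):
        self.weight = weight

    def process(self, signal: list[float]) -> float:
        # Placeholder per-channel output: weighted mean absolute amplitude.
        return self.weight * sum(abs(s) for s in signal) / len(signal)

def combine_channels(outputs: list[float], duration_s: float) -> FeatureSet:
    # Combine the separate channel outputs into one feature set.
    return FeatureSet(time_component=duration_s,
                      amplitude_component=sum(outputs) / len(outputs))

def feature_analytics_engine(fs: FeatureSet) -> float:
    # Convert the time and amplitude components into a score (invented formula).
    return fs.amplitude_component / max(fs.time_component, 1e-9)

signal_data = [[0.1, -0.4, 0.3], [0.2, 0.5, -0.1]]   # two separate channels
models = [ChannelModel(1.0), ChannelModel(0.8)]
outputs = [m.process(ch) for m, ch in zip(models, signal_data)]
score = feature_analytics_engine(combine_channels(outputs, duration_s=2.0))
condition = "tremor-like movement" if score > 0.1 else "typical movement"  # assumed threshold
print(score, condition)  # the score/condition would then be transmitted for remote monitoring
```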
Many considerations in Step 2A need not be reevaluated in Step 2B because the outcome will be the same. Here, on the basis of the additional elements other than the abstract idea, considered individually and in combination as discussed above, the Examiner respectfully submits that claims 1-8, 10-11, 14-25, and 27-30 do not contain any additional elements that, individually or as an ordered combination, amount to an inventive concept, and the claims are ineligible.

The dependent claims do not recite anything found to transform the abstract idea into a patent-eligible invention; they merely recite further embellishments of the abstract idea and do not claim anything that amounts to significantly more than the abstract idea itself. Claims 2-8, 10-11, 14-15, 17-25, and 28-30 are directed to further embellishments of the abstract idea in that they are directed to aspects of the central theme of the abstract idea identified above, as well as to data processing and transmission, which the courts have recognized as insignificant extra-solution activity (see at least MPEP 2106.05(g)). Data processing is one of the most basic and fundamental uses of a generic computing device and is not sufficient to amount to significantly more. The Examiner takes the position that simply appending such a well-understood data processing step to the judicial exception does not amount to significantly more than the abstract idea. Therefore, since there are no limitations in the claims that transform the abstract idea into a patent-eligible application such that the claims amount to significantly more than the abstract idea itself, the claims are rejected under 35 U.S.C. § 101 as being directed to non-statutory subject matter. See MPEP 2106.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-4, 7-8, 10-11, 14-19, 22-25, and 27-30 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Application Publication No. 20230190201 to Singleton et al. (hereinafter Singleton) in view of U.S. Patent Application Publication No. 20190209022 to Sobol et al. (hereinafter Sobol), and further in view of U.S. Patent Application Publication No. 20200387245 to Chen.
Referring to Claims 1, 16, and 27 (substantially similar in scope and language), Singleton teaches a method, a wearable device, and a non-transitory storage medium comprising code that, when executed by processing circuitry, causes the processing circuitry to perform a method (see at least Singleton: ¶ 26), comprising: detecting, by one or more sensors of a wearable device worn by a user, user movements of the user (see at least Singleton: ¶ 26: “In some implementations, the rings 104 (e.g., wearable devices 104) of the system 100 may be configured to collect physiological data from the respective users 102 based on arterial blood flow within the user's finger. In particular, a ring 104 may utilize one or more LEDs (e.g., red LEDs, green LEDs) that emit light on the palm-side of a user's finger to collect physiological data based on arterial blood flow within the user's finger. In some implementations, the ring 104 may acquire the physiological data using a combination of both green and red LEDs. The physiological data may include any physiological data known in the art including, but not limited to, temperature data, accelerometer data (e.g., movement/motion data), heart rate data, HRV data, blood oxygen level data, or any combination thereof.”; see also Singleton: ¶ 40: “System 200 further includes a user device 106 (e.g., a smartphone) in communication with the ring 104. For example, the ring 104 may be in wireless and/or wired communication with the user device 106. In some implementations, the ring 104 may send measured and processed data (e.g., temperature data, photoplethysmography (PPG) data, motion/accelerometer data, ring input data, and the like) to the user device 106. The user device 106 may also send data to the ring 104, such as ring 104 firmware/configuration updates. The user device 106 may process data. In some implementations, the user device 106 may transmit data to the server 110 for processing and/or storage.”; see also Singleton: ¶ 48: “device electronics, battery 210, and substrates may be arranged in the ring 104 in a variety of ways. In some implementations, one substrate that includes device electronics may be mounted along the bottom of the ring 104 (e.g., the bottom half), such that the sensors (e.g., PPG system 235, temperature sensors 240, motion sensors 245, and other sensors) interface with the underside of the user's finger. In these implementations, the battery 210 may be included along the top portion of the ring 104 (e.g., on another substrate).”; see also Singleton: ¶ 74-77 “The processing module 230-a may sample the motion signals at a sampling rate (e.g., 50 Hz) and determine the motion of the ring 104 based on the sampled motion signals. For example, the processing module 230-a may sample acceleration signals to determine acceleration of the ring 104. As another example, the processing module 230-a may sample a gyro signal to determine angular motion. In some implementations, the processing module 230-a may store motion data in memory 215. Motion data may include sampled motion data as well as motion data that is calculated based on the sampled motion signals (e.g., acceleration and angular values)…The ring 104 may store a variety of data described herein. For example, the ring 104 may store temperature data, such as raw sampled temperature data and calculated temperature data (e.g., average temperatures).
As another example, the ring 104 may store PPG signal data, such as pulse waveforms and data calculated based on the pulse waveforms (e.g., heart rate values, IBI values, HRV values, and respiratory rate values). The ring 104 may also store motion data, such as sampled motion data that indicates linear and angular motion.”; see also Singleton: ¶ 81 “The physiological measurements may be taken continuously throughout the day and/or night. In some implementations, the physiological measurements may be taken during 104 portions of the day and/or portions of the night. In some implementations, the physiological measurements may be taken in response to determining that the user is in a specific state, such as an active state, resting state, and/or a sleeping state. For example, the ring 104 can make physiological measurements in a resting/sleep state in order to acquire cleaner physiological signals. In one example, the ring 104 or other device/system may detect when a user is resting and/or sleeping and acquire physiological parameters (e.g., temperature) for that detected state. The devices/systems may use the resting/sleep physiological data and/or other data when the user is in other states in order to implement the techniques of the present disclosure.”; see also Singleton: ¶ 16 “e.g., wearable devices worn on both the left and right hands”; see also Singleton: ¶ 24 “a user 102 may have a ring (e.g., wearable device 104) that measures physiological parameters”; see also Singleton: ¶ 77 “Example derived values for motion data may include, but are not limited to, motion count values, regularity values, intensity values, metabolic equivalence of task values (METs), and orientation values. Motion counts, regularity values, intensity values, and METs may indicate an amount of user motion (e.g., velocity/acceleration) over time. Orientation values may indicate how the ring 104 is oriented on the user's finger and if the ring 104 is worn on the left hand or right hand”; see also Singleton: ¶ 99 “collecting physiological data via both wearable devices 104-d and 104-e (e.g., wearable ring devices 104 on each hand) may provide more insight into such medical conditions, that may lead to earlier diagnosis and treatment”); Examiner notes that the system of Singleton measures the movements of hands and other physiological data collected using sensors (see above) but fails to explicitly state the limitation “the user movements including one or more hand gestures.” However, Chen, which talks about a method and system for monitoring medical conditions of users, teaches that it is known to monitor specifically hand gestures when monitoring physiological and medical information related to users (see at least Chen: ¶ 25-26 “detecting trackpad user input gestures, a dedicated virtual keyboard region that may be used for detecting keyboard user input gestures, and a hybrid region that may be used for detecting either trackpad user input gestures or keyboard user input gestures depending on whether a thumb or a finger is determined to have provided the user input gestures… Electronic device 100 may be any portable, mobile, hand-held, or miniature electronic device that may be configured to handle user input gestures on an extended trackpad wherever a user travels”; see also Chen: ¶ 32 “a touch sensor assembly may be used to detect touch inputs (e.g., gestures, multi-touch inputs, taps, etc.)”; see also Chen: ¶ 37 “user input sensor states or events or gestures (e.g., sensor state data 522 of FIG.
5) that may be used to control or manipulate at least one functionality of device 100 (e.g., a performance or mode of device 100 that may be altered in a particular one of various ways (e.g., particular adjustments may be made by an output assembly and/or the like))”; see also Chen: ¶ 40, 43, 53 “touch or trackpad inputs, such as clicks, taps, gestures (e.g., swiping, pinching, etc.), and multi-touch inputs, may be detected on any portion of top case 132, including, in some embodiments, within keyboard region 134. Moreover, even where keyboard region 134 includes mechanical key mechanisms, touch-input region 136 may detect touch inputs (e.g., gestures) that are applied to the keycaps and not to the top case 132 directly.”; see also Chen: ¶ 62 “Touch and/or force sensor assembly 140 may include various touch and/or force sensing components, such as capacitive sensing elements, resistive sensing elements, and/or the like. Touch and/or force sensor assembly 140 may be configured to sense inputs applied to top case 132, and may sense selections (e.g., presses) of keys of mechanical keyboard 135, selections of affordances on any virtual key region of keyboard region 134, and/or touch inputs (e.g., clicks, taps, gestures, multi-touch inputs, etc.) applied to other areas of top case 132 (e.g., to a touch-input region or trackpad input region).”; see also Chen: ¶ 68 “the sensor assembly 140 can detect inputs in the keyboard region (e.g., key presses, gestures on or over the keys, etc.) as well as outside the keyboard region (e.g., clicks, taps, gestures, and other touch inputs applied to a palm rest region or any other touch or force sensitive region).”; see also Chen: ¶ 76-77 “A device (e.g., device 100) may use any suitable input mechanism(s), such as capacitive touch sensors, resistive touch sensors, acoustic wave sensors, or the like… a touch input function 120 b may include the detection of touch inputs, such as clicks, taps, gestures (e.g., swiping, pinching, etc.), and multi-touch inputs”; see also Chen: ¶ 99-101 “a first user input gesture may be presented to the user during a first period of time (e.g., visually (e.g., via display 121) and/or audibly and/or tactilely) and sensor data may be collected during the first period while the user interface is presented and/or during another period after the user interface is presented while the user may perform the requested gesture. Additional user interfaces may be presented to prompt a user to perform additional user input gestures during additional periods of time to train for detection of the additional gestures. For example, any suitable sensor data, including touch sensor data and/or force sensor data and/or the like, can be collected while the user performs various user input gestures, for example, to train a gesture detection algorithm”). 
Therefore, it would have been obvious to one of ordinary skill in the art at the time of filing to apply the known technique of accessing, for a detected user input gesture, user input sensor category data for each one of a first user input sensor category and a second user input sensor category, and determining, using a learning engine, the accessed user input sensor category data (as disclosed by Chen) to the known method and system for wearable devices determining one or more physiological characteristics associated with a user, applying machine learning models to collect, analyze, and process physiological data collected from a user to identify and track medical conditions using sensors (as disclosed by Singleton), to control or manipulate at least one functionality of the device and re-train the learning engine using the accessed user input sensor category data. One of ordinary skill in the art would have been motivated to apply the known technique of accessing, for a detected user input gesture, user input sensor category data for each one of a first user input sensor category and a second user input sensor category, and determining, using a learning engine, the accessed user input sensor category data because it would control or manipulate at least one functionality of the device and re-train the learning engine using the accessed user input sensor category data (see Chen ¶ 37 and 99). Furthermore, it would have been obvious to one of ordinary skill in the art at the time of filing to apply the known technique of accessing, for a detected user input gesture, user input sensor category data for each one of a first user input sensor category and a second user input sensor category, and determining, using a learning engine, the accessed user input sensor category data (as disclosed by Chen) to the known method and system for wearable devices determining one or more physiological characteristics associated with a user, applying machine learning models to collect, analyze, and process physiological data collected from a user to identify and track medical conditions using sensors (as disclosed by Singleton), to control or manipulate at least one functionality of the device and re-train the learning engine using the accessed user input sensor category data, because the claimed invention is merely applying a known technique to a known method ready for improvement to yield predictable results. See KSR Int’l Co. v. Teleflex Inc., 550 U.S. 398, 406 (2007). In other words, all of the claimed elements were known in the prior art, one skilled in the art could have combined the elements as claimed by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results to one of ordinary skill in the art at the time of the invention (i.e., predictable results are obtained by applying the known technique of accessing, for a detected user input gesture, user input sensor category data for each one of a first user input sensor category and a second user input sensor category, and determining, using a learning engine, the accessed user input sensor category data to the known method and system for wearable devices determining one or more physiological characteristics associated with a user, applying machine learning models to collect, analyze, and process physiological data collected from a user to identify and track medical conditions, to control or manipulate at least one functionality of the device and re-train the learning engine using the accessed user input sensor category data).
See also MPEP § 2143(I)(D). translating, by the one or more sensors, the user movements including the one or more hand gestures into signal data (see at least Singleton: ¶ 26: “In some implementations, the rings 104 (e.g., wearable devices 104) of the system 100 may be configured to collect physiological data from the respective users 102 based on arterial blood flow within the user's finger. In particular, a ring 104 may utilize one or more LEDs (e.g., red LEDs, green LEDs) that emit light on the palm-side of a user's finger to collect physiological data based on arterial blood flow within the user's finger. In some implementations, the ring 104 may acquire the physiological data using a combination of both green and red LEDs. The physiological data may include any physiological data known in the art including, but not limited to, temperature data, accelerometer data (e.g., movement/motion data), heart rate data, HRV data, blood oxygen level data, or any combination thereof.”; see also Singleton: ¶ 40: “System 200 further includes a user device 106 (e.g., a smartphone) in communication with the ring 104. For example, the ring 104 may be in wireless and/or wired communication with the user device 106. In some implementations, the ring 104 may send measured and processed data (e.g., temperature data, photoplethysmography (PPG) data, motion/accelerometer data, ring input data, and the like) to the user device 106. The user device 106 may also send data to the ring 104, such as ring 104 firmware/configuration updates. The user device 106 may process data. In some implementations, the user device 106 may transmit data to the server 110 for processing and/or storage.”; see also Singleton: ¶ 48: “device electronics, battery 210, and substrates may be arranged in the ring 104 in a variety of ways. In some implementations, one substrate that includes device electronics may be mounted along the bottom of the ring 104 (e.g., the bottom half), such that the sensors (e.g., PPG system 235, temperature sensors 240, motion sensors 245, and other sensors) interface with the underside of the user's finger. In these implementations, the battery 210 may be included along the top portion of the ring 104 (e.g., on another substrate).”; see also Singleton: ¶ 74-77 “The processing module 230-a may sample the motion signals at a sampling rate (e.g., 50 Hz) and determine the motion of the ring 104 based on the sampled motion signals. For example, the processing module 230-a may sample acceleration signals to determine acceleration of the ring 104. As another example, the processing module 230-a may sample a gyro signal to determine angular motion. In some implementations, the processing module 230-a may store motion data in memory 215. Motion data may include sampled motion data as well as motion data that is calculated based on the sampled motion signals (e.g., acceleration and angular values)…The ring 104 may store a variety of data described herein. For example, the ring 104 may store temperature data, such as raw sampled temperature data and calculated temperature data (e.g., average temperatures). As another example, the ring 104 may store PPG signal data, such as pulse waveforms and data calculated based on the pulse waveforms (e.g., heart rate values, IBI values, HRV values, and respiratory rate values). 
The ring 104 may also store motion data, such as sampled motion data that indicates linear and angular motion.”; see also Singleton: ¶ 81 “The physiological measurements may be taken continuously throughout the day and/or night. In some implementations, the physiological measurements may be taken during 104 portions of the day and/or portions of the night. In some implementations, the physiological measurements may be taken in response to determining that the user is in a specific state, such as an active state, resting state, and/or a sleeping state. For example, the ring 104 can make physiological measurements in a resting/sleep state in order to acquire cleaner physiological signals. In one example, the ring 104 or other device/system may detect when a user is resting and/or sleeping and acquire physiological parameters (e.g., temperature) for that detected state. The devices/systems may use the resting/sleep physiological data and/or other data when the user is in other states in order to implement the techniques of the present disclosure.”; see also Singleton: ¶ 16 “e.g., wearable devices worn on both the left and right hands”; see also Singleton: ¶ 24 “a user 102 may have a ring (e.g., wearable device 104) that measures physiological parameters”; see also Singleton: ¶ 77 “Example derived values for motion data may include, but are not limited to, motion count values, regularity values, intensity values, metabolic equivalence of task values (METs), and orientation values. Motion counts, regularity values, intensity values, and METs may indicate an amount of user motion (e.g., velocity/acceleration) over time. Orientation values may indicate how the ring 104 is oriented on the user's finger and if the ring 104 is worn on the left hand or right hand”; see also Singleton: ¶ 99 “collecting physiological data via both wearable devices 104-d and 104-e (e.g., wearable ring devices 104 on each hand) may provide more insight into such medical conditions, that may lead to earlier diagnosis and treatment”; see at least Chen: ¶ 25-26 “detecting trackpad user input gestures, a dedicated virtual keyboard region that may be used for detecting keyboard user input gestures, and a hybrid region that may be used for detecting either trackpad user input gestures or keyboard user input gestures depending on whether a thumb or a finger is determined to have provided the user input gestures… Electronic device 100 may be any portable, mobile, hand-held, or miniature electronic device that may be configured to handle user input gestures on an extended trackpad wherever a user travels”; see also Chen: ¶ 32 “a touch sensor assembly may be used to detect touch inputs (e.g., gestures, multi-touch inputs, taps, etc.)”; see also Chen: ¶ 37 “user input sensor states or events or gestures (e.g., sensor state data 522 of FIG. 5) that may be used to control or manipulate at least one functionality of device 100 (e.g., a performance or mode of device 100 that may be altered in a particular one of various ways (e.g., particular adjustments may be made by an output assembly and/or the like))”; see also Chen: ¶ 40, 43, 53 “touch or trackpad inputs, such as clicks, taps, gestures (e.g., swiping, pinching, etc.), and multi-touch inputs, may be detected on any portion of top case 132, including, in some embodiments, within keyboard region 134. 
Moreover, even where keyboard region 134 includes mechanical key mechanisms, touch-input region 136 may detect touch inputs (e.g., gestures) that are applied to the keycaps and not to the top case 132 directly.”; see also Chen: ¶ 62 “Touch and/or force sensor assembly 140 may include various touch and/or force sensing components, such as capacitive sensing elements, resistive sensing elements, and/or the like. Touch and/or force sensor assembly 140 may be configured to sense inputs applied to top case 132, and may sense selections (e.g., presses) of keys of mechanical keyboard 135, selections of affordances on any virtual key region of keyboard region 134, and/or touch inputs (e.g., clicks, taps, gestures, multi-touch inputs, etc.) applied to other areas of top case 132 (e.g., to a touch-input region or trackpad input region).”; see also Chen: ¶ 68 “the sensor assembly 140 can detect inputs in the keyboard region (e.g., key presses, gestures on or over the keys, etc.) as well as outside the keyboard region (e.g., clicks, taps, gestures, and other touch inputs applied to a palm rest region or any other touch or force sensitive region).”; see also Chen: ¶ 76-77 “A device (e.g., device 100) may use any suitable input mechanism(s), such as capacitive touch sensors, resistive touch sensors, acoustic wave sensors, or the like… a touch input function 120 b may include the detection of touch inputs, such as clicks, taps, gestures (e.g., swiping, pinching, etc.), and multi-touch inputs”; see also Chen: ¶ 99-101 “a first user input gesture may be presented to the user during a first period of time (e.g., visually (e.g., via display 121) and/or audibly and/or tactilely) and sensor data may be collected during the first period while the user interface is presented and/or during another period after the user interface is presented while the user may perform the requested gesture. Additional user interfaces may be presented to prompt a user to perform additional user input gestures during additional periods of time to train for detection of the additional gestures. For example, any suitable sensor data, including touch sensor data and/or force sensor data and/or the like, can be collected while the user performs various user input gestures, for example, to train a gesture detection algorithm”). separating and inputting the signal data into separate models, the separate models trained based on user movement data from a population of users using a training algorithm (see at least Chen: ¶ 25 “One or more models may be trained and then used for distinguishing between a thumb user input gesture or a finger user input gesture, such as by using any suitable user input gesture touch and/or location data and/or any suitable user input gesture force data that may be sensed by any suitable sensor assembly”; see also Chen: ¶ 99 “any suitable sensor data, including touch sensor data and/or force sensor data and/or the like, can be collected while the user performs various user input gestures, for example, to train a gesture detection algorithm”; see also Chen: ¶ 100 “the device or any suitable training system can assign each cluster to one of the user input gestures as part of the training process”; see also Chen: ¶ 101 and 105-107 “A learning engine or user input gesture model for an experiencing entity may be trained (e.g., at operation 806 of process 800 of FIG. 
8) using the received sensor category data for the gesture (e.g., as inputs of a neural network of the learning engine) and using the received score for the gesture (e.g., as an output of the neural network of the learning engine)”); processing the signal data by the separate models using separate channels to process the signal data and combining an output of the separate channels to identify a feature set, the feature set having a time component and an amplitude component (see at least Chen: ¶ 98-99 “from the first sensor data collected at operation 601 (e.g., shape of touch event, peak force amplitude, force amplitude difference between adjacent peak and trough, length of touch area (e.g., length of user's digit print detected on trackpad region), width of touch area (e.g., width of user's digit print detected on trackpad region), ratio of length of touch area to width of touch area, touch area of user's digits in a keyboard area (e.g., region 334) versus a hybrid area (e.g., region 339), any suitable force data, force applied by event, plot of force applied by event over time”; see also Chen: ¶ 101 “any suitable third signal characteristic(s) may be extracted from the third sensor data collected at operation 701 (e.g., shape of touch event, peak force amplitude, force amplitude difference between adjacent peak and trough, length of touch area (e.g., length of user's digit print detected on trackpad region), width of touch area (e.g., width of user's digit print detected on trackpad region), ratio of length of touch area to width of touch area, touch area of user's digits in a keyboard area (e.g., region 334) versus a hybrid area (e.g., region 339), any suitable force data, force applied by event, plot of force applied by event over time”; see also Chen: ¶ 25 “One or more models may be trained and then used for distinguishing between a thumb user input gesture or a finger user input gesture, such as by using any suitable user input gesture touch and/or location data and/or any suitable user input gesture force data that may be sensed by any suitable sensor assembly”; see also Chen: ¶ 99 “any suitable sensor data, including touch sensor data and/or force sensor data and/or the like, can be collected while the user performs various user input gestures, for example, to train a gesture detection algorithm”; see also Chen: ¶ 100 “the device or any suitable training system can assign each cluster to one of the user input gestures as part of the training process”; see also Chen: ¶ 101 and 105-107 “A learning engine or user input gesture model for an experiencing entity may be trained (e.g., at operation 806 of process 800 of FIG. 8) using the received sensor category data for the gesture (e.g., as inputs of a neural network of the learning engine) and using the received score for the gesture (e.g., as an output of the neural network of the learning engine)”); converting the feature set, by a feature analytics engine processing the time component and the amplitude component, into a score that corresponds to a condition of the user (see at least Singleton: ¶ 98-101: “the one or more physiological characteristics determined based on the comparison of the physiological data collected by the multiple wearable devices 104 may include a metric associated with blood circulation of the user 102, one or more risk metrics associated with one or more medical conditions, or both… the one or more medical conditions may include Parkinson's, Alzheimer's, stroke, or any combination thereof. 
In particular, some medical conditions (e.g., Parkinson's, Alzheimer's) may manifest differently, or at different rates, on one side of the body as compared to the other side of the body. For instance, oftentimes symptoms of a stroke only (or primarily) manifest on one side of the body. As such, collecting physiological data via both wearable devices 104-d and 104-e (e.g., wearable ring devices 104 on each hand) may provide more insight into such medical conditions, that may lead to earlier diagnosis and treatment… the system 300 may use physiological data collected at a first location to determine a metric associated with a medical condition (e.g., Parkinson's)… For example, the system 300 may identify that physiological data collected from the right side of a user 102 differs from physiological data collected from the left side of the user. The system 300 may associate the difference in physiological data with additional sources (e.g., studies) to identify one or more physiological characteristics associated with stress or other medical conditions, such as Parkinson's and Alzheimer's.”; see at least Singleton: ¶ 124: “the one or more physiological characteristics comprise a metric associated with blood circulation of the user, one or more risk metrics associated with one or more medical conditions, or both. In some examples, the one or more medical conditions comprise”; see also Singleton: ¶ 81: “the external device 150 may itself analyze the user (e.g., the user's activity or condition in response to such prompts), for example using a camera to detect muscle tremors, using a microphone to detect slurred speech, or to detect any other indicia of health conditions.”; see also Singleton: ¶ 82 “For example, an updated algorithm for treating one or more health conditions may be developed by the external computing devices 180 (e.g., using machine learning or other techniques) and then provided to the treatment device 110 and/or the external device 150 via the network (e.g., as an over-the-air update), and installed on the treatment device 110 and/or external device”; see at least Singleton: ¶ 84 “The treatment device 110 may be configured to calculate physiological characteristics relating to one or more signals received from the sensor(s) 113. For example, the treatment device 110 may be configured to algorithmically determine the presence or absence of a muscle tremor, fall, or other health condition from the signal.”; see at least Singleton: ¶ 88 “These external computing devices 180 can collect data recorded by the treatment device 110 and/or the external device 150. In some embodiments, such data can be anonymized and aggregated to perform large-scale analysis (e.g., using machine-learning techniques or other suitable data analysis techniques) to develop and improve treatment algorithms using data collected by a large number of treatment devices 110 associated with a large population of users. Additionally, the external computing devices 180 may transmit data to the external device 150 and/or the treatment device 110. 
For example, an updated algorithm for treating conditions may be developed by the external computing devices 180 (e.g., using machine learning or other techniques) and then provided to the treatment device 110 and/or the external device 150 via the network 170, and installed on the recipient treatment device 110/150.”; see also Singleton: ¶ 74-76 “The processing module 230-a may sample the motion signals at a sampling rate (e.g., 50 Hz) and determine the motion of the ring 104 based on the sampled motion signals. For example, the processing module 230-a may sample acceleration signals to determine acceleration of the ring 104. As another example, the processing module 230-a may sample a gyro signal to determine angular motion. In some implementations, the processing module 230-a may store motion data in memory 215. Motion data may include sampled motion data as well as motion data that is calculated based on the sampled motion signals (e.g., acceleration and angular values).”; see also Singleton: ¶ 77-79: “The ring 104, or other computing device, may calculate and store additional values based on the sampled/calculated physiological data. For example, the processing module 230 may calculate and store various metrics, such as sleep metrics (e.g., a Sleep Score), activity metrics, and readiness metrics. In some implementations, additional values/metrics may be referred to as “derived values.” The ring 104, or other computing/wearable device, may calculate a variety of values/metrics with respect to motion. Example derived values for motion data may include, but are not limited to, motion count values, regularity values, intensity values, metabolic equivalence of task values (METs), and orientation values. Motion counts, regularity values, intensity values, and METs may indicate an amount of user motion (e.g., velocity/acceleration) over time. Orientation values may indicate how the ring 104 is oriented on the user's finger and if the ring 104 is worn on the left hand or right hand. In some implementations, motion counts and regularity values may be determined by counting a number of acceleration peaks within one or more periods of time (e.g., one or more 30 second to 1 minute periods). Intensity values may indicate a number of movements and the associated intensity (e.g., acceleration values) of the movements. The intensity values may be categorized as low, medium, and high, depending on associated threshold acceleration values. METs may be determined based on the intensity of movements during a period of time (e.g., 30 seconds), the regularity/irregularity of the movements, and the number of movements associated with the different intensities.”; see at least Singleton: ¶ 98-101: “the one or more physiological characteristics determined based on the comparison of the physiological data collected by the multiple wearable devices 104 may include a metric associated with blood circulation of the user 102, one or more risk metrics associated with one or more medical conditions, or both… the one or more medical conditions may include Parkinson's, Alzheimer's, stroke, or any combination thereof. In particular, some medical conditions (e.g., Parkinson's, Alzheimer's) may manifest differently, or at different rates, on one side of the body as compared to the other side of the body. For instance, oftentimes symptoms of a stroke only (or primarily) manifest on one side of the body. 
As such, collecting physiological data via both wearable devices 104-d and 104-e (e.g., wearable ring devices 104 on each hand) may provide more insight into such medical conditions, that may lead to earlier diagnosis and treatment… the system 300 may use physiological data collected at a first location to determine a metric associated with a medical condition (e.g., Parkinson's)… For example, the system 300 may identify that physiological data collected from the right side of a user 102 differs from physiological data collected from the left side of the user. The system 300 may associate the difference in physiological data with additional sources (e.g., studies) to identify one or more physiological characteristics associated with stress or other medical conditions, such as Parkinson's and Alzheimer's.”; see at least Singleton: ¶ 124: “the one or more physiological characteristics comprise a metric associated with blood circulation of the user, one or more risk metrics associated with one or more medical conditions, or both. In some examples, the one or more medical conditions comprise”; see also Singleton: ¶ 81: “the external device 150 may itself analyze the user (e.g., the user's activity or condition in response to such prompts), for example using a camera to detect muscle tremors, using a microphone to detect slurred speech, or to detect any other indicia of health conditions.”; see also Singleton: ¶ 82 “For example, an updated algorithm for treating one or more health conditions may be developed by the external computing devices 180 (e.g., using machine learning or other techniques) and then provided to the treatment device 110 and/or the external device 150 via the network (e.g., as an over-the-air update), and installed on the treatment device 110 and/or external device”; see at least Singleton: ¶ 84 “The treatment device 110 may be configured to calculate physiological characteristics relating to one or more signals received from the sensor(s) 113. For example, the treatment device 110 may be configured to algorithmically determine the presence or absence of a muscle tremor, fall, or other health condition from the signal.”; see at least Singleton: ¶ 88 “These external computing devices 180 can collect data recorded by the treatment device 110 and/or the external device 150. In some embodiments, such data can be anonymized and aggregated to perform large-scale analysis (e.g., using machine-learning techniques or other suitable data analysis techniques) to develop and improve treatment algorithms using data collected by a large number of treatment devices 110 associated with a large population of users. Additionally, the external computing devices 180 may transmit data to the external device 150 and/or the treatment device 110. 
For example, an updated algorithm for treating conditions may be developed by the external computing devices 180 (e.g., using machine learning or other techniques) and then provided to the treatment device 110 and/or the external device 150 via the network 170, and installed on the recipient treatment device 110/150.”; see also Singleton: ¶ 30-32; see at least Chen: ¶ 25-26 “detecting trackpad user input gestures, a dedicated virtual keyboard region that may be used for detecting keyboard user input gestures, and a hybrid region that may be used for detecting either trackpad user input gestures or keyboard user input gestures depending on whether a thumb or a finger is determined to have provided the user input gestures… Electronic device 100 may be any portable, mobile, hand-held, or miniature electronic device that may be configured to handle user input gestures on an extended trackpad wherever a user travels”; see also Chen: ¶ 32 “a touch sensor assembly may be used to detect touch inputs (e.g., gestures, multi-touch inputs, taps, etc.)”; see also Chen: ¶ 37 “user input sensor states or events or gestures (e.g., sensor state data 522 of FIG. 5) that may be used to control or manipulate at least one functionality of device 100 (e.g., a performance or mode of device 100 that may be altered in a particular one of various ways (e.g., particular adjustments may be made by an output assembly and/or the like))”; see also Chen: ¶ 40, 43, 53 “touch or trackpad inputs, such as clicks, taps, gestures (e.g., swiping, pinching, etc.), and multi-touch inputs, may be detected on any portion of top case 132, including, in some embodiments, within keyboard region 134. Moreover, even where keyboard region 134 includes mechanical key mechanisms, touch-input region 136 may detect touch inputs (e.g., gestures) that are applied to the keycaps and not to the top case 132 directly.”; see also Chen: ¶ 62 “Touch and/or force sensor assembly 140 may include various touch and/or force sensing components, such as capacitive sensing elements, resistive sensing elements, and/or the like. Touch and/or force sensor assembly 140 may be configured to sense inputs applied to top case 132, and may sense selections (e.g., presses) of keys of mechanical keyboard 135, selections of affordances on any virtual key region of keyboard region 134, and/or touch inputs (e.g., clicks, taps, gestures, multi-touch inputs, etc.) applied to other areas of top case 132 (e.g., to a touch-input region or trackpad input region).”; see also Chen: ¶ 68 “the sensor assembly 140 can detect inputs in the keyboard region (e.g., key presses, gestures on or over the keys, etc.) 
as well as outside the keyboard region (e.g., clicks, taps, gestures, and other touch inputs applied to a palm rest region or any other touch or force sensitive region).”; see also Chen: ¶ 76-77 “A device (e.g., device 100) may use any suitable input mechanism(s), such as capacitive touch sensors, resistive touch sensors, acoustic wave sensors, or the like… a touch input function 120 b may include the detection of touch inputs, such as clicks, taps, gestures (e.g., swiping, pinching, etc.), and multi-touch inputs”; see also Chen: ¶ 99-101 “a first user input gesture may be presented to the user during a first period of time (e.g., visually (e.g., via display 121) and/or audibly and/or tactilely) and sensor data may be collected during the first period while the user interface is presented and/or during another period after the user interface is presented while the user may perform the requested gesture. Additional user interfaces may be presented to prompt a user to perform additional user input gestures during additional periods of time to train for detection of the additional gestures. For example, any suitable sensor data, including touch sensor data and/or force sensor data and/or the like, can be collected while the user performs various user input gestures, for example, to train a gesture detection algorithm”; see also Chen: ¶ 105-107 “A user input gesture model custodian may receive from the experiencing entity (e.g., at operation 804 of process 800 of FIG. 8) not only device sensor category data for at least one device sensor category for a gesture that the experiencing entity is currently experiencing or conducting or carrying out, but also a score for that gesture experience (e.g., a score that the experiencing entity may supply as an indication of the gesture that the experiencing entity experienced from experiencing the gesture)… A learning engine or user input gesture model for an experiencing entity may be trained (e.g., at operation 806 of process 800 of FIG. 8) using the received sensor category data for the gesture (e.g., as inputs of a neural network of the learning engine) and using the received score for the gesture (e.g., as an output of the neural network of the learning engine)”); and determining the condition of the user by the separate models and the feature analytics engine from the user movements detected by the wearable device (see at least Singleton: ¶ 98-101 “Comparing physiological data collected by the multiple wearable devices 104 may provide insight into the user's 102 health. For example, the one or more physiological characteristics determined based on the comparison of the physiological data collected by the multiple wearable devices 104 may include a metric associated with blood circulation of the user 102, one or more risk metrics associated with one or more medical conditions, or both… the system 300 may identify that physiological data collected from the right side of a user 102 differs from physiological data collected from the left side of the user. 
The system 300 may associate the difference in physiological data with additional sources (e.g., studies) to identify one or more physiological characteristics associated with stress or other medical conditions, such as Parkinson's and Alzheimer's”; see also Singleton: ¶ 108 “may display one or more messages or other indications associated with one or more physiological conditions 430 determined based on physiological data collected via one or more wearable devices 104”; see also Singleton: ¶ 124 “the one or more physiological characteristics comprise a metric associated with blood circulation of the user, one or more risk metrics associated with one or more medical conditions, or both. In some examples, the one or more medical conditions comprise Parkinson's, Alzheimer's, stroke, or any combination thereof”; see at least Chen: ¶ 25 “One or more models may be trained and then used for distinguishing between a thumb user input gesture or a finger user input gesture, such as by using any suitable user input gesture touch and/or location data and/or any suitable user input gesture force data that may be sensed by any suitable sensor assembly”; see also Chen: ¶ 99 “any suitable sensor data, including touch sensor data and/or force sensor data and/or the like, can be collected while the user performs various user input gestures, for example, to train a gesture detection algorithm”; see also Chen: ¶ 100 “the device or any suitable training system can assign each cluster to one of the user input gestures as part of the training process”; see also Chen: ¶ 101 and 105-107 “A learning engine or user input gesture model for an experiencing entity may be trained (e.g., at operation 806 of process 800 of FIG. 8) using the received sensor category data for the gesture (e.g., as inputs of a neural network of the learning engine) and using the received score for the gesture (e.g., as an output of the neural network of the learning engine)”). transmitting at least the score or the condition to a computing device for remote monitoring of the score or the condition (further addressed below). Examiner notes that, for compact prosecution purposes, while the combination of Singleton and Chen teaches a system and method for analyzing movement of individuals to classify the movements and determine conditions related to the user based on the stored information using a plurality of sensors to capture movement of the user, wherein the system uses a trained algorithm, it fails to explicitly state that the trained model is used to determine a condition of the user.
However, Sobol, which discloses a method and system for noninvasive detection and treatment of medical conditions in which the device may also include one or more sensors to collect one or more of environmental data, activity data, and physiological data (see Sobol: Abstract), teaches it is known to employ machine learning techniques and models trained with specific information related to physiological conditions and medical conditions to identify a medical condition, wherein the system employs a wearable device to determine signals to identify a medical condition of the user using the model and feature analytics based on movement of the user (see at least Sobol: ¶ 157 “As will be discussed in more detail later, GPU-based processing may be used as a training tool as part of a deep learning neural network as a way to extract meaningful health-related analytics from the large amount of acquired data from the wearable electronic device 100.”; see at least Sobol: ¶ 158 “a GPU-based approach may be used in conjunction with deep learning library-based frameworks (such as TensorFlow, Microsoft Cognitive Toolkit, PyTorch or the like) to train, validate and test certain machine learning models (such as deep learning models) that are computationally-intensive. In such case, these libraries may use additional libraries (for example, the deep learning library Keras) to organize layers of a deep learning neural network model as a way to expedite the analysis of the LEAP data. As such, high-level neural network APIs like Keras or related libraries help to simplify the amount of code that is required to train a neural network, and may be used in cooperation with Theano, TensorFlow or other back-end frameworks.”; see also Sobol: ¶ 159 “In one form, such processor or processors 173A may be programmed to perform machine learning functions, such as through a trained artificial neural network to determine, among other things, whether a patient associated with a particular wearable electronic device 100 is at risk of developing an infection or other adverse health condition, as will be described in more detail elsewhere within this disclosure”; see also Sobol: ¶ 163 “the memory 173B may store a trainable machine learning algorithm that can be accessed and executed by the processor 173A in order perform a classification, regression or clustering-based model or analysis, as well as to update the state of the machine learning algorithm through corresponding updates to the memory 173B”; see also Sobol: ¶ 166 “baseline activity data such as that acquired from sensors 121 that are in the form of accelerometers, gyroscopes and the like may be created through examples that can be correlated to known movements of the individual being monitored. For example, the individual may go through various sitting, standing, walking, running (if possible) and related movements that can be labeled for each activity where classification is desired.
As will be discussed in more detail later, such labeling may be useful in performing supervised machine learning, particularly as it applies to training a machine learning model”; see also Sobol: ¶ 218 and 222 “Such a model may be supervised in that training data with known targets that are correlated to one or more health conditions may be used during training to model to learn to predict such change in health from HAR or ADL that in turn is based on the other variables that are in the form of the data being acquired by the wearable electronic device 100.”; see also Sobol: ¶ 225-229, 232-234, 242-245, and 257-270). Sobol further teaches that the system is capable of applying convolutional neural network (CNN) systems to analyze received patient physiological data via wearable sensors (see at least Sobol: ¶ 68 and 109 “the frequency, intensity, duty cycle, or other waveform parameters may be varied in response to detected physiological parameters. In some embodiments, the physiological parameters can be collected across a wide range of users and used to train a machine-learning classification algorithm (e.g., a neural network model) that can be used to determine an appropriate neuromodulation to be applied given a particular parameter or set of parameters detected via the sensor(s).”). Examiner notes that Singleton does not explicitly teach transmitting at least the score or condition to a computing device for remote monitoring of the score or the condition, but Sobol teaches that it is known to detect conditions, with the application server generating and sending reports and alerts to a family member or caregiver through a remote computing device (see at least Sobol: ¶ 305; see also Sobol: ¶ 101-106 “FIG. 9 depicts the wearable electronic device and a portion of the system of FIG. 1 and their wireless connectivity through the cloud to ascertain the location and activity of a patient within a multi-patient dwelling, as well as to provide patient information in display form to a remote computing device according to one or more embodiments shown or described herein;”; see also Sobol: ¶ 118 “the present disclosure teaches a system that may track and monitor the device, log the collected data, perform analysis on the collected data and generate alerts. The data and alerts may be accessed through a user interface that can be displayed on a remote computing device or other suitable device with internet or cellular access”; see also Sobol: ¶ 126 “the system 1 is part of a network equipped with one or more applications including a caregiver application, configuration application and family application, all of which may be operated over the internet, a conventional cellular (i.e., 3G, 4G, GSM) network or the like through the remote computing device 900 that will be discussed in more detail in conjunction with FIGS. 4, 9 and 10A through 10F.”; see also Sobol: ¶ 186-187, 213-214, 216-218, and 269 “the physician may be one of the caregivers C who receives through his or her remote computing devices”).
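As an orientation aside before the obviousness rationale that follows: the supervised human-activity-recognition (HAR) training that Sobol's quoted paragraphs describe — windows of accelerometer data labeled with known movements such as sitting, standing, and walking, fed to a classifier such as a CART-style decision tree (Sobol: ¶ 166, 219) — can be sketched in a few lines of Python. Everything below (the feature choice, the synthetic motion levels, the tree depth) is an assumption of this note, not Sobol's actual pipeline.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Toy sketch of the supervised HAR training Sobol's quoted paragraphs
# describe: windows of accelerometer data labeled with known movements
# (sitting, standing, walking) train a classifier. Features and data
# here are synthetic placeholders, not Sobol's actual pipeline.

rng = np.random.default_rng(1)

def window_features(acc_window):
    """Simple per-window features: mean, std, and peak magnitude."""
    return [acc_window.mean(), acc_window.std(), np.abs(acc_window).max()]

labels = ["sitting", "standing", "walking"]
scales = {"sitting": 0.05, "standing": 0.1, "walking": 0.8}  # synthetic motion levels

X, y = [], []
for label in labels:
    for _ in range(50):                                  # 50 labeled windows per activity
        acc = rng.normal(0.0, scales[label], size=150)   # 3 s of samples at 50 Hz
        X.append(window_features(acc))
        y.append(label)

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(clf.predict([window_features(rng.normal(0.0, 0.8, size=150))]))  # likely "walking"
```

A decision tree is used here only because Sobol names decision-tree approaches among the supervised options; any classifier in his list would slot into the same fit/predict mold.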
Therefore, it would have been obvious to one of ordinary skill in the art at the time of filing to apply the known technique of employing a convolutional neural network on received physiological signals to identify features and alert a user to the condition of a person via a remote computing device (as disclosed by Sobol) to the known method and system for wearable devices determining one or more physiological characteristics associated with a user applying machine learning models to collect, analyze and process physiological data collected from a user to identify and track medical conditions (as disclosed by the combination of Singleton and Chen) to detect and treat conditions, for example using vibrational energy to noninvasively modulate nerve activity. One of ordinary skill in the art would have been motivated to apply the known technique of employing a convolutional neural network on received physiological signals to identify features and alert a user to the condition of a person via a remote computing device because it would detect and treat conditions, for example using vibrational energy to noninvasively modulate nerve activity (see Sobol ¶ 2). Furthermore, it would have been obvious to one of ordinary skill in the art at the time of filing to apply the known technique of employing a convolutional neural network on received physiological signals to identify features and alert a user to the condition of a person via a remote computing device (as disclosed by Sobol) to the known method and system for wearable devices determining one or more physiological characteristics associated with a user applying machine learning models to collect, analyze and process physiological data collected from a user to identify and track medical conditions (as disclosed by the combination of Singleton and Chen) to detect and treat conditions, for example using vibrational energy to noninvasively modulate nerve activity, because the claimed invention is merely applying a known technique to a known method ready for improvement to yield predictable results. See KSR Int’l Co. v. Teleflex Inc., 550 U.S. 398, 406 (2007). In other words, all of the claimed elements were known in the prior art and one skilled in the art could have combined the elements as claimed by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results to one of ordinary skill in the art at the time of the invention (i.e., predictable results are obtained by applying the known technique of employing a convolutional neural network on received physiological signals to identify features and alert a user to the condition of a person via a remote computing device to the known method and system for wearable devices determining one or more physiological characteristics associated with a user applying machine learning models to collect, analyze and process physiological data collected from a user to identify and track medical conditions to detect and treat conditions, for example using vibrational energy to noninvasively modulate nerve activity). See also MPEP § 2143(I)(D).
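For readers tracking the claim mapping above, the pipeline the independent claims recite — signal data from wearable sensors processed by separate models over separate channels, channel outputs combined into a feature set with a time component and an amplitude component, and a feature analytics engine converting that feature set into a score — can be pictured with a minimal numpy sketch. All names, signals, and weights below are hypothetical illustrations taken from neither the application nor the cited references.

```python
import numpy as np

# Illustrative sketch only: a toy version of the claimed pipeline --
# per-sensor channels -> combined feature set (time + amplitude
# components) -> analytics engine -> score. Names are hypothetical.

def channel_features(signal, fs=50.0):
    """Extract a time component (dominant period) and an amplitude
    component (peak-to-peak range) from one sensor channel."""
    spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    dominant = freqs[spectrum[1:].argmax() + 1]        # skip the DC bin
    period = 1.0 / dominant if dominant > 0 else 0.0   # time component
    amplitude = signal.max() - signal.min()            # amplitude component
    return np.array([period, amplitude])

def feature_analytics_engine(feature_set, weights, bias=0.0):
    """Map the combined feature set to a scalar score (logistic)."""
    z = feature_set @ weights + bias
    return 1.0 / (1.0 + np.exp(-z))

fs = 50.0                                  # Singleton's example sampling rate
t = np.arange(0, 10, 1.0 / fs)
imu = np.sin(2 * np.pi * 5.0 * t)          # synthetic 5 Hz tremor-like motion
ppg = 0.5 * np.sin(2 * np.pi * 1.2 * t)    # synthetic ~72 bpm pulse waveform

# Separate channels per sensor, outputs combined into one feature set.
feature_set = np.concatenate([channel_features(imu, fs),
                              channel_features(ppg, fs)])
weights = np.array([1.5, 0.8, -0.2, 0.1])  # arbitrary illustrative weights
score = feature_analytics_engine(feature_set, weights)
print(f"feature set: {feature_set}, score: {score:.3f}")
```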
Referring to Claim 2 and 17 (substantially similar in scope and language), the combination of Singleton, Chen and Sobol teaches the method as in claim 1 and the wearable device of claim 16, including wherein the user movements are part of at least one predetermined exercise the user is prompted to perform before generating the signal data (see also Singleton: ¶ 81: “the external device 150 may itself analyze the user (e.g., the user's activity or condition in response to such prompts), for example using a camera to detect muscle tremors, using a microphone to detect slurred speech, or to detect any other indicia of health conditions.”; see also Singleton: ¶ 82 “For example, an updated algorithm for treating one or more health conditions may be developed by the external computing devices 180 (e.g., using machine learning or other techniques) and then provided to the treatment device 110 and/or the external device 150 via the network (e.g., as an over-the-air update), and installed on the treatment device 110 and/or external device”; see at least Singleton: ¶ 84 “The treatment device 110 may be configured to calculate physiological characteristics relating to one or more signals received from the sensor(s) 113. For example, the treatment device 110 may be configured to algorithmically determine the presence or absence of a muscle tremor, fall, or other health condition from the signal.”; see at least Singleton: ¶ 88 “These external computing devices 180 can collect data recorded by the treatment device 110 and/or the external device 150. In some embodiments, such data can be anonymized and aggregated to perform large-scale analysis (e.g., using machine-learning techniques or other suitable data analysis techniques) to develop and improve treatment algorithms using data collected by a large number of treatment devices 110 associated with a large population of users. Additionally, the external computing devices 180 may transmit data to the external device 150 and/or the treatment device 110. For example, an updated algorithm for treating conditions may be developed by the external computing devices 180 (e.g., using machine learning or other techniques) and then provided to the treatment device 110 and/or the external device 150 via the network 170, and installed on the recipient treatment device 110/150.”; see at least Singleton: ¶ 109-110 “The heart rate graph 410 may include a visual representation of how the user's heart rate reacted to different events and activities (e.g., exercise, sleep, rest, etc.)… The user device may display recommendations and/or information associated with the physiological characteristic data via a message. In some implementations, the user device 106 and/or servers 110 may generate alerts (e.g., messages, insights) associated with the physiological characteristic that may be displayed to the user via the GUI 400 (e.g., the application pages 405 or some other application page). In particular, the messages generated and displayed to the user via the GUI 400 may be associated with one or more characteristics (e.g., time of day, duration, range) of the physiological characteristic. For example, the message may alert the user to breathe, take a moment to relax, etc., based on the user's heart rate. 
In some cases, the message may display a recommendation of how to adjust their lifestyle to achieve a particular physiological characteristic”; see also Singleton: ¶ 99-100: “the one or more medical conditions may include Parkinson's, Alzheimer's, stroke, or any combination thereof. In particular, some medical conditions (e.g., Parkinson's, Alzheimer's) may manifest differently, or at different rates, on one side of the body as compared to the other side of the body. For instance, oftentimes symptoms of a stroke only (or primarily) manifest on one side of the body. As such, collecting physiological data via both wearable devices 104-d and 104-e (e.g., wearable ring devices 104 on each hand) may provide more insight into such medical conditions, that may lead to earlier diagnosis and treatment. For instance, studies have found that Parkinson's often begins with symptoms that begin mostly (or exclusively) one side of the body. Accordingly, by detecting baseline changes in physiological data happening on one side of the body (but not the other), aspects of the present disclosure may facilitate early Parkinson's diagnosis. Moreover, as physiological data is collected, the system 300 may detect that the changes attributable to Parkinson's may be shifting to the rest of the body, that may provide insight as to how the disease is developing and/or becoming more severe. Furthermore, studies have shown that the difference between physiological data collected on a user's left and right sides may provide more insight into certain medical conditions (e.g., stress, Parkinson's, Alzheimer's) than the actual physiological values collected from either respective side.”). Referring to Claim 3 and 18 (substantially similar in scope and language), the combination of Singleton, Chen, and Sobol teaches the method as in claim 2 and the wearable device of claim 17, including wherein the at least one predetermined exercise the user is prompted to perform comprises at least one exercise to be performed with an extremity of the user on which the wearable device is worn (see also Singleton: ¶ 81: “the external device 150 may itself analyze the user (e.g., the user's activity or condition in response to such prompts), for example using a camera to detect muscle tremors, using a microphone to detect slurred speech, or to detect any other indicia of health conditions.”; see also Singleton: ¶ 82 “For example, an updated algorithm for treating one or more health conditions may be developed by the external computing devices 180 (e.g., using machine learning or other techniques) and then provided to the treatment device 110 and/or the external device 150 via the network (e.g., as an over-the-air update), and installed on the treatment device 110 and/or external device”; see at least Singleton: ¶ 84 “The treatment device 110 may be configured to calculate physiological characteristics relating to one or more signals received from the sensor(s) 113. For example, the treatment device 110 may be configured to algorithmically determine the presence or absence of a muscle tremor, fall, or other health condition from the signal.”; see at least Singleton: ¶ 88 “These external computing devices 180 can collect data recorded by the treatment device 110 and/or the external device 150.
In some embodiments, such data can be anonymized and aggregated to perform large-scale analysis (e.g., using machine-learning techniques or other suitable data analysis techniques) to develop and improve treatment algorithms using data collected by a large number of treatment devices 110 associated with a large population of users. Additionally, the external computing devices 180 may transmit data to the external device 150 and/or the treatment device 110. For example, an updated algorithm for treating conditions may be developed by the external computing devices 180 (e.g., using machine learning or other techniques) and then provided to the treatment device 110 and/or the external device 150 via the network 170, and installed on the recipient treatment device 110/150.”; see at least Singleton: ¶ 109-110 “The heart rate graph 410 may include a visual representation of how the user's heart rate reacted to different events and activities (e.g., exercise, sleep, rest, etc.)… The user device may display recommendations and/or information associated with the physiological characteristic data via a message. In some implementations, the user device 106 and/or servers 110 may generate alerts (e.g., messages, insights) associated with the physiological characteristic that may be displayed to the user via the GUI 400 (e.g., the application pages 405 or some other application page). In particular, the messages generated and displayed to the user via the GUI 400 may be associated with one or more characteristics (e.g., time of day, duration, range) of the physiological characteristic. For example, the message may alert the user to breathe, take a moment to relax, etc., based on the user's heart rate. In some cases, the message may display a recommendation of how to adjust their lifestyle to achieve a particular physiological characteristic”; see also Singleton: ¶ 99-100: “the one or more medical conditions may include Parkinson's, Alzheimer's, stroke, or any combination thereof. In particular, some medical conditions (e.g., Parkinson's, Alzheimer's) may manifest differently, or at different rates, on one side of the body as compared to the other side of the body. For instance, oftentimes symptoms of a stroke only (or primarily) manifest on one side of the body. As such, collecting physiological data via both wearable devices 104-d and 104-e (e.g., wearable ring devices 104 on each hand) may provide more insight into such medical conditions, that may lead to earlier diagnosis and treatment. For instance, studies have found that Parkinson's often begins with symptoms that begin mostly (or exclusively) one side of the body. Accordingly, by detecting baseline changes in physiological data happening on one side of the body (but not the other), aspects of the present disclosure may facilitate early Parkinson's diagnosis. Moreover, as physiological data is collected, the system 300 may detect that the changes attributable to Parkinson's may be shifting to the rest of the body, that may provide insight as to how the disease is developing and/or becoming more severe. Furthermore, studies have shown that the difference between physiological data collected on a user's left and right sides may provide more insight into certain medical conditions (e.g., stress, Parkinson's, Alzheimer's) than the actual physiological values collected from either respective side.”). 
Referring to Claim 4 and 19 (substantially similar in scope and language), the combination of Singleton, Chen, and Sobol teaches the method as in claim 3 and the wearable device of claim 18, including the method further comprising generating the signal data while the user performs the at least one exercise with the extremity of the user on which the wearable device is worn (see also Singleton: ¶ 81: “the external device 150 may itself analyze the user (e.g., the user's activity or condition in response to such prompts), for example using a camera to detect muscle tremors, using a microphone to detect slurred speech, or to detect any other indicia of health conditions.”; see also Singleton: ¶ 82 “For example, an updated algorithm for treating one or more health conditions may be developed by the external computing devices 180 (e.g., using machine learning or other techniques) and then provided to the treatment device 110 and/or the external device 150 via the network (e.g., as an over-the-air update), and installed on the treatment device 110 and/or external device”; see at least Singleton: ¶ 84 “The treatment device 110 may be configured to calculate physiological characteristics relating to one or more signals received from the sensor(s) 113. For example, the treatment device 110 may be configured to algorithmically determine the presence or absence of a muscle tremor, fall, or other health condition from the signal.”; see at least Singleton: ¶ 88 “These external computing devices 180 can collect data recorded by the treatment device 110 and/or the external device 150. In some embodiments, such data can be anonymized and aggregated to perform large-scale analysis (e.g., using machine-learning techniques or other suitable data analysis techniques) to develop and improve treatment algorithms using data collected by a large number of treatment devices 110 associated with a large population of users. Additionally, the external computing devices 180 may transmit data to the external device 150 and/or the treatment device 110. For example, an updated algorithm for treating conditions may be developed by the external computing devices 180 (e.g., using machine learning or other techniques) and then provided to the treatment device 110 and/or the external device 150 via the network 170, and installed on the recipient treatment device 110/150.”; see at least Singleton: ¶ 109-110 “The heart rate graph 410 may include a visual representation of how the user's heart rate reacted to different events and activities (e.g., exercise, sleep, rest, etc.)… The user device may display recommendations and/or information associated with the physiological characteristic data via a message. In some implementations, the user device 106 and/or servers 110 may generate alerts (e.g., messages, insights) associated with the physiological characteristic that may be displayed to the user via the GUI 400 (e.g., the application pages 405 or some other application page). In particular, the messages generated and displayed to the user via the GUI 400 may be associated with one or more characteristics (e.g., time of day, duration, range) of the physiological characteristic. For example, the message may alert the user to breathe, take a moment to relax, etc., based on the user's heart rate. 
In some cases, the message may display a recommendation of how to adjust their lifestyle to achieve a particular physiological characteristic”; see also Singleton: ¶ 99-100: “the one or more medical conditions may include Parkinson's, Alzheimer's, stroke, or any combination thereof. In particular, some medical conditions (e.g., Parkinson's, Alzheimer's) may manifest differently, or at different rates, on one side of the body as compared to the other side of the body. For instance, oftentimes symptoms of a stroke only (or primarily) manifest on one side of the body. As such, collecting physiological data via both wearable devices 104-d and 104-e (e.g., wearable ring devices 104 on each hand) may provide more insight into such medical conditions, that may lead to earlier diagnosis and treatment. For instance, studies have found that Parkinson's often begins with symptoms that begin mostly (or exclusively) one side of the body. Accordingly, by detecting baseline changes in physiological data happening on one side of the body (but not the other), aspects of the present disclosure may facilitate early Parkinson's diagnosis. Moreover, as physiological data is collected, the system 300 may detect that the changes attributable to Parkinson's may be shifting to the rest of the body, that may provide insight as to how the disease is developing and/or becoming more severe. Furthermore, studies have shown that the difference between physiological data collected on a user's left and right sides may provide more insight into certain medical conditions (e.g., stress, Parkinson's, Alzheimer's) than the actual physiological values collected from either respective side.”). Referring to Claim 7 and 22 (substantially similar in scope and language), the combination of Singleton, Chen, and Sobol teaches the method as in claim 1 and the wearable device of claim 16, including wherein the wearable device comprises a wristband (see at least Singleton: ¶ 20 “Example wearable devices 104 may include wearable computing devices, such as a ring computing device (hereinafter “ring”) configured to be worn on a user's 102 finger, a wrist computing device (e.g., a smart watch, fitness band, or bracelet) configured to be worn on a user's 102 wrist, and/or a head mounted computing device (e.g., glasses/goggles). Wearable devices 104 may also include bands, straps (e.g., flexible or inflexible bands or straps), stick-on sensors, and the like, that may be positioned in other locations, such as bands around the head (e.g., a forehead headband), arm (e.g., a forearm band and/or bicep band), and/or leg (e.g., a thigh or calf band), behind the ear, under the armpit, and the like”; see also Singleton: ¶ 27, 34, 43, 65, and 80). Referring to Claim 24, the combination of Singleton, Chen, and Sobol teaches the wearable device of claim 16, including wherein the set of electronic sensors comprises an inertial measurement unit (IMU) sensor (see at least Singleton: ¶ 74). Referring to Claim 25, the combination of Singleton, Chen, and Sobol teaches the wearable device of claim 24, including wherein the set of electronic sensors includes a photoplethysmography (PPG) sensor (see at least Singleton: ¶ 40 “the ring 104 may send measured and processed data (e.g., temperature data, photoplethysmography (PPG) data, motion/accelerometer data, ring input data, and the like) to the user device 106.”, 66-71, and 76). 
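Before turning to claims 8 and 23, a hypothetical container for the sensor set just mapped — a wristband whose IMU and PPG sensors each produce signal data — may help fix the claim terms. Field names, shapes, and the default rate are assumptions of this sketch, not the application's.

```python
from dataclasses import dataclass
import numpy as np

# Hypothetical container for the claimed sensor set: a wristband whose
# IMU and PPG sensors each produce signal data in response to detected
# user movements. Field names are illustrative, not from the application.

@dataclass
class WristbandSignalData:
    imu: np.ndarray    # shape (n_samples, 6): 3-axis accelerometer + gyroscope
    ppg: np.ndarray    # shape (n_samples,): photoplethysmography waveform
    sample_rate_hz: float = 50.0   # Singleton ¶ 74's example sampling rate

    def duration_s(self) -> float:
        """Length of the recording in seconds."""
        return self.imu.shape[0] / self.sample_rate_hz

reading = WristbandSignalData(imu=np.zeros((500, 6)), ppg=np.zeros(500))
print(reading.duration_s())   # 10.0
```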
Referring to Claim 8 and 23 (substantially similar in scope and language), the combination of Singleton, Chen, and Sobol teaches the method as in claim 1 and the wearable device of claim 16. Examiner notes that Singleton teaches applying a model to determine and classify user’s physiological data (see also Singleton: ¶ 74-76 “The processing module 230-a may sample the motion signals at a sampling rate (e.g., 50 Hz) and determine the motion of the ring 104 based on the sampled motion signals. For example, the processing module 230-a may sample acceleration signals to determine acceleration of the ring 104. As another example, the processing module 230-a may sample a gyro signal to determine angular motion. In some implementations, the processing module 230-a may store motion data in memory 215. Motion data may include sampled motion data as well as motion data that is calculated based on the sampled motion signals (e.g., acceleration and angular values).”; see also Singleton: ¶ 77-79: “The ring 104, or other computing device, may calculate and store additional values based on the sampled/calculated physiological data. For example, the processing module 230 may calculate and store various metrics, such as sleep metrics (e.g., a Sleep Score), activity metrics, and readiness metrics. In some implementations, additional values/metrics may be referred to as “derived values.” The ring 104, or other computing/wearable device, may calculate a variety of values/metrics with respect to motion. Example derived values for motion data may include, but are not limited to, motion count values, regularity values, intensity values, metabolic equivalence of task values (METs), and orientation values. Motion counts, regularity values, intensity values, and METs may indicate an amount of user motion (e.g., velocity/acceleration) over time. Orientation values may indicate how the ring 104 is oriented on the user's finger and if the ring 104 is worn on the left hand or right hand. In some implementations, motion counts and regularity values may be determined by counting a number of acceleration peaks within one or more periods of time (e.g., one or more 30 second to 1 minute periods). Intensity values may indicate a number of movements and the associated intensity (e.g., acceleration values) of the movements. The intensity values may be categorized as low, medium, and high, depending on associated threshold acceleration values. METs may be determined based on the intensity of movements during a period of time (e.g., 30 seconds), the regularity/irregularity of the movements, and the number of movements associated with the different intensities.”; see at least Singleton: ¶ 98-101: “the one or more physiological characteristics determined based on the comparison of the physiological data collected by the multiple wearable devices 104 may include a metric associated with blood circulation of the user 102, one or more risk metrics associated with one or more medical conditions, or both… the one or more medical conditions may include Parkinson's, Alzheimer's, stroke, or any combination thereof. In particular, some medical conditions (e.g., Parkinson's, Alzheimer's) may manifest differently, or at different rates, on one side of the body as compared to the other side of the body. For instance, oftentimes symptoms of a stroke only (or primarily) manifest on one side of the body. 
As such, collecting physiological data via both wearable devices 104-d and 104-e (e.g., wearable ring devices 104 on each hand) may provide more insight into such medical conditions, that may lead to earlier diagnosis and treatment… the system 300 may use physiological data collected at a first location to determine a metric associated with a medical condition (e.g., Parkinson's)… For example, the system 300 may identify that physiological data collected from the right side of a user 102 differs from physiological data collected from the left side of the user. The system 300 may associate the difference in physiological data with additional sources (e.g., studies) to identify one or more physiological characteristics associated with stress or other medical conditions, such as Parkinson's and Alzheimer's.”; see at least Singleton: ¶ 124: “the one or more physiological characteristics comprise a metric associated with blood circulation of the user, one or more risk metrics associated with one or more medical conditions, or both. In some examples, the one or more medical conditions comprise”; see also Singleton: ¶ 81: “the external device 150 may itself analyze the user (e.g., the user's activity or condition in response to such prompts), for example using a camera to detect muscle tremors, using a microphone to detect slurred speech, or to detect any other indicia of health conditions.”; see also Singleton: ¶ 82 “For example, an updated algorithm for treating one or more health conditions may be developed by the external computing devices 180 (e.g., using machine learning or other techniques) and then provided to the treatment device 110 and/or the external device 150 via the network (e.g., as an over-the-air update), and installed on the treatment device 110 and/or external device”; see at least Singleton: ¶ 84 “The treatment device 110 may be configured to calculate physiological characteristics relating to one or more signals received from the sensor(s) 113. For example, the treatment device 110 may be configured to algorithmically determine the presence or absence of a muscle tremor, fall, or other health condition from the signal.”; see at least Singleton: ¶ 88 “These external computing devices 180 can collect data recorded by the treatment device 110 and/or the external device 150. In some embodiments, such data can be anonymized and aggregated to perform large-scale analysis (e.g., using machine-learning techniques or other suitable data analysis techniques) to develop and improve treatment algorithms using data collected by a large number of treatment devices 110 associated with a large population of users. Additionally, the external computing devices 180 may transmit data to the external device 150 and/or the treatment device 110. For example, an updated algorithm for treating conditions may be developed by the external computing devices 180 (e.g., using machine learning or other techniques) and then provided to the treatment device 110 and/or the external device 150 via the network 170, and installed on the recipient treatment device 110/150.”; see at least Singleton: ¶ 30-31). 
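The "derived values" computation in Singleton's quoted paragraphs — motion counts obtained by counting acceleration peaks within 30-second to one-minute windows, and intensity categorized as low, medium, or high against threshold acceleration values (¶ 74-79) — admits a short worked sketch. The thresholds and synthetic data below are placeholders chosen for illustration, not values from Singleton.

```python
import numpy as np

# Minimal sketch of the "derived values" computation Singleton's quoted
# paragraphs describe: motion counts from acceleration peaks within a
# window, and intensity categorized by threshold. Thresholds and window
# length here are hypothetical, not taken from Singleton.

def count_acceleration_peaks(acc, threshold):
    """Count local maxima that exceed a threshold."""
    interior = acc[1:-1]
    peaks = (interior > acc[:-2]) & (interior > acc[2:]) & (interior > threshold)
    return int(peaks.sum())

def intensity_category(acc, low=0.5, high=1.5):
    """Categorize movement intensity as low/medium/high by peak magnitude."""
    peak = float(np.abs(acc).max())
    return "low" if peak < low else "medium" if peak < high else "high"

fs = 50                     # Hz, the sampling rate given in Singleton ¶ 74
window = 30 * fs            # a 30-second window, per the ¶ 77-79 example range
rng = np.random.default_rng(0)
acc = rng.normal(0.0, 0.6, size=window)   # synthetic accelerometer samples

print(count_acceleration_peaks(acc, threshold=1.0), intensity_category(acc))
```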
Chen further teaches wherein one of the separate models comprises a convolutional neural network (CNN) configured to receive the signal data as input and to identify the feature set as output (see at least Chen: ¶ 103-104 “the learning engine may include any suitable neural network (e.g., an artificial neural network) that may be initially configured, trained on one or more sets of sensor data that may be generated during the performance of one or more known input gestures, and then used to predict a particular user input gesture based on another set of sensor data. A neural network or neuronal network or artificial neural network may be hardware-based, software-based, or any combination thereof, such as any suitable model (e.g., an analytical model, a computational model, etc.)”; see also Chen: ¶ 106 “A learning engine or user input gesture model for an experiencing entity may be trained (e.g., at operation 806 of process 800 of FIG. 8) using the received sensor category data for the gesture (e.g., as inputs of a neural network of the learning engine) and using the received score for the gesture (e.g., as an output of the neural network of the learning engine)”). Referring to Claim 10, the combination of Singleton, Chen, and Sobol teaches the method as in claim 9, including wherein the set of electronic sensors comprises an inertial measurement unit (IMU) sensor (see at least Singleton: ¶ 74). Referring to Claim 11, the combination of Singleton, Chen, and Sobol teaches the method as in claim 10, including wherein the set of electronic sensors includes a photoplethysmography (PPG) sensor (see at least Singleton: ¶ 40 “the ring 104 may send measured and processed data (e.g., temperature data, photoplethysmography (PPG) data, motion/accelerometer data, ring input data, and the like) to the user device 106.”, 66-71, and 76). Referring to Claim 12, the combination of Singleton, Chen, and Sobol teaches the method as in claim 11, including wherein the CNN includes a plurality of stacked layers, each of the plurality of stacked layers corresponding to a respective electronic sensor of the set of electronic sensors (see at least Sobol: ¶ 142-148 “More particularly, while wearable electronic device 100 and gateway 300 may be arranged at least in part on the OSI layer communication model, it will be appreciated that different combinations of layers could be used within a given protocol stack”; see also Sobol: ¶ 158 “In one form, a GPU-based approach may be used in conjunction with deep learning library-based frameworks (such as TensorFlow, Microsoft Cognitive Toolkit, PyTorch or the like) to train, validate and test certain machine learning models (such as deep learning models) that are computationally-intensive. In such case, these libraries may use additional libraries (for example, the deep learning library Keras) to organize layers of a deep learning neural network model as a way to expedite the analysis of the LEAP data. As such, high-level neural network APIs like Keras or related libraries help to simplify the amount of code that is required to train a neural network, and may be used in cooperation with Theano, TensorFlow or other back-end frameworks.”).
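One plausible reading of the claim 12 arrangement — a CNN whose stacked layers correspond to respective electronic sensors, receiving signal data and yielding a feature set (and, per claims 8 and 23, a score) — can be sketched with the Keras API that Sobol's quoted ¶ 158 mentions. The branch sizes and layer choices below are hypothetical; nothing here is the application's actual architecture.

```python
import tensorflow as tf

# Hypothetical sketch of the claimed CNN arrangement using the Keras API
# that Sobol's quoted paragraphs mention: one convolutional branch per
# electronic sensor, branch outputs combined into a single feature set.
# Layer sizes are illustrative; nothing here comes from the application.

def sensor_branch(name, timesteps, channels):
    inp = tf.keras.Input(shape=(timesteps, channels), name=name)
    x = tf.keras.layers.Conv1D(16, 5, activation="relu")(inp)
    x = tf.keras.layers.Conv1D(32, 5, activation="relu")(x)
    x = tf.keras.layers.GlobalAveragePooling1D()(x)
    return inp, x

imu_in, imu_feat = sensor_branch("imu", timesteps=500, channels=6)   # accel + gyro
ppg_in, ppg_feat = sensor_branch("ppg", timesteps=500, channels=1)

feature_set = tf.keras.layers.Concatenate(name="feature_set")([imu_feat, ppg_feat])
score = tf.keras.layers.Dense(1, activation="sigmoid", name="score")(feature_set)

model = tf.keras.Model(inputs=[imu_in, ppg_in], outputs=score)
model.compile(optimizer="adam", loss="binary_crossentropy")
model.summary()
```

The fully connected regression network of claims 14 and 29, or the decision tree of claims 15 and 30 (both discussed next), would replace the final dense scoring layer in a sketch like this with the corresponding analytics engine.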
Referring to Claim 14, the combination of Singleton, Chen, and Sobol teaches the method as in claim 13, including wherein the feature analytics engine comprises a fully connected regression network configured to convert the feature set into the score (see at least Sobol: ¶ 220 “In addition to grouping machine learning models based on whether they are supervised or unsupervised, they may be grouped according to their output, where in one form, a binary classification model provides a yes/no answer, whereas a regression model provides an answer that exists along a continuum of answers. Examples of classification models include SVM, kNN, decision trees, Naïve Bayes, logistic regression and random forests, among others, while examples of regression models include linear regression and nonlinear regression.”; see also Sobol: ¶ 224, 225, 228, 241 “a machine learning library such as Scikit-learn is used with the Python programming language to provide various classification, regression and clustering algorithms including SVM, random forests, gradient boosting and K-means”, and ¶ 243-244). Referring to Claim 15, the combination of Singleton, Chen, and Sobol teaches the method as in claim 13, including wherein the feature analytics engine comprises a decision tree configured to map the feature set into the score (see at least Sobol: ¶ 161, 165, 219 “Examples of machine learning include those grouped as supervised learning, unsupervised learning and reinforcement learning, and one or more approaches under these groups may be useful for such analysis of the acquired data. Particular examples of supervised learning may include Bayesian approaches (such as naïve Bayes, Bayesian belief, Bayesian linear regression and dynamic Bayesian networks that includes Markov-based models), decision tree approaches (such as classification and regression trees (CART)”). Referring to Claim 28, the combination of Singleton, Chen, and Sobol teaches the non-transitory storage medium of claim 27; the combination further teaches wherein: the wearable device comprises a wristband (see at least Singleton: ¶ 20 “Example wearable devices 104 may include wearable computing devices, such as a ring computing device (hereinafter “ring”) configured to be worn on a user's 102 finger, a wrist computing device (e.g., a smart watch, fitness band, or bracelet) configured to be worn on a user's 102 wrist, and/or a head mounted computing device (e.g., glasses/goggles). Wearable devices 104 may also include bands, straps (e.g., flexible or inflexible bands or straps), stick-on sensors, and the like, that may be positioned in other locations, such as bands around the head (e.g., a forehead headband), arm (e.g., a forearm band and/or bicep band), and/or leg (e.g., a thigh or calf band), behind the ear, under the armpit, and the like”; see also Singleton: ¶ 27, 34, 43, 65, and 80); the one of the separate models comprises a convolutional neural network (CNN) configured to receive the signal data as input and to identify the feature set as output (see at least Chen: ¶ 103-104 “the learning engine may include any suitable neural network (e.g., an artificial neural network) that may be initially configured, trained on one or more sets of sensor data that may be generated during the performance of one or more known input gestures, and then used to predict a particular user input gesture based on another set of sensor data.
A neural network or neuronal network or artificial neural network may be hardware-based, software-based, or any combination thereof, such as any suitable model (e.g., an analytical model, a computational model, etc.)”; see also Chen: ¶ 106 “A learning engine or user input gesture model for an experiencing entity may be trained (e.g., at operation 806 of process 800 of FIG. 8) using the received sensor category data for the gesture (e.g., as inputs of a neural network of the learning engine) and using the received score for the gesture (e.g., as an output of the neural network of the learning engine)”; see at least Sobol: ¶ 68 and 109 “the frequency, intensity, duty cycle, or other waveform parameters may be varied in response to detected physiological parameters. In some embodiments, the physiological parameters can be collected across a wide range of users and used to train a machine-learning classification algorithm (e.g., a neural network model) that can be used to determine an appropriate neuromodulation to be applied given a particular parameter or set of parameters detected via the sensor(s).”); the wearable device includes a set of electronic sensors, each of the set of electronic sensors being configured to produce the signal data in response to detecting the user movements (see also Singleton: ¶ 74-76 “The processing module 230-a may sample the motion signals at a sampling rate (e.g., 50 Hz) and determine the motion of the ring 104 based on the sampled motion signals. For example, the processing module 230-a may sample acceleration signals to determine acceleration of the ring 104. As another example, the processing module 230-a may sample a gyro signal to determine angular motion. In some implementations, the processing module 230-a may store motion data in memory 215. Motion data may include sampled motion data as well as motion data that is calculated based on the sampled motion signals (e.g., acceleration and angular values).”; see also Singleton: ¶ 77-79: “The ring 104, or other computing device, may calculate and store additional values based on the sampled/calculated physiological data. For example, the processing module 230 may calculate and store various metrics, such as sleep metrics (e.g., a Sleep Score), activity metrics, and readiness metrics. In some implementations, additional values/metrics may be referred to as “derived values.” The ring 104, or other computing/wearable device, may calculate a variety of values/metrics with respect to motion. Example derived values for motion data may include, but are not limited to, motion count values, regularity values, intensity values, metabolic equivalence of task values (METs), and orientation values. Motion counts, regularity values, intensity values, and METs may indicate an amount of user motion (e.g., velocity/acceleration) over time. Orientation values may indicate how the ring 104 is oriented on the user's finger and if the ring 104 is worn on the left hand or right hand. In some implementations, motion counts and regularity values may be determined by counting a number of acceleration peaks within one or more periods of time (e.g., one or more 30 second to 1 minute periods). Intensity values may indicate a number of movements and the associated intensity (e.g., acceleration values) of the movements. The intensity values may be categorized as low, medium, and high, depending on associated threshold acceleration values.
METs may be determined based on the intensity of movements during a period of time (e.g., 30 seconds), the regularity/irregularity of the movements, and the number of movements associated with the different intensities.”; see at least Singleton: ¶ 98-101: “the one or more physiological characteristics determined based on the comparison of the physiological data collected by the multiple wearable devices 104 may include a metric associated with blood circulation of the user 102, one or more risk metrics associated with one or more medical conditions, or both… the one or more medical conditions may include Parkinson's, Alzheimer's, stroke, or any combination thereof. In particular, some medical conditions (e.g., Parkinson's, Alzheimer's) may manifest differently, or at different rates, on one side of the body as compared to the other side of the body. For instance, oftentimes symptoms of a stroke only (or primarily) manifest on one side of the body. As such, collecting physiological data via both wearable devices 104-d and 104-e (e.g., wearable ring devices 104 on each hand) may provide more insight into such medical conditions, that may lead to earlier diagnosis and treatment… the system 300 may use physiological data collected at a first location to determine a metric associated with a medical condition (e.g., Parkinson's)… For example, the system 300 may identify that physiological data collected from the right side of a user 102 differs from physiological data collected from the left side of the user. The system 300 may associate the difference in physiological data with additional sources (e.g., studies) to identify one or more physiological characteristics associated with stress or other medical conditions, such as Parkinson's and Alzheimer's.”; see at least Singleton: ¶ 124: “the one or more physiological characteristics comprise a metric associated with blood circulation of the user, one or more risk metrics associated with one or more medical conditions, or both. In some examples, the one or more medical conditions comprise”; see also Singleton: ¶ 81: “the external device 150 may itself analyze the user (e.g., the user's activity or condition in response to such prompts), for example using a camera to detect muscle tremors, using a microphone to detect slurred speech, or to detect any other indicia of health conditions.”; see also Singleton: ¶ 82 “For example, an updated algorithm for treating one or more health conditions may be developed by the external computing devices 180 (e.g., using machine learning or other techniques) and then provided to the treatment device 110 and/or the external device 150 via the network (e.g., as an over-the-air update), and installed on the treatment device 110 and/or external device”; see at least Singleton: ¶ 84 “The treatment device 110 may be configured to calculate physiological characteristics relating to one or more signals received from the sensor(s) 113. For example, the treatment device 110 may be configured to algorithmically determine the presence or absence of a muscle tremor, fall, or other health condition from the signal.”; see at least Singleton: ¶ 88 “These external computing devices 180 can collect data recorded by the treatment device 110 and/or the external device 150. 
In some embodiments, such data can be anonymized and aggregated to perform large-scale analysis (e.g., using machine-learning techniques or other suitable data analysis techniques) to develop and improve treatment algorithms using data collected by a large number of treatment devices 110 associated with a large population of users. Additionally, the external computing devices 180 may transmit data to the external device 150 and/or the treatment device 110. For example, an updated algorithm for treating conditions may be developed by the external computing devices 180 (e.g., using machine learning or other techniques) and then provided to the treatment device 110 and/or the external device 150 via the network 170, and installed on the recipient treatment device 110/150.”); and the set of electronic sensors comprises an inertial measurement unit (IMU) sensor and a photoplethysmography (PPG) sensor (see at least Singleton: ¶ 40 “the ring 104 may send measured and processed data (e.g., temperature data, photoplethysmography (PPG) data, motion/accelerometer data, ring input data, and the like) to the user device 106.”, 66-71, and 76). Referring to Claim 29, the combination of Singleton, Chen, and Sobol teaches the wearable device of claim 16, including wherein the feature analytics engine comprises a fully connected regression network configured to convert the feature set into the score (see at least Sobol: ¶ 60 “wherein the neuropsychiatric condition is analyzed through a regression-based machine learning model”). Referring to Claim 30, the combination of Singleton, Chen, and Sobol teaches the wearable device of claim 16, including wherein the feature analytics engine comprises a decision tree configured to map the feature set into the score (see at least Sobol: ¶ 109 “FIG. 11B depicts a program structure for a decision tree of certain HAR-related movements based on an accelerometer portion of the LEAP data generated by the wearable electronic device and system of FIG. 1 according to one or more embodiments shown or described herein;”). Claims 5-6 and 20-21 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Application Publication No. 20230190201 to Singleton et al. (hereinafter Singleton) in view of U.S. Patent Application Publication No. 20190209022 to Sobol et al. (hereinafter Sobol), in view of U.S. Patent Application Publication No. 20200387245 to Chen, and further in view of U.S. Patent Application Publication No. 20230137366 to Roesler et al. (hereinafter Roesler). Referring to Claim 5 and 20 (substantially similar in scope and language), the combination of Singleton, Chen, and Sobol teaches the method as in claim 1 and the wearable device of claim 16. Examiner notes that Singleton does not explicitly teach wherein: the user movements include finger tapping of a thumb against an index finger on a same hand of the user on which the wearable device is worn. However, Roesler, which discloses a method and system for remote monitoring of patient motor functions, teaches it is known to apply the Unified Parkinson’s Disease Rating Scale to determine a metric rating and score based on the user’s movement, including tapping a thumb against an index finger, to test for Parkinson’s Disease (see at least Roesler: ¶ 44-48: “At step 230, the computing device 110 tracks the movement of the active points 320 as the user performs an exercise…The exercises such as the tapping exercise can be known exercises used for the diagnosis of conditions.
For example, the tapping exercises can be those of section 3.4 of the Movement Disorder Society-Unified Parkinson's Disease Rating Scale (“MDS-UPDRS”). In this case, the exercise is a tapping exercise that requires the user to tap their thumb and pointer finger together (FIG. 3B) and then spread them apart as much as and as fast as they can (FIG. 3A).”; see also Roesler: ¶ 51-56 “Thus, for finger tapping exercises that ask the user to tap as quickly as they can, the tracked motion is compared against speed metrics; for finger tapping exercises that ask the user to make the tapping movement as wide as possible, the tracked motion is compared against range of motion metrics, etc. [0055] The consistency metrics can include a slowing down of the pace of the finger taps from one tap to the next across the exercise. [0056] In embodiments where the system 100 implements the tests from section 3.4 of the MDS-UPDRS, the computing device 110 tracks the regularity and smoothness of the rhythm during the tapping exercise (e.g., interruptions or hesitations), the slowing of the pace during the exercise and a change in the amplitude (the range of motion between the fully opened hand and fingers touching, and back) of the movements after the start.”). Therefore, it would have been obvious to one of ordinary skill in the art at the time of filing to apply the known technique of prompting the user to perform an exercise test including a finger tapping test to indicate Parkinson’s Disease (as disclosed by Roesler) to the known method and system for monitoring the medical status and potential medical condition of a patient, wherein the system suggests exercises and tracks metrics during the exercise to determine physiological characteristics related to the user such as signs of Parkinson’s (as disclosed by the combination of Singleton, Chen, and Sobol), to allow for accurate remote monitoring for patients. One of ordinary skill in the art would have been motivated to apply the known technique of prompting the user to perform an exercise test including a finger tapping test to indicate Parkinson’s Disease because it would allow for accurate remote monitoring for patients (see Roesler ¶ 7). Furthermore, it would have been obvious to one of ordinary skill in the art at the time of filing to apply the known technique of prompting the user to perform an exercise test including a finger tapping test to indicate Parkinson’s Disease (as disclosed by Roesler) to the known method and system for monitoring the medical status and potential medical condition of a patient, wherein the system suggests exercises and tracks metrics during the exercise to determine physiological characteristics related to the user such as signs of Parkinson’s (as disclosed by the combination of Singleton, Chen, and Sobol), to allow for accurate remote monitoring for patients, because the claimed invention is merely applying a known technique to a known method ready for improvement to yield predictable results. See KSR Int’l Co. v. Teleflex Inc., 550 U.S. 398, 406 (2007).
In other words, all of the claimed elements were known in the prior art and one skilled in the art could have combined the elements as claimed by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results to one of ordinary skill in the art at the time of the invention (i.e., predictable results are obtained by applying the known technique of prompting the user to perform an exercise test including a finger tapping test to indicate Parkinson’s Disease to the known method and system for monitoring the medical status and potential medical condition of a patient, wherein the system suggests exercises and tracks metrics during the exercise to determine physiological characteristics related to the user such as signs of Parkinson’s, to allow for accurate remote monitoring for patients). See also MPEP § 2143(I)(D). Referring to Claim 6 and 21 (substantially similar in scope and language), the combination of Singleton, Chen, Sobol, and Roesler teaches the method as in claim 5 and the wearable device of claim 20, wherein the score corresponds to a rating on a unified Parkinson’s disease rating scale (see at least Roesler: ¶ 44-48: “At step 230, the computing device 110 tracks the movement of the active points 320 as the user performs an exercise…The exercises such as the tapping exercise can be known exercises used for the diagnosis of conditions. For example, the tapping exercises can be those of section 3.4 of the Movement Disorder Society-Unified Parkinson's Disease Rating Scale (“MDS-UPDRS”). In this case, the exercise is a tapping exercise that requires the user to tap their thumb and pointer finger together (FIG. 3B) and then spread them apart as much as and as fast as they can (FIG. 3A).”; see also Roesler: ¶ 51-56 “Thus, for finger tapping exercises that ask the user to tap as quickly as they can, the tracked motion is compared against speed metrics; for finger tapping exercises that ask the user to make the tapping movement as wide as possible, the tracked motion is compared against range of motion metrics, etc. [0055] The consistency metrics can include a slowing down of the pace of the finger taps from one tap to the next across the exercise. [0056] In embodiments where the system 100 implements the tests from section 3.4 of the MDS-UPDRS, the computing device 110 tracks the regularity and smoothness of the rhythm during the tapping exercise (e.g., interruptions or hesitations), the slowing of the pace during the exercise and a change in the amplitude (the range of motion between the fully opened hand and fingers touching, and back) of the movements after the start.”). Response to Arguments Applicant's arguments filed with respect to the rejection of the claims under 35 U.S.C. § 101 have been fully considered but they are not persuasive. The rejection has been updated to address the amendments submitted. Applicant argues that “processing the signal data by the separate models using separate channels to process the signal data and combining an output of the separate channels to identify a feature set, the feature set having a time component and an amplitude component” cannot practically be performed in the human mind, pointing to the “separate models using separate channels to identify a feature set, where the feature set includes a time component and an amplitude component”. Examiner respectfully disagrees.
Claims recite a mental process when they contain limitations that can practically be performed in the human mind, including, for example, observations, evaluations, judgments, and opinions. Examples of claims that recite mental processes include: a claim to "collecting information, analyzing it, and displaying certain results of the collection and analysis," where the data analysis steps are recited at a high level of generality such that they could practically be performed in the human mind, Electric Power Group v. Alstom, S.A., 830 F.3d 1350, 1353-54, 119 USPQ2d 1739, 1741-42 (Fed. Cir. 2016); and a claim to collecting and comparing known information (claim 1), which are steps that can be practically performed in the human mind, Classen Immunotherapies, Inc. v. Biogen IDEC, 659 F.3d 1057, 1067, 100 USPQ2d 1492, 1500 (Fed. Cir. 2011).

Examiner notes that claims 1-11, 14-25, and 27-30 recite a method, device, and non-transitory computer-readable medium comprising: detecting, by one or more sensors of a wearable device worn by a user, user movements of the user, the user movements including one or more hand gestures; translating, by the one or more sensors, the user movements including the one or more hand gestures into signal data; separating and inputting the signal data into separate models, the separate models trained based on user movement data from a population of users using a training algorithm; processing the signal data by the separate models using separate channels to process the signal data and combining an output of the separate channels to identify a feature set, the feature set having a time component and an amplitude component; converting the feature set, by a feature analytics engine processing the time component and the amplitude component, into a score that corresponds to a condition of the user; and determining the condition of the user by the separate models and the feature analytics engine from the user movements detected by the wearable device. These limitations are directed to concepts that are performed mentally and are a product of human mental work. The limitations amount to receiving information in the form of movements, applying algorithms to analyze the information, and generating the result of the analysis in the form of a medical condition. Because the steps involve human judgments, observations, and evaluations that can be practically or reasonably performed in the human mind, the claims recite an abstract idea consistent with the “mental process” grouping set forth in MPEP 2106.04(a)(2)(III). Examiner notes that the claimed invention amounts to a mental process in that the system collects information, analyzes or processes the information to determine whether or not there is a medical condition, and subsequently presents the result of that determination, which is similar to the abstract ideas identified in Electric Power Group and Classen.

Claims can recite a mental process even if they are claimed as being performed on a computer. The Supreme Court recognized this in Benson, determining that a mathematical algorithm for converting binary coded decimal to pure binary within a computer's shift register was an abstract idea. The Court concluded that the algorithm could be performed purely mentally even though the claimed procedures "can be carried out in existing computers long in use, no new machinery being necessary." 409 U.S. at 67, 175 USPQ at 675.
See also Mortgage Grader, 811 F.3d at 1324, 117 USPQ2d at 1699 (concluding that the concept of "anonymous loan shopping" recited in a computer system claim is an abstract idea because it could be "performed by humans without a computer").

In evaluating whether a claim that requires a computer recites a mental process, examiners should carefully consider the broadest reasonable interpretation of the claim in light of the specification. For instance, examiners should review the specification to determine whether the claimed invention is described as a concept that is performed in the human mind and applicant is merely claiming that concept performed 1) on a generic computer, 2) in a computer environment, or 3) merely using a computer as a tool to perform the concept. In these situations, the claim is considered to recite a mental process.

An example of a case identifying a mental process performed on a generic computer as an abstract idea is Voter Verified, Inc. v. Election Systems & Software, LLC, 887 F.3d 1376, 1385, 126 USPQ2d 1498, 1504 (Fed. Cir. 2018). In this case, the Federal Circuit relied upon the specification in explaining that the claimed steps of voting, verifying the vote, and submitting the vote for tabulation are "human cognitive actions" that humans have performed for hundreds of years. The claims therefore recited an abstract idea, despite the fact that the claimed voting steps were performed on a computer. 887 F.3d at 1385, 126 USPQ2d at 1504.

Another example is FairWarning IP, LLC v. Iatric Sys., Inc., 839 F.3d 1089, 120 USPQ2d 1293 (Fed. Cir. 2016). The patentee in FairWarning claimed a system and method of detecting fraud and/or misuse in a computer environment, in which information regarding accesses of a patient's personal health information was analyzed according to one of several rules (i.e., related to accesses in excess of a specific volume, accesses during a pre-determined time interval, or accesses by a specific user) to determine if the activity indicates improper access. 839 F.3d at 1092, 120 USPQ2d at 1294. The court determined that these claims were directed to a mental process of detecting misuse, and that the claimed rules were "the same questions (though perhaps phrased with different words) that humans in analogous situations detecting fraud have asked for decades, if not centuries." 839 F.3d at 1094-95, 120 USPQ2d at 1296.

An example of a case in which a computer was used as a tool to perform a mental process is Mortgage Grader, 811 F.3d at 1324, 117 USPQ2d at 1699. The patentee in Mortgage Grader claimed a computer-implemented system for enabling borrowers to anonymously shop for loan packages offered by a plurality of lenders, comprising a database that stores loan package data from the lenders and a computer system providing an interface and a grading module. The interface prompts a borrower to enter personal information, which the grading module uses to calculate the borrower's credit grading, and allows the borrower to identify and compare loan packages in the database using the credit grading. 811 F.3d at 1318, 117 USPQ2d at 1695. The Federal Circuit determined that these claims were directed to the concept of "anonymous loan shopping", which was a concept that could be "performed by humans without a computer." 811 F.3d at 1324, 117 USPQ2d at 1699.

Another example is Berkheimer v. HP, Inc., 881 F.3d 1360, 125 USPQ2d 1649 (Fed. Cir. 2018),
in which the patentee claimed methods for parsing and evaluating data using a computer processing system. The Federal Circuit determined that these claims were directed to mental processes of parsing and comparing data, because the steps were recited at a high level of generality and merely used computers as a tool to perform the processes. 881 F.3d at 1366, 125 USPQ2d at 1652-53.

Both product claims (e.g., computer system, computer-readable medium, etc.) and process claims may recite mental processes. For example, in Mortgage Grader, the patentee claimed a computer-implemented system and a method for enabling borrowers to anonymously shop for loan packages offered by a plurality of lenders, comprising a database that stores loan package data from the lenders and a computer system providing an interface and a grading module. The Federal Circuit determined that both the computer-implemented system and method claims were directed to "anonymous loan shopping", which was an abstract idea because it could be "performed by humans without a computer." 811 F.3d at 1318, 1324-25, 117 USPQ2d at 1695, 1699-1700. See also FairWarning IP, 839 F.3d at 1092, 120 USPQ2d at 1294 (identifying both system and process claims for detecting improper access of a patient's protected health information in a health-care system computer environment as directed to the abstract idea of detecting fraud); Content Extraction & Transmission LLC v. Wells Fargo Bank, N.A., 776 F.3d 1343, 1345, 113 USPQ2d 1354, 1356 (Fed. Cir. 2014) (system and method claims of inputting information from a hard copy document into a computer program). Accordingly, the phrase "mental processes" should be understood as referring to the type of abstract idea, and not to the statutory category of the claim. Examples of product claims reciting mental processes include: an application program interface for extracting and processing information from a diversity of types of hard copy documents, Content Extraction, 776 F.3d at 1345, 113 USPQ2d at 1356; and a computer readable medium containing program instructions for detecting fraud, CyberSource, 654 F.3d at 1368 n.1, 99 USPQ2d at 1692 n.1.

Examiner notes that the claimed invention is similar to the claims at issue in Voter Verified, FairWarning, Mortgage Grader, Berkheimer, Content Extraction, and CyberSource, wherein the recited computer system or “remote computing device,” “processors,” “memory,” and “electronic sensors” merely serve as a generic computer, computing environment, or tool to perform the mental process or abstract idea.

Applicant argues that the claims cannot recite certain methods of organizing human activity because “processing the signal data by the separate models using separate channels to process the signal data and combining an output of the separate channels to identify a feature set, the feature set having a time component and an amplitude component” in no way relates to “fundamental economic principles or practices (including hedging, insurance, mitigating risk); commercial or legal interactions (including agreements in the form of contracts, legal obligations, advertising, marketing or sales activities or behaviors, and business relations); and managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions)”. Examiner respectfully disagrees.
The sub-grouping "managing personal behavior or relationships or interactions between people" includes social activities, teaching, and following rules or instructions. An example of a claim reciting managing personal behavior is Intellectual Ventures I LLC v. Capital One Bank (USA), 792 F.3d 1363, 115 USPQ2d 1636 (Fed. Cir. 2015). The patentee in this case claimed methods comprising storing user-selected pre-set limits on spending in a database and, when one of the limits is reached, communicating a notification to the user via a device. 792 F.3d at 1367, 115 USPQ2d at 1639-40. The Federal Circuit determined that the claims were directed to the abstract idea of "tracking financial transactions to determine whether they exceed a pre-set spending limit (i.e., budgeting)", which "is not meaningfully different from the ideas found to be abstract in other cases before the Supreme Court and our court involving methods of organizing human activity." 792 F.3d at 1367-68, 115 USPQ2d at 1640. Another example of managing personal behavior recited in a claim is considering historical usage information while inputting data, BSG Tech. LLC v. Buyseasons, Inc., 899 F.3d 1281, 1286, 127 USPQ2d 1688, 1691 (Fed. Cir. 2018).

Examiner notes that claims 1-11, 14-25, and 27-30 recite a method, device, and non-transitory computer-readable medium comprising the limitations set forth in full above, and that the claims are similar to the abstract idea identified in grouping "II" of MPEP 2106.04(a)(2)(II) in that they recite certain methods of organizing human activity, namely managing personal behavior based on tracking movements of the user. The remaining limitations are merely further embellishments of the abstract idea and do not further limit the claimed invention so as to render the claims patentable subject matter. The limitations, substantially comprising the body of the claim, recite processes found in standard medical practice, in which a user is tested for various diseases using equipment meant for measuring specific medical features. This is common practice: when a doctor identifies a potential sign of disease, a test is administered and the result of the test is presented to the tester or doctor. The system receives information related to a user's body movement, processes that movement using specific data such as amplitude and type of gesture, and follows a rule set to inform a monitoring individual of a specific condition determined based on stored rules.
Because the limitations above closely follow the steps standard in managing personal behavior, such as tracking movement of a user to determine a medical condition, and the steps of the claims involve organizing human activity, the claims recite an abstract idea consistent with the “organizing human activity” grouping set forth in MPEP 2106.04(a)(2)(II). Examiner notes that the claimed invention is most similar to the abstract ideas in BSG and Intellectual Ventures in that the system processes historical and tracked information related to a user to output an analysis specific to the user.

Other examples of managing personal behavior recited in a claim include: i. filtering content, BASCOM Global Internet v. AT&T Mobility, LLC, 827 F.3d 1341, 1345-46, 119 USPQ2d 1236, 1239 (Fed. Cir. 2016) (finding that filtering content was an abstract idea under step 2A, but reversing an invalidity judgment of ineligibility due to an inadequate step 2B analysis); ii. considering historical usage information while inputting data, BSG Tech. LLC v. Buyseasons, Inc., 899 F.3d 1281, 1286, 127 USPQ2d 1688, 1691 (Fed. Cir. 2018); and iii. a mental process that a neurologist should follow when testing a patient for nervous system malfunctions, In re Meyer, 688 F.2d 789, 791-93, 215 USPQ 193, 194-96 (CCPA 1982).

Another example of a claim reciting social activities is Interval Licensing LLC v. AOL, Inc., 896 F.3d 1335, 127 USPQ2d 1553 (Fed. Cir. 2018). The social activity at issue was "'providing information to a person without interfering with the person's primary activity.'" 896 F.3d at 1344, 127 USPQ2d 1553 (citing Interval Licensing LLC v. AOL, Inc., 193 F. Supp. 3d 1184, 1188 (W.D. 2014)). The patentee claimed an attention manager for acquiring content from an information source, controlling the timing of the display of acquired content, displaying the content, and acquiring an updated version of the previously-acquired content when the information source updates its content. 896 F.3d at 1339-40, 127 USPQ2d at 1555. The Federal Circuit concluded that "[s]tanding alone, the act of providing someone an additional set of information without disrupting the ongoing provision of an initial set of information is an abstract idea," observing that the district court "pointed to the nontechnical human activity of passing a note to a person who is in the middle of a meeting or conversation as further illustrating the basic, longstanding practice that is the focus of the [patent ineligible] claimed invention." 896 F.3d at 1344-45, 127 USPQ2d at 1559. An example of following rules or instructions recited in a claim is a series of instructions of how to hedge risk, Bilski v. Kappos, 561 U.S. 593, 595, 95 USPQ2d 1001, 1004 (2010).

Examiner notes that the claimed invention is also similar to the judicial exceptions identified in BASCOM Global Internet, BSG Tech. LLC, In re Meyer, Interval Licensing LLC, and Bilski v. Kappos in that the system filters conditions of users for monitoring users, considers historical information while inputting data and presents it to the monitoring user, acquires content from an information source, controls the timing of the display of acquired content, displays the content, and acquires an updated version of the previously-acquired content. Therefore, the claims are directed to an abstract idea in the form of a judicial exception.
Applicant argues that “the technical improvements improve the technical/technology medical field of monitoring, tracking, and assessing human physiological data to allow monitoring in a patients' natural environment to detect changes to a condition or to detect an onset of a condition” and that “processing the signal data by the separate models using separate channels to process the signal data and combining an output of the separate channels to identify a feature set, the feature set having a time component and an amplitude component” is an additional element that integrates any judicial exception into a practical application. Examiner respectfully disagrees.

The second part of the Alice/Mayo test is often referred to as a search for an inventive concept. Alice Corp. Pty. Ltd. v. CLS Bank Int'l, 573 U.S. 208, 217, 110 USPQ2d 1976, 1981 (2014) (citing Mayo Collaborative Servs. v. Prometheus Labs., Inc., 566 U.S. 66, 71-72, 101 USPQ2d 1961, 1966 (2012)). Evaluating additional elements to determine whether they amount to an inventive concept requires considering them both individually and in combination to ensure that they amount to significantly more than the judicial exception itself. Because this approach considers all claim elements, the Supreme Court has noted that "it is consistent with the general rule that patent claims ‘must be considered as a whole.’" Alice Corp., 573 U.S. at 218 n.3, 110 USPQ2d at 1981 (quoting Diamond v. Diehr, 450 U.S. 175, 188, 209 USPQ 1, 8-9 (1981)). Consideration of the elements in combination is particularly important, because even if an additional element does not amount to significantly more on its own, it can still amount to significantly more when considered in combination with the other elements of the claim. See, e.g., Rapid Litig. Mgmt. v. CellzDirect, 827 F.3d 1042, 1051, 119 USPQ2d 1370, 1375 (Fed. Cir. 2016) (process reciting combination of individually well-known freezing and thawing steps was "far from routine and conventional" and thus eligible); BASCOM Global Internet Servs. v. AT&T Mobility LLC, 827 F.3d 1341, 1350, 119 USPQ2d 1236, 1242 (Fed. Cir. 2016) (inventive concept may be found in the non-conventional and non-generic arrangement of components that are individually well-known and conventional).

Limitations that the courts have found not to be enough to qualify as "significantly more" when recited in a claim with a judicial exception include: simply appending well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception, e.g., a claim to an abstract idea requiring no more than a generic computer to perform generic computer functions that are well-understood, routine, and conventional activities previously known to the industry, as discussed in Alice Corp., 573 U.S. at 225, 110 USPQ2d at 1984 (see MPEP § 2106.05(d)); and generally linking the use of the judicial exception to a particular technological environment or field of use, e.g., a claim describing how the abstract idea of hedging could be used in the commodities and energy markets, as discussed in Bilski v. Kappos, 561 U.S. 593, 595, 95 USPQ2d 1001, 1010 (2010), or a claim limiting the use of a mathematical formula to the petrochemical and oil-refining fields, as discussed in Parker v. Flook, 437 U.S. 584, 588-90, 198 USPQ 193, 197-98 (1978) (see MPEP § 2106.05(h)).
It is important to note that in order for a method claim to improve computer functionality, the broadest reasonable interpretation of the claim must be limited to computer implementation. That is, a claim whose entire scope can be performed mentally cannot be said to improve computer technology. Synopsys, Inc. v. Mentor Graphics Corp., 839 F.3d 1138, 120 USPQ2d 1473 (Fed. Cir. 2016) (a method of translating a logic circuit into a hardware component description of a logic circuit was found to be ineligible because the method did not employ a computer and a skilled artisan could perform all the steps mentally). Similarly, a claimed process covering embodiments that can be performed on a computer, as well as embodiments that can be practiced verbally or with a telephone, cannot improve computer technology. See RecogniCorp, LLC v. Nintendo Co., 855 F.3d 1322, 1328, 122 USPQ2d 1377, 1381 (Fed. Cir. 2017) (process for encoding/decoding facial data using image codes assigned to particular facial features held ineligible because the process did not require a computer).

Examples that the courts have indicated may not be sufficient to show an improvement in computer functionality include: accelerating a process of analyzing audit log data when the increased speed comes solely from the capabilities of a general-purpose computer, FairWarning IP, LLC v. Iatric Sys., 839 F.3d 1089, 1095, 120 USPQ2d 1293, 1296 (Fed. Cir. 2016); mere automation of manual processes, such as using a generic computer to process an application for financing a purchase, Credit Acceptance Corp. v. Westlake Services, 859 F.3d 1044, 1055, 123 USPQ2d 1100, 1108-09 (Fed. Cir. 2017), or speeding up a loan-application process by enabling borrowers to avoid physically going to or calling each lender and filling out a loan application, LendingTree, LLC v. Zillow, Inc., 656 Fed. App'x 991, 996-97 (Fed. Cir. 2016) (non-precedential); and providing historical usage information to users while they are inputting data, in order to improve the quality and organization of information added to a database, because "an improvement to the information stored by a database is not equivalent to an improvement in the database's functionality," BSG Tech LLC v. Buyseasons, Inc., 899 F.3d 1281, 1287-88, 127 USPQ2d 1688, 1693-94 (Fed. Cir. 2018).

To show that the involvement of a computer assists in improving the technology, the claims must recite the details regarding how the computer aids the method, the extent to which the computer aids the method, or the significance of the computer to the performance of the method. Merely adding generic computer components to perform the method is not sufficient. Thus, the claim must include more than mere instructions to perform the method on a generic component or machinery to qualify as an improvement to an existing technology. See MPEP § 2106.05(f) for more information about mere instructions to apply an exception. Examples that the courts have indicated may not be sufficient to show an improvement to technology include a commonplace business method being applied on a general purpose computer, Alice Corp., 573 U.S. at 223, 110 USPQ2d at 1976; Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015).

The instant application fails to integrate the judicial exception into a practical application because it merely recites the words "apply it" (or an equivalent) with the judicial exception, or merely includes instructions to implement an abstract idea.
The instant application is directed to a method instructing the reader to implement the identified method of organizing human activity of managing personal behavior (i.e., a mental process that a neurologist should follow when testing a patient for nervous system malfunctions, In re Meyer, 688 F.2d 789, 791-93, 215 USPQ 193, 194-96 (CCPA 1982)) on generically claimed computer structure. For instance, the additional elements or combination of elements other than the abstract idea itself include elements such as a “processor” or “memory” recited at a high level of generality. These elements do not themselves amount to an improvement to the interface or computer, or to a technology or another technical field. This is consistent with Applicant's disclosure, which states that the computing device comprises “a computing system that includes a back end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front end component (e.g., a client computer having a graphical user interface or a Web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back end, middleware, or front end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network (“LAN”), a wide area network (“WAN”), and the Internet”. (App. Spec. ¶ 105; see also ¶ 102: “Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, specially designed ASICs (application specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various implementations can include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, coupled to receive data and instructions from, and to transmit data and instructions to, a storage system, at least one input device, and at least one output device.”).

Accordingly, the claimed “system,” read in light of the specification, employs any of a wide range of possible devices comprising components that are well-known and included in a generic “computer,” “processor,” “database,” or “memory” (e.g., processing device, modules). Thus, the claimed structure amounts to appending generic computer elements to the abstract idea comprising the body of the claim. The computing elements are involved only at a general, high level and play no particular role within any of the functions beyond carrying out a computer-implemented method using a generically claimed “processor” and “memory”; even basic, generic recitations that imply use of a computer, such as storing information via servers, would add little if anything to the abstract idea. Examiner notes that the claimed invention is more like the implementations of computer elements found in FairWarning IP, LLC, Credit Acceptance Corp. v. Westlake Services, LendingTree, LLC v. Zillow, Inc., BSG Tech LLC v. Buyseasons, Inc., Alice Corp., and Versata Dev. Group, Inc., and fails to implement a technical improvement, a practical application, or significantly more than the abstract idea. The claims stand rejected under 35 U.S.C. § 101.
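An editorial aside before the § 103 discussion: the claim limitation the § 101 analysis turns on (separate models on separate channels whose outputs are combined into a feature set with time and amplitude components, converted by a feature analytics engine into a condition score) can be pictured as a short pipeline. The following is a minimal sketch reconstructed from the claim language alone; the application discloses no code, and all names, data shapes, and the linear scoring are hypothetical.

```python
import numpy as np

def process_channels(signal_data, channel_models):
    """Run the same signal data through separate models on separate
    channels, then combine their outputs into one feature set with a
    time component and an amplitude component (illustrative only)."""
    outputs = [model(signal_data) for model in channel_models]
    return {
        "time": np.concatenate([out["time"] for out in outputs]),
        "amplitude": np.concatenate([out["amplitude"] for out in outputs]),
    }

def feature_analytics_engine(features, w_time, w_amplitude, bias=0.0):
    """Hypothetical engine converting the time and amplitude components
    into a single score that corresponds to a condition of the user."""
    return float(features["time"] @ w_time
                 + features["amplitude"] @ w_amplitude + bias)

# Toy usage with two made-up channel models (pure illustration):
fast = lambda s: {"time": np.diff(s["t"]), "amplitude": np.abs(s["x"])}
slow = lambda s: {"time": np.diff(s["t"][::2]), "amplitude": np.abs(s["x"][::2])}
signal = {"t": np.linspace(0, 1, 6), "x": np.array([0.1, 0.9, 0.2, 0.8, 0.3, 0.7])}
feats = process_channels(signal, [fast, slow])
score = feature_analytics_engine(feats,
                                 np.ones_like(feats["time"]),
                                 np.ones_like(feats["amplitude"]))
```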
Claim Rejections - 35 USC § 103

Applicant's arguments filed with respect to the rejection of the claims under 35 U.S.C. § 103 have been fully considered and found unpersuasive. Applicant argues that the combination does not teach “”. Examiner respectfully submits that the Chen reference teaches determining time and amplitude related to movements and gestures of the user (see at least Chen: ¶ 98-99 “from the first sensor data collected at operation 601 (e.g., shape of touch event, peak force amplitude, force amplitude difference between adjacent peak and trough, length of touch area (e.g., length of user's digit print detected on trackpad region), width of touch area (e.g., width of user's digit print detected on trackpad region), ratio of length of touch area to width of touch area, touch area of user's digits in a keyboard area (e.g., region 334) versus a hybrid area (e.g., region 339), any suitable force data, force applied by event, plot of force applied by event over time”; see also Chen: ¶ 101 (reciting substantially the same signal characteristics extracted from the third sensor data collected at operation 701); see also Chen: ¶ 25 “One or more models may be trained and then used for distinguishing between a thumb user input gesture or a finger user input gesture, such as by using any suitable user input gesture touch and/or location data and/or any suitable user input gesture force data that may be sensed by any suitable sensor assembly”; see also Chen: ¶ 99 “any suitable sensor data, including touch sensor data and/or force sensor data and/or the like, can be collected while the user performs various user input gestures, for example, to train a gesture detection algorithm”; see also Chen: ¶ 100 “the device or any suitable training system can assign each cluster to one of the user input gestures as part of the training process”; see also Chen: ¶ 101 and 105-107 “A learning engine or user input gesture model for an experiencing entity may be trained (e.g., at operation 806 of process 800 of FIG. 8) using the received sensor category data for the gesture (e.g., as inputs of a neural network of the learning engine) and using the received score for the gesture (e.g., as an output of the neural network of the learning engine)”). The claims stand rejected.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL C YOUNG, whose telephone number is (571) 272-1882. The examiner can normally be reached M-F, 7:00 a.m. to 3:00 p.m. EST. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Nate Uber, can be reached at (571) 270-3923. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Michael Young/
Examiner, Art Unit 3626
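An editorial note on the Chen citation in the § 103 discussion above: the signal characteristics the Examiner quotes (peak force amplitude, force difference between peak and trough, touch-area length-to-width ratio, force over time) reduce to a few array operations over a sampled force trace. A minimal sketch follows, assuming a sampled trace and measured touch dimensions; Chen discloses no code, and every name here is hypothetical.

```python
import numpy as np

def touch_event_features(force_trace, touch_length, touch_width):
    """Illustrative extraction of Chen-style signal characteristics
    from one touch event (hypothetical; not Chen's implementation)."""
    f = np.asarray(force_trace, dtype=float)
    peak = float(f.max())
    trough = float(f.min())
    return {
        "peak_force_amplitude": peak,
        # force amplitude difference, here simplified to global peak minus global trough
        "peak_trough_delta": peak - trough,
        # ratio of length of touch area to width of touch area
        "touch_aspect_ratio": touch_length / touch_width,
        # force applied by the event over time, kept as the raw series
        "force_over_time": f,
    }

# Toy usage with invented readings:
features = touch_event_features([0.1, 0.8, 0.3, 0.9, 0.2],
                                touch_length=14.0, touch_width=9.0)
```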

Prosecution Timeline

Aug 22, 2022
Application Filed
Jul 18, 2024
Non-Final Rejection — §101, §103
Sep 24, 2024
Interview Requested
Oct 11, 2024
Applicant Interview (Telephonic)
Oct 22, 2024
Response Filed
Nov 18, 2024
Examiner Interview Summary
Jan 25, 2025
Final Rejection — §101, §103
Apr 16, 2025
Interview Requested
Apr 30, 2025
Request for Continued Examination
May 01, 2025
Response after Non-Final Action
Aug 09, 2025
Non-Final Rejection — §101, §103
Oct 22, 2025
Interview Requested
Nov 12, 2025
Response Filed
Jan 07, 2026
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 8813663
SEEDING MACHINE WITH SEED DELIVERY SYSTEM
Granted Aug 26, 2014 (2y 5m to grant)

Patent (number unavailable)
Interconnection module of the ornamental electrical molding
Granted

Patent (number unavailable)
SYSTEMS AND METHODS FOR ENTITY SPECIFIC, DATA CAPTURE AND EXCHANGE OVER A NETWORK
Granted

Patent (number unavailable)
Systems and Methods for Performing Workflow
Granted

Patent (number unavailable)
DISTRIBUTED LEDGER PROTOCOL TO INCENTIVIZE TRANSACTIONAL AND NON-TRANSACTIONAL COMMERCE
Granted
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

5-6
Expected OA Rounds
4%
Grant Probability
5%
With Interview (+1.5%)
1y 1m
Median Time to Grant
High
PTA Risk
Based on 142 resolved cases by this examiner. Grant probability derived from career allow rate.
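
The projection arithmetic above can be checked directly: 5 grants out of 142 resolved cases is about 3.5%, which the page appears to round to the 4% shown, and adding the +1.5-point interview lift gives the ~5% figure. A one-line verification, with the figures taken from this page and the rounding assumed:

```python
granted, resolved = 5, 142        # examiner's career figures shown on this page
base = granted / resolved         # ~0.035, displayed (rounded) as 4%
with_interview = base + 0.015     # +1.5-point interview lift -> ~5%
print(f"{base:.1%} base, {with_interview:.1%} with interview")
# prints: 3.5% base, 5.0% with interview
```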
