DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 18 February 2026 has been entered.
Response to Amendment
The amendments filed 18 February 2026 have been entered. Claims 1-21 are pending.
Claim Objections
Claim 12 is objected to because of the following informalities: "bass between" should be --pass between-- in line 13 of the claim. Appropriate correction is required.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Utilizing the two-step framework adopted by the Supreme Court (Alice Corp. v. CLS Bank Int'l, 110 USPQ2d 1976 (2014)) and the 2019 subject matter eligibility guidance (Federal Register Vol. 84, Jan. 2019), the determination of subject matter eligibility under 35 U.S.C. 101 is as follows. Step 1 requires that the claim belong to one of the four statutory categories (process, machine, manufacture, or composition of matter). If Step 1 is satisfied, then in the first part of Step 2A (Prong One), any judicially recognized exceptions recited in the claim are identified. If any limitation in the claim is identified as a judicially recognized exception, then in the second part of Step 2A (Prong Two), a determination is made whether the identified judicial exception is integrated into a practical application. If the identified judicial exception is not integrated into a practical application, then in Step 2B the claim is further evaluated to determine whether the additional elements, individually and in combination, provide an "inventive concept" that amounts to significantly more than the judicial exception. If the elements and combination of elements do not amount to significantly more than the judicially recognized exception itself, the claim is ineligible under 35 U.S.C. 101.
Claims 1-21 are rejected under 35 U.S.C. 101.
Claim 1 is rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception, in this case an abstract idea, without significantly more. The claim recites "determining, by the head-worn device, a value of a food consumption parameter of the user based in part on the tracked movement of the hand and the monitored movement of the jaw". This judicial exception is not integrated into a practical application, and the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
Claim 1 satisfies Step 1: the claim is directed to one of the four statutory categories, namely a process (method). Under Step 2A, Prong One, any judicial exceptions recited in the claim are identified. In claim 1, the limitation "determining, by the head-worn device, a value of a food consumption parameter of the user based in part on the tracked movement of the hand and the monitored movement of the jaw" is an abstract idea because it is directed to a mental process. With an abstract idea identified, the analysis proceeds to Step 2A, Prong Two, in which the additional elements, considered individually and with the claim as a whole, are evaluated to determine whether the identified abstract idea is integrated into a practical application.
In Step 2A, Prong Two, the claim does not recite any additional elements that integrate the judicial exception into a practical application. Besides the abstract idea, the claim recites the additional elements “detecting, by a motion sensor of a wearable device worn on a hand or a corresponding wrist of a user, a hand movement of the user; in response to detecting the hand movement of a user, tracking, based on measuring by a head-worn device worn by the user, a time for ultrawideband (UWB) signals to pass between a head-worn device UWB interface of the head-worn device and a wearable device UWB interface of the wearable device, a relative position between the head-worn device and the wearable device based on movement of the hand of the user relative to a head of the user, wherein all UWB interfaces involved in tracking the relative position are disposed on mobile devices worn by the user; monitoring, by the head-worn device, movement of a jaw of the user using a contact microphone coupled to the head-worn device, wherein the contact microphone is configured to detect tissue-based vibrations caused by jaw movement of the user”. However, these components amount to the use of well-understood, routine, or conventional elements to perform a non-mental process in order to gather data for the mental-process step, much like the example given in MPEP 2106.04(d)(2)(c); these limitations are therefore extra-solution activity and do not integrate the judicial exception into a practical application. The detecting, tracking, and monitoring steps lead to the final “determining” limitation, such that the end result of using the system is only a generic determined indicator, which may be any generic output, or no output at all. Because this determination is not defined as requiring any further action, such as a form of prophylaxis or treatment or an improvement to a computer or other technology, the claim limitations constitute mere generation of data, in this case the measurement of data relating to movement of the hand or wrist and movement of the jaw, and the claim does not integrate the judicial exception into any practical application. Under the broadest reasonable interpretation, the claim elements are recited at such a high level of generality (as written, each claimed step of the process may be performed by a person in an undefined manner) that there are no meaningful limitations on the abstract idea. Consequently, with the identified abstract idea not integrated into a practical application, the analysis proceeds to Step 2B: evaluating whether the additional elements provide an "inventive concept" that amounts to significantly more than the abstract idea.
In Step 2B, claim 1 does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The limitation of “detecting, by a motion sensor of a wearable device worn on a hand or a corresponding wrist of a user, a hand movement of the user; in response to detecting the hand movement of a user, tracking, based on measuring by a head-worn device worn by the user, a time for ultrawideband (UWB) signals to pass between a head-worn device UWB interface of the head-worn device and a wearable device UWB interface of the wearable device, a relative position between the head-worn device and the wearable device based on movement of the hand of the user relative to a head of the user, wherein all UWB interfaces involved in tracking the relative position are disposed on mobile devices worn by the user; monitoring, by the head-worn device, movement of a jaw of the user using a contact microphone coupled to the head-worn device, wherein the contact microphone is configured to detect tissue-based vibrations caused by jaw movement of the user” constitutes extra-solution activity appended to the judicial exception, which does not amount to an inventive concept when the activity is well-understood, routine, or conventional, and is thus not indicative of integration into a practical application. The claim limitation amounts to adding a generic wearable sensor and contact microphone, which Connor (US 20210249116 A1) describes as well-understood, routine, or conventional in its description of the state of the art, including various wearable sensors and microphones (Paragraphs 0009-0026). Should the applicant argue that the use of wearable UWB devices is not the same as a generic sensor for tracking the movements of a user, Qi (“A Novel Approach to Joint Flexion/Extension Angles Measurement Based on Wearable UWB Radios”) additionally establishes that body-worn UWB interfaces are well-understood, routine, or conventional in stating that “Ultrawideband (UWB) radio is an emerging technology that has attracted significant interest in recent years due to its high data rate transmission, robustness to fading, security, low loss penetration, low-power spectral density, multiple access, and scalability feasibility” (Page 301). As discussed above with respect to integration of the abstract idea into a practical application, the present elements amount to no more than mere instructions to apply the exception.
In summary, claim 1 recites an abstract idea that is not integrated into a practical application and does not provide additional elements that would amount to significantly more. As such, taken as a whole, the claim is ineligible under 35 U.S.C. 101.
Claim 12 is rejected under 35 U.S.C. 101 for similar reasons.
Claims 2-11 and 21 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception, in this case an abstract idea, without significantly more. As each of these claims depends from claim 1, which was rejected under 35 U.S.C. 101 in paragraph 5 of this action, these claims must be evaluated as to whether they sufficiently add to the practical application of claim 1 or comprise significantly more than the limitations of claim 1.
Besides the abstract idea of claim 1, claims 2-4 and 6-11 recite limitations which contain further abstract ideas. Claim 5 recites the additional element of a camera, which may be seen as extra-solution activity that does not amount to an inventive concept when the activity is well-understood, routine, or conventional, and which is disclosed as such by Connor's description of the state of the art (cited above). Claim 21 recites the additional element of a display, which may likewise be seen as well-understood, routine, or conventional extra-solution activity, disclosed as such by Connor (cited above, see paragraphs 0011-0017), while the updating of a parameter may additionally be seen as a mental process (e.g., by taking in new information, one may update a determination). The method of claim 1 is recited at such a high level of generality (as written, the determining step may be carried out by a person alone or with a generic computer in any undefined manner) that these limitations provide no practical application, nor any meaningful limitation on the abstract idea.
Claims 13-20 are rejected under 35 U.S.C. 101 for similar reasons.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1, 5, 9, 12 and 16 is/are rejected under 35 U.S.C. 103 as being unpatentable over Connor (US 20210249116 A1) in view of Qi ("A Novel Approach to Joint Flexion/Extension Angles Measurement Based on Wearable UWB Radios").
Regarding claims 1 and 12, Connor discloses a method (see method of using system shown in any of Figs. 12-16) comprising:
detecting, by a motion sensor of a wearable device worn on a hand or a corresponding wrist of a user (Figs. 12-16, motion sensor 1206/1306/1406/1506/1606 on smart watch wrist band 1205/1305/1405/1505/1605), a hand movement of the user (paragraph 0052-0053, 0090, 0096, 0103, 0109, 0116--a wrist-worn motion sensor can detect a pattern of hand and/or arm motion which is associated with food consumption);
in response to detecting the hand movement of a user (Paragraph 0090, 0096, 0103, 0109, 0116-- the camera is activated to record food images when data from the chewing sensor and/or data from the motion sensor indicate that the person is eating), tracking, by a head-worn device worn by the user (Figs. 12-16, eyewear frame 1201/1301/1401/1501/1601), a relative position between the head-worn device and the wearable device, based on movement of the hand of the user relative to a head of the user (Paragraph 0051, 0053, 0055, 0057, 0077, 0091, 0097, 0104, 0110, 0117-- In an example, the focal direction of a camera can be changed automatically to track a person's hands… an imaging member can automatically start taking pictures and/or recording images when data from a wrist-worn motion sensor shows a pattern of hand and/or arm motion which is generally associated with food consumption) wherein all sensing devices involved in tracking the relative position are disposed on mobile devices worn by the user (paragraph 0052-0053, 0090, 0096, 0103, 0109, 0116--a wrist-worn motion sensor; Paragraph 0090, 0096, 0103, 0109, 0116-- the camera…chewing sensor…);
monitoring, by the head-worn device, movement of a jaw of the user using a contact microphone (Figs. 12-16—chewing sensor 1203/1303/1403/1503/1603) coupled to the head-worn device, wherein the contact microphone is configured to detect tissue-based vibrations caused by jaw movement of the user (paragraph 0058, 0092, 0099, 0105, 0112, 0118—a chewing sensor can be a microphone or other sonic energy sensor which detects chewing and/or swallowing sounds during eating…; paragraph 0090, 0096, 0103, 0109, 0116-- chewing sensor on the eyewear frame which detects when the person eats); and
determining, by the head-worn device (Paragraph 0028, 0048-0049—analysis occurs using the wearable system), a value of a food consumption parameter (paragraph 0090, 0096, 0103, 0109, 0116—detects when the person eats…; paragraph 0027-- how much food the person eats…measuring food consumption) of the user based in part on the tracked movement of the hand and the monitored movement of the jaw (paragraph 0090, 0096, 0103, 0109, 0116-- joint analysis of data from the chewing sensor and data from the motion sensor can provide more accurate detection of eating than data from either sensor alone or separate analysis of data from both sensors);
and a system (system shown in any of Figs. 12-16) comprising:
a head-worn device configured to be worn on a head of a user (Figs. 12-16, eyewear frame 1201/1301/1401/1501/1601) comprising a contact microphone configured to detect tissue-based vibrations caused by jaw movement of the user (Figs. 12-16—chewing sensor 1203/1303/1403/1503/1603; paragraph 0058, 0090, 0092, 0096, 0099, 0103, 0105, 0109, 0112, 0116, 0118) and a first interface (Paragraph 0053--In an example, electronically-functional eyewear can be in wireless communication with a motion sensor which is worn on a person's wrist, finger, hand, or arm); and
a wearable device configured to be worn on a wrist or a hand of the user (Figs. 12-16, motion sensor 1206/1306/1406/1506/1606 on smart watch wrist band 1205/1305/1405/1505/1605), comprising a second interface configured to communicate with the head-worn device over a communication channel (Paragraph 0053--In an example, electronically-functional eyewear can be in wireless communication with a motion sensor which is worn on a person's wrist, finger, hand, or arm),
the wearable device comprising a motion sensor (Fig. 12-16, motion sensor 1206/1306/1406/1506/1606 on smart watch wrist band 1205/1305/1405/1505/1605) configured to detect a hand movement of the user (paragraph 0052-0053, 0090, 0096, 103, 109, 0116--a wrist-worn motion sensor can detect a pattern of hand and/or arm motion which is associated with food consumption);
wherein the head-worn device is configured to:
in response to detecting hand movement of the user (Paragraph 0090, 0096, 0103, 0109, 0116-- the camera is activated to record food images when data from the chewing sensor and/or data from the motion sensor indicate that the person is eating; paragraph 0095, 0102, 0108, 0115--The example shown in this figure shows how the output of one type of sensor can be used to trigger operation of another type of sensor. For example, a relatively less-intrusive sensor (such as a motion sensor) can be used to continually monitor and this less-intrusive sensor may trigger operation of a more-intrusive sensor (such as an imaging sensor) only when probable food consumption is detected by the less-intrusive sensor.), track movement of the hand of the user relative to the head of the user based on the communication transmitted or received from the wearable device over the communication channel (paragraph 0052-0053, 0090, 0096, 0103, 0109, 0116--a wrist-worn motion sensor can detect a pattern of hand and/or arm motion which is associated with food consumption; Paragraph 0051, 0053, 0055, 0057, 0077, 0091, 0097, 0104, 0110, 0117-- In an example, the focal direction of a camera can be changed automatically to track a person's hands… an imaging member can automatically start taking pictures and/or recording images when data from a wrist-worn motion sensor shows a pattern of hand and/or arm motion which is generally associated with food consumption), wherein all interfaces involved in tracking the relative position are disposed on mobile devices worn by the user (paragraph 0052-0053, 0090, 0096, 0103, 0109, 0116--a wrist-worn motion sensor; Paragraph 0090, 0096, 0103, 0109, 0116-- the camera…chewing sensor…);
monitor movement of a jaw of the user using the contact microphone (Fig. 12-16—chewing sensor 1203/1303/1403/1503/1603) coupled to the headset (paragraph 0058, 0092, 0099, 0105, 0112, 0118—a chewing sensor can be a microphone or other sonic energy sensor which detects chewing and/or swallowing sounds during eating…; paragraph 0090, 0096, 0103, 0109, 0116-- chewing sensor on the eyewear frame which detects when the person eats); and
determine a value of a food consumption parameter (paragraph 0090, 0096, 0103, 0109, 0116—detects when the person eats…; paragraph 0027-- how much food the person eats…measuring food consumption) of the target user based in part on the tracked movement of the hand and the monitored movement of the jaw (paragraph 0090, 0096, 0103, 0109, 0116-- joint analysis of data from the chewing sensor and data from the motion sensor can provide more accurate detection of eating than data from either sensor alone or separate analysis of data from both sensors).
However, Connor fails to explicitly disclose tracking movement of a hand of a target user relative to a head of the target user based on measuring a time for an ultrawideband signal to pass between a first UWB interface and a second UWB interface and where all UWB interfaces involved in tracking the relative position are disposed on mobile devices worn by the user.
Qi, in the same field of endeavor of a system for tracking human motion by monitoring proximity between different parts of a body, teaches a system including UWB-based measurement between two parts of a body, where all UWB interfaces involved in tracking are worn on the user (See Figs. 1 and 2—two UWB transceivers may be worn on a user’s body to monitor the user’s movements) and wherein movement is tracked based on measuring a time for signals to pass between UWB interfaces (Page 301, II Biomechanics of Human Movement and System Description-- ranging data are collected between different nodes during segment’s movement through the estimation of the propagation delay between transmitter and receiver. Time-of-arrival (TOA) of the first arrival path is the most commonly used distance estimation method.). Qi additionally teaches that UWB-based systems have advantages over other means of tracking (Page 301-- Ultrawideband (UWB) radio is an emerging technology that has attracted significant interest in recent years due to its high data rate transmission, robustness to fading, security, low loss penetration, low-power spectral density, multiple access, and scalability feasibility [18]. Particularly, wearable UWB radios are good candidates for human motion tracking, since they can provide high ranging and positioning accuracies and offer low power consumption and robust performance in multipath environment).
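For illustration only, the time-of-arrival distance estimation described by Qi may be sketched as follows; this is a minimal example assuming synchronized clocks between the two nodes, and the function name and values are illustrative rather than taken from either reference:

    # Illustrative sketch of time-of-arrival (TOA) UWB ranging: node separation
    # is estimated from the propagation delay of the first arrival path
    # (assumes the two body-worn UWB nodes share a synchronized clock).
    C = 299_792_458.0  # speed of light, m/s

    def toa_distance(t_transmit_s, t_arrive_s):
        # Distance = propagation speed x one-way propagation delay.
        return C * (t_arrive_s - t_transmit_s)

    # Example: a 2 ns delay corresponds to roughly 0.6 m of wrist-to-head separation.
    print(toa_distance(0.0, 2.0e-9))  # ~0.5996 m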
As a result, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Connor to utilize UWB interfaces for monitoring relative position and movement, as disclosed by Qi, as a simple substitution for Connor's particular relative-movement tracking. Such UWB interfaces may be placed on the head-worn device and wrist-worn device to monitor the relative position between the two devices. Both means of movement tracking are known in the art, and the modification would additionally and predictably improve the motion tracking system by reducing power consumption while maintaining high accuracy and data rate transmission.
It is additionally noted that Connor discloses activating a sensor used to determine the relative position of the wearable device and the head-worn device based on the output of another sensor (such as the motion sensor of the wearable device). The substitution of the UWB-based distance measuring of Qi for the camera-based hand tracking of Connor would therefore additionally support activating a UWB system based on the output of the motion sensor, which would predictably improve the power consumption of the device by operating only a single sensor until a movement of interest is detected, at which point additional sensor activation is triggered to more carefully monitor the movement.
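A hedged sketch of the triggering arrangement just discussed follows; the function, threshold, and sample values are hypothetical and offered only to illustrate gating a higher-power sensor on the output of a lower-power one:

    # Illustrative gating logic: a low-power wrist motion sensor runs
    # continuously, and a detected movement of interest activates the
    # higher-power UWB ranging session. The threshold is a made-up value.
    def should_activate_uwb(accel_magnitude_g, threshold_g=1.5):
        return accel_magnitude_g > threshold_g

    for sample in (0.9, 1.1, 1.8):          # simulated accelerometer magnitudes
        if should_activate_uwb(sample):
            print("activating UWB ranging")  # stand-in for sensor power-up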
Regarding claims 5 and 16, the combination of Connor and Qi discloses the method and system of claims 1 and 12. Connor additionally discloses monitoring a food object or a drink object consumed by the user using a camera coupled to the head-worn device (Camera 1202/1302/1402/1502/1602; paragraph 0090, 0096, 0103, 0109, 0116—camera on the eyewear frame which records food images when activated).
Regarding claim 9, the combination of Connor and Qi discloses the method and system of claims 1 and 12. Connor additionally discloses wherein determining the value of the food consumption parameter based in part on the tracked movement of the hand is performed by the head-worn device (Paragraph 0028, 0048-0049—analysis occurs using the wearable system).
Claim(s) 2-4 and 13-15 is/are rejected under 35 U.S.C. 103 as being unpatentable over Connor in view of Qi, further in view of Oztaskent (US 20220409134 A1).
Regarding claims 2 and 13, the combination of Connor and Qi discloses the method and system of claims 1 and 12. Connor additionally teaches determining the value of the food consumption parameter of the user comprises accessing a dataset containing (1) tracked movement of hands of users relative to heads of the corresponding users and applying the tracked movement of the hand of the user relative to the head of the user to determine the value of the food consumption parameter of the user (Paragraph 0052-0053--a wrist-worn motion sensor can detect a pattern of hand and/or arm motion which is associated with food consumption).
However, Connor does not explicitly disclose accessing a machine-learning model trained on a dataset containing (2) monitored jaw movements of the users and (3) values of the food consumption parameter of the users; and applying the tracked movement of the hand of the target user relative to the head of the user, and the monitored movement of the jaw of the target user, to the machine-learning model to determine the value of the food consumption parameter of the target user.
Oztaskent, in the same field of endeavor of a wearable system for monitoring a user, including food and drink consumption by the user (Paragraphs 0017-0019), discloses
wherein determining the value of the food consumption parameter of the user comprises:
accessing a machine-learning model trained on a dataset containing (2) monitored jaw movements of the users (Paragraph 0019, 0034-- trained machine learning modules which can be used to determine whether a sensor is present or included on a user device, to detect information related to voluntary user actions (e.g. chewing, eating, speaking, swallowing, drinking), breathing patterns, “first-stage” detections (e.g. classifying or categorizing a signal as relating to a particular activity, such as eating, coughing, speaking, swallowing) and “second-stage” detections (e.g. combining multiple signals or inputs from different sensors, detecting the amount of food consumed, amount of liquid consumed, the depth of breathing, the number of times swallowed, how rapidly a user is chewing), and (3) values of the food consumption parameter of the users (paragraph 0051-- user device 291 can be used to record information which can be used to train a machine learning algorithm or machine learning model. For example, a user may track his or her food consumption (e.g. when the user eats, the estimated composition of food consumption, or amount of food eaten) or amount of liquid consumption); and
applying the tracked movement of the hand of the target user relative to the head of the user, and the monitored movement of the jaw of the target user to the machine-learning model to determine the value of the food consumption parameter of the target user (Paragraph 0022-0023, 0034-- signals collected via the earbud can be used to estimate an amount or type of liquid consumed by a user. For example, a trained machine learning model can be used based on information provided by the user to a mobile application which records the amount and type of liquid consumed. The trained machine learning model can take as inputs the type of liquid consumed, available health sensor data, and signals obtained from the earbud to train a machine learning model. The trained machine learning model can then be used to classify or detect the amount of liquid consumed… a trained machine learning model can be used to detect a quantity of food being consumed or the constituent components of the food being consumed. For example, a trained machine learning model can be used which can detect the time over which food was being consumed, the quantity of food consumed, or the relative proportions of carbohydrates, proteins, and fats contained in the food consumed).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the system and method of Connor to include a machine-learning model trained on a dataset containing (2) monitored jaw movements of the users and (3) values of the food consumption parameter of the users, and to apply the tracked movement of the hand of the target user relative to the head of the user, and the monitored movement of the jaw of the target user, to the machine-learning model to determine the value of the food consumption parameter of the target user, as disclosed by Oztaskent, in order to predictably improve the accuracy and efficiency of the system by allowing analysis of patterns of all of the measured values against a dataset. This is further supported by Connor's disclosure of the system performing a joint analysis of wrist and jaw movements to promote greater accuracy of the determination (paragraph 0090, 0096, 0103, 0109, 0116-- joint analysis of data from the chewing sensor and data from the motion sensor can provide more accurate detection of eating than data from either sensor alone or separate analysis of data from both sensors).
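By way of a hedged illustration of the kind of supervised model Oztaskent describes, the following sketch trains a classifier on hand-movement and jaw-movement features against reported consumption labels; the feature names, values, and the use of scikit-learn are assumptions made for the sketch, not taken from the reference:

    # Illustrative model trained on (1) hand-movement features, (2) jaw-movement
    # features, and (3) user-reported food consumption labels; values are made up.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    # Hypothetical rows: [mean wrist-to-head distance (m), hand raises/min, chews/min]
    X_train = np.array([[0.45, 6.0, 38.0],
                        [0.70, 0.5,  2.0],
                        [0.50, 8.0, 45.0],
                        [0.68, 1.0,  4.0]])
    y_train = ["eating", "not eating", "eating", "not eating"]

    model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)
    print(model.predict([[0.48, 7.0, 40.0]]))  # -> ['eating']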
Regarding claims 3 and 14, the combination of Connor, Qi, and Oztaskent discloses the method and system of claims 2 and 13. Oztaskent further discloses wherein determining the value of the food consumption parameter of the user further comprises: identifying a pattern among a plurality of patterns of jaw movements of the users, the plurality of patterns corresponding to at least one of chewing, drinking, or choking (Paragraph 0019-- through analysis of a signal (e.g. sound or air pressure), the opening and closing of an eustachian tube can be detected and related to certain actions, such as swallowing, chewing, yawning, or sneezing. In some examples, certain signatures or classifications associated with an action can be used to classify a signal detected through an earbud. For example, the opening and closing of the eustachian tube can be used to identify a particular type of action. This information can be included or used in training a machine learning model; paragraph 0020-- In some examples, models or algorithms using multi-stage detection can be used. For example, in some examples, a first stage event such as a sneezing, swallowing, chewing, or coughing can be detected using a first model and a second model can be used to interpret the event with more detail, such as to track illness, allergies, choking, or to potentially initiate an emergency response; paragraph 0082-0084-- the first model can be a trained machine learning model which can classify the sensor data into a particular category, such as “chewing,” “speaking,” “yawning” or “breathing.”… sensor data can be analyzed with a second model. In some examples, the second model can be a trained machine learning model. In some examples, the second model can be selected based on the output or classification of the first model. For example, if the first model suspects that the user is drinking, the second model selected can be a “drinking” model which can estimate the amount of liquid consumed…).
Regarding claims 4 and 15, the combination of Connor, Qi, and Oztaskent discloses the method and system of claims 3 and 14. Oztaskent further discloses detecting choking of the user based on the identified pattern (paragraph 0019-0020, 0082-0083-- In some examples, models or algorithms using multi-stage detection can be used. For example, in some examples, a first stage event such as a sneezing, swallowing, chewing, or coughing can be detected using a first model and a second model can be used to interpret the event with more detail, such as to track illness, allergies, choking, or to potentially initiate an emergency response); and
responsive to detecting choking of the user, sending an alert to another device (Paragraph 0084--if a “choking” condition is detected by the first model, an emergency response can be initiated whereby emergency services are contacted from a user device notifying them of the user's condition).
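A minimal sketch of the multi-stage detection and alert flow described by Oztaskent follows; the classification rule, feature names, and alert mechanism are hypothetical placeholders standing in for the trained models and devices the reference contemplates:

    # First stage: classify the jaw-movement pattern into a coarse category;
    # second stage: a "choking" classification triggers an alert to another device.
    def classify_jaw_pattern(features):
        # Hypothetical rule standing in for a trained first-stage model.
        if features["vibration_rate_hz"] > 6.0 and features["swallow_rate_hz"] < 0.1:
            return "choking"
        return "chewing" if features["vibration_rate_hz"] > 1.0 else "drinking"

    def send_alert(device, message):
        print(f"ALERT -> {device}: {message}")  # stand-in for a network call

    if classify_jaw_pattern({"vibration_rate_hz": 7.2, "swallow_rate_hz": 0.0}) == "choking":
        send_alert("caregiver-phone", "Possible choking detected")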
Claim(s) 6-8, 17-19, and 21 is/are rejected under 35 U.S.C. 103 as being unpatentable over Connor in view of Qi, further in view of Shashua (US 20210110159 A1).
Regarding claims 6 and 17, the combination of Connor and Qi discloses the method and system of claims 5 and 16. Connor additionally discloses wherein monitoring the food object or the drink object comprises: periodically taking images of objects that are within reach of the user (paragraph 0090, 0096, 0103, 0109, 0116—camera on the eyewear frame which records food images when activated; paragraph 0052-0053—taking pictures and/or recording images when data…shows a pattern…which is generally associated with food consumption…toward a reachable food source).
However, Connor does not explicitly disclose identifying at least one of the images as the food object or the drink object using machine-learning models. Shashua, in the same field of endeavor of a wearable system for monitoring consumption by a user (Abstract), discloses periodically taking images of objects that are within reach of the target user (Paragraph 0046-0048, 0065, 0077); and identifying at least one of the images as the food object or the drink object using machine-learning models (Paragraph 0123—detection of a consumable product may be accomplished by appearance-based algorithms, template-matching based algorithms, skeletal based algorithms, color-recognition algorithms, machine-learning based algorithms, neural-network based algorithms, vector analysis algorithms, and so forth). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the system and method of Connor to analyze the images using machine-learning models to identify the food or drink object, as disclosed by Shashua, in order to predictably improve the accuracy and efficiency of the system. This is further supported by Connor's disclosure of the system performing a joint analysis of wrist and jaw movements to promote greater accuracy of the determination (paragraph 0090, 0096, 0103, 0109, 0116-- joint analysis of data from the chewing sensor and data from the motion sensor can provide more accurate detection of eating than data from either sensor alone or separate analysis of data from both sensors).
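For illustration, the periodic capture and machine-learning identification just described might be sketched as follows; capture_frame and food_classifier are hypothetical stand-ins injected by the caller, not APIs from any cited reference:

    import time

    # Periodically capture an image of objects within reach and label each frame
    # with an injected classifier; both callables are hypothetical placeholders.
    def monitor_reachable_objects(capture_frame, food_classifier, period_s=5.0, frames=3):
        labels = []
        for _ in range(frames):
            labels.append(food_classifier(capture_frame()))
            time.sleep(period_s)
        return labels

    # Stub usage: a constant frame and a classifier that always answers "food".
    print(monitor_reachable_objects(lambda: b"frame", lambda f: "food", period_s=0.0))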
Regarding claims 7 and 18, the combination of Connor, Qi, and Shashua discloses the method and system of claims 5 and 16. Shashua additionally discloses identifying the food object or the drink object is based on identifying packaging of the food object or the drink object (Paragraph 0137-0138-- the at least one processing device may be configured to determine the type indicator associated with the detected consumable product based, at least in part, on detection of a label associated with packaging of the consumable product and recognition of text appearing on the detected label…).
Regarding claims 8 and 19, the combination of Connor, Qi, and Shashua discloses the method and system of claims 5 and 16. Shashua additionally discloses retrieving a calorie density of the identified food object or drink object from a database (Paragraph 0116-0117--the estimated amount of the consumable product consumed by the user may include an amount of calories or nutrients consumed by a user, or a quantity of a particular item (e.g. three cookies). For example, nutrition information for a detected type of consumable may be accessed through a network or from a database and used to determine the amount of fat, protein, carbohydrates, sodium, cholesterol, etc. consumed by the user);
estimating a volume of the identified food object or the drink object that has been consumed based in part on the tracked movement of the hand and the monitored movement of the jaw of the user (Paragraph 0115--the consumption indicator may include movement of a hand, utensil, cup, or other object to and from the mouth, or movement of the jaws in a chewing pattern, or other movement associated with consumption. The consumption indicator may also include audible indicators, such as noises associated with chewing, swallowing, or the like; paragraph 0116-0117, 0141--the at least one processing device may be configured to analyze one or more of the plurality of images to estimate an amount of the consumable product consumed by a user. The estimated amount may be measured as a volume, a percentage, a number of consumption events, and so forth…); and
determining a total calorie of the food object or drink object consumed based on the calorie density of the identified food object or drink object and the estimated volume of the identified food object (Paragraph 0116-0117, 0154-0156--the estimated amount of the consumable product consumed by the user may include an amount of calories or nutrients consumed by a user…the feedback may be based on a type of the detected consumable product and an estimated amount of the consumable product consumed by a user).
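The calorie determination of claims 8 and 19 reduces to a simple product, sketched below with illustrative values that are not taken from the references:

    # Total calories = calorie density (from a database lookup for the identified
    # item) x volume consumed (estimated from hand and jaw movement).
    calorie_density_kcal_per_ml = 0.6   # illustrative value for an identified drink
    estimated_volume_ml = 250.0         # illustrative estimate from movement data
    print(calorie_density_kcal_per_ml * estimated_volume_ml)  # 150.0 kcal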
Regarding claim 21, the combination of Connor and Qi discloses the method of claim 1. However, Connor fails to explicitly disclose displaying, on a display of the head-worn device, the value of the food consumption parameter; receiving user input concerning at least one of the food consumption parameter or the value of the food consumption parameter; and updating, based on the user input, at least one of the food consumption parameter or the value of the food consumption parameter.
Shashua, in the same field of endeavor of a wearable system for monitoring consumption by a user (Abstract), discloses displaying, on a display of the head-worn device, the value of the food consumption parameter (Paragraph 0046, 0050, 0068-- in some embodiments, apparatus 110 may include a feedback outputting unit 230 for producing an output of information to user 100. Feedback outputting unit 230 may include one or more vibration devices, e.g., a vibration motor, a speaker, or a display; paragraph 0057-- in some embodiments, apparatus 110 may be configured to provide an augmented reality display projected onto a lens of glasses 130 (if provided), or alternatively, may include a display for projecting time information, for example, according to the disclosed embodiments; paragraph 0135-0136-- the identification of ingredients may be used to determine feedback related to the consumable product; paragraph 0153-0156-- the feedback may be based on a type of the detected consumable product and an estimated amount of the consumable product consumed by a user. The type of the detected product and the amount consumed may be determined by any means disclosed herein…); receiving user input concerning at least one of the food consumption parameter or the value of the food consumption parameter (Paragraph 0134-0136-- It is also contemplated that the at least one processing device may be configured to seek input from user 100 when ingredients are identified but not an ultimate type of the consumable product. For example, the at least one processing device may generate instructions for causing a device to display image 1900 and the list of ingredients including lettuce, tomato, sliced cheese, and meat patty, and to request input form user 100 regarding the proper type.); and updating, based on the user input, at least one of the food consumption parameter or the value of the food consumption parameter (Paragraph 0134-0136-- A device may then be configured to perform the operations of the instructions and to receive the input. The input may then be saved with the image, e.g. image 1900, for later use in comparison to other captured images…).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Connor to include the displaying and updating steps of Shashua in order to predictably improve the accuracy and usefulness of the device by allowing a user to easily observe the food consumption parameters and to provide information which may confirm or alter the food consumption parameter (e.g., confirming a type of food which was consumed and thus clarifying the amount of calories or macronutrients consumed), avoiding false determinations by the method.
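A minimal sketch of the display, user-input, and update loop contemplated for claim 21 follows; the prompt and display hooks are hypothetical and injected so the sketch remains self-contained:

    # Display the determined value, accept an optional correction from the user,
    # and update the stored value accordingly.
    def review_value(value, prompt=input, display=print):
        display(f"Estimated food consumption value: {value}")
        entry = prompt("Correction (blank to keep): ")
        return float(entry) if entry.strip() else value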
Claim(s) 10-11 and 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Connor in view of Qi, further in view of Sazonov (US 20180242908 A1).
Regarding claims 10 and 20, the combination of Connor and Qi discloses the method and system of claims 1 and 12. However, Connor does not explicitly disclose accessing values of one or more second parameters associated with a second aspect of the user collected during a same time period when the value of the food consumption parameter is determined; and correlating the value of the food consumption parameter with the values of the one or more second parameters of the target user. Sazonov, in the same field of endeavor of a system and method of monitoring food intake using a wearable sensor system (Abstract), discloses accessing values of one or more second parameters associated with a second aspect of the target user collected during a same time period when the value of the food consumption parameter is determined (Paragraph 0121-- Inertial measurement unit 206 on FIG. 2 may detect body motion signals); and
correlating the value of the food consumption parameter with the values of the one or more second parameters of the user (Paragraph 0121-- Data from sensor 206 can be used to identify when an individual is asleep to avoid recording false positives during rest. Further, individuals typically do not eat during rigorous exercise. Therefore, false positives associated with jaw motion and hand gesture signals while an individual breathes heavily and jogs, for example, can be avoided by measuring body acceleration to indicate ongoing exercise). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Connor to utilize the second parameter as disclosed by Sazonov in order to predictably improve the accuracy of the system by reducing false incidents of eating or drinking, the secondary parameter being usable to confirm or reject an instance of eating or drinking as erroneous.
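A hedged sketch of the false-positive filtering Sazonov describes follows, in which a concurrently measured second parameter vetoes an implausible eating detection; the predicate and its arguments are illustrative:

    # Reject an eating detection when the user is asleep or exercising rigorously,
    # as indicated by the second parameter measured over the same time period.
    def confirm_eating(eating_detected, asleep, exercising):
        return eating_detected and not asleep and not exercising

    print(confirm_eating(True, asleep=False, exercising=True))   # False (jogging artifact)
    print(confirm_eating(True, asleep=False, exercising=False))  # True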
Regarding claim 11, the combination of Connor, Qi, and Sazonov discloses the method of claim 10. Sazonov additionally discloses wherein the one or more second parameters include at least a parameter associated with an amount of exercise or hours of sleep of the user (Paragraph 0121-- Data from sensor 206 can be used to identify when an individual is asleep to avoid recording false positives during rest. Further, individuals typically do not eat during rigorous exercise. Therefore, false positives associated with jaw motion and hand gesture signals while an individual breathes heavily and jogs, for example, can be avoided by measuring body acceleration to indicate ongoing exercise).
Response to Arguments
Applicant's arguments filed 18 February 2026 regarding the rejection of the claims under 35 U.S.C. 101 have been fully considered but they are not persuasive.
The applicant argues that the claims do not recite a mental process because the limitations of “tracking…” and “monitoring…” cannot practically be performed in the human mind, specifically comparing the claimed invention to an example from MPEP 2106.04(d), MPEP 2106.05(a), and Example 47 of the July 2024 SME Guidance.
However, these arguments ignore the limitation which has been cited as abstract: “determining, by the head-worn device, a value of a food consumption parameter of the user based in part on the tracked movement of the hand and the monitored movement of the jaw”. The human mind can practically perform some determination based on gathered data, such as the data tracked and monitored by the head-worn and wrist-worn devices. For instance, a human is capable of visually observing the gathered data and counting the number of peaks or troughs on a signal, equating each to a movement which creates a minimal or maximal relative distance between a user's wrist and head. Example 47, steps d-f, recites specific computer solutions that may be performed by an ANN in response to detecting anomalies; this is not the same as merely determining a value of some parameter.
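For illustration, the counting just described can be expressed in a few lines, mirroring what a person could do by eye on a plotted signal; the values are illustrative:

    # Count troughs in a wrist-to-head distance signal; each trough corresponds
    # to a hand-to-mouth movement (a minimum of the relative distance).
    def count_troughs(signal):
        return sum(1 for i in range(1, len(signal) - 1)
                   if signal[i] < signal[i - 1] and signal[i] < signal[i + 1])

    print(count_troughs([0.6, 0.3, 0.6, 0.25, 0.6, 0.3, 0.6]))  # 3 hand-to-mouth events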
The applicant additionally argues that any abstract idea of the claimed invention is integrated into a practical application by providing an improvement in the technology.
While the claim language does reflect the argued improvement, as noted above in this action, the use of UWB is not presently seen as an improvement to the technology when body-worn UWB sensing for proximity and tracking is well-understood, routine, or conventional in the art of human movement tracking.
Applicant additionally argues that Connor does not teach or suggest a “contact microphone is configured to detect tissue-based vibrations caused by jaw movement of the user”. As sounds related to chewing are created by tissue-based vibrations caused by jaw movement of the user (i.e., sound is produced by vibration, and any sounds during chewing would be caused by jaw movement of the user), the teachings of Connor are sufficient to disclose the claim limitation.
The claims remain rejected under 35 U.S.C. 101.
Applicant’s arguments with respect to claim(s) 1-21 under 35 U.S.C. 102/103 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Specifically, the Adam reference challenged in the applicant's arguments has been replaced with Qi, as described above in this action. Additionally, as noted above in this response to arguments, sounds related to chewing are created by tissue-based vibrations caused by jaw movement of the user (i.e., sound is produced by vibration, and any sounds during chewing would be caused by jaw movement of the user), such that the teachings of Connor are sufficient to disclose the newly amended claim limitation relating to the contact microphone.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANNA ROBERTS whose telephone number is (571)272-7912. The examiner can normally be reached M-F 8:30-4:30 EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Alexander Valvis can be reached at (571) 272-4233. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ANNA ROBERTS/Examiner, Art Unit 3791