Prosecution Insights
Last updated: April 19, 2026
Application No. 18/287,428

SYSTEM AND METHOD FOR DETECTION OF HEALTH-RELATED BEHAVIORS

Final Rejection (§101, §103)
Filed: Oct 18, 2023
Examiner: MACCAGNO, PIERRE L
Art Unit: 3687
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: The Trustees of Dartmouth College
OA Round: 2 (Final)
Grant Probability: 22% (At Risk)
Predicted OA Rounds: 3-4
Predicted Time to Grant: 3y 6m
Grant Probability with Interview: 53%

Examiner Intelligence

Grants only 22% of cases.

Career Allow Rate: 22% (28 granted / 130 resolved; -30.5% vs TC avg)
Interview Lift: +31.5% (resolved cases with vs without interview)
Avg Prosecution: 3y 6m (typical timeline)
Currently Pending: 44
Total Applications: 174 (career, across all art units)
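The examiner metrics above can be reproduced from the stated counts. This is a hypothetical sketch of the arithmetic only; the rounding conventions and the inference that "-30.5% vs TC avg" means 30.5 points below the Tech Center average are assumptions, not documented behavior of the dashboard.

```python
# Derive the displayed examiner metrics from the raw counts in the report.
granted, resolved = 28, 130

allow_rate = granted / resolved        # 0.2153... -> displayed as 22%
tc_delta = -0.305                      # "-30.5% vs TC avg" (assumed: percentage points)
tc_avg = allow_rate - tc_delta         # implied Tech Center average, ~52%

print(round(allow_rate * 100))         # 22
print(round(tc_avg * 100))             # 52
```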

Statute-Specific Performance

§101: 45.8% (+5.8% vs TC avg)
§103: 35.3% (-4.7% vs TC avg)
§102: 9.8% (-30.2% vs TC avg)
§112: 7.0% (-33.0% vs TC avg)

Black line = Tech Center average estimate • Based on career data from 130 resolved cases

Office Action

§101 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

This action is a final rejection. Claims 11-21 are pending. Claim 11 was amended; claims 20 and 21 were added. Claims 11-21 are rejected under 35 U.S.C. § 101. Claims 11-21 are rejected under 35 U.S.C. § 103.

Priority

Acknowledgement is made of Applicant's claim for a domestic priority date of 5-2-2021.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 10-18-2023 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 11-21 are not patent eligible because the claimed invention is directed to an abstract idea without significantly more.

Analysis

First, the claims are directed to one or more of the following statutory categories: a process, a machine, a manufacture, or a composition of matter. Regarding claims 11-21, the claims recite an abstract idea of "detection of health-related behaviors related to eating". Independent claim 11 is rejected under 35 U.S.C. 101 based on the following analysis.

Step 1 (Does the claim fall within a statutory category? YES): Claim 11 recites a device for inferring eating behaviors in real-life situations.
Step 2A Prong One (Does the claim fall within at least one of the groupings of abstract ideas? YES): The claimed invention recites: a housing adapted to be worn on a user's head; positioned to capture a video of a mouth of the user; classify using a model the video frame-by-frame using a target frame and a plurality of frames preceding the target frame; aggregate video frames in segments based on their classifications; output an inferred eating behavior of each segment of the captured video. These limitations belong to the grouping of mental processes under concepts performed in the human mind (including an observation, evaluation, judgment, or opinion), as the claim recites "detection of health-related behaviors related to eating" (refer to MPEP 2106.04(a)(2)). Alternatively, the claim belongs to certain methods of organizing human activity under managing personal relationships or interactions between people, as it recites "detection of health-related behaviors related to eating" (refer to MPEP 2106.04(a)(2)). Accordingly, this claim recites an abstract idea.

Step 2A Prong Two (Are there additional elements in the claim that impose a meaningful limit on the abstract idea? NO): Claim 11 recites: a camera attached to the housing; a processor for processing the video; a memory for storing the video and instructions for processing the video; wherein the processor executes instructions stored in the memory to preprocess a video captured by a camera focused on a user's mouth by resizing and down-sampling the captured video. These additional elements amount to no more than mere instructions to apply the exception using a generic computer component (refer to MPEP 2106.05(f)). Accordingly, these additional elements, when considered separately and as an ordered combination, do not integrate the judicial exception/abstract idea into a "practical application" of the judicial exception because they do not impose any meaningful limit on practicing the judicial exception.
Step 2B (Do the additional elements of the claim provide an inventive concept? NO): As discussed previously with respect to Step 2A Prong Two, claim 11 recites: a camera attached to the housing; a processor for processing the video; a memory for storing the video and instructions for processing the video; wherein the processor executes instructions stored in the memory to preprocess a video captured by a camera focused on a user's mouth by resizing and down-sampling the captured video. These elements amount to mere instructions to implement an abstract idea on a computer, or merely use a computer as a tool to implement the abstract idea (refer to MPEP 2106.05(f)). Accordingly, even when viewed as a whole, the claim does not provide an inventive concept (significantly more than the abstract idea), and hence the claim is ineligible.

Dependent Claims

Step 2A Prong One: The following dependent claims recite additional limitations that further define the abstract idea of "detection of health-related behaviors related to eating". The claim limitations include:

Claim 13: wherein the housing further comprises a hat with a bill or brim extending outward from a forehead of the user;
Claim 14: mounted on the bill or brim so that it captures a view of the mouth of the user.

Step 2A Prong Two (Are there additional elements in the claim that impose a meaningful limit on the abstract idea? NO): The following dependent claims recite mere instructions to implement an abstract idea on a computer, or merely use a computer as a tool to implement the abstract idea (refer to MPEP 2106.05(f)). Accordingly, these additional elements, when considered separately and as an ordered combination, do not integrate the judicial exception/abstract idea into a "practical application" of the judicial exception because they do not impose any meaningful limit on practicing the judicial exception.
The claims include:

Claim 12: a portable power supply for providing power to the camera, processor, and memory;
Claim 15: further comprising a port or antenna for downloading the results;
Claim 16: wherein the processor further executes instructions stored in the memory to minimize power consumption by the wearable device;
Claim 17: wherein the processor and memory are attached to the housing at a location different from that of the camera;
Claim 18: wherein the processor and memory are attached to the wearable device at the back of a user's head;
Claim 19: wherein computational resources of the processor are capable of executing the instructions in the wearable device;
Claim 20: the resizing the captured video comprising resizing to a square dimension;
Claim 21: the down-sampling comprising down-sampling from 30 frames per second to 5 frames per second.

Accordingly, these additional elements, when considered separately and as an ordered combination, do not integrate the judicial exception/abstract idea into a "practical application" of the judicial exception because they do not impose any meaningful limit on practicing the judicial exception.

Step 2B (Do the additional elements of the claim provide an inventive concept? NO): As discussed previously with respect to Step 2A Prong Two, the following dependent claims recite mere instructions to implement an abstract idea on a computer, or merely use a computer as a tool to implement the abstract idea (refer to MPEP 2106.05(f)). Accordingly, even when viewed as a whole, the claims do not provide an inventive concept (significantly more than the abstract idea), and hence the claims are ineligible.
The claims include:

Claim 12: a portable power supply for providing power to the camera, processor, and memory;
Claim 15: further comprising a port or antenna for downloading the results;
Claim 16: wherein the processor further executes instructions stored in the memory to minimize power consumption by the wearable device;
Claim 17: wherein the processor and memory are attached to the housing at a location different from that of the camera;
Claim 18: wherein the processor and memory are attached to the wearable device at the back of a user's head;
Claim 19: wherein computational resources of the processor are capable of executing the instructions in the wearable device;
Claim 20: the resizing the captured video comprising resizing to a square dimension;
Claim 21: the down-sampling comprising down-sampling from 30 frames per second to 5 frames per second.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or non-obviousness.

Claims 11-16 and 19-21 are rejected under 35 U.S.C. 103 as being unpatentable over Vleugels et al. (US 20190236465 A1), hereinafter "Vleugels", in view of Waters et al. (US 20140304891 A1), hereinafter "Waters", in view of Lee et al. (US 20060233254 A1), hereinafter "Lee", and in further view of Francesca et al. (WO 2021069945 A1), hereinafter "Francesca".

Regarding claim 11, Vleugels teaches:

A wearable device for inferring eating behaviors in real-life situations, comprising: (See at least [0032] via: "…Identifying behavior indicators might comprise detecting at least one gesture of a user wearing a wearable device having a plurality of sensors to detect movement and other physical inputs related to a user, receiving sensor inputs from the plurality of sensors, reading data from external data sources, and from the sensors inputs and the data, identifying the behavior indicators…"; in addition see at least [0053] via: "…The devices are able to handle a wide range of meal scenarios and dining settings in a discreet and socially-acceptable manner, and are capable of estimating and tracking food intake content and quantity as well as other aspects of eating behavior. The devices can provide both real-time and non-real-time feedback to the person about their eating behavior, habits and patterns…")

a camera attached to the housing, the camera positioned to capture a video of a mouth of the user; (See at least [0241] via: "…the position, the orientation and the angle of view of the camera are such that an image or video capture is possible without any user intervention.
In such an embodiment, the wearable device may use a variety of techniques to determine the proper timing of the image or video stream capture such that it can capture the food or a portion of the food being consumed.."; in addition see at least [0085] via: "…Methods for event detection may include, but are not limited to, detection based on monitoring of movement or position of the body or of specific parts of the body, monitoring of swallowing patterns, monitoring of mouth and lips movement, monitoring of saliva, monitoring of movement of cheeks or jaws, monitoring of biting or teeth grinding, monitoring of signals from the mouth, .....")

a processor for processing the video; (See at least [0244] via: "…The wearable device 670 may include a processor…")

a memory for storing the video and instructions for processing the video; (See at least [0244] via: "…The wearable device 670 may include … a program code memory and program code (software) stored therein and/or inside the electronic device to optionally allow users to customize a subset of the functionality of wearable device 670..")

wherein the processor executes instructions stored in the memory to: (See at least [0060] via: "…functionality of the electronic device might be implemented by hardware circuitry, or by program instructions that are executed by a processor in the electronic device, or a combination.
Where it is indicated that a processor does something, it may be that the processor does that thing as a consequence of executing instructions read from an instruction memory wherein the instructions provide for performing that thing…")

preprocess a video captured by a camera focused on a user's mouth [by resizing and down-sampling the captured video]; (See at least [0085] via: "…Methods for event detection may include, but are not limited to, detection based on monitoring of movement or position of the body or of specific parts of the body, monitoring of swallowing patterns, monitoring of mouth and lips movement, monitoring of saliva, monitoring of movement of cheeks or jaws, monitoring of biting or teeth grinding, monitoring of signals from the mouth, ....."; in addition see at least [0241] via: "… the position, the orientation and the angle of view of the camera are such that an image or video capture is possible without any user intervention. In such an embodiment, the wearable device may use a variety of techniques to determine the proper timing of the image or video stream capture such that it can capture the food or a portion of the food being consumed. It may also choose to capture multiple images or video streams for this purpose..")

output an inferred eating behavior of each segment of the captured video. (See at least [0053] via: "…The devices are able to handle a wide range of meal scenarios and dining settings in a discreet and socially-acceptable manner, and are capable of estimating and tracking food intake content and quantity as well as other aspects of eating behavior. The devices can provide both real-time and non-real-time feedback to the person about their eating behavior, habits and patterns.."; see at least [0105] via: "…Outputs of event detection subsystem 101 that indicate the occurrence of a subject's eating or drinking events are recorded.
Event detection subsystem 101 may do additional processing to obtain additional relevant information about the event such as start time, end time, metrics representative of the subject's pace of eating or drinking, and metrics representative of quantities consumed. This additional information may also be recorded…")

Nevertheless, Vleugels is silent regarding the following limitation, which is taught by Waters:

a housing adapted to be worn on a user's head; (See at least [0072] via: "…FIG. 27 is a perspective view of a hat with a brim showing a camera device including first and second lens devices mounted adjacent to a lower surface of the brim in electrical communication with a control panel and a power source.."; in addition see at least [0073] via: "… FIG. 28 is a bottom perspective view of the hat of FIG. 27..")

It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the invention to have modified Vleugels to incorporate the teachings of Waters. Those in the art would have recognized that Vleugels' teaching regarding a wearable device having a camera capable of video capture, in addition to a plurality of sensors to detect movement and other physical inputs related to a user, specific to estimating and tracking food intake content and quantity as well as other aspects of eating behavior, habits, and patterns, could be modified to include Waters' teaching regarding a hat with a brim showing a camera device mounted adjacent to a lower surface of the brim. The combination of Vleugels and Waters is useful in positioning the camera to obtain video of the user's mouth in order to estimate the eating behavior of the user.

Nevertheless, Vleugels and Waters are silent regarding the following limitation, which is taught by Lee:

resizing and down-sampling the captured video (See at least [0134] via: "…A downsampler 310 downsamples an input video according to the resolution, frame rate or video image size of a base layer.
An MPEG downsampler or a wavelet downsampler may be used to downsample the input frame to the resolution of the base layer. A frame scheme or frame interpolation scheme may be simply used to change the frame rate for downsampling. Downsampling an image to a smaller size can be accomplished by removing information in a boundary region from video information or reducing the size of video information to match the size of a screen. For example, downsampling may be performed to resize an original input video with 16:9 aspect ratio to 4:3....")

It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the invention to have modified Vleugels and Waters to incorporate the teachings of Lee. Those in the art would have recognized that Vleugels' teaching regarding a wearable device having a camera capable of video capture, in addition to a plurality of sensors to detect movement and other physical inputs related to a user, specific to estimating and tracking food intake content and quantity as well as other aspects of eating behavior, habits, and patterns, could be modified to include Lee's teaching regarding resizing the input video sequence utilizing downsampling. The combination of Vleugels, Waters, and Lee is useful when there is a need to resize the video coming from the capture camera that has an aspect ratio different from the aspect ratio of the output screen used to output the video.
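For readers less familiar with the preprocessing at issue, the operations recited in claims 20 and 21 (resizing each frame to a square dimension and down-sampling the frame rate from 30 fps to 5 fps) can be sketched in a few lines. This is a hypothetical illustration only, not code from the application or from any cited reference; the nearest-neighbor resize, the frame representation (frames as 2D lists of pixel values), and the default 224-pixel side are assumptions.

```python
def resize_square(frame, side):
    """Nearest-neighbor resize of a 2D grid of pixels to side x side."""
    h, w = len(frame), len(frame[0])
    return [[frame[r * h // side][c * w // side] for c in range(side)]
            for r in range(side)]

def preprocess(frames, in_fps=30, out_fps=5, side=224):
    """Down-sample the frame rate, then resize every kept frame to a square.

    30 fps -> 5 fps keeps every 6th frame (in_fps // out_fps == 6).
    """
    step = in_fps // out_fps
    return [resize_square(f, side) for f in frames[::step]]
```

For example, one second of 30 fps video (30 frames) yields 5 square frames after preprocessing.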
Nevertheless, Vleugels, Waters, and Lee are silent regarding the following limitation, which is taught by Francesca:

classify using a Slow Fast convoluted neural network model the video frame-by-frame using a target frame and a plurality of frames preceding the target frame; (See at least [claim 1] via: "…A method for recognizing person activity in a video comprising a sequence of frames (100), each frame showing at least a portion of the person, the method comprising: obtaining a set of consecutives 3D poses (103) of the person using the sequence of frames, each pose illustrating a posture of the person from a frame of the sequence of frames, obtaining a feature map (102) elaborated using a first encoder neural network (101) configured to receive the sequence of frames as input and to output the feature map having dimensions associated with time, space, and a number of channels, obtaining a vector of spatiotemporal features using a second recurrent neural network (121, ..., 123) outputting the vector of spatiotemporal features and receiving as input the set of consecutive poses, obtaining, using the vector of spatiotemporal features and a third neural network (124), a matrix of spatial attention weights (105) wherein each weight indicates the importance of a location in the matrix, obtaining, using the vector of spatiotemporal features and a fourth neural network (129), a matrix of temporal attention weights (110) wherein each weight indicates the importance of an instant, modulating (106) the feature map using the matrix of spatial attention weights to obtain a spatially modulated feature map, modulating (111) the feature map using the vector of temporal attention weights to obtain a temporally modulated feature map, performing a convolution (114) of the spatially modulated feature map and of the temporally modulated feature map to obtain a convoluted feature map, performing a classification (115) using the convoluted feature map so as to determine the activity of the person
in the video..."; in addition see at least [Page 8, lines 26-28 and Page 9, lines 1-3] via: "…Additionally, the method of document "Slowfast networks for video recognition" (C. Feichtenhofer, H. Fan, J. Malik, and K. He. CoRR, abs/1812.03982, 2018.) may be used to process the video 100. In a first step 101, the sequence of frames 100 is inputted to a first encoder neural network, for example a first encoder neural network trained for activity detection in a preliminary training step...")

It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the invention to have modified Vleugels, Waters, and Lee to incorporate the teachings of Francesca. Those in the art would have recognized that Vleugels' teaching regarding a wearable device having a camera capable of video capture, in addition to a plurality of sensors to detect movement and other physical inputs related to a user, specific to estimating and tracking food intake content and quantity as well as other aspects of eating behavior, habits, and patterns, could be modified to include Francesca's teaching regarding determining a classification result of the processed video based on the motion of the person in the video. The combination of Vleugels, Waters, Lee, and Francesca is useful in classifying video frames in order to be able to determine eating behaviors, habits, and patterns of a user.

aggregate video frames in segments based on their classifications; (See at least [Page 2, lines 12-30 and Page 3, lines 1-11] via: "…...
recognizing person activity in a video comprising a sequence of frames, each frame showing at least a portion of the person, the method comprising: obtaining a set of consecutives 3D poses of the person using the sequence of frames, each pose illustrating a posture of the person from a frame of the sequence of frames, obtaining a feature map elaborated using a first encoder neural network configured to receive the sequence of frames as input and to output the feature map having dimensions associated with time, space, and a number of channels, obtaining a vector of spatiotemporal features using a second recurrent neural network outputting the vector of spatiotemporal features and receiving as input the set of consecutive poses, obtaining, using the vector of spatiotemporal features and a third neural network, a matrix of spatial attention weights wherein each weight indicates the importance of a location in the matrix, obtaining, using the vector of spatiotemporal features and a fourth neural network, a matrix of temporal attention weights wherein each weight indicates the importance of an instant (typically the instant of each 3D pose, i.e. instants associated with frames), modulating the feature map using the matrix of spatial attention weights to obtain a spatially modulated feature map, modulating the feature map using the vector of temporal attention weights to obtain a temporally modulated feature map, performing a convolution of the spatially modulated feature map and of the temporally modulated feature map to obtain a convoluted feature map, performing a classification using the convoluted feature map so as to determine the activity of the person in the video…")

It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the invention to have modified Vleugels, Waters, and Lee to incorporate the teachings of Francesca.
Those in the art would have recognized that Vleugels' teaching regarding a wearable device having a camera capable of video capture, in addition to a plurality of sensors to detect movement and other physical inputs related to a user, specific to estimating and tracking food intake content and quantity as well as other aspects of eating behavior, habits, and patterns, could be modified to include Francesca's teaching regarding recognizing a person activity in a video comprising a sequence of frames by obtaining a set of consecutive 3D poses of the person using the sequence of frames. The combination of Vleugels, Waters, Lee, and Francesca is useful in putting together video frames in order to be able to determine eating behaviors, habits, and patterns of a user.

Regarding claim 12: Vleugels, Waters, Lee and Francesca teach the invention as claimed and detailed above with respect to claim 11. Vleugels also teaches:

further comprising a portable power supply for providing power to the camera, processor, and memory. (See at least [0245] via: "… Wearable device 670 relies on battery 669 and Power Management Unit ("PMU") 660 to deliver power at the proper supply voltage levels to all electronic circuits and components…"; in addition see at least [0244] via: "…The wearable device 670 may include a processor, a program code memory and program code (software) stored therein and/or inside the electronic device..")

Regarding claim 13: Vleugels, Waters, Lee and Francesca teach the invention as claimed and detailed above with respect to claim 11. Vleugels and Francesca are silent regarding the following claim, which is taught by Waters:

wherein the housing further comprises a hat with a bill or brim extending outward from a forehead of the user. (See at least [0072] via: "…FIG.
27 is a perspective view of a hat with a brim showing a camera device including first and second lens devices mounted adjacent to a lower surface of the brim in electrical communication with a control panel and a power source.."; in addition see at least [0073] via: "… FIG. 28 is a bottom perspective view of the hat of FIG. 27..")

It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the invention to have modified Vleugels to incorporate the teachings of Waters. Those in the art would have recognized that Vleugels' teaching regarding a wearable device having a camera capable of video capture, in addition to a plurality of sensors to detect movement and other physical inputs related to a user, specific to estimating and tracking food intake content and quantity as well as other aspects of eating behavior, habits, and patterns, could be modified to include Waters' teaching regarding a hat with a brim showing a camera device mounted adjacent to a lower surface of the brim. The combination of Vleugels and Waters is useful in positioning the camera to obtain video of the user's mouth in order to estimate the eating behavior of the user.

Regarding claim 14: Vleugels, Waters, Lee and Francesca teach the invention as claimed and detailed above with respect to claims 11 and 13. Vleugels and Francesca are silent regarding the following claim, which is taught by Waters:

wherein the camera is mounted on the bill or brim so that it captures a view of the mouth of the user. (See at least [0072] via: "…FIG. 27 is a perspective view of a hat with a brim showing a camera device including first and second lens devices mounted adjacent to a lower surface of the brim in electrical communication with a control panel and a power source.."; in addition see at least [0073] via: "… FIG. 28 is a bottom perspective view of the hat of FIG.
27..") The Examiner interprets that the camera that is mounted at the lower surface of the brim of the hat would be able to capture a view of the mouth of the user when the hat is worn by the user.

It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the invention to have modified Vleugels to incorporate the teachings of Waters. Those in the art would have recognized that Vleugels' teaching regarding a wearable device having a camera capable of video capture, in addition to a plurality of sensors to detect movement and other physical inputs related to a user, specific to estimating and tracking food intake content and quantity as well as other aspects of eating behavior, habits, and patterns, could be modified to include Waters' teaching regarding a hat with a brim showing a camera device mounted adjacent to a lower surface of the brim. The combination of Vleugels and Waters is useful in positioning the camera to obtain video of the user's mouth in order to estimate the eating behavior of the user.

Regarding claim 15: Vleugels, Waters, Lee and Francesca teach the invention as claimed and detailed above with respect to claim 11. Vleugels also teaches:

further comprising a port or antenna for downloading the results. (See at least [0253] via: "… Sensor data used to track certain aspects of a user's behavior, such as for example a user's eating behavior, may be stored locally inside memory 666 of wearable device 670 and processed locally using processing unit 667 inside wearable device 670. Sensor data may also be transferred to the mobile phone or remote compute server, using radio circuitry 664, for further processing and analysis…")

Regarding claim 16: Vleugels, Waters, Lee and Francesca teach the invention as claimed and detailed above with respect to claims 11 and 12.
Vleugels also teaches:

wherein the processor further executes instructions stored in the memory to minimize power consumption by the wearable device. (See at least [0102] via: "…The object information retrieval subsystem may be housed in its entirety or in part inside a battery operated electronic device and it may desirable to minimize the power consumption of the object information retrieval subsystem. When no event is detected, the radio circuitry (e.g., the NFC reader circuitry or Bluetooth Low Energy module) may be placed in a low power state. Upon detection or inference of an actual, probable or imminent occurrence of an event, the object information retrieval subsystem may be placed in a higher power state. One or more additional circuitry inside the object information retrieval subsystem may be powered up to activate the object information retrieval subsystem, to improve the range or performance of object information retrieval subsystem, etc. In one specific example, the NFC reader is disabled or placed in a low power standby or sleep mode when no event is detected. Upon detection or inference of an event, the NFC reader is placed in a higher power state in which it can communicate with NFC tags of neighboring objects. After reading a pre-configured number of NFC tags, after a pre-configured time or upon detection of the end or completion of the event, the NFC reader may be disabled again or may be placed back into a low power standby or sleep mode…")

Regarding claim 19: Vleugels, Waters, Lee and Francesca teach the invention as claimed and detailed above with respect to claim 11. Vleugels also teaches:

wherein computational resources of the processor are capable of executing the instructions in the wearable device. (See at least [0060] via: "…functionality of the electronic device might be implemented by hardware circuitry, or by program instructions that are executed by a processor in the electronic device, or a combination.
Where it is indicated that a processor does something, it may be that the processor does that thing as a consequence of executing instructions read from an instruction memory wherein the instructions provide for performing that thing…"; in addition see at least [0242] via: "…the electronic device is a wearable device..")

Regarding claim 20: Vleugels, Waters, Lee and Francesca teach the invention as claimed and detailed above with respect to claim 11. Vleugels, Waters, and Francesca are silent regarding the following claim, which is a matter of design choice:

the resizing the captured video comprising resizing to a square dimension. (Lee discloses "downsampling may be performed to resize an original input video" [0134] but does not specify the dimension or the aspect ratio of the resized video to be a square. Motivation to combine Lee was already recited in claim 11. However, one of ordinary skill in the art would recognize that it would have been an obvious matter of design choice to convert the aspect ratio of the video signal from the capture camera to a video signal having a square dimension with an aspect ratio of 1:1. As claimed, the processing required to change a video signal from a capture camera to a resized video signal output with an aspect ratio of 1:1 would be the same as the processing required to resize an original input video with 16:9 aspect ratio to 4:3 as disclosed by Lee.)

Regarding claim 21: Vleugels, Waters, Lee and Francesca teach the invention as claimed and detailed above with respect to claim 11. Vleugels, Waters, and Francesca are silent regarding the following claim, which is a matter of design choice:

the down-sampling comprising down-sampling from 30 frames per second to 5 frames per second. (Lee discloses "A frame scheme or frame interpolation scheme may be simply used to change the frame rate for downsampling.
...Downsampling an image to a smaller size downsampling may be performed to resize an original input video” [0134] but does not specify down sampling from 30 frames per second to 5 frames per second. Motivation to combine Lee was already recited in claim 11. However, one of ordinary skill in the art would recognize that it would have been an obvious matter of design choice to down sample from 30 frames per second to 5 frames per second the video signal from the capture camera to an output video signal. As claimed, the processing required to change the video frame rate of 30 frames per second from a capture camera to a video frame rate of 5 frames per second would be the same as the processing required to change the frame rate as disclosed by Lee Claim 17 is rejected under 35 U.S.C. 103 as being un-patentable by Vleugels in view of Waters, in view of Lee, in view of Francesca, in further view of Kang et.al (CN 204481974 U) hereinafter “Kang” Regarding claim 17: Vleugels, Waters, Lee and Francesca teach the invention as claimed and detailed above with respect to claim 11. Vleugels, Waters, Lee and Francesca are silent the following claim that is taught by Kang: wherein the processor and memory are attached to the housing at a location different from that of the camera. (See at least [page 5, lines 3-16] via: “…an image acquisition and transmission system, the system comprising: a wearable device, a camera, a receiving device and an output device. 
wherein the wearable device is a head wearing device; the camera is set on the head worn device; The system further comprises a transmission device, a transmission device worn on the human body … is established with the camera with a wired data connection or wireless data connection, a wireless data connection comprises a Bluetooth connection and wireless fidelity WIFI connection; transmission apparatus includes a transceiver, a memory and a power supply, transceiver via a data connection to receive image data collected by the camera, a memory and a transceiver, storing transceiver receives the image data, a power supply supplying power to the transmission device; receiving device establishes a wireless communication connection with transmission equipment and connected with remote receiving transceiver to transmit image data via the wireless communication; output device connected with the receiving device through the data line, an output display image data received by the receiving device..”; in addition see at least [page 5, lines 18-20] via: “… the transmission device provided separately from the wearable device (image capturing device). the transmission device and the wearable device are separately set, the volume is not limited, so the transmission device includes a transceiver, a memory and a power supply, also may include a processor…”; in addition see at least [page 5, lines 29-31] via: “…As the transmission device separate from the image acquisition device, the transmission device is not limited by the volume weight, … memory capacity or processing performance..”) It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the invention to have modified Vleugels, Waters, Lee and Francesca to incorporate the teachings of Kang. 
Those in the art would have recognized that Vleugels’ teaching regarding a wearable device having a camera capable of video capture, in addition to a plurality of sensors to detect movement and other physical inputs related to a user, specific to estimating and tracking food intake content and quantity as well as other aspects of eating behavior, habits and patterns, could be modified to include Kang’s teaching regarding a camera that is a head-worn device and a transmission device, including a processor and memory, that is also head-worn and wirelessly connected yet physically separate from the camera. The combination of Vleugels, Waters and Kang is useful in distributing the weight and volume over the headpiece such that the “transmission device separate from the image acquisition device ….is not limited by the volume weight, … memory capacity or processing performance” (Kang [page 5, lines 29-31]).

Claim 18 is rejected under 35 U.S.C. 103 as being unpatentable over Vleugels in view of Waters, in view of Lee, in view of Francesca, in view of Kang, and in further view of Eadie et al. (US 20230218159 A1), hereinafter “Eadie”.

Regarding claim 18: Vleugels, Waters, Lee and Francesca teach the invention as claimed and detailed above with respect to claim 11, and Vleugels, Waters, Francesca and Kang teach the invention as claimed and detailed above with respect to claim 17. Vleugels, Waters, Francesca and Kang are silent regarding the following limitation, which is taught by Eadie: wherein the processor and memory are attached to the wearable device at the back of a user's head. (See at least [0025] via: “…the head mounted unit includes the camera…”; in addition see at least [0453] via: “…a computing device illustrated in FIGS. 3A through 3C is moved from a distinct device in the form of a control unit to the head mounted unit where it may be supported on a rear of the patient's head…”; in addition see at least [0163] via: “…FIG. 3B illustrates power flow through the electrical components of the CU of FIG. 3A..”; in addition see Fig. 37.) The Examiner notes that the control unit as shown in Fig. 3B encompasses a processor and memory. Furthermore, Fig. 37 shows the control unit located and attached to the wearable device at the back of the user’s head.

It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the invention to have modified Vleugels, Waters, Francesca and Kang to incorporate the teachings of Eadie. Those in the art would have recognized that Vleugels’ teaching regarding a wearable device having a camera capable of video capture, in addition to a plurality of sensors to detect movement and other physical inputs related to a user, specific to estimating and tracking food intake content and quantity as well as other aspects of eating behavior, habits and patterns, and Kang’s teaching regarding a head-worn camera and a separately mounted, wirelessly connected transmission device that includes a processor and memory, could be modified to include Eadie’s teaching regarding a head mounted device that includes a camera in addition to a control unit encompassing a processor and memory supported at the rear of a patient’s head. The combination of Vleugels, Waters, Francesca, Kang and Eadie is useful in distributing the weight and volume over the headpiece such that the “transmission device separate from the image acquisition device ….is not limited by the volume weight, … memory capacity or processing performance” (Kang [page 5, lines 29-31]).
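The design-choice rationale for claims 20 and 21 above (resizing the captured video to a square 1:1 aspect ratio, and down-sampling from 30 frames per second to 5) describes routine video preprocessing. The sketch below illustrates both operations together; the function name, the 224-pixel output side, and the nearest-neighbor resampling are assumptions made for the example, not details taken from the claims or the cited references:

```python
import numpy as np

def preprocess_video(frames, side=224, src_fps=30, dst_fps=5):
    """Illustrative preprocessing sketch: keep every (src_fps // dst_fps)-th
    frame (30 fps -> 5 fps keeps every 6th frame), then resize each kept
    frame to a square of `side` x `side` pixels (1:1 aspect ratio).

    `frames` is an array of shape (n_frames, height, width, channels).
    Nearest-neighbor resizing is used only to keep the sketch
    dependency-free; a real pipeline would use a library resampler."""
    step = src_fps // dst_fps            # 30 // 5 = 6
    kept = frames[::step]                # temporal down-sampling
    h, w = kept.shape[1], kept.shape[2]
    rows = np.arange(side) * h // side   # source row for each output row
    cols = np.arange(side) * w // side   # source column for each output column
    return kept[:, rows][:, :, cols]     # (n_kept, side, side, channels)
```

For a one-second 640x480 clip at 30 fps, this yields five 224x224 frames; the frame-rate change and the aspect-ratio change are independent index operations, which is consistent with treating both as routine parameter choices.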
Prior Art Made of Record

The prior art made of record and not relied upon is considered pertinent to Applicant's disclosure, and is listed in the attached form PTO-892 (Notice of References Cited). Unless expressly noted otherwise by the Examiner, all documents listed on form PTO-892 are cited in their entirety.

Connor (US 20150168365 A1) - Caloric Intake Measuring System Using Spectroscopic And 3D Imaging Analysis - teaches: a caloric intake measuring system comprising: a spectroscopic sensor that collects data concerning light that is absorbed by or reflected from food, wherein this food is to be consumed by a person, and wherein this data is used to estimate the composition of this food; and an imaging device that takes images of this food from different angles, wherein these images from different angles are used to estimate the quantity of this food. Information concerning the estimated composition of the food and information concerning the estimated quantity of the food can be combined to estimate the person's caloric intake.

Connor (US 20140349257 A1) - Smart Watch And Food Utensil For Monitoring Food Consumption - teaches: a device and system for monitoring a person's food consumption comprising: a wearable sensor that automatically collects data to detect eating events; a smart food utensil, probe, or dish that collects data concerning the chemical composition of food which the person is prompted to use when an eating event is detected; and a data analysis component that analyzes chemical composition data to estimate the types and amounts of foods, ingredients, nutrients, and/or calories consumed by the person. In an example, the wearable sensor can be part of a smart watch or smart bracelet. In an example, the smart food utensil, probe, or dish can be a smart spoon with a chemical composition sensor.
The integrated operation of the wearable sensor and the smart food utensil, probe, or dish disclosed in this invention offers accurate measurement of food consumption with low intrusion into the person's privacy.

Response to Arguments

Applicant's arguments filed 11-10-2025 have been fully considered but are not found persuasive. Applicant amended independent claim 11 and added claims 20-21, as posted in the above analysis with additions underlined and deletions marked.

In response to Applicant's arguments regarding the claim rejections under 35 U.S.C. § 101: Several steps are taken in the analysis as to whether an invention is rejected under § 101. The first step is to determine if the claim falls within a statutory category. In this case it does for claim 11, since the claim recites a device for inferring eating behaviors in real-life situations. The second step, under Step 2A Prong One, is to determine if the claims recite an abstract idea, which would be the case if the invention can be grouped as: (a) mathematical concepts; (b) mental processes; or (c) certain methods of organizing human activity (encompassing (i) fundamental economic principles, (ii) commercial or legal interactions, or (iii) managing personal behavior or relationships or interactions between people). The current invention is classified as an abstract idea since it may be grouped as a mental process, as it recites “detection of health related behaviors related to eating”. Alternatively, the claim belongs to certain methods of organizing human activity under managing personal behavior or relationships or interactions between people, as it recites “detection of health related behaviors related to eating”. The third step, under Step 2A Prong Two, is to determine if additional elements in the claim impose a meaningful limit on the abstract idea in order to integrate it into a practical application.
The current invention is not integrated into a practical application since the additional elements amount to mere instructions to implement an abstract idea on a computer, or merely use a generic computer as a tool to implement the abstract idea. The fourth step, under Step 2B, is to determine if additional elements of the claim provide an inventive concept. A claim may be found to provide an inventive concept if a computer-implemented process is determined to be significantly more than an abstract idea (and thus eligible), where generic computer components in combination perform functions that are not merely generic and not conventional, even if generic computer operations on a generic computing device are used to implement the abstract idea. The current invention does not provide an inventive concept since the additional elements amount to mere instructions to implement an abstract idea on a computer, or merely use a generic computer as a tool to implement the abstract idea.

Step 2A Prong ONE

The Applicant argues that the invention does not belong to the grouping of mental processes under concepts performed in the human mind, as it recites “detection of health related behaviors related to eating”. Neither does it belong to the grouping of certain methods of organizing human activity under managing personal behavior or relationships or interactions between people, as it recites “detection of health related behaviors related to eating”. Regarding the mental process, the Applicant argues that the invention cannot be practically performed by the human mind. Furthermore, the Applicant argues that there does not seem to be any “human activity” regarding organizing human activity under managing personal behavior or relationships or interactions between people. The Examiner disagrees, as the Applicant’s arguments are not persuasive. The Examiner explains the method used to select the abstract idea, which is to strip the additional elements from the claims.
As seen below, the bolded words constitute the abstract idea after stripping the un-bolded additional elements of the amended limitations of claim 11.

Grouping of claim 11: a housing adapted to be worn on a user's head; a camera attached to the housing, the camera positioned to capture a video of a mouth of the user; a processor for processing the video; a memory for storing the video and instructions for processing the video; wherein the processor executes instructions stored in the memory to: preprocess a video captured by a camera focused on a user's mouth by resizing and down-sampling the captured video; classify, using a SlowFast convolutional neural network model, the preprocessed video frame-by-frame using a target frame and a plurality of frames preceding the target frame; aggregate video frames in segments based on their classifications; and output an inferred eating behavior of each segment of the captured video.

The selected abstract idea (the bolded limitations) of claim 11 can be implemented with pencil and paper and thus belongs to the grouping of mental processes under concepts performed in the human mind (including an observation, evaluation, judgment, or opinion), as it recites “detection of health related behaviors related to eating”. Alternatively, the selected abstract idea belongs to certain methods of organizing human activity under managing personal behavior or relationships or interactions between people, as it recites “detection of health related behaviors related to eating”. (Refer to MPEP 2106.04(a)(2).) Accordingly, this claim recites an abstract idea.

Step 2A Prong TWO

The Applicant argues that even if the invention belongs to an abstract idea, the claimed subject matter is directed to a practical application based on the amendments.
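For context, the functional limitations of claim 11 recited above describe a per-frame classification over a clip (the target frame plus preceding frames) followed by run-length aggregation into segments. The sketch below is illustrative only: the `classify` callable stands in for the claimed SlowFast model, and the clip window length is an assumed parameter, not a value from the claims.

```python
def infer_eating_segments(frames, classify, window=8):
    """Illustrative sketch of the claim-11 processing steps: classify each
    target frame from a clip consisting of the target frame plus the frames
    preceding it, then aggregate consecutive frames sharing a classification
    into segments. `classify` is a stand-in for the claimed SlowFast model;
    `window` (clip length) is an assumed parameter."""
    labels = []
    for i in range(len(frames)):
        clip = frames[max(0, i - window + 1): i + 1]  # target + preceding frames
        labels.append(classify(clip))

    # Run-length aggregation: one (start_frame, end_frame, label) per segment
    segments = []
    for i, label in enumerate(labels):
        if segments and segments[-1][2] == label:
            segments[-1] = (segments[-1][0], i, label)
        else:
            segments.append((i, i, label))
    return segments  # the inferred behavior of each segment is its label
```

With a toy classifier that labels the first five of ten frames as eating, the function returns two segments, `[(0, 4, "eating"), (5, 9, "not-eating")]`, matching the claimed output of an inferred eating behavior per segment.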
Specifically, the Applicant argues that the claims represent an improvement, as the specification describes a problem in a technology or technical field which would have been understood by one of ordinary skill in the art as an improvement over that problem. Accordingly, the Applicant requests withdrawal of the § 101 rejection. The Examiner disagrees, as the Applicant’s arguments are not persuasive. The Applicant refers to a colloquial interpretation of a practical application. What is required instead is a demonstration of an improvement to the functioning of a computer, or to any other technology or technical field, which the claims do not recite. All of the added amendments include physical components that are generic, whose function amounts to mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to implement the abstract idea. The Examiner restates that the claim does not integrate the abstract idea into a practical application, as claim 11 does not recite additional elements that impose a meaningful limit on the abstract idea.

Claim 11 recites the following additional elements: a camera attached to the housing; a processor for processing the video; a memory for storing the video and instructions for processing the video; wherein the processor executes instructions stored in the memory to: preprocess a video captured by a camera focused on a user's mouth by resizing and down-sampling the captured video. The additional elements recited above for claim 11 are recited at a high level of generality, such that they amount to no more than mere instructions to implement an abstract idea on a computer, or merely use a computer as a tool to implement the abstract idea. (Refer to MPEP 2106.05(f).) Accordingly, the claim as a whole does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea.
In order to integrate the abstract idea into a practical application, the Applicant could demonstrate that at least one of the conditions enumerated below applies:
Improvements to the functioning of a computer, or to any other technology or technical field - see MPEP 2106.05(a)
Applying the judicial exception with, or by use of, a particular machine - see MPEP 2106.05(b)
Effecting a transformation or reduction of a particular article to a different state or thing - see MPEP 2106.05(c)
Applying or using the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception - see MPEP 2106.05(e) and the Vanda Memo
The Applicant has not demonstrated that any of the above listed conditions applies. As a result, the Examiner restates the rejection of the claims under 35 USC § 101.

Step 2B

Similar to the analysis under Step 2A Prong Two, the additional elements amount to mere instructions to implement an abstract idea on a computer, or merely use a computer as a tool to implement the abstract idea. (Refer to MPEP 2106.05(f).) The generic computer components, in combination, do not perform functions that are non-generic or unconventional, even if generic computer operations on a generic computing device are used to implement the abstract idea. Accordingly, the claim does not provide an inventive concept (significantly more than the abstract idea) and hence the claim is ineligible. In order to evaluate whether the claim recites additional elements that amount to an inventive concept, what could be shown is: adding a specific limitation other than what is well-understood, routine, and conventional (WURC) activity in the field - see MPEP 2106.05(d). The Applicant has not demonstrated the above listed condition.

In response to Applicant's arguments regarding the claim rejections under 35 U.S.C. § 103:
The Applicant argues that one of skill in the art would not modify Vleugels with Waters' headgear having a camera as alleged in the Office Action. The Office Action alleges "[t]he Examiner interprets that the camera that is mounted at the lower surface of the brim of the hat would be able to capture a view of the mouth of the user when the hat is worn by the user." (Office Action at p. 17). Applicant disagrees. Waters is directed to "headgear having a camera device mounted thereto and, in particular, to headgear having a camera device mounted to a brim portion thereof for capturing images and/or video forwardly of the headgear." Therefore, one of ordinary skill in the art would not utilize the headgear of Waters because the headgear of Waters directs the camera forwardly, whereas the claimed camera must "capture a video of a mouth of the user." Accordingly, reconsideration and withdrawal of the rejections are respectfully requested for at least this additional reason.

The Examiner disagrees, as the Applicant’s arguments are not persuasive. Waters teaches a “housing adapted to be worn on a user's head”. (See at least [0072] via: “…FIG. 27 is a perspective view of a hat with a brim showing a camera device including first and second lens devices mounted adjacent to a lower surface of the brim in electrical communication with a control panel and a power source..”; in addition see at least [0073] via: “…FIG. 28 is a bottom perspective view of the hat of FIG. 27..”) It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the invention to have modified Vleugels to incorporate the teachings of Waters.
Those in the art would have recognized that Vleugels’ teaching regarding a wearable device having a camera capable of video capture, in addition to a plurality of sensors to detect movement and other physical inputs related to a user, specific to estimating and tracking food intake content and quantity as well as other aspects of eating behavior, habits and patterns, could be modified to include Waters’ teaching regarding a hat with a brim having a camera device mounted adjacent to a lower surface of the brim. The combination of Vleugels and Waters is useful in positioning the camera to obtain video of the user's mouth in order to estimate the eating behavior of the user.

Furthermore, the Applicant's arguments with respect to the amendments to claim 11 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. For reasons of record and as set forth above, the Examiner maintains the rejection of claims 11-21 as being directed to a judicial exception without significantly more, and thereby being directed to non-statutory subject matter under 35 USC § 101, in addition to maintaining the rejection under 35 USC § 103. In reaching this decision, the Examiner considered all evidence presented and all arguments actually made by Applicant.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to PIERRE L MACCAGNO whose telephone number is (571)270-5408. The examiner can normally be reached M-F 8:00 to 5:00. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Mamon Obeid can be reached at (571)270-1813. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). 
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /PIERRE L MACCAGNO/Examiner, Art Unit 3687 /STEVEN G.S. SANGHERA/Primary Examiner, Art Unit 3684

Prosecution Timeline

Oct 18, 2023
Application Filed
Jun 04, 2025
Non-Final Rejection — §101, §103
Oct 23, 2025
Examiner Interview Summary
Nov 10, 2025
Response Filed
Mar 06, 2026
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12580057
SYSTEMS AND METHODS FOR NATURAL LANGUAGE PROCESSING-BASED CLASSIFICATION OF ELECTRONIC MEDICAL RECORDS
2y 5m to grant Granted Mar 17, 2026
Patent 12423674
SECURE QR CODE TRANSACTIONS
2y 5m to grant Granted Sep 23, 2025
Patent 12263019
APPARATUS AND A METHOD FOR THE GENERATION OF A PLURALITY OF PERSONAL TARGETS
2y 5m to grant Granted Apr 01, 2025
Patent 12211008
FAILURE MODELING BY INCORPORATION OF TERRESTRIAL CONDITIONS
2y 5m to grant Granted Jan 28, 2025
Patent 12190313
SYSTEMS AND METHODS FOR CARD REPLACEMENT
2y 5m to grant Granted Jan 07, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
22%
Grant Probability
53%
With Interview (+31.5%)
3y 6m
Median Time to Grant
Moderate
PTA Risk
Based on 130 resolved cases by this examiner. Grant probability derived from career allow rate.
