Prosecution Insights
Last updated: April 19, 2026
Application No. 18/444,464

METHOD AND DEVICE FOR PROVIDING FOOD INTAKE SUPPORT SERVICE

Non-Final OA: rejections under §101, §102, and §103
Filed: Feb 16, 2024
Examiner: BIANCAMANO, ALYSSA N
Art Unit: 3715
Tech Center: 3700 (Mechanical Engineering & Manufacturing)
Assignee: Hyodol Co. Ltd.
OA Round: 1 (Non-Final)

Grant Probability: 56% (Moderate)
OA Rounds: 1-2
To Grant: 3y 3m
With Interview: 94%

Examiner Intelligence

Career Allow Rate: 56% (90 granted / 161 resolved; -14.1% vs TC avg)
Interview Lift: +38.2% on resolved cases with interview (strong)
Avg Prosecution: 3y 3m typical timeline (46 currently pending)
Total Applications: 207 career history, across all art units
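The headline rates above can be cross-checked from the raw counts in the panel. A minimal sketch, assuming the interview lift and the TC-average gap are reported as percentage-point deltas (the tool does not state this explicitly):

```python
# Cross-check of the "Examiner Intelligence" figures from the raw counts.
# Only `granted` and `resolved` are given directly; the lift and TC gap
# are taken from the panel and assumed to be percentage-point deltas.
granted, resolved = 90, 161

allow_rate = granted / resolved * 100              # career allowance rate
print(f"Career allow rate: {allow_rate:.1f}%")     # 55.9%, shown as 56%

interview_lift = 38.2                              # quoted lift
print(f"With interview:   {allow_rate + interview_lift:.1f}%")  # 94.1%, shown as 94%

tc_gap = -14.1                                     # quoted gap vs TC average
print(f"Implied TC avg:   {allow_rate - tc_gap:.1f}%")          # 70.0%
```

Under that assumption the figures are internally consistent: 56% plus the 38.2-point lift reproduces the 94% with-interview number, and the -14.1-point gap implies a Tech Center average near 70%.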

Statute-Specific Performance

§101: 15.9% (-24.1% vs TC avg)
§102: 14.1% (-25.9% vs TC avg)
§103: 33.3% (-6.7% vs TC avg)
§112: 33.1% (-6.9% vs TC avg)

Tech Center averages are estimates. Based on career data from 161 resolved cases.

Office Action

Rejections: §101, §102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Drawings

The drawings are objected to as failing to comply with 37 CFR 1.84(p)(4) because reference character “100” has been used to designate both a robot device and a first device (see Fig. 2). A suggested amendment is as follows: The first device, comprising a spoon 110a and a storage case 110b, should be depicted as “110” (see Specification, [0040-0041]).

The drawings are objected to as failing to comply with 37 CFR 1.84(p)(4) because reference characters “152” and “158” have both been used to designate the input unit (see Fig. 3). A suggested amendment is as follows: Reference character “158” should designate the “control unit” (see Specification, [0056]).

Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f): (f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph: An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof. The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked. As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph: (A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function; (B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and (C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function. Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. 
The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function. Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation is: “second device” recited in claims 9-10 and 12-15 (see MPEP 2181(I)(A), noting that “device for” is a non-structural generic placeholder). Because this claim limitation is being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 
112, sixth paragraph, it is being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have this limitation interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation to avoid it being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation recites sufficient structure to perform the claimed function so as to avoid it being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Objections

Claims 2-3, 8, 10, 13, and 15 are objected to because of the following informalities:

“on a user” recited in claim 2, ln. 2 should likely read “[[on]]of a user”;

“the mood of the user that has checked based on the plurality of sensing data” recited in claim 3, ln. 2-3 should likely read “the mood of the user” (see claim 2, which recites that the “sensing data” is obtained by the second device, as opposed to the “plurality of sensing data” which is obtained from the first device);

“at least one of ingredients required for the meal plan from the user; and ordering ingredients” recited in claim 8, ln. 3-4 should likely read “at least one [[of ]]ingredient[[s]] required for the meal plan from the user; and ordering the at least one ingredient[[s]]”;

“on the user” recited in claim 10, ln. 2 should likely read “[[on]]of the user”;

“performs” recited in claim 10, ln. 2 should likely read “perform[[s]]”;

“to transmit, to a monitoring device, an analysis result obtained by the analyzing the plurality of sensing data upon checking” recited in claim 13, ln.
2-3 should likely read “to transmit[[,]] to a monitoring device[[,]] an analysis result, obtained by analyzing the plurality of sensing data, upon checking” to avoid claim ambiguity; and

“to order ingredients according to a purchase request signal to a preset mart server in accordance with the purchase request signal for at least one of ingredients required for the meal plan received from the user” recited in claim 15, ln. 2-4 should likely read “to order at least one ingredient[[s]] required for the meal plan according to a purchase request signal from the user to a preset mart server”.

Appropriate correction is required.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-3, 6-7, 9-10, and 13-14 are rejected under 35 U.S.C. 101 because the claimed invention is directed to abstract ideas without significantly more.

Regarding claim 1, analyzed as representative claim:

[Step 1] Claim 1 recites in part “A method”, which falls within the “process” statutory category of invention.

[Step 2A – Prong 1] The claim recites a series of steps which can practically be performed by one or more humans through mental process (i.e., observation, evaluation, judgment, and/or opinion) (see MPEP 2106.04(a)(2)(III)), and certain methods of organizing human activity (i.e., managing personal behavior or relationships or interactions between people – including social activities, teaching, and following rules of instructions) (see MPEP 2106.04(a)(2)(II)).
Claim 1 recites: A method of providing a food intake support service, comprising:

outputting, by a second device included in a robot device comprising the second device and a first device, an alarm when a preset food intake time arrives (human activity: interactions between two individuals, e.g., teaching/instruction);

receiving, by the second device, a plurality of sensing data related to food intake from the first device (mental process: observation, and/or data gathering/transmission);

analyzing, by the second device, the plurality of sensing data (mental process: evaluation);

outputting, by the second device, feedback based on the plurality of sensing data (human activity: interactions between two individuals, e.g., teaching/instruction); and

performing, by the second device, a real-time interactive communication based on inputted voice data (human activity: interactions between two individuals, e.g., teaching/instruction).

The limitations, under their broadest reasonable interpretation, encompass mental processes and certain methods of organizing human activity, as shown above, but for the recitation of generic computing components italicized above (a first device and a second device of a robot device). For example, a human (e.g., caretaker) could instruct (e.g., verbally alarm/notify) a person (e.g., patient) when a preset meal time arrives, mentally analyze gathered data related to food intake (e.g., evaluate a time it takes to consume a meal based on timer data, a heart rate or blood pressure taken by a monitoring device during consumption, a like/dislike of certain foods based on visual observation, etc.), verbally output feedback based on the gathered data (e.g., instruct the patient to eat slower or to avoid certain foods), and engage in verbal communication with the patient. Thus, the claim recites abstract ideas.

[Step 2A – Prong 2] The claim fails to recite additional limitations to integrate the abstract ideas into a practical application.
That is, while the claim recites a “second device” included in a robot device for performing the steps of outputting an alarm, receiving data, analyzing the data, outputting feedback, and performing a real-time interactive communication, the second device is recited at a high level of generality and amounts to no more than mere automation of manual processes. The generic manner in which the second device is claimed amounts to no more than instructions to implement the abstract ideas using a generic computing component (see MPEP 2106.05(f)). Similarly, the additional element of a “first device” included in the robot device for receiving and transmitting a plurality of sensing data related to food intake is recited at a high level of generality and also amounts to no more than instructions to implement the abstract ideas using a generic computing component (see Specification, [0110], “The devices and control thereof described above may be implemented in hardware components, software components, and/or combinations of hardware components and software components. For example, the devices and components described in the embodiments may be implemented using one or more general-purpose computers or special-purpose computers, such as processors, controllers, ALUs (arithmetic logic units), digital signal processors, microcomputers, FPGAs (field programmable gate arrays), PLUs (programmable logic units), microprocessors, or any other devices capable of executing and responding to instructions.”). Additionally, and/or alternatively, the limitation of receiving, by the second device, a plurality of sensing data related to food intake from the first device is directed to the insignificant extra-solution activity of data gathering/transmission, which does not integrate the abstract ideas into a practical application (See MPEP 2106.05(g)). 
There is no indication that the combination of elements improves the functionality of a computer or other technology (See MPEP 2106.05(a)), recites a “particular machine” to apply or use the abstract ideas (See MPEP 2106.05(b)), recites a transformation of an article to a different thing or state (See MPEP 2106.05(c)), or recites any other meaningful limitation (See MPEP 2106.05(e)). As indicated by the Specification, the generic components are utilized to automate functions performed by a human (see Specification, [0004-0005], due to a decrease in funding and manpower to provide care services). Accordingly, the claim is directed to the abstract ideas.

[Step 2B] As discussed above with respect to integration of the abstract ideas into a practical application, the claim does not further include additional elements that are sufficient to amount to significantly more than the abstract ideas. While the claim recites a robot device comprising a first device and a second device for performing the claimed functions, the limitations are recited at a high level of generality such that they do not amount to a particular machine or technical improvement thereof, nor do they represent an improvement in any other technology. Rather, the generic manner in which these limitations are claimed amounts to mere instructions to implement the abstract ideas using generic computing components. Additionally, and/or alternatively, as noted above, the limitation of receiving, by the second device, a plurality of sensing data related to food intake from the first device is directed to insignificant extra-solution activity (data gathering/transmission). Taking the claim elements separately, the functions performed by the devices are devoid of technical/technological details. Further, the limitations, when taken in combination, add nothing that is not already present when looking at the elements taken individually.
Furthermore, the Specification further demonstrates that the elements are recited for their well-understood, routine, and conventional functionality, and refers to the elements in a manner that indicates that they are sufficiently well-known that the Specification does not need to describe the particulars of such elements to satisfy enablement (see Specification, [0110], “The devices and control thereof described above may be implemented in hardware components, software components, and/or combinations of hardware components and software components. For example, the devices and components described in the embodiments may be implemented using one or more general-purpose computers or special-purpose computers, such as processors, controllers, ALUs (arithmetic logic units), digital signal processors, microcomputers, FPGAs (field programmable gate arrays), PLUs (programmable logic units), microprocessors, or any other devices capable of executing and responding to instructions.”; see further Specification, [0041], “The first device 110 may be implemented broadly as a spoon 110 and a spoon storage case 110b” and [0045], “The second device 150 may include a robot 150a in the form of a plush doll and a cradle 150b on which the robot 150a can be placed”). Thereby, claim 1 is not patent eligible.

Independent claim 9 recites a device configured to provide a food intake support service comprising a first device and a second device which execute the limitations recited above. As similarly noted above, the first and second device are recited at a high level of generality such that they do not amount to a particular machine or technical improvement thereof, nor do they represent an improvement in any other technology.
Rather, the additional elements merely amount to instructions to implement the abstract ideas on generic computing components performing their generic functions (e.g., plurality of sensors for obtaining a plurality of sensing data) (see Specification, [0110], “The devices and control thereof described above may be implemented in hardware components, software components, and/or combinations of hardware components and software components. For example, the devices and components described in the embodiments may be implemented using one or more general-purpose computers or special-purpose computers, such as processors, controllers, ALUs (arithmetic logic units), digital signal processors, microcomputers, FPGAs (field programmable gate arrays), PLUs (programmable logic units), microprocessors, or any other devices capable of executing and responding to instructions.”). Accordingly, the claim fails to include additional limitations that integrate the abstract ideas into a practical application or provide significantly more. Therefore, claim 9 is likewise not patent eligible. Claims 2-3, 6-7, 10, and 13-14 are dependent on claims 1 and 9 and therefore recite the same abstract ideas noted above. While the dependent claims may have a narrower scope than the independent claims, the claims fail to recite additional limitations that would integrate the abstract ideas into a practical application or provide significantly more (i.e., an inventive concept). 
The limitations of claims 2, 6-7, 10, and 13-14 recite further, under their broadest reasonable interpretation, abstract ideas and/or insignificant extra-solution activity comprising obtaining sensing data on a user of the robot device (mental process: observation, and/or insignificant extra-solution activity of data gathering), checking the mood of the user based on the sensing data (mental process: observation/evaluation), checking if food intake has ended (mental process: observation/evaluation), transmitting an analysis result (insignificant extra-solution activity of data transmission), obtaining an analysis result by analyzing a plurality of sensing data (mental process: observation/evaluation), and creating and outputting a meal plan for a fixed period of time based on a health condition of a user (mental process: observation/evaluation/judgment/opinion, and human activity: interactions between two individuals, e.g., teaching/instruction).

Moreover, claims 3 and 10 further merely define the communication performed, and thus do not recite additional limitations to integrate the abstract ideas into a practical application or provide significantly more. Accordingly, the analysis performed on claims 1 and 9 above is also applicable to claims 2-3, 6-7, 10, and 13-14, and therefore, the recited dependent claims are also patent ineligible.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1 and 6 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Kuroki (JP 6528019 B2).
Regarding claim 1, Kuroki discloses a method of providing a food intake support service ([0001]; [0010], interactively inducing a user’s intake of a meal), comprising: outputting, by a second device included in a robot device comprising the second device and a first device, an alarm when a preset food intake time arrives (Figs. 1-3; [0013-0014]; [0022-0023]; [0040]; [0042]; [0048-0049]; [0052], client 6 (robot device) comprising a meal induction device 2 (second device) and a meal intake detection tray 3 (first device), wherein the meal induction device 2 aurally induces the user to take a meal corresponding to received meal intake data from a monitoring device, points to a dish, and also transmits in real time a control signal for turning on and off display portions 34a to 34d and intake device installation display portion 37 of the meal intake detection tray); receiving, by the second device, a plurality of sensing data related to food intake from the first device (Figs. 1 & 3; [0023]; [0041-0042], wherein the detection outputs of the detection units 32a to 32d, the intake device detection units 33a to 33d, and the intake device installation detection unit 36 of the meal intake detection tray 3 are transferred to CPU 20J of the meal induction device 2 through communication bus 99 in real time); analyzing, by the second device, the plurality of sensing data (Fig. 
12; [0023]; [0042], where CPU 20J and RAM 20K of the meal induction device 2 constitute a meal intake measurement means, and wherein the outputs received by the meal induction device 2 are used to measure the meal intake operation (e.g., order, intake interval, and the like in which a user has consumed the dishes) of user 10); outputting, by the second device, feedback based on the plurality of sensing data ([0022-0023]; [0034]; [0042]; [0049], wherein feedback (e.g., controlling the tableware display units 34a to 34d, motor control board 20M for controlling movement of the device 2, and the voice output board 20N that controls a speaker 2H of the device 2) is provided based on the detection outputs of the detection units 32a to 32d, the intake device detection units 33a to 33d, and the intake device installation detection unit 36 to interactively instruct/induce the user to take a next meal); and performing, by the second device, a real-time interactive communication based on inputted voice data ([0022-0023]; [0037]; [0049]; [0059-0060], wherein the device 2, in response to the meal intake operation of the user, induces the next food intake via inputted and set audible instruction output through speaker 2H in order to interactively induce the meal intake to the user).

Regarding claim 6, Kuroki further discloses checking, by the second device, whether the food intake has ended from the first device ([0049]; [0051]; [0063], wherein the device 2 determines if the intake timer has elapsed and/or if the corresponding tableware 4a to 4d is lifted or the intake device 5 is directed to the corresponding tableware 4a to 4d to determine that the dish 4a-4d has been consumed); and transmitting, by the second device, an analysis result obtained by analyzing the plurality of sensing data to a monitoring device (Figs.
1 & 12; [0014]; [0046]; [0071-0072], wherein the meal intake measurement data indicating the meal intake operation of the user (analysis result) is transmitted from the transmitting/receiving unit 20W of the device 2 to the monitoring device 7).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 2-3 are rejected under 35 U.S.C. 103 as being unpatentable over Kuroki in view of Kim (KR 2016/00665341 A1) (hereinafter “Studio”).

Regarding claim 2, Kuroki may not further explicitly disclose, after the outputting of the alarm: obtaining, by the second device, sensing data on a user of the robot device; and checking, by the second device, a mood of the user based on the sensing data obtained by the second device.
However, Studio, directed to a caregiver toy for supporting elderly people ([0001]), teaches the caregiver toy configured to collect information on a user’s biometrics and emotional state based on voice data and movement information/actions of the user collected from a microphone and a sensor unit, respectively, of the caregiver toy ([0019-0020]).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to utilize the meal induction device (stuffed toy communication robot) of Kuroki configured for outputting communication/guidance to the user to further obtain sensing data of the user for checking a mood of the user, as taught by Studio, in order to provide additional and/or adjusted care/guidance to the user (Studio, [0015]; [0024], wherein, by analyzing the current state of the user, appropriate messages suitable for the user’s state can be output by the caregiver toy).

Regarding claim 3, Kuroki may not further explicitly disclose, however, Studio further teaches wherein the real-time interactive communication is performed based on the mood of the user that has checked based on the plurality of sensing data obtained by the second device and the voice data ([0015]; [0019-0020]; [0024], wherein the caregiver toy collects the user’s current state, including voice information and/or movement information, and outputs an appropriate, received message by means of a voice or a movement, thereby enabling the user to feel that a friend is always therewith).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to utilize the meal induction device (stuffed toy communication robot) of Kuroki configured for outputting communication/guidance to the user to further provide the real-time interactive communication with the user based on the mood of the user based on sensing data and voice data, as taught by Studio, in order to provide additional and/or more tailored communication/guidance to the user (Studio, [0015]; [0024], wherein, by analyzing the current state of the user, appropriate messages suitable for the user’s state can be output by the caregiver toy).

Claims 4-5 are rejected under 35 U.S.C. 103 as being unpatentable over Kuroki in view of Dubey et al. (U.S. Pub. 2020/0000258 A1) (hereinafter “Dubey”).

Regarding claim 4, Kuroki discloses wherein the plurality of sensing data is received from a plurality of sensors provided in the first device (Figs. 1 & 3; [0020-0021]; [0023]; [0041-0042]; [0076-0077], wherein the meal intake detection tray includes a plurality of sensors/detection units (detection units 32a to 32d, the intake device detection units 33a to 33d, and the intake device installation detection unit 36) for receiving the plurality of sensing data (e.g., installation state of dishes 4a-4d, lifting/consumption of dishes 4a-4d, state of intake device 5 (e.g., spoon))).

However, Kuroki may not further explicitly disclose wherein the plurality of sensors includes at least one of a motion sensor, a salinity sensor, a gas sensor, or a temperature sensor that is provided in the first device that is a cutlery including a spoon.
Rather, Kuroki further explicitly discloses wherein an intake device of the first device used to consume a meal may be a spoon, and the first device comprises intake device detection units for detecting the intake device ([0017-0018]; [0020], wherein the intake device detection units 33a-33d are reflection type optical sensors that optically detect that the intake device is directed to corresponding tableware installation units 31a-31d).

Nevertheless, Dubey, directed to monitoring an eating utensil ([0001]), teaches the eating utensil, implemented as a spoon or fork, comprising sensors including motion sensors (accelerometer and/or gyroscope sensors), as well as optionally a temperature sensor, wherein the accelerometer and/or gyroscope sensors provide for the detection of the movement of the eating utensil ([0019-0020]).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to substitute the intake device and corresponding detection units of Kuroki with the eating utensil of Dubey and/or further integrate the eating utensil of Dubey in the first device of Kuroki as an alternative and/or additional technique to monitor food intake, including intake order, based on the detected path of the eating utensil (Kuroki, [0004]; [0017]; [0020-0021]; [0039]; [0042]; [0052]; [0063], wherein the order of meal intake is important for disease prevention and is determined at least in part by monitoring the intake device state; Dubey, [0005]; [0020], wherein the accelerometer sensor and/or gyroscope sensor provide for detection of the movement/path of the eating utensil in order to determine if the path differs from an ideal path).

Regarding claim 5, Kuroki further discloses wherein the output feedback, in addition to turning on display portions 34a-34d, includes visual and audible output by the second device ([0022-0023]; [0042]; [0047]; [0049]).
Kuroki may not further explicitly disclose wherein the feedback is output based on at least one of a mastication speed of a user of the robot device, a salinity level contained in a food, whether the food is spoiled, or a temperature of the food based on the plurality of sensing data.

However, Dubey teaches wherein feedback is output based on a temperature of the food, which is sensed by a temperature sensor of the eating utensil (Fig. 2; [0020-0022], wherein an LED is used to output feedback based on a temperature (high or low temperature) of the food detected by the eating utensil).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to output feedback regarding a temperature of the food, as taught by Dubey, in the invention of Kuroki in order to provide additional guidance/information to the user (Kuroki, [0017]; [0020]; [0022-0023]; [0042]; [0047]; [0049]; [0066], where LEDs 34a-34d provide guidance/information to the user, including type of food being consumed, wherein a type of food may be hot food versus cold food, and wherein the feedback provided by the tray (first device) is also provided by the second device (i.e., wherein, in addition to lighting the display portions 34a-34d, the second device provides visual and audible guidance)).

Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Kuroki.

Regarding claim 7, Kuroki further discloses, after the checking of whether the food intake has ended, outputting, by the second device, a meal plan for a fixed period of time based on a health condition of a user of the robot device ([0013-0014]; [0072-0075]; [0078], wherein a health condition analysis is started after the meal intake induction operation is completed, and a proper dietary intake order and intake interval for the user for a next meal (meal plan for a fixed period of time) is provided by the monitoring device accordingly and induced/output by the device 2).
While Kuroki may not explicitly disclose wherein the meal plan is created by the second device, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to utilize the components (e.g., CPU 20J, RAM 20K, ROM 20L) of the device 2 (second device) to create the meal plan based on the analyzed results (meal intake measurement data) determined by the device 2 for faster creation/transmission to the user and/or to reduce the system parts required for cost and/or efficiency purposes (Kuroki, [0014]; [0024-0025], wherein the health condition of the user is analyzed based on the meal intake measurement data received from the device 2, and wherein the health condition analysis unit 76, as well as the monitoring device, are constituted by a CPU and memory). Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Kuroki in view of Yoo (KR 101552339 B1). Regarding claim 8, Kuroki may not further explicitly disclose, after the creating and outputting of the meal plan: receiving, by the second device, a purchase request signal for at least one of ingredients required for the meal plan from the user; and ordering ingredients according to the purchase request signal to a preset mart server. However, Yoo, directed to servicing a personalized food menu (p. 1, paragraph 1 (Title)), teaches wherein information on calculated types and amounts of food ingredients and the costs incurred is provided to a user terminal, the user confirms the location of a local food ingredient distributor and information on the prices of the corresponding food ingredients displayed on the user terminal and requests an order for the food ingredients (a purchase request signal), and accordingly the user terminal is connected to a payment server and payment-completed order information is provided to the corresponding local food ingredient distributor to request a delivery service for the paid food ingredients (p.
11, paragraphs 2-5; p. 16, paragraph 3 (see highlighted paragraphs)). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to incorporate a server which corresponds to a local food ingredient distributor to fulfill a request/order for ingredients, as taught by Yoo, in the invention of Kuroki in order to provide the dishes for the meal intake induction system (Kuroki, Fig. 1; [0014-0015], wherein a delivery company delivers the food (dishes 4a-4d) to the user’s home, the system of Kuroki also including a user terminal 8). Claims 9 and 13-14 are rejected under 35 U.S.C. 103 as being unpatentable over Kuroki in view of Kim (KR 20180078555 A). Regarding claim 9, Kuroki discloses a device configured to provide a food intake support service (Fig. 1; [0001]; [0013], client 6 for interactively inducing a user’s intake of a meal), comprising: a first device comprising a plurality of sensors and configured to obtain a plurality of sensing data related to food intake (Figs. 1 & 3; [0013]; [0017-0023]; [0063]; [0076-0077], meal intake detection tray 3 comprising a plurality of sensors (detection units) (e.g., optical sensors, wireless sensors) for obtaining sensing data related to food intake (e.g., installation state of dishes 4a-4d, lifting/consumption of dishes 4a-4d, state of intake device 5 (e.g., spoon))); and a second device configured to output an alarm when a preset food intake time arrives (Figs.
1-3; [0013-0014]; [0022-0023]; [0040]; [0042]; [0048-0049]; [0052], the client 6 comprising a meal induction device 2 (second device) and the meal intake detection tray 3 (first device), wherein the meal induction device 2 aurally induces the user to take a meal corresponding to received meal intake data from a monitoring device, points to a dish, and also transmits in real time a control signal for turning on and off display portions 34a to 34d and intake device installation display portion 37 of the meal intake detection tray), analyze the plurality of sensing data received from the first device and output feedback (Fig. 12; [0022-0023]; [0034]; [0042], where CPU 20J and RAM 20K of the meal induction device 2 constitute a meal intake measurement means, wherein the received outputs by the meal induction device 2 are used to measure the meal intake operation (e.g., order, intake interval, and the like in which a user has consumed the dishes) of user 10, and wherein feedback (e.g., controlling the tableware display units 34a to 34d, motor control board 20M for controlling movement of the device 2, and the voice output board 20N that controls a speaker 2H of the device 2) is provided based on the detection outputs of 32a to 32d, the intake device detection units 33a to 33d, and the intake device installation detection unit 36 to interactively instruct/induce the user to take a next meal), and perform a real-time interactive communication based on voice data ([0022-0023]; [0037]; [0049]; [0059-0060], wherein the device 2, in response to the meal intake operation of the user, induces the next food intake via inputted and set audible instruction output using the speaker 2H in order to interactively induce the meal intake to the user). Kuroki may not further explicitly disclose wherein the real-time interactive communication is performed based on voice data inputted from a user.
However, Kim, directed to a care system using a silver smart device (e.g., doll/toy) including a speech induction system that can be a chat partner for a user and for prompting a meal (Figs. 1 & 4; [0001]; [0014-0015]; [0044]; [0046]), teaches wherein a toy (silver smart device) recommends a meal, receives a user’s response inputted through a voice input unit, and generates, in response to the analyzed input voice signal, an output voice as an answer (Fig. 4; [0017]; [0048-0049]). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to perform the interactive communication of the meal induction device of Kuroki, which is based on user operations (intake operations), further on voice data inputted from a user, as taught by Kim, in order to provide a chat system and/or more interactive meal prompting system which aids with loneliness or depression (Kuroki, [0002]; [0005], wherein nutrition instruction is important in preventative measures against lifestyle-related diseases and dementia, and additionally wherein the decrease in quality of life from depression and loneliness due to an increase in solitary living has been a problem which the invention aims to solve; Kim, [0005]; [0010-0011], wherein older people are often alone and are thus more likely to suffer from depression, and therefore, it is necessary to develop a device capable of conversation). Regarding claim 13, Kuroki further discloses wherein the second device is configured to transmit, to a monitoring device, an analysis result obtained by analyzing the plurality of sensing data upon checking from the first device whether the food intake has ended (Figs.
1 & 12; [0014]; [0046]; [0049]; [0051]; [0063]; [0071-0072], wherein the device 2 determines if the intake timer has elapsed and/or if the corresponding tableware 4a to 4d is lifted or the intake device 5 is directed to the corresponding tableware 4a to 4d to determine that the dish 4a-4d has been consumed, and thereafter the meal intake measurement data indicating the meal intake operation of the user (analysis result) is transmitted from the transmitting/receiving unit 20W of the device 2 to the monitoring device 7). Regarding claim 14, Kuroki further discloses wherein the second device is configured to output a meal plan for a fixed period of time based on a health condition of the user ([0013-0014]; [0072-0075]; [0078], wherein a health condition analysis is started after the meal intake induction operation is completed, and a proper dietary intake order and intake interval for the user for a next meal (meal plan for a fixed period of time) is provided by the monitoring device accordingly and induced/outputted by the device 2). While Kuroki may not explicitly disclose wherein the meal plan is created by the second device, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to utilize the components (e.g., CPU 20J, RAM 20K, ROM 20L) of the device 2 (second device) to create the meal plan based on the analyzed results (meal intake measurement data) determined by the device 2 for faster creation/transmission to the user and/or to reduce the system parts required for cost and/or efficiency purposes (Kuroki, [0014]; [0024-0025], wherein the health condition of the user is analyzed based on the meal intake measurement data received from the device 2, and wherein the health condition analysis unit 76, as well as the monitoring device, are constituted by a CPU and memory). Claim 10 is rejected under 35 U.S.C. 
103 as being unpatentable over Kuroki in view of Kim, as applied to claim 9, and in further view of Kim (KR 2016/00665341 A1) (hereinafter “Studio”). Regarding claim 10, Kuroki may not further explicitly disclose wherein the second device is configured to check a mood of the user based on sensing data on the user and performs the real-time interactive communication based on the mood of the user and the voice data. However, Studio, directed to a caregiver toy for supporting elderly people ([0001]), teaches the caregiver toy configured to collect information on a user’s biometrics and emotional state based on voice data and movement information/actions of the user collected from a microphone and a sensor unit, respectively, of the caregiver toy, wherein real-time interactive communication (voice message) is performed based on the user’s current state, including voice information and/or movement information ([0015]; [0019-0020]; [0024]). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to utilize the meal induction device (stuffed toy communication robot) of Kuroki configured for outputting communication/guidance to the user to further obtain sensing data for checking a mood of the user and providing the real-time interactive communication with the user based on the mood of the user based on the sensing data and voice data, as taught by Studio, in order to provide additional and/or more tailored communication/guidance to the user (Studio, [0015]; [0024], wherein, by analyzing the current state of the user, appropriate messages suitable for the user’s state can be output by the caregiver toy). Claims 11-12 are rejected under 35 U.S.C. 103 as being unpatentable over Kuroki in view of Kim, as applied to claim 9, and in further view of Dubey et al. (U.S. Pub. 2020/0000258 A1) (hereinafter “Dubey”). 
Regarding claim 11, Kuroki further discloses wherein an intake device of the first device used to consume a meal may be a spoon, and the first device comprises a plurality of sensors/detection units, including intake device detection units for detecting the intake device ([0017-0018]; [0020], wherein the intake device detection units are reflection type optical sensors that optically detect that the intake device is directed to corresponding tableware installation units 31a-31d). However, Kuroki may not explicitly disclose wherein the first device is a cutlery comprising a spoon provided with a plurality of sensors comprising at least one of a motion sensor, a salinity sensor, a gas sensor, or a temperature sensor. Nevertheless, Dubey, directed to monitoring an eating utensil ([0001]), teaches the eating utensil, implemented as a spoon or fork, comprising sensors including motion sensors (accelerometer and/or gyroscope sensors), as well as optionally a temperature sensor, wherein the accelerometer and/or gyroscope sensors provide for the detection of the movement of the eating utensil ([0019-0020]). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to substitute the intake device and corresponding detection units of Kuroki with the eating utensil of Dubey and/or further integrate the eating utensil of Dubey in the first device of Kuroki as an alternative and/or additional technique to monitor food intake, including intake order, based on the detected path of the eating utensil (Kuroki, [0004]; [0017]; [0020-0021]; [0039]; [0042]; [0052]; [0063], wherein the order of meal intake is important for disease prevention and is determined at least in part by monitoring the intake device state; Dubey, [0005]; [0020], wherein the accelerometer sensor and/or gyroscope sensor provide for detection of the movement/path of the eating utensil in order to determine if the path differs from an ideal path). 
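The comparison for which Dubey is cited here (deciding whether the utensil's detected path differs from an ideal path, using accelerometer/gyroscope data) can be sketched as a mean pointwise deviation test. The path representation and tolerance below are assumptions, not taken from Dubey:

```python
import math

def path_deviation(observed, ideal):
    """Mean pointwise Euclidean distance between an observed utensil path
    and an ideal path, both given as equal-length lists of (x, y, z)
    samples. Representation is an illustrative assumption."""
    return sum(math.dist(o, i) for o, i in zip(observed, ideal)) / len(ideal)

def path_differs(observed, ideal, tolerance=0.05):
    """True if the observed path deviates from the ideal path by more than
    a (hypothetical) tolerance."""
    return path_deviation(observed, ideal) > tolerance
```

In practice the motion-sensor samples would first be integrated into positions; this sketch starts from the resulting path points.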
Regarding claim 12, Kuroki further discloses wherein the output feedback, in addition to turning on display portions 34a-34d, includes visual and audible output by the second device ([0022-0023]; [0042]; [0047]; [0049]). Kuroki may not further explicitly disclose wherein the second device is configured to output the feedback based on at least one of a mastication speed of the user, a salinity level contained in a food, whether the food is spoiled, or a temperature of the food based on the plurality of sensing data. However, Dubey teaches wherein feedback is output based on a temperature of the food, which is sensed by a temperature sensor of the eating utensil (Fig. 2; [0020-0022], wherein an LED is used to output feedback based on a temperature (high or low temperature) of the food detected by the eating utensil). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to output feedback regarding a temperature of the food, as taught by Dubey, in the invention of Kuroki in order to provide additional guidance/information to the user (Kuroki, [0017]; [0020]; [0022-0023]; [0042]; [0047]; [0066], where LEDs 34a-34d provide guidance/information to the user, including type of food being consumed, wherein a type of food may be hot food versus cold food, and wherein the feedback provided by the tray (first device) is also provided by the second device (i.e., wherein, in addition to lighting the display portions 34a-34d, the second device provides visual and audible guidance)). Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Kuroki in view of Kim, as applied to claim 14, and in further view of Yoo (KR 101552339 B1).
Regarding claim 15, Kuroki may not further explicitly disclose wherein the second device is configured to order ingredients according to a purchase request signal to a preset mart server in accordance with the purchase request signal for at least one of ingredients required for the meal plan received from the user. However, Yoo, directed to servicing a personalized food menu (p. 1, paragraph 1 (Title)), teaches wherein information on calculated types and amounts of food ingredients and the costs incurred is provided to a user terminal, the user confirms the location of a local food ingredient distributor and information on the prices of the corresponding food ingredients displayed on the user terminal and requests an order for the food ingredients (a purchase request signal), and accordingly the user terminal is connected to a payment server and payment-completed order information is provided to the corresponding local food ingredient distributor to request a delivery service for the paid food ingredients (p. 11, paragraphs 2-5; p. 16, paragraph 3 (see highlighted paragraphs)). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to incorporate a server which corresponds to a local food ingredient distributor to fulfill a request/order for ingredients, as taught by Yoo, in the invention of Kuroki in order to provide the dishes for the meal intake induction system (Kuroki, Fig. 1; [0014-0015], wherein a delivery company delivers the food (dishes 4a-4d) to the user’s home, the system of Kuroki also including a user terminal 8). Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. U.S. 9,818,310 B2 – This reference teaches a sensor in a spoon, and further, wherein a start of an eating task or an end of an activity/meal can be determined.
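The Yoo purchase flow relied on for claims 8 and 15 (purchase request signal, then payment, then a payment-completed order forwarded to a preset mart/distributor server) can be sketched as below. All class and function names are hypothetical, introduced only for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class MartServer:
    """Stand-in for the preset mart / local food ingredient distributor
    server (a hypothetical name, not from Yoo or the claims)."""
    orders: list = field(default_factory=list)

    def place_order(self, ingredients, paid):
        # Yoo forwards only payment-completed order information.
        if not paid:
            raise ValueError("order must be payment-completed before delivery")
        self.orders.append(tuple(ingredients))
        return len(self.orders)  # order ID

def handle_purchase_request(ingredients, pay, mart):
    """Second-device logic: on a user's purchase request signal, complete
    payment via the payment-server callback, then forward the order."""
    paid = pay(ingredients)
    return mart.place_order(ingredients, paid)

mart = MartServer()
order_id = handle_purchase_request(["tofu", "spinach"], lambda items: True, mart)
print(order_id)  # → 1
```

The `pay` callback stands in for the payment-server connection Yoo describes; the distributor then dispatches delivery for the paid ingredients.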
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALYSSA N BRANDLEY whose telephone number is (571)272-4280. The examiner can normally be reached M-F: 8:30am-5:00pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Dmitry Suhol, can be reached at (571)272-4430. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /ALYSSA N BRANDLEY/Examiner, Art Unit 3715

Prosecution Timeline

Feb 16, 2024
Application Filed
Dec 31, 2025
Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597363
STEERING WHEEL CONNECTOR FOR AUTOMOTIVE SIMULATOR
2y 5m to grant Granted Apr 07, 2026
Patent 12592308
SYSTEM AND METHOD FOR AN ARTIFICIAL INTELLIGENCE ENGINE THAT USES A MULTI-DISCIPLINARY DATA SOURCE TO DETERMINE COMORBIDITY INFORMATION PERTAINING TO USERS AND TO GENERATE EXERCISE PLANS FOR DESIRED USER GOALS
2y 5m to grant Granted Mar 31, 2026
Patent 12564762
PHYSICAL ACTIVITY MONITORING AND MOTIVATING WITH AN ELECTRONIC DEVICE
2y 5m to grant Granted Mar 03, 2026
Patent 12567341
ORIENTATION ASSISTANCE SYSTEM
2y 5m to grant Granted Mar 03, 2026
Patent 12532953
COLOR CHART AND METHOD FOR THE MANUFACTURE OF SUCH A COLOR CHART
2y 5m to grant Granted Jan 27, 2026
Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
56%
Grant Probability
94%
With Interview (+38.2%)
3y 3m
Median Time to Grant
Low
PTA Risk
Based on 161 resolved cases by this examiner. Grant probability derived from career allow rate.
