Prosecution Insights
Last updated: April 19, 2026
Application No. 18/290,564

ARTIFICIAL INTELLIGENCE-BASED MEAL MONITORING METHOD AND APPARATUS

Non-Final OA: §101, §102, §103
Filed: Nov 14, 2023
Examiner: GARTLAND, SCOTT D
Art Unit: 3685
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Nuvi Labs Co. Ltd.
OA Round: 3 (Non-Final)
Grant Probability: 11% (At Risk)
OA Rounds: 3-4
To Grant: 4y 4m
With Interview: 24%

Examiner Intelligence

Grants only 11% of cases
Career Allow Rate: 11% (65 granted / 585 resolved; -40.9% vs TC avg)
Interview Lift: +12.4% in resolved cases with interview (moderate lift)
Typical timeline: 4y 4m avg prosecution; 41 currently pending
Career history: 626 total applications across all art units
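These cards appear arithmetically consistent: 65 granted / 585 resolved ≈ 11.1% (shown as 11%), the -40.9% gap implies a Tech Center average near 52%, and the 24% with-interview figure matches the 11.1% baseline plus the +12.4-point lift (≈ 23.5%, rounded to 24%).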

Statute-Specific Performance

§101: 28.5% (-11.5% vs TC avg)
§103: 29.9% (-10.1% vs TC avg)
§102: 15.8% (-24.2% vs TC avg)
§112: 21.1% (-18.9% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 585 resolved cases
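Each delta implies the same Tech Center average estimate of 40% for every statute (28.5 + 11.5 = 29.9 + 10.1 = 15.8 + 24.2 = 21.1 + 18.9 = 40.0).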

Office Action

§101 §102 §103
DETAILED ACTION

Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 26 November 2025 has been entered.

Status
This Office Action is in response to the communication filed on 26 November 2025. Claims 2 and 10 have been cancelled, claims 1, 8-9, and 12-15 have been amended, and no new claims have been added. Therefore, claims 1, 3-9, and 11-15 are pending and presented for examination.

Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment
A summary of the Examiner's Response to Applicant's amendment:
Applicant's amendment overcomes the claim objection(s); therefore, the Examiner withdraws the objection(s). The Examiner has noticed that current claims 1 and 15 each have phrasing and/or dependencies added and/or deleted without edit markings. Since the Examiner has noticed the editing at claims 1 and 15, these particular edits will be interpreted or regarded as an effective amendment; however, no other amending without edit markings is considered as effective for amending. See the claim objections below. It is noted that ONLY claims 1 and 15 are being considered as effectively amended, and ONLY for the noted phrasing addition or deletion.
Applicant's amendment avoids the 35 USC § 112(f) interpretation; therefore, the related claim interpretation and 35 USC § 112(a) and (b) rejections are withdrawn.
Applicant's amendment does not overcome the rejection(s) under 35 USC § 101; therefore, the Examiner maintains the rejection(s) while updating phrasing in keeping with current examination guidelines.
Applicant's amendment does not overcome the prior art rejection(s) under 35 USC §§ 102 or 103; therefore, the Examiner maintains the rejection(s) as below.
Applicant's arguments are found to be not persuasive; please see the Response to Arguments below.

Information Disclosure Statement
The information disclosure statement (IDS) submitted on 8 November 2024 was filed after the filing date of the application on 14 November 2023. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Examiner Note
Claims 6 and 13 each indicate "the pre-stored classification table is generated by inputting at least one image to an artificial intelligence-based machine learning model as training data and training the machine learning model to learn the respective classes of a plurality of foods and includes at least one class for identifying each of the plurality of foods, wherein the controller updates the pre-stored classification table by further training the machine learning model using the labeled image as the training data" (at claim 6, claim 13 having slightly different phrasing, but the same concept). However, the classification table is "pre-stored" – whether it has been updated or not, when the system or method operates to classify a food item, the table is whatever the table is at that time.
Whether that table was generated by artificial intelligence, machine learning, manual labor, arbitrary guessing, picking random phrases, etc. does not appear to matter with regard to the table being used. As such, little if any patentable weight may be granted to the manner in which that table was formed, generated, or updated.

Claim Objections
Claim 1 is objected to because of the following informalities: claim 1 previously recited "while the subclass is a class for classifying a food which is subdivided into at least two foods" at the previous final element. However, claim 1 now deletes the above phrasing without edit markings, and adds "wherein the subclass is a class for classifying a food which is subdivided into at least two foods" with edit markings. This is being considered an informality merely because the Examiner has noticed the change(s) and this action appears fairly readily possible despite the incorrect edit marking. The addition is recognized per the edit markings, and the deletion is being considered as an effective amendment despite no edit markings indicating the deletion. It is noted, however, that if there is any other un-marked editing other than that specifically indicated by the Examiner, it is not being considered as effective for amending the claims.

Claim 15 is objected to because of the following informalities: claim 15 now recites "to perform a meal monitoring method of any one of claims 9 and 11 to 14" (as per the edit markings of the current claims), where the previous version of this claim recited "to perform a meal monitoring method of claim 9". The Examiner notes that several words and possible dependencies have been added by amendment without any edit markings. This is being considered an informality merely because the Examiner has noticed the change(s) and this action appears fairly readily possible despite the incorrect edit marking. Claim 15 is being regarded as effectively amended. It is noted, however, that if there is any other un-marked editing other than that specifically indicated by the Examiner, it is not being considered as effective for amending the claims. Appropriate correction is required at any future submissions – it appears a Notice of Non-Responsive Amendment would otherwise be appropriate.

Claim Interpretation
The Examiner notes that claims 8 and 14 recite a "common menu name", where this apparently means a "base class" or "largest concept for the corresponding food", i.e., a type or category of food, in light of the specification (see, e.g., Applicant ¶¶ 0062, 0070).

Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1, 3-9, and 11-15 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Please see the following Subject Matter Eligibility ("SME") analysis:

For analysis under SME Step 1, the claims herein are directed to an apparatus (claims 1 and 3-8) and method (claims 9 and 11-15), which would be classified under one of the listed statutory classifications (SME Step 1=Yes).
For analysis under revised SME Step 2A, Prong 1, independent claim 1 recites a meal monitoring apparatus, the apparatus comprising: at least one camera configured to capture an image including a food tray; a storage storing a pre-stored food menu, a pre-stored classification table, and at least one application program; and a controller configured to execute the at least one application program to: extract a food list from the pre-stored food menu; for each food in the extracted food list, check the pre-stored classification table to determine whether the food corresponds to a mix class or a subclass; when a food is determined to correspond to the mix class, add individual foods that can be combined with the food to form combined foods to the extracted food list; when a food is determined to correspond to the subclass, subdivide the food into at least two individual foods and add the subdivided individual foods to the extracted food list, thereby generating a subdivided food list as food information; when an image of a meal is captured by the at least one camera, classify at least one food included in the captured image based on the subdivided food list; and generate and store food information based on the classified at least one food, wherein the pre-stored classification table defines a relationship for each of a plurality of foods as at least one of a mix class and the subclass, wherein the mix class is a class for classifying a food resulting from a combination of at least two foods, wherein the subclass is a class for classifying a food which is subdivided into at least two foods.

Independent claim 9 is analyzed similarly since it is directed to "a meal monitoring method, performed by a controller of a meal monitoring apparatus, the method comprising" the same or similar operations or activities as at claim 1 above, except that an image is actually obtained "based on at least one camera" at claim 9 and the classifying is of food included in the captured image.

The Examiner notes that claim 15 depends from claim 9 (i.e., "a ... method of any one of claims 9 and 11 to 14", where claims 11-14 depend directly or indirectly from claim 9) but recites "A non-transitory computer-readable recording medium being integrated with a computer and storing a computer program for operating the computer to perform a meal monitoring method of any one of claims 9 and 11 to 14" and is therefore analyzed similarly to claims 1 and 9.

The other remaining dependent claims (claims 3-8 and 11-14) appear to be encompassed by the abstract idea of the independent claims since they merely indicate labeling a food (claims 3 and 11), that a relationship is defined between the extracted and subdivided food lists based on correlation (claims 4-5 and 12), that the table was generated by machine learning and re-learning or re-training (claims 6-7 and 13), and converting the food menu into a common menu name using a word2vec word embedding model (claims 8 and 14). The underlined portions of the claims are an indication of elements additional to the abstract idea (to be considered below).

The claim elements may be summarized as the idea of storing a food list (although the claims imply that food on a tray is recognized). However, the Examiner notes that although this summary of the claims is provided, the analysis regarding subject matter eligibility considers the entirety of the claim elements, both individually and as a whole (or ordered combination).
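For concreteness, the recited expansion logic can be sketched as follows (a minimal, hypothetical Python illustration; the table entries and food names are invented and are not from the application or the record):

# Hypothetical sketch of the claim 1 food-list expansion.
# Table contents and food names below are invented for illustration only.
MIX = "mix class"   # a food formed by combining at least two foods
SUB = "subclass"    # a food which is subdivided into at least two foods

classification_table = {
    "bibimbap":    (MIX, ["rice", "namul", "fried egg"]),
    "fruit salad": (SUB, ["pineapple", "cantaloupe", "watermelon"]),
}

def generate_subdivided_food_list(food_menu):
    food_list = list(food_menu)                 # "extract a food list"
    for food in food_menu:                      # check the table per food
        cls, individual_foods = classification_table.get(food, (None, []))
        if cls == MIX:                          # mix class: add combinable individual foods
            food_list.extend(individual_foods)
        elif cls == SUB:                        # subclass: add subdivided individual foods
            food_list.extend(individual_foods)
    return food_list                            # the "subdivided food list"

print(generate_subdivided_food_list(["bibimbap", "fruit salad", "kimchi"]))
# ['bibimbap', 'fruit salad', 'kimchi', 'rice', 'namul', 'fried egg',
#  'pineapple', 'cantaloupe', 'watermelon']

On this sketch, whatever list the expansion step produces then serves as the basis for classifying the captured image, which is the reading the Examiner presses in the Response to Arguments below.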
This idea is within the Mental processes (e.g., concepts performed in the human mind such as observation, evaluation, judgment, and/or opinion) grouping(s) of subject matter as based on the observation and evaluation of the food being able to be performed by a person mentally – a person can remember and recognize food items as well as classify them, including as based on a stored menu. Therefore, the claims are found to be directed to an abstract idea.

For analysis under revised SME Step 2A, Prong 2, the above judicial exception is not integrated into a practical application because the additional elements do not impose a meaningful limit on the judicial exception when evaluated individually and as a combination. The additional elements are the indications of an apparatus, the apparatus comprising: at least one camera configured to capture an image, a storage storing a menu and classification table, and at least one application program, and a controller configured to execute the at least one application program to perform the activities, including capturing an image by the at least one camera (at claims 1 and 9); a non-transitory computer-readable recording medium being integrated with a computer and storing a computer program for operating the computer to perform (at claim 15); and converting to a common name by using a word2vec word embedding model (at dependent claims 8 and 14). These additional elements do not reflect an improvement in the functioning of a computer or an improvement to other technology or technical field; effect a particular treatment or prophylaxis for a disease or medical condition (there is no medical disease or condition, much less a treatment or prophylaxis for one); implement the judicial exception with, or by using in conjunction with, a particular machine or manufacture that is integral to the claim; effect a transformation or reduction of a particular article to a different state or thing (there is no transformation/reduction of a physical article); and/or apply or use the judicial exception in some other meaningful way beyond generically linking use of the judicial exception to a particular technological environment. The claims appear to merely apply the judicial exception, include instructions to implement an abstract idea on a computer, or merely use a computer as a tool to perform the abstract idea. The additional elements appear to merely add insignificant extra-solution activity to the judicial exception and/or generally link the use of the judicial exception to a particular technological environment or field of use.

For analysis under SME Step 2B, the claim(s) does/do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements, as indicated above, are merely "[a]dding the words 'apply it' (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, e.g., a limitation indicating that a particular function such as creating and maintaining electronic records is performed by a computer, as discussed in Alice Corp." that MPEP § 2106.05(I)(A) indicates to be insignificant activity.
There is no indication the Examiner can find in the record regarding any specialized computer hardware or other "inventive" components, but rather, the claims merely indicate computer components which appear to be generic components and therefore do not satisfy an inventive concept that would constitute "significantly more" with respect to eligibility. The server providing information to the user terminal is only described as a generic "server" (see, e.g., Applicant ¶¶ 0039, 0044, 0045, 0048, 0078), the storage and controller are only indicated generically (see, e.g., Applicant ¶¶ 0056-0059), and the user terminal is also merely indicated as generic (Applicant ¶ 0047, including that the user terminal may just be "a black box"). The individual elements therefore do not appear to offer any significance beyond the application of the abstract idea itself, and there does not appear to be any additional benefit or significance indicated by the ordered combination, i.e., there does not appear to be any synergy or special import to the claim as a whole other than the application of the idea itself. The dependent claims, as indicated above, appear encompassed by the abstract idea since they merely limit the idea itself; therefore the dependent claims do not add significantly more than the idea. Therefore, SME Step 2B=No, any additional elements, whether taken individually or as an ordered whole in combination, do not amount to significantly more than the abstract idea, including analysis of the dependent claims.

Please see the Subject Matter Eligibility (SME) guidance and instruction materials at https://www.uspto.gov/patent/laws-and-regulations/examination-policy/subject-matter-eligibility, which includes the latest guidance, memoranda, and update(s), for further information.

NOTICE
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1, 3-7, 9, 11-13, and 15 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Divakaran et al. (U.S. Patent Application Publication No. 2016/0063734, hereinafter Divakaran).
Claim 1: Divakaran discloses a meal monitoring apparatus (see Divakaran at least at, e.g., ¶ 0019, “The food recognition assistant computing system 100 executes one or more feature detection algorithms, including machine learning algorithms, semantic reasoning techniques, similarity algorithms, and/or other technologies to, among other things, in an automated fashion detect, recognize and describe one or more food items that are depicted in a digital image 102”; citation hereafter by number only), the apparatus comprising: at least one camera configured to capture an image including a food tray (0063, “Initially, the system 100 receives one or more input files, e.g. a digital image 102. The input file(s) can be embodied as, for example, a digital image captured by a smartphone or other personal electronic device”, 0089, “a camera coupled to the input/output subsystem; a display device coupled to the input/output subsystem; and memory including instructions executable by the one or more processors to cause the personal mobile electronic device to: with the camera, take a single picture of a plate of food; with the input/output subsystem, transmit the picture of the plate of food to a server computing device; with the input/output subsystem, receive from the server computing device, data identifying a food present in the picture of the plate of food and an algorithmically determined-confidence level associated with the identified food, where the data identifying the food present in the picture of the plate of food results from a multi-scale computer vision analysis of the single picture of the plate of food; and with the display device, display a label indicative of the identified food”); a storage storing a pre-stored food menu, a pre-stored classification table, and at least one application program (0029, “the sample of user-generated images is stored in a food image library 140”, 0030, “the food recognition module 104 can compare the image 102, or a semantic description of the image, to a known menu of the corresponding food dispensary” and “The co-occurrence model 138 relies on common relationships between foods to identify other foods that are likely to be present in the image. For example, if the food recognition module 104 identifies one of the foods present in an image as fried eggs, using the co-occurrence model 138, the food recognition module 104 can then compare the other foods present in the image 102 to foods that are commonly associated with or paired with fried eggs, such as toast or bacon. 
In this way, the co-occurrence model 138 can help the food recognition module 104 quickly identify meals that have combinations of foods that are frequently consumed together, such as hamburger and French fries or cake and ice cream”, 0063, “The input file(s) may be stored on a local computing device and/or a remote computing device (e.g., in personal cloud, such as through a document storing application like DROPBOX)”, 0089, “a camera coupled to the input/output subsystem; a display device coupled to the input/output subsystem; and memory including instructions executable by the one or more processors to cause the personal mobile electronic device to: with the camera, take a single picture of a plate of food”); and a controller configured to execute the at least one application program (0089, as above) to: extract a food list from the pre-stored food menu (0030, “the food recognition module 104 can compare the image 102, or a semantic description of the image, to a known menu of the corresponding food dispensary”, 0045, “The context model 134 can then search for a known menu of that restaurant and compare the image 102 to known food items served at the identified restaurant based on the menu”); for each food in the extracted food list, check the pre-stored classification table to determine whether the food corresponds to a mix class or a subclass (0019, “the illustrative computing system 100 can, among other things, help users quickly and easily identify the presence of a particular food or combination of foods in the image 102. The computing system 100 can also estimate a portion size of each identified food, and present nutritional information that relates to identified foods and their estimated portion sizes. The computing system 100 can, alternatively or in addition, generate natural language descriptions of food items detected in the image 102 (and/or natural language descriptions of combinations of identified food items)”); when a food is determined to correspond to the mix class, add individual foods that can be combined with the food to form combined foods to the extracted food list (0019, “generate natural language descriptions of food items detected in the image 102 (and/or natural language descriptions of combinations of identified food items)”, 0060, “Each of FIGS. 8A-8D is an example of a salient region grouping produced by the salient region grouping module 230. Element 810 is a piece of pineapple, which is detected in the groupings of FIGS. 8A, 8C, and 8D. Element 812 is a piece of cantaloupe, which is detected in the groupings of FIGS. 8A, 8C, and 8D. Element 814 is an individual piece of watermelon, which is depicted in FIGS. 8C and 8D. Element 816 is a grouping of multiple pieces of watermelon, which is depicted in FIGS. 8B and 8D. By iteratively analyzing and combining these groupings, the system 100 can identify not only the individual elements of fruit and their estimated portion sizes, but also a larger grouping that identifies the combination of fruits as a fruit salad”, 0083, “In both the example of FIG. 5A and the example of FIG. 5B, the system 100 was able to accurately detect both simple, individual foods and more complex foods consisting of a combination of different individual foods (e.g., “burger” in FIG. 5A is a complex food including ground meat and a bun, with the ground meat partially obscured; “green salad” and “mixed vegetables” are complex foods made up of a combination of individual food items, detected in the example of FIG. 
5B)”); when a food is determined to correspond to the subclass, subdivide the food into at least two individual foods and add the subdivided individual foods to the extracted food list, thereby generating a subdivided food list as food information (0019, “the illustrative computing system 100 can, among other things, help users quickly and easily identify the presence of a particular food or combination of foods in the image 102”, 0060, “Each of FIGS. 8A-8D is an example of a salient region grouping produced by the salient region grouping module 230. Element 810 is a piece of pineapple, which is detected in the groupings of FIGS. 8A, 8C, and 8D. Element 812 is a piece of cantaloupe, which is detected in the groupings of FIGS. 8A, 8C, and 8D. Element 814 is an individual piece of watermelon, which is depicted in FIGS. 8C and 8D. Element 816 is a grouping of multiple pieces of watermelon, which is depicted in FIGS. 8B and 8D. By iteratively analyzing and combining these groupings, the system 100 can identify not only the individual elements of fruit and their estimated portion sizes, but also a larger grouping that identifies the combination of fruits as a fruit salad”, 0083, “In both the example of FIG. 5A and the example of FIG. 5B, the system 100 was able to accurately detect both simple, individual foods and more complex foods consisting of a combination of different individual foods (e.g., “burger” in FIG. 5A is a complex food including ground meat and a bun, with the ground meat partially obscured; “green salad” and “mixed vegetables” are complex foods made up of a combination of individual food items, detected in the example of FIG. 5B)”); when an image of a meal is captured by the at least one camera, classify at least one food included in the captured image based on the subdivided food list (0060, “Each of FIGS. 8A-8D is an example of a salient region grouping produced by the salient region grouping module 230. Element 810 is a piece of pineapple, which is detected in the groupings of FIGS. 8A, 8C, and 8D. Element 812 is a piece of cantaloupe, which is detected in the groupings of FIGS. 8A, 8C, and 8D. Element 814 is an individual piece of watermelon, which is depicted in FIGS. 8C and 8D. Element 816 is a grouping of multiple pieces of watermelon, which is depicted in FIGS. 8B and 8D. By iteratively analyzing and combining these groupings, the system 100 can identify not only the individual elements of fruit and their estimated portion sizes, but also a larger grouping that identifies the combination of fruits as a fruit salad”, 0083, “The output 116 includes bounding boxes 552, 554, 556, 558 overlaid on a digital picture of a plate of food. Each bounding box 552, 554, 556, 558 identifies a salient region or salient region grouping corresponding to an identified food. The output 116 also includes food description data 106, e.g., labels and confidence values 560, 562, 564, 566, each of which corresponds to one of the bounding boxes 552, 554, 556, 558 displayed on the image. The label 560 corresponds to the bounding box 558, the label 562 corresponds to the bounding box 556, the label 564 corresponds to the bounding box 554, and the label 566 corresponds to the bounding box 552. In some embodiments, the bounding boxes and labels are color-coded so that it is easy for the user to visually correlate the food description data with the associated portions of the image. In both the example of FIG. 5A and the example of FIG. 
5B, the system 100 was able to accurately detect both simple, individual foods and more complex foods consisting of a combination of different individual foods (e.g., “burger” in FIG. 5A is a complex food including ground meat and a bun, with the ground meat partially obscured; “green salad” and “mixed vegetables” are complex foods made up of a combination of individual food items, detected in the example of FIG. 5B)”); and generate and store food information based on the classified at least one food (0060, “Each of FIGS. 8A-8D is an example of a salient region grouping produced by the salient region grouping module 230. Element 810 is a piece of pineapple, which is detected in the groupings of FIGS. 8A, 8C, and 8D. Element 812 is a piece of cantaloupe, which is detected in the groupings of FIGS. 8A, 8C, and 8D. Element 814 is an individual piece of watermelon, which is depicted in FIGS. 8C and 8D. Element 816 is a grouping of multiple pieces of watermelon, which is depicted in FIGS. 8B and 8D. By iteratively analyzing and combining these groupings, the system 100 can identify not only the individual elements of fruit and their estimated portion sizes, but also a larger grouping that identifies the combination of fruits as a fruit salad”, 0083, “The output 116 includes bounding boxes 552, 554, 556, 558 overlaid on a digital picture of a plate of food. Each bounding box 552, 554, 556, 558 identifies a salient region or salient region grouping corresponding to an identified food. The output 116 also includes food description data 106, e.g., labels and confidence values 560, 562, 564, 566, each of which corresponds to one of the bounding boxes 552, 554, 556, 558 displayed on the image. The label 560 corresponds to the bounding box 558, the label 562 corresponds to the bounding box 556, the label 564 corresponds to the bounding box 554, and the label 566 corresponds to the bounding box 552. In some embodiments, the bounding boxes and labels are color-coded so that it is easy for the user to visually correlate the food description data with the associated portions of the image. In both the example of FIG. 5A and the example of FIG. 5B, the system 100 was able to accurately detect both simple, individual foods and more complex foods consisting of a combination of different individual foods (e.g., “burger” in FIG. 5A is a complex food including ground meat and a bun, with the ground meat partially obscured; “green salad” and “mixed vegetables” are complex foods made up of a combination of individual food items, detected in the example of FIG. 5B)”), wherein the pre-stored classification table defines a relationship for each of a plurality of foods as at least one of a mix class and the subclass (0060, “Each of FIGS. 8A-8D is an example of a salient region grouping produced by the salient region grouping module 230. Element 810 is a piece of pineapple, which is detected in the groupings of FIGS. 8A, 8C, and 8D. Element 812 is a piece of cantaloupe, which is detected in the groupings of FIGS. 8A, 8C, and 8D. Element 814 is an individual piece of watermelon, which is depicted in FIGS. 8C and 8D. Element 816 is a grouping of multiple pieces of watermelon, which is depicted in FIGS. 8B and 8D. By iteratively analyzing and combining these groupings, the system 100 can identify not only the individual elements of fruit and their estimated portion sizes, but also a larger grouping that identifies the combination of fruits as a fruit salad”, 0083, “In both the example of FIG. 5A and the example of FIG. 
5B, the system 100 was able to accurately detect both simple, individual foods and more complex foods consisting of a combination of different individual foods (e.g., “burger” in FIG. 5A is a complex food including ground meat and a bun, with the ground meat partially obscured; “green salad” and “mixed vegetables” are complex foods made up of a combination of individual food items, detected in the example of FIG. 5B)”), wherein the mix class is a class for classifying a food resulting from a combination of at least two foods (0060, “Each of FIGS. 8A-8D is an example of a salient region grouping produced by the salient region grouping module 230. Element 810 is a piece of pineapple, which is detected in the groupings of FIGS. 8A, 8C, and 8D. Element 812 is a piece of cantaloupe, which is detected in the groupings of FIGS. 8A, 8C, and 8D. Element 814 is an individual piece of watermelon, which is depicted in FIGS. 8C and 8D. Element 816 is a grouping of multiple pieces of watermelon, which is depicted in FIGS. 8B and 8D. By iteratively analyzing and combining these groupings, the system 100 can identify not only the individual elements of fruit and their estimated portion sizes, but also a larger grouping that identifies the combination of fruits as a fruit salad”, 0083, “In both the example of FIG. 5A and the example of FIG. 5B, the system 100 was able to accurately detect both simple, individual foods and more complex foods consisting of a combination of different individual foods (e.g., “burger” in FIG. 5A is a complex food including ground meat and a bun, with the ground meat partially obscured; “green salad” and “mixed vegetables” are complex foods made up of a combination of individual food items, detected in the example of FIG. 5B)”), wherein the subclass is a class for classifying a food which is subdivided into at least two foods (0060, “Each of FIGS. 8A-8D is an example of a salient region grouping produced by the salient region grouping module 230. Element 810 is a piece of pineapple, which is detected in the groupings of FIGS. 8A, 8C, and 8D. Element 812 is a piece of cantaloupe, which is detected in the groupings of FIGS. 8A, 8C, and 8D. Element 814 is an individual piece of watermelon, which is depicted in FIGS. 8C and 8D. Element 816 is a grouping of multiple pieces of watermelon, which is depicted in FIGS. 8B and 8D. By iteratively analyzing and combining these groupings, the system 100 can identify not only the individual elements of fruit and their estimated portion sizes, but also a larger grouping that identifies the combination of fruits as a fruit salad”, 0083, “In both the example of FIG. 5A and the example of FIG. 5B, the system 100 was able to accurately detect both simple, individual foods and more complex foods consisting of a combination of different individual foods (e.g., “burger” in FIG. 5A is a complex food including ground meat and a bun, with the ground meat partially obscured; “green salad” and “mixed vegetables” are complex foods made up of a combination of individual food items, detected in the example of FIG. 5B)”). 
Claim 3: Divakaran discloses the apparatus of claim 1, wherein, when the at least one food is classified, the controller classifies at least one food included in the captured image based on the subdivided food list and performs labeling on at least one food included in the captured image (0040, food classifier where “four very visually similar food items may be classified using four different labels, e.g., beef, beef brisket, beef stew, and steak”, 0042, as describing the labeling process, 0056, “when there are multiple foods on the plate, the foods have to be delineated before the individual foods can be identified. Also, many foods are composed of different ingredients that can be seen on the plate. For example, salad is composed of lettuce, tomatoes, cucumbers, and other vegetables, all of which are individually identifiable in the image 205. The multi-scale segmentation module 210 uses a multi-scale segmentation technique to identify the salad, because a single scale segmentation would not be as helpful in identifying a complex food like salad”, 0062, “The food description data 1-06 may include, e.g., a food identification label (e.g., a food name or type)”, 0083, “The output 116 includes bounding boxes 552, 554, 556, 558 overlaid on a digital picture of a plate of food. Each bounding box 552, 554, 556, 558 identifies a salient region or salient region grouping corresponding to an identified food. The output 116 also includes food description data 106, e.g., labels and confidence values 560, 562, 564, 566, each of which corresponds to one of the bounding boxes 552, 554, 556, 558 displayed on the image. The label 560 corresponds to the bounding box 558, the label 562 corresponds to the bounding box 556, the label 564 corresponds to the bounding box 554, and the label 566 corresponds to the bounding box 552. In some embodiments, the bounding boxes and labels are color-coded so that it is easy for the user to visually correlate the food description data with the associated portions of the image. In both the example of FIG. 5A and the example of FIG. 5B, the system 100 was able to accurately detect both simple, individual foods and more complex foods consisting of a combination of different individual foods (e.g., “burger” in FIG. 5A is a complex food including ground meat and a bun, with the ground meat partially obscured; “green salad” and “mixed vegetables” are complex foods made up of a combination of individual food items, detected in the example of FIG. 5B)”). Claim 4: Divakaran discloses the apparatus of claim 1, wherein, when the food information is generated, the controller defines a relationship between the extracted food list and the subdivided food list based on correlation (0029, “The food identification models 130 are developed or “trained” by correlating a suitably large sample of user-generated images of food with known foods, where the sample of user-generated images is stored in a food image library 140. In some embodiments, an input image 102 (e.g., a pictures of a plate of food taken by a mobile device user) is algorithmically compared to the library 140 of pre-determined food images until matching content is found. In the illustrative embodiment, the food identification models 130 correlate semantic descriptions of the food with known descriptions of food, in order to identify the food in the image 102”, 0083, “In the example of FIG. 5B, a display screen 550 of a personal mobile electronic device is shown, displaying illustrative output 116 of the system 100. 
The output 116 includes bounding boxes 552, 554, 556, 558 overlaid on a digital picture of a plate of food. Each bounding box 552, 554, 556, 558 identifies a salient region or salient region grouping corresponding to an identified food. The output 116 also includes food description data 106, e.g., labels and confidence values 560, 562, 564, 566, each of which corresponds to one of the bounding boxes 552, 554, 556, 558 displayed on the image. The label 560 corresponds to the bounding box 558, the label 562 corresponds to the bounding box 556, the label 564 corresponds to the bounding box 554, and the label 566 corresponds to the bounding box 552. In some embodiments, the bounding boxes and labels are color-coded so that it is easy for the user to visually correlate the food description data with the associated portions of the image”). Claim 5: Divakaran discloses the apparatus of claim 4, wherein the correlation is determined by identifying at least one food constituting a specific food included in the extracted food list among those foods included in the subdivided food list and according to a proportion of the identified at least one food in the specific food (0024, “In FIG. 6A, area 610 represents an area of an image in which a concentration of French fries was detected while area 612 represents an area of the same image in which a concentration of egg omelet was detected by the HoG algorithm. In FIG. 6B, area 614 represents an area of an image in which a concentration of green salad was detected while area 616 represents another area of the same image in which a concentration of cooked broccoli was detected by the HoG algorithm”, 0060, “Each of FIGS. 8A-8D is an example of a salient region grouping produced by the salient region grouping module 230. Element 810 is a piece of pineapple, which is detected in the groupings of FIGS. 8A, 8C, and 8D. Element 812 is a piece of cantaloupe, which is detected in the groupings of FIGS. 8A, 8C, and 8D. Element 814 is an individual piece of watermelon, which is depicted in FIGS. 8C and 8D. Element 816 is a grouping of multiple pieces of watermelon, which is depicted in FIGS. 8B and 8D. By iteratively analyzing and combining these groupings, the system 100 can identify not only the individual elements of fruit and their estimated portion sizes, but also a larger grouping that identifies the combination of fruits as a fruit salad”, 0083, “In both the example of FIG. 5A and the example of FIG. 5B, the system 100 was able to accurately detect both simple, individual foods and more complex foods consisting of a combination of different individual foods (e.g., “burger” in FIG. 5A is a complex food including ground meat and a bun, with the ground meat partially obscured; “green salad” and “mixed vegetables” are complex foods made up of a combination of individual food items, detected in the example of FIG. 5B)”). 
Claim 6: Divakaran discloses the apparatus of claim 3, wherein the pre-stored classification table is generated by inputting at least one image to an artificial intelligence-based machine learning model as training data and training the machine learning model to learn the respective classes of a plurality of foods and includes at least one class for identifying each of the plurality of foods, wherein the controller updates the pre-stored classification table by further training the machine learning model using the labeled image as the training data (0029, "The food identification models 130 are developed or "trained" by correlating a suitably large sample of user-generated images of food with known foods, where the sample of user-generated images is stored in a food image library 140. In some embodiments, an input image 102 (e.g., a pictures of a plate of food taken by a mobile device user) is algorithmically compared to the library 140 of pre-determined food images until matching content is found. In the illustrative embodiment, the food identification models 130 correlate semantic descriptions of the food with known descriptions of food, in order to identify the food in the image 102", 0040, "the food image library 140 is developed by training a supervised learning model such as a multiple class support vector machine (SVM) for each category of food. For example, a stochastic gradient may be used to train the classifier. The classifier training is done through an iterative process of algorithmically analyzing known food images, comparing the results of the algorithm to the known foods, and updating the model so that the learned classifier is more discriminative").

Claim 7: Divakaran discloses the apparatus of claim 6, wherein the at least one class includes at least one group including foods of the same category, and the at least one group is further subdivided through the re-learning (0040, "the food image library 140 is developed by training a supervised learning model such as a multiple class support vector machine (SVM) for each category of food. For example, a stochastic gradient may be used to train the classifier. The classifier training is done through an iterative process of algorithmically analyzing known food images, comparing the results of the algorithm to the known foods, and updating the model so that the learned classifier is more discriminative").

Claims 9, 11-13, and 15 are rejected on the same basis as claims 1-7 above, since Divakaran discloses an artificial intelligence-based meal monitoring method performed by an apparatus, the method comprising the same operations or activities as at claims 1-7 above, using the citations as above (for claims 9 and 11-13), and a non-transitory computer-readable recording medium being integrated with a computer and storing a computer program for operating the computer to perform a meal monitoring method of any one of claims 9 and 11 to 14 (Divakaran at least at 0018, 0049, 0063, 0096).

Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 8 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Divakaran in view of Tunstall-Pedoe et al. (U.S. Patent Application Publication No. 2023/0259705, hereinafter Tunstall-Pedoe).

Claims 8 and 14: Divakaran discloses the apparatus and method of claims 1 and 9, but does not appear to explicitly disclose wherein, when a food list is extracted from the pre-stored food menu, the controller converts the food menu into a common menu name using a word2vec word embedding model, wherein the common menu name is a name of a base class included in the pre-stored classification table or a menu name corresponding to the base class. While Divakaran identifies what the claims indicate as a "common menu name" (see Divakaran at, e.g., 0014, 0024, 0056, 0060, 0083 as discussing what Divakaran calls "complex" dishes or foods such as a salad) and the use of models such as "food identification models" (Divakaran at least at 0020 and 0021) of various types to analyze or recognize words (see Divakaran at least at 0055), Divakaran does not appear to disclose the specific use of a word2vec word embedding model.

Tunstall-Pedoe, however, teaches "Word embeddings such as word2vec or GloVe is a technique known by those skilled in the relevant art in which large volumes of text are analysed to determine words that have similar meaning and usage. Various examples make use of this technique to determine the similarity of the natural language words and their suitability for substitution in a known ground truth translation. For example, an analysis of English would determine that Camembert and Brie were very similar items as their word embeddings would be very near each other. This means a ground truth translation including Brie would almost certainly stand with the word Brie substituted for Camembert" (Tunstall-Pedoe at 0509), and "FIG. 3 shows a conversation within the app where nutritional data is being communicated with the app. In a preferred example conversation is between the user, the AI, any number of human medical professionals and human nutritionists who derive semantic nutritional data from photos and descriptions of food and drink entered by the user when they are consumed" and "The semantic nutritional data not only represents exactly what was consumed and when but also represents uncertainty—e.g. from not knowing the exact composition of the food, to uncertainty about the portion size from the images" (Tunstall-Pedoe at 0617).

Therefore, the Examiner understands and finds that to use a word2vec embedding model to find a common menu name is applying a known technique to a known device, method, or product ready for improvement to yield predictable results so as to find similar words (per Tunstall-Pedoe). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine or modify the food identification of Divakaran with the use of the known word2vec analysis in Tunstall-Pedoe in order to find a common menu name so as to find similar words (per Tunstall-Pedoe). The rationale for combining in this manner is that to use a word2vec embedding model to find a common menu name is applying a known technique to a known device, method, or product ready for improvement to yield predictable results so as to find similar words, as explained above.
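For background on the technique relied on here, a toy word2vec sketch follows (assuming the gensim library; the tiny menu corpus is invented and far too small for meaningful embeddings, so it illustrates only the mechanics, not either reference's implementation):

# Toy word2vec illustration of grouping menu terms (hypothetical; assumes gensim).
from gensim.models import Word2Vec

menu_corpus = [                       # invented tokenized menu lines
    ["baked", "brie", "on", "toast"],
    ["grilled", "camembert", "on", "toast"],
    ["brie", "and", "camembert", "cheese", "board"],
    ["soft", "cheese", "board"],
]

model = Word2Vec(sentences=menu_corpus, vector_size=16, window=3,
                 min_count=1, epochs=200, seed=1)

# Per Tunstall-Pedoe's example, "camembert" and "brie" should embed near
# each other in a realistically sized corpus, supporting substitution under
# a shared base-class ("common menu") name such as "soft cheese".
print(model.wv.most_similar("camembert", topn=3))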
Response to Arguments
Applicant's arguments filed 26 November 2025 have been fully considered but they are not persuasive.

Applicant first argues the 112(a), (b), (f) interpretation and rejections, as well as the claim objections (Remarks at 7-9). However, these interpretations, rejections, and objections are removed at the amendment; therefore, the arguments are considered moot and not persuasive.

Applicant then argues the 101 rejection (Remarks at 9-10), first arguing that "the claims are clearly integrated into a practical application, in that the claims recite a specific technical improvement in how a computing device processes data to recognize food." (Id. at 10). However, the claims do not appear to recite any way "a computing device processes data to recognize food"; the claims merely say the controller executes an application program to extract a list from a pre-stored menu, refer to a pre-stored classification table, and "classify ... one food included in the captured image". Any food recognition is merely implied, and there is no change or improvement of any kind regarding the recognition of any food. Further, the claims merely appear to have the end result of "generate and store food information", where the "food information" is indicated by the claims to merely be the "subdivided food list as food information" (at the "when a food is determined to correspond to the subclass" element). It appears from the claims that the mixed class food analysis is ignored – the claims indicate that "when an image of a meal is captured", the classifying of a "food included in the image [is] based on the subdivided food list", where this "subdivided food list" is claimed as "generat[ed]" by "add[ing] the subdivided individual foods to the extracted food list", and "the extracted food list" is from "the pre-stored food menu" and does NOT include the "combined foods" that are/were added to the extracted food list at the "when a food is determined to correspond to the mix class" element. So, the claims literally just "generate and store" the "subdivided food list as food information" – nothing apparently more meaningful seems to be required. This indicates a fairly classic representation of merely performing with a computer that which a person can do mentally – look at a menu and the classifications so as to recognize a food item, i.e., applying the abstract idea of recognizing food on a food tray. It appears that literally any and all forms of any technology that a computer may use to attempt to perform the activities is included, and no technological improvement is indicated.

Applicant argues that the claims constitute "a specific implementation that improves the computer's ability to recognize objects by narrowing down the candidate classes based on context (the menu) and expansion rules (mix/subclass)" (Id.).
However, as indicated above, the claims do not actually recite recognizing food, nor do the claims appear to narrow down any classes – the claims appear to merely add food to the menu listing and then store the "subdivided food list as food information".

Applicant then argues the prior art rejections (Remarks at 11-13), alleging Divakaran does not disclose the extracting from a menu, checking a classification table, and generating a subdivided list, but that "Divakaran merely discloses how to train a classifier, not the claimed operation phase of processing a specific menu list" (Id. at 11, at point "(1)", emphasis in original). However, there is no described or claimed specific form, type, or structure regarding what the stored menu or classification table must look like or be. Divakaran, as cited, indicates recognizing that various component parts of (for example) a salad are individual foods that can be, or have been or could be, mixed or combined together to form the complex foods discussed – this indicates a recognition of both the individual components and the combination forming a "mixed class" and that they can be separated into individual classes of "mix class" and/or "subclass" (using the claim terminology). Further, the "subdivided food list", i.e., the individual components of, for example, the salad, are considered individually such that the "subdivided food list" must be formed and recognized – otherwise Divakaran could not function the way it is indicated to function.

Applicant then argues that "while Divakaran utilizes a static, general-purpose library for classification, the feature of "extracting a food list from the pre-stored food menu" and "generating a subdivided food list as food information" prior to image capture/classification of the amended claim 1 is neither disclosed nor suggested in Divakaran and Das" (Id. at 11-12, at point "(2)", emphasis in original). However, as cited in the rejection, a menu is extracted from, and, as explained above, the subdivided food list is generated.

Applicant then argues that Divakaran does not check a classification table (Id. at 12-13, at point "(3)"). However, as noted above, there is no specific form or type of classification table required (or described), and as also noted above, the recognition and use/analysis of both the individual components (of a "subclass") and the complex food (of a "mix class") require that those individual and overall components be recognized.

Therefore, the Examiner is not persuaded by Applicant's arguments.

Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

Big Data definition search results page, from Google, downloaded from https://www.google.com/search?q=big+data+definition on 3 May 2025, indicating various descriptions of what the term "big data" may mean.
Word embedding search results page, from Google, downloaded from https://www.google.com/search?q=word+embedding on 3 May 2025, indicating what the term "word embedding" may mean.

Rayner (U.S. Patent Application Publication No. 2014/0172313) also appears to be an anticipation reference, indicating at least "the health-modulating device of the disclosure is capable of determining the type and/or amount of food being consumed. By "determining the type of food" is meant any manner by which one item of food can be distinguished from another type of food, such as by physical appearance, e.g., color, physical feel, weight; biological type, e.g., carbohydrate, protein, fat; caloric content or nutritional content; food group classification, e.g., meat, vegetable, diary, etc., as well as specific identification of the particular food item, such as milk, water, beef, chicken, etc., and the like" (Rayner at 0032).

Mossier et al. (U.S. Patent Application Publication No. 2022/0020471, hereinafter Mossier) also appears to be an anticipation reference, indicating at least "Inspection may be done when the tray is delivered to an identified patient, the tray having a serving of food including, for example a glass (114) and a plate (112) with the serving of food, or the meal, comprising one or more food components. The tray (100) may further have cutlery (116) and may still further have other non-foodstuffs, like a napkin, for instance. According to the embodiment, the inspection and analysis unit comprises at least one image capture device (124), such as a camera, for capturing one or more images of the tray and its contents including the meal" (Mossier at 0035).

Goto (U.S. Patent Application Publication No. 2021/0158502) discusses that "An analyzing apparatus extracts, from a food image, food information relating to types and served states of foods included in the food image and an edible portion of each food, calculates an area ratio between the edible portion of the same kind of food extracted from a plurality of food images acquired at different timings, stores conversion information for converting the area ratio into a volume ratio corresponding to each type and served state of food, and converts the area ratio of each food into a volume ratio using the stored conversion information corresponding to the food whose area ratio is to be converted from among the stored conversion information based on the food information." (Goto at Abstract).

Pouladzadeh, et al., "Measuring Calorie and Nutrition From Food Image," in IEEE Transactions on Instrumentation and Measurement, vol. 63, no. 8, pp. 1947-1956, Aug. 2014, doi: 10.1109/TIM.2014.2303533. Downloaded 2 September 2025 from https://ieeexplore.ieee.org/abstract/document/6748066, indicating that "As people across the globe are becoming more interested in watching their weight, eating more healthy, and avoiding obesity, a system that can measure calories and nutrition in every day meals can be very useful. In this paper, we propose a food calorie and nutrition measurement system that can help patients and dietitians to measure and manage daily food intake. Our system is built on food image processing and uses nutritional fact tables. Recently, there has been an increase in the usage of personal mobile technology such as smartphones or tablets, which users carry with them practically all the time. Via a special calibration technique, our system uses the built-in camera of such mobile devices and records a photo of the food before and after eating it to measure the consumption of calorie and nutrient components. Our results show that the accuracy of our system is acceptable and it will greatly improve and facilitate current manual calorie measurement techniques." (at Abstract).

Li et al., Deep Cooking: Predicting Relative Food Ingredient Amounts from Images, downloaded 27 February 2026 from https://arxiv.org/pdf/1910.00100, dated 26 September 2019, indicating "In this paper, we study the novel problem of not only predicting ingredients from a food image, but also predicting the relative amounts of the detected ingredients. We propose two prediction based models using deep learning that output sparse and dense predictions, coupled with important semi-automatic multi-database integrative data pre-processing, to solve the problem. Experiments on a dataset of recipes collected from the Internet show the models generate encouraging experimental results" (at Abstract), and including using Word2vec (at § 4.1).

Stojanov et al., A Fine-Tuned Bidirectional Encoder Representations From Transformers Model for Food Named-Entity Recognition: Algorithm Development and Validation. J Med Internet Res. 2021 Aug 9;23(8):e28229. doi: 10.2196/28229. PMID: 34383671; PMCID: PMC8415558. Downloaded from https://pmc.ncbi.nlm.nih.gov/articles/PMC8415558/ on 27 February 2026, indicating "We introduce FoodNER, which is a collection of corpus-based food named-entity recognition methods. It consists of 15 different models obtained by fine-tuning 3 pretrained BERT models on 5 groups of semantic resources: food versus nonfood entity, 2 subsets of Hansard food semantic tags, FoodOn semantic tags, and Systematized Nomenclature of Medicine Clinical Terms food semantic tags."

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SCOTT D GARTLAND whose telephone number is (571) 270-5501. The examiner can normally be reached M-F 8:30 AM - 5 PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kambiz Abdi, can be reached at 571-272-6702. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SCOTT D GARTLAND/
Primary Examiner, Art Unit 3685

Prosecution Timeline

Nov 14, 2023
Application Filed
May 03, 2025
Non-Final Rejection — §101, §102, §103
Aug 08, 2025
Response Filed
Sep 02, 2025
Final Rejection — §101, §102, §103
Nov 26, 2025
Request for Continued Examination
Dec 04, 2025
Response after Non-Final Action
Feb 27, 2026
Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586088
ANTI-FRAUD FINANCIAL TRANSACTIONS SYSTEM
2y 5m to grant • Granted Mar 24, 2026
Patent 12544314
SYSTEM AND METHOD FOR MEDICATION COMPLIANCE
2y 5m to grant • Granted Feb 10, 2026
Patent 12079822
System, Method, and Computer Program Product for False Decline Mitigation
2y 5m to grant • Granted Sep 03, 2024
Patent 12062063
SYSTEM FOR A PRODUCT BUNDLE INCLUDING DIGITAL PROMOTION REDEEMABLE TOWARD THE PRODUCT BUNDLE AND RELATED METHODS
2y 5m to grant • Granted Aug 13, 2024
Patent 11961119
ARCHIVE OFFER PERSONALIZATION
2y 5m to grant • Granted Apr 16, 2024
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 11%
With Interview: 24% (+12.4%)
Median Time to Grant: 4y 4m
PTA Risk: High
Based on 585 resolved cases by this examiner. Grant probability derived from career allow rate.
