Prosecution Insights
Last updated: April 19, 2026
Application No. 18/661,324

METHODS AND SYSTEMS FOR DETERMINING A COMPATIBLE SUBSTANCE

Non-Final OA (§102, §103)
Filed
May 10, 2024
Examiner
LEE, JONATHAN S
Art Unit
2677
Tech Center
2600 — Communications
Assignee
KPN Innovations, LLC
OA Round
1 (Non-Final)
Grant Probability: 84% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 4m
With Interview: 94%

Examiner Intelligence

Career Allow Rate: 84% — above average (493 granted / 585 resolved; +22.3% vs TC avg)
Interview Lift: +9.5% (moderate) — comparing resolved cases with vs. without an interview
Avg Prosecution: 2y 4m typical timeline (19 applications currently pending)
Total Applications: 604 across all art units
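The headline figures above are simple ratios; as a sanity check, the allow rate and the implied Tech Center baseline can be recomputed from the card's raw counts (illustrative arithmetic only, using the numbers shown above):

```python
# Career allow rate from the resolved-case counts in the card above.
granted = 493
resolved = 585
allow_rate = granted / resolved          # ~0.843, displayed as 84%

# The "+22.3% vs TC avg" delta implies a Tech Center baseline of roughly:
implied_tc_avg = allow_rate - 0.223      # ~0.620, i.e. about 62%

print(f"Allow rate: {allow_rate:.1%}")              # Allow rate: 84.3%
print(f"Implied TC average: {implied_tc_avg:.1%}")  # Implied TC average: 62.0%
```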

Statute-Specific Performance

§101: 7.8% (-32.2% vs TC avg)
§103: 41.9% (+1.9% vs TC avg)
§102: 28.1% (-11.9% vs TC avg)
§112: 10.3% (-29.7% vs TC avg)
Tech Center averages are estimates. Based on career data from 585 resolved cases.
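Each statute figure is paired with a delta against the Tech Center average, so the implied baseline can be backed out by subtraction (a quick sketch over the figures above; every delta here resolves to the same ~40% baseline, consistent with it being a single estimate):

```python
# Examiner rate (%) and delta vs Tech Center average (%), per statute,
# taken from the Statute-Specific Performance cards above.
per_statute = {
    "101": (7.8, -32.2),
    "103": (41.9, +1.9),
    "102": (28.1, -11.9),
    "112": (10.3, -29.7),
}

# Implied TC average = examiner rate - delta.
implied_tc_avg = {s: rate - delta for s, (rate, delta) in per_statute.items()}
# Each statute backs out to the same ~40% Tech Center baseline.
```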

Office Action

Rejections: §102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claim(s) 1-4, 9-14, 19, and 20 is/are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Chheda et al. (Diet and Health Advisory Mechanism from Facial features using Machine Learning, 2019, International Conference on Information Technology, Pages 251-256), hereinafter "Chheda".

Regarding claim 1, Chheda teaches: A system for determining a compatible substance, the system comprising a computing device configured to (See the Abstract.): using a camera, capture a first image (See the image input in Fig. 1.); generate a first body measurement by (See page 253, left column: "Using the facial features as inputs, BMI of the individual is determined, which is a measure of visceral fat [3]."): training a body measurement machine learning model on a training dataset including a plurality of example images correlated to a plurality of example body measurements (See the CNN in Fig. 1 and see the paragraph bridging pages 253-254: "The extracted feature vectors were used to train a custom classification model that determines the BMI class for a given set of features."); and generating the first body measurement as a function of the first image using the trained body measurement machine learning model (See the CNN in Fig. 1 and see the paragraph bridging pages 253-254: "The extracted feature vectors were used to train a custom classification model that determines the BMI class for a given set of features."); determine a first compatible substance as a function of the first body measurement (See "MAP ITEMS TO BODY MASS INDEX" in Fig. 1.); and generate a user interface, wherein the user interface configures a user device to display the first compatible substance (See page 253, left column: "The output will be a potential diet plan alongside some possible health issues that the system has detected after analysing the images given as input." Further see page 254, right column: "This plan could be integrated with Google calendar and various other commonly used plugins in order to make it more routinely and known.").

Regarding claim 2, Chheda teaches: The system of claim 1, wherein the body measurement comprises a body mass metric (See page 253, left column: "Using the facial features as inputs, BMI of the individual is determined, which is a measure of visceral fat [3].").

Regarding claim 3, Chheda teaches: The system of claim 1, wherein the computing device is further configured to: receive a subject health datum (See the paragraph bridging the left and right columns on page 253: "The user has to provide necessary details besides image of his/her facial and bodily details in order to customize diet suggestions."); capture a second image as a function of the subject health datum (See page 253, left column: "The aim is to provide health advisory alongside some basic dietary suggestions based on images of face given as input separately or as a time-series data."); generate a second body measurement as a function of the second image using the trained body measurement machine learning model; determine a second compatible substance as a function of the second body measurement; and using the user interface, configure the user device to display the second compatible substance (See the flowchart of Fig. 1 given time-series data (meeting the claimed "second image").).

Regarding claim 4, Chheda teaches: The system of claim 1, wherein the computing device is further configured to: identify a body measurement impact ingredient; generate a nutrient plan as a function of the body measurement impact ingredient; and using the user interface, configure the user device to display the nutrient plan (See Table 1 on page 254. This table formulates the diet plan, which is displayed in Google calendar.).

Regarding claim 9, Chheda teaches: The system of claim 1, wherein the first body measurement comprises a body fat distribution (See page 253, left column: "Using the facial features as inputs, BMI of the individual is determined, which is a measure of visceral fat [3].").

Regarding claim 10, Chheda teaches: The system of claim 9, wherein the computing device is further configured to: determine a medical condition risk datum as a function of the body fat distribution; and using the user interface, configure the user device to display the medical condition risk datum (See page 254, right column: "Making note of contrasting items, eg: A particular vitamin can perhaps be a solution to a feature that happens to be an issue but at the same time it could cause another problem.").

Chheda teaches claim 11 for the reasons given in the treatment of claim 1. Chheda teaches claim 12 for the reasons given in the treatment of claim 2. Chheda teaches claim 13 for the reasons given in the treatment of claim 3. Chheda teaches claim 14 for the reasons given in the treatment of claim 4. Chheda teaches claim 19 for the reasons given in the treatment of claim 9. Chheda teaches claim 20 for the reasons given in the treatment of claim 10.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claim(s) 5-8 and 15-18 is/are rejected under 35 U.S.C. 103 as being unpatentable over Chheda (Diet and Health Advisory Mechanism from Facial features using Machine Learning, 2019, International Conference on Information Technology, Pages 251-256) in view of Ugail et al. (MotiVar: Motivating Weight Loss Through A Personalised Avatar, 2019, 13th International Conference on Software, Knowledge, Information Management and Applications, Pages 1-5).

Claim 5 is met by the combination of Chheda and Ugail, wherein Chheda teaches: The system of claim 1, wherein the computing device is further configured to: Chheda does not disclose the following; however, Ugail teaches: generate a first digital avatar as a function of the first body measurement; and using the user interface, configure the user device to display the first digital avatar (See page 1, right column: "To achieve this, we propose the creation of virtual self avatars using simple body measurements or such measurements derived from digital photographic data (of the subject or the individual) and routinely collected weight data (height, weight, sex, ethnicity, BMI, waist and chest circumference) to produce a realistic looking avatar for the individual."). Chheda and Ugail together teach the limitations of claim 5. Ugail is directed to a similar field of art (weight management intervention). Therefore, Chheda and Ugail are combinable. Modifying the system and method of Chheda by adding the capability to "generate a first digital avatar as a function of the first body measurement; and using the user interface, configure the user device to display the first digital avatar", as taught by Ugail, would yield the expected and predictable result of providing to the user a visual indication of weight loss progress. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine Chheda and Ugail in this way.

Claim 6 is met by the combination of Chheda and Ugail, wherein the combination of Chheda and Ugail teaches: The system of claim 1, wherein the computing device is further configured to: and Ugail further teaches: The system of claim 5, wherein the first digital avatar comprises a three-dimensional avatar (See page 2, left column: "Thus, one could envisage a scenario where a generic parameterised 3-dimensional shape of a body shape and associated features is available for a user".). See the motivation to combine in the treatment of claim 5.

Claim 7 is met by the combination of Chheda and Ugail, wherein the combination of Chheda and Ugail teaches: The system of claim 5, wherein the computing device is further configured to: and Ugail further teaches: using the user interface, receive a body measurement adjustment datum; generate a second body measurement as a function of the body measurement adjustment datum; generate a second digital avatar as a function of the second body measurement; and using the user interface, configure the user device to display the second digital avatar (See page 1, right column: "To achieve this, we propose the creation of virtual self avatars using simple body measurements or such measurements derived from digital photographic data (of the subject or the individual) and routinely collected weight data (height, weight, sex, ethnicity, BMI, waist and chest circumference) to produce a realistic looking avatar for the individual." The claimed "second body measurement" is encompassed by the routinely collected data. Then see page 1, right column: "The rendering and algorithms should take into account for changing body morphology to represent weight gain, weight maintenance and weight loss."). See the motivation to combine in the treatment of claim 5.

Claim 8 is met by the combination of Chheda and Ugail, wherein the combination of Chheda and Ugail teaches: The system of claim 5, wherein the computing device is further configured to: and Ugail further teaches: generate a health improvement body measurement estimation; generate a second digital avatar as a function of the health improvement body measurement estimation; and using the user interface, configure the user device to display the second digital avatar (See page 1, right column: "The rendering and algorithms should take into account for changing body morphology to represent weight gain, weight maintenance and weight loss."). See the motivation to combine in the treatment of claim 5.

Claim 15 is met by the combination of Chheda and Ugail for the reasons given in the treatment of claim 5. Claim 16 is met by the combination of Chheda and Ugail for the reasons given in the treatment of claim 6. Claim 17 is met by the combination of Chheda and Ugail for the reasons given in the treatment of claim 7. Claim 18 is met by the combination of Chheda and Ugail for the reasons given in the treatment of claim 8.

Contact

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JONATHAN S LEE whose telephone number is (571)272-1981. The examiner can normally be reached 11:30 AM - 7:30 PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Andrew Bee, can be reached at (571)270-5183. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Jonathan S Lee/
Primary Examiner, Art Unit 2677

Prosecution Timeline

May 10, 2024
Application Filed
Mar 07, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602807
METHOD FOR SUBPIXEL DISPARITY CALCULATION
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12602785
TRAINING A MACHINE LEARNING MODEL TO ASSESS EMBRYO CHARACTERISTICS FROM VIDEO IMAGE DATA
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12597108
METHOD AND APPARATUS TO PERFORM A WIRELINE CABLE INSPECTION
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12597110
IMAGE RECOGNITION METHOD, APPARATUS AND DEVICE
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12584727
DIMENSION MEASUREMENT METHOD AND DIMENSION MEASUREMENT DEVICE
Granted Mar 24, 2026 (2y 5m to grant)
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 84%
With Interview: 94% (+9.5%)
Median Time to Grant: 2y 4m
PTA Risk: Low
Based on 585 resolved cases by this examiner. Grant probability derived from career allow rate.
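The interview-adjusted projection is the baseline probability plus the stated interview lift, rounded to a whole percent (illustrative only, using the figures above):

```python
baseline = 84.0        # grant probability, %
interview_lift = 9.5   # percentage points, from the Interview Lift figure

with_interview = baseline + interview_lift   # 93.5, displayed as 94%
print(round(with_interview))                 # 94
```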
