Prosecution Insights
Last updated: April 19, 2026
Application No. 18/273,192

PREDICTION SYSTEM, CONTROL METHOD, AND CONTROL PROGRAM

Non-Final OA (§102, §103)
Filed: Jul 19, 2023
Examiner: MCLEAN, NEIL R
Art Unit: 2681
Tech Center: 2600 (Communications)
Assignee: Kyocera Corporation
OA Round: 1 (Non-Final)
Grant Probability: 79% (Favorable)
Expected OA Rounds: 1-2
Estimated Time to Grant: 2y 6m
Grant Probability With Interview: 90%

Examiner Intelligence

Career Allow Rate: 79% (545 granted / 686 resolved), +17.4% vs. Tech Center average (above average)
Interview Lift: +10.5% (moderate, roughly +10%), across resolved cases with an interview
Typical Timeline: 2y 6m average prosecution; 21 applications currently pending
Career History: 707 total applications across all art units

Statute-Specific Performance

§101: 14.8% (-25.2% vs TC avg)
§103: 50.8% (+10.8% vs TC avg)
§102: 21.5% (-18.5% vs TC avg)
§112: 5.4% (-34.6% vs TC avg)
Tech Center averages are estimates. Based on career data from 686 resolved cases.
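The headline numbers above can be sanity-checked by recomputing them from the reported counts. The sketch below assumes the dashboard uses simple ratios and additive percentage-point deltas; the tool's exact methodology is not stated, so treat this as an illustration, not the vendor's formula.

```python
# Recompute the headline examiner metrics from the reported counts.
# Assumption: simple ratios and additive percentage-point deltas.
granted, resolved = 545, 686

allow_rate = granted / resolved                 # career allowance rate
print(f"allow rate: {allow_rate:.1%}")          # 79.4%, displayed as 79%

# Reading "+17.4% vs TC avg" as percentage points implies a TC 2600 baseline of:
tc_average = allow_rate - 0.174
print(f"TC 2600 baseline: {tc_average:.1%}")    # about 62.0%

# An interview lift of +10.5 points over the 79% base estimate:
with_interview = 0.79 + 0.105
print(f"with interview: {with_interview:.1%}")  # 89.5%, displayed as 90%
```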

Office Action

DETAILED ACTION

Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

2. Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Preliminary Amendment

3. The Preliminary Amendment submitted on 07/19/2023 containing amendments to the claims is acknowledged.

Oath/Declaration

4. The receipt of the Oath/Declaration is acknowledged.

Information Disclosure Statement

5. The information disclosure statements (IDS) submitted on 07/19/2023, 10/18/2024, and 04/08/2025 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Drawings

6. The drawings filed on 07/19/2023 are accepted by the Examiner.

Status of Claims

7. Claims 1-20 are pending in this application. Claims 1, 3-5, 7-8, 10-13, 15-16, and 18-20 were amended in the 07/19/2023 Preliminary Amendment.

Claim Interpretation

8. The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

9.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and

(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

10. This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier.

11. Such claim limitation(s) is/are: “a prediction information acquirer” in claims 1 and 13; “a prediction image generation unit” in claims 1 and 2; “a prediction image generation model” in claims 2, 10, and 11; “a prediction information generation unit” in claim 13; “a prediction information generation model” in claims 13, 14, and 15; “a basic information acquirer” in claim 14; and “an intervention effect prediction unit” in claims 16 and 17. Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C.
112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

12. Regarding Claim 19 (drawn to a method) and Claim 20 (drawn to a non-transitory computer-readable medium): the claim limitation “a prediction image generation model” in claim 19 and the claim limitations “prediction information acquirer” and “prediction image generation unit” in claim 20 are not interpreted under 35 U.S.C. 112(f).

Claim Rejections - 35 USC § 102

13. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

14. The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

15. Claims 1-5, 8, 12-13, 16, 19 and 20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by OHKUMA et al. (US 2019/0380659).
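Before turning to the art rejections, note that the three-prong test quoted in paragraph 9 is, in effect, a screening procedure, and it can help to see it written out as one. The sketch below is a toy illustration only: the nonce-term and structural-modifier word lists are invented for the example and carry no legal significance, since real § 112(f) analysis turns on the specification.

```python
# Toy screening heuristic for the MPEP § 2181 three-prong test.
# The word lists are illustrative assumptions, not an official vocabulary.
NONCE_TERMS = {"means", "step", "unit", "module", "mechanism", "element"}
STRUCTURAL_MODIFIERS = {"circuit", "processor", "memory", "servo"}
FUNCTIONAL_LINKS = ("for ", "configured to ", "so that ")

def likely_112f(limitation: str) -> bool:
    """Apply prongs (A)-(C) as a rough textual screen."""
    text = limitation.lower()
    words = [w.strip(",.;") for w in text.split()]
    prong_a = any(w in NONCE_TERMS for w in words)               # (A) generic placeholder
    prong_b = any(link in text for link in FUNCTIONAL_LINKS)     # (B) functional language
    prong_c = not any(w in STRUCTURAL_MODIFIERS for w in words)  # (C) no structural modifier
    return prong_a and prong_b and prong_c

# A limitation flagged in paragraph 11 passes all three prongs:
print(likely_112f("a prediction image generation unit configured to generate a prediction image"))  # True
# A limitation reciting concrete structure fails the screen:
print(likely_112f("a processor configured to generate a prediction image"))  # False
```

This mirrors why paragraph 11 treats "unit"-style limitations as § 112(f) limitations while paragraph 12 does not reach the method and medium claims the same way.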
Regarding Claim 1: OHKUMA discloses a prediction system (OHKUMA: ‘a future-image predicting device for predicting a future-image of a target person caused by a predetermined disease’ Abstract) comprising:

a prediction information acquirer (OHKUMA: Fig. 1 ‘future-image predicting device 1’ [0035-0036]) configured to acquire: (a) a subject image representing a target region of a subject at a first time (OHKUMA: Fig. 1 ‘Health-related data and appearance data taken from the target person is entered into the input unit 2…image data of the target person, such as photograph data is entered as the appearance data’ [0037-0038]), and (b) first prediction information regarding the target region at a second time after a predetermined period has elapsed from the first time (OHKUMA: ‘risk level and change tendency for each health related data is stored and evaluated’ [0022]; e.g., ‘The “change tendency in appearance” for aging is also individually set for each element in the appearance and is set on each passing year’ [0042]); and

a prediction image generation unit configured to generate a prediction image from the first prediction information and the subject image by predicting a condition of the target region at the second time and output the prediction image (OHKUMA: ‘control unit 4 generates a future-image of the target person based on the health-related data and the appearance data’ [0022-0023], [0043]).

Regarding Claim 2: OHKUMA further discloses the prediction system according to claim 1, wherein the prediction image generation unit comprises a prediction image generation model configured to generate the prediction image by using the subject image and the first prediction information (OHKUMA: e.g., ‘future-image predicting device 1 is designed to have a learning function for calculating the “risk level” and the “change tendency in appearance”…are stored…each time they are calculated.’ [0068]).
Regarding Claim 3: OHKUMA further discloses the prediction system according to claim 1, wherein the prediction image comprises an image imitating at least a part of the subject image (OHKUMA: e.g., Fig. 7 ‘current image on left and future image on right shows the face of the subject and parts of the face that change such as wrinkles’ [0060; 0041]).

Regarding Claim 4: OHKUMA further discloses the prediction system according to claim 1, wherein the subject image comprises an appearance image representing the target region (OHKUMA: e.g., Fig. 7 ‘current image on left and future image on right shows the face of the subject and parts of the face that change such as wrinkles’ [0060; 0041]).

Regarding Claim 5: OHKUMA further discloses the prediction system according to claim 1, wherein the subject image comprises a medical image representing the target region (OHKUMA: e.g., ‘An organ image…is entered into the input unit as the organ data of the target person.’ [0025]).

Regarding Claim 8: OHKUMA further discloses the prediction system according to claim 1, wherein the prediction image comprises an image obtained by predicting an effect on the target region of a disorder occurring in the target region (OHKUMA: e.g., ‘FIG. 4 shows future-image X of the target person caused by Metabolic syndrome’, which reads on the claimed ‘disorder’; ‘In FIG. 4, as the result of Metabolic syndrome progressed, those signs are…“accelerated aging” and “weight gain” which can be caused due to metabolic deterioration, “face hemiplegia” which can be followed by Cerebral infarction, “clouded eye lens” due to cataract which can be followed by severe Diabetes.’ [0050]).

Regarding Claim 12: OHKUMA further discloses the prediction system according to claim 8, wherein the first prediction information comprises information regarding a shape and an appearance of the target region associated with the disorder of the target region (OHKUMA: e.g., Fig. 4 showing ‘face hemiplegia’ and ‘clouded eye lens’ due to cataract; [0050]).

Regarding Claim 13: OHKUMA further discloses the prediction system according to claim 1, further comprising a prediction information generation unit configured to generate the first prediction information from the subject image and output the first prediction information to the prediction information acquirer, wherein the prediction information generation unit comprises a prediction information generation model configured to estimate the first prediction information from the subject image (OHKUMA: ‘the future-image predicting device 1 is designed to have a learning function for calculating the “risk level of Metabolic syndrome” and the “change tendency in appearance”. In this case, the “risk level of Metabolic syndrome” and the “change tendency in appearance” are stored each time they are calculated.’ [0068]; wherein ‘a learning function’ reads on the claimed ‘model’).

Regarding Claim 16: OHKUMA further discloses the prediction system according to claim 1, further comprising an intervention effect prediction unit configured to output second prediction information indicating a method for intervention in the subject and an effect of the intervention by using the first prediction information as an input (OHKUMA: ‘future-image predicting device 1 according to the present embodiment determines the risk of the predetermined disease on the target person, based on the health-related data of the target person such as the biological data and life-habit data. Then, the future-image predicting device 1 generates, based on the determined risk, the future-image X of the target person caused by the predetermined disease. This allows the target person to know visually the future-image of one's own caused if the current health management is continued. As the result, the target person can be strongly motivated to improve awareness and behavior for health management.’ [0052]).
Regarding Claim 19: OHKUMA discloses a control method for a prediction system (OHKUMA: ‘a future-image predicting device for predicting a future-image of a target person caused by a predetermined disease’ Abstract), the control method comprising:

acquiring (OHKUMA: Fig. 1 ‘future-image predicting device 1’ [0035-0036]) (a) a subject image representing a target region of a subject at a first time (OHKUMA: Fig. 1 ‘Health-related data and appearance data taken from the target person is entered into the input unit 2…image data of the target person, such as photograph data is entered as the appearance data’ [0037-0038]), and (b) first prediction information regarding the target region at a second time after a predetermined period has elapsed from the first time (OHKUMA: ‘risk level and change tendency for each health related data is stored and evaluated’ [0022]; e.g., ‘The “change tendency in appearance” for aging is also individually set for each element in the appearance and is set on each passing year’ [0042]); and

generating a prediction image from the first prediction information and the subject image by predicting a condition of the target region at the second time (OHKUMA: ‘control unit 4 generates a future-image of the target person based on the health-related data and the appearance data’ [0022-0023], [0043]) and outputting the prediction image (OHKUMA: Fig. 5 ‘future-image X is displayed on display unit 5’ [0049-0051]),

wherein the prediction system comprises a prediction image generation model configured to generate the prediction image by using the subject image and the first prediction information (OHKUMA: e.g., ‘future-image predicting device 1 is designed to have a learning function for calculating the “risk level” and the “change tendency in appearance”…are stored…each time they are calculated.’ [0068]).
Regarding Claim 20: OHKUMA further discloses a non-transitory computer-readable medium storing a control program for causing a computer to operate as the prediction system according to claim 1, the control program causing the computer to: operate as the prediction information acquirer, and operate as the prediction image generation unit (OHKUMA: ‘computer executing a program on a non-transitory medium’ [0187]).

Claim Rejections - 35 USC § 103

16. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

17. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

18. The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.

19. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4.
Considering objective evidence present in the application indicating obviousness or nonobviousness.

20. Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over OHKUMA et al. (US 2019/0380659) in view of Baenen et al. (US 11,410,341).

Regarding Claim 6: OHKUMA further discloses the prediction system according to claim 5, wherein the medical image comprises (OHKUMA: ‘health-related data measured by a measuring device such as a 3D body scanner…with respect to body parts such as waist, thigh, or organ parts such as blood vessels’ [0056]). OHKUMA does not expressly disclose wherein the medical image comprises at least one selected from the group consisting of an X-ray image, a CT image, an MRI image, a PET image, and an ultrasonic image.

Baenen discloses wherein the medical image comprises at least one selected from the group consisting of an X-ray image, a CT image, an MRI image, a PET image, and an ultrasonic image (Baenen: ‘Imaging devices (e.g., gamma camera, positron emission tomography (PET) scanner, computed tomography (CT) scanner, X-Ray machine, fluoroscopy machine, magnetic resonance (MR) imaging machine, ultrasound scanner, etc.) generate medical images…representative of the parts of the body (e.g., organs, tissues, etc.) to diagnose and/or treat diseases.’ Col. 3, lines 14-29).

OHKUMA in view of Baenen are combinable because they are from the same field of endeavor of image processing; e.g., both disclose measuring and reporting functional or anatomical characteristics on various locations of a medical image to identify regions of interest within the medical image. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to disclose wherein the medical image comprises at least one selected from the group consisting of an X-ray image, a CT image, an MRI image, a PET image, and an ultrasonic image.
The suggestion/motivation for doing so is to help improve diagnostic accuracy as disclosed by Baenen (Col. 3, lines 42-43). Therefore, it would have been obvious to combine OHKUMA with Baenen to obtain the invention as specified in claim 6.

21. Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over OHKUMA et al. (US 2019/0380659) in view of Xiao et al. (US 8,792,968).

Regarding Claim 7: OHKUMA discloses the prediction system according to claim 1, but does not expressly disclose wherein the subject image comprises a captured image of any one of a whole body, a head, an upper body, a lower body, an upper limb, and a lower limb of the subject.

Xiao discloses wherein the subject image comprises a captured image of any one of a whole body (Xiao: Figs. 21O and 21P front of whole body; Figs. 21Q and 21R back of whole body; Col. 19, lines 34-38), a head (Xiao: Fig. 21A ‘front, right side, left side, and back of the head’ Col. 18, lines 31-37), an upper body (Xiao: Fig. 21M; Col. 19, lines 27-36), a lower body (Xiao: Fig. 21M; Col. 19, lines 27-36), an upper limb (Xiao: Fig. 21K; Col. 19, lines 17-21), and a lower limb (Xiao: Fig. 21K; Col. 19, lines 17-21) of the subject.

OHKUMA in view of Xiao are combinable because they are from the same field of endeavor of image processing; e.g., both disclose capturing, processing and displaying images related to the human anatomy. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to disclose wherein the subject image comprises a captured image of any one of a whole body, a head, an upper body, a lower body, an upper limb, and a lower limb of the subject. The suggestion/motivation for doing so is to help in clinical evaluation. Therefore, it would have been obvious to combine OHKUMA with Xiao to obtain the invention as specified in claim 7.

22. Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over OHKUMA et al.
(US 2019/0380659) in view of Pedersen et al. (US 10,722,562).

Regarding Claim 9: OHKUMA further discloses the prediction system according to claim 8, wherein the disorder comprises at least one selected from the group consisting of obesity (OHKUMA: [0050]).

Pedersen discloses wherein the disorder comprises at least one selected from the group consisting of obesity, alopecia (Pedersen: Col. 719, line 28), cataracts, periodontal disease (Pedersen: Col. 743, lines 65-66), rheumatoid arthritis (Pedersen: Col. 747, lines 44-45), Heberden's node (Pedersen: Col. 732, line 55), hallux valgus (Pedersen: Col. 739, line 38), osteoarthritis (Pedersen: Col. 743, line 1), spondylosis deformans (Pedersen: Col. 749, line 67), compression fracture (Pedersen: Col. 371, lines 29-30) and sarcopenia (Pedersen: Col. 20, Table T).

OHKUMA in view of Pedersen are combinable because they are from the same field of endeavor of image processing; e.g., both disclose methods of diagnosing and treatment of diseases. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to disclose wherein the disorder comprises at least one selected from the group consisting of obesity, alopecia, cataracts, periodontal disease, rheumatoid arthritis, Heberden's node, hallux valgus, osteoarthritis, spondylosis deformans, compression fracture and sarcopenia. The suggestion/motivation for doing so is to allow a much better treatment of diseases at an earlier state as disclosed by Pedersen in the Background of Invention. Therefore, it would have been obvious to combine OHKUMA with Pedersen to obtain the invention as specified in claim 9.

23. Claims 10, 15 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over OHKUMA et al. (US 2019/0380659) in view of RIM et al. (US 2019/0221313).
Regarding Claim 10: OHKUMA discloses the prediction system according to claim 2, but does not expressly disclose wherein the prediction image generation model comprises a neural network trained by using, as teacher data, a plurality of pieces of image data each representing a target region.

RIM discloses wherein the prediction image generation model comprises a neural network trained by using, as teacher data, a plurality of pieces of image data each representing a target region (RIM: Fig. 8 ‘trained neural network model is trained by using data sets comprising an image of a region and labeled data’; Fig. 10 shows the labeled data; wherein the images used in the data sets represent retinal/fundus images; [0102-0107; 0114-0115]).

OHKUMA in view of RIM are combinable because they are from the same field of endeavor of image processing; e.g., both disclose methods of using medical images and data to aid in diagnosis of patient(s). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to disclose wherein the prediction image generation model comprises a neural network trained by using, as teacher data, a plurality of pieces of image data each representing a target region. The suggestion/motivation for doing so is to improve the accuracy of prediction, and increase the processing speed using a deep learning trained model as disclosed by RIM at [0005]. Therefore, it would have been obvious to combine OHKUMA with RIM to obtain the invention as specified in claim 10.

Regarding Claim 15: OHKUMA further discloses the prediction system according to claim 13, wherein the prediction information generation model comprises target region (OHKUMA: e.g., ‘FIG.
4, as the result of Metabolic syndrome progressed, those signs are…“accelerated aging” and “weight gain” which can be caused due to metabolic deterioration, “face hemiplegia” which can be followed by Cerebral infarction, “clouded eye lens” due to cataract which can be followed by severe Diabetes.’ [0050]), and the patient information comprises information that comprises condition information indicating a condition of a target region of each of the patients acquired at a plurality of past times and where the condition information for each of the patients is associated with information indicating a time when the condition information is acquired (OHKUMA: ‘risk level and change tendency for each health related data is stored and evaluated’ [0022]) ; e.g., ‘The “change tendency in appearance” for aging is also individually set for each element in the appearance and is set on each passing year’ [0042]). RIM discloses wherein the prediction information generation model comprises a neural network trained by using teacher data, the teacher data being patient information (RIM: Fig. 8 ‘trained neural network model is trained by using data set’s comprising an image of a region and labeled data’; Fig. 10 shows the labeled data wherein the labeled data can include gender information etc. [0116]; [0102-0107; 0114-0115]). OHKUMA in view of RIM are combinable because they are from the same field of endeavor of image processing; e.g., both disclose methods of using medical images and data to aid in diagnosis of patient(s). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to disclose wherein the prediction information generation model comprises a neural network trained by using teacher data, the teacher data being patient information. 
The suggestion/motivation for doing so is to improve the accuracy of prediction, and increase the processing speed using a deep learning trained model as disclosed by RIM at [0005]. Therefore, it would have been obvious to combine OHKUMA with RIM to obtain the invention as specified in claim 15.

Regarding Claim 17: OHKUMA further discloses the prediction system of claim 16, wherein the intervention effect prediction unit comprises an intervention effect prediction model (OHKUMA: [0068]), and the effect information comprises condition information indicating a condition of a target region of each of the patients acquired at a plurality of past times, where the condition information for each of the patients is associated with intervention information indicating an intervention applied to each of the patients (OHKUMA: ‘future-image predicting device 1 according to the present embodiment determines the risk of the predetermined disease on the target person, based on the health-related data of the target person such as the biological data and life-habit data. Then, the future-image predicting device 1 generates, based on the determined risk, the future-image X of the target person caused by the predetermined disease. This allows the target person to know visually the future-image of one's own caused if the current health management is continued. As the result, the target person can be strongly motivated to improve awareness and behavior for health management.’ [0052]; this reads on the claimed ‘intervention effect prediction model’).

OHKUMA does not expressly disclose a neural network trained by using effect information as teacher data. RIM discloses a neural network trained by using effect information as teacher data (RIM: Fig. 8 ‘trained neural network model is trained by using data sets comprising an image of a region and labeled data’; Fig. 10 shows the labeled data, wherein the labeled data can include gender information etc.
[0116]; [0102-0107; 0114-0115]).

OHKUMA in view of RIM are combinable because they are from the same field of endeavor of image processing; e.g., both disclose methods of using medical images and data to aid in diagnosis of patient(s). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to disclose a neural network trained by using effect information as teacher data. The suggestion/motivation for doing so is to improve the accuracy of prediction, and increase the processing speed using a deep learning trained model as disclosed by RIM at [0005]. Therefore, it would have been obvious to combine OHKUMA with RIM to obtain the invention as specified in claim 17.

24. Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over OHKUMA et al. (US 2019/0380659) in view of Hsieh et al. (US 10,438,354).

Regarding Claim 11: OHKUMA discloses the prediction system according to claim 2, but does not expressly disclose wherein the prediction image generation model comprises a generative adversarial network or an auto encoder.

Hsieh discloses wherein the prediction image generation model comprises a generative adversarial network or an auto encoder (Hsieh: Fig. 13 ‘AI methodology selection 1324’ can choose a generative adversarial network (GAN) from ‘AI Catalog 1326’; Col. 16, lines 21-38); (Hsieh: ‘an auto-encoder technique provides unsupervised learning of efficient codings, such as in an artificial neural network. Using an auto-encoder technique, a representation or encoding can be learned for a set of data. Auto-encoding can be used to learn a model of data and/or other dimensionality reduction using an encoder and decoder to process the data to construct layers (including hidden layers) and connections between layers to form the neural network.’ Col. 21, lines 10-18).
OHKUMA in view of Hsieh are combinable because they are from the same field of endeavor of image processing; e.g., both disclose methods of using medical images and data to aid in diagnosis and treatment of patient(s). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to disclose wherein the prediction image generation model comprises a generative adversarial network or an auto encoder. The suggestion/motivation for doing so is to aid physicians as disclosed by Hsieh in the Background of Invention. Hsieh further discloses that physicians have more patients, less time, and are eager for assistance dealing with huge amounts of supporting data, hence the use of a GAN or auto encoder to deal with large amounts of data. Therefore, it would have been obvious to combine OHKUMA with Hsieh to obtain the invention as specified in claim 11.

25. Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over OHKUMA et al. (US 2019/0380659) in view of Itu et al. (US 2018/0247020).

Regarding Claim 14: OHKUMA further discloses the prediction system according to claim 13, further comprising a basic information acquirer configured to acquire basic information comprising at least one selected from the group consisting of a sex, an age, (OHKUMA: ‘health-related data can include, for example, basic data of the target person such as “age” and “sex”; biological data such as “blood data”, “blood pressure”, and “abdominal girth”; life-habit data such as “alcohol drinking”, “smoking”, and “exercise”; disease data such as “past medical history”, “disease under medical treatment”, and “disease history of family”.
Not only data obtained from medical checkups but any data which can be relevant to diseases may be included in the health-related data.’ [0038]; ‘weight gain’ [0050]), and information indicating a condition of the target region of the subject at the first time (OHKUMA: ‘future-image X is generated by changing/modifying the image data (appearance data) of the target person based on both the “change tendency in appearance” determined in and the “change tendency in appearance” for aging.’ [0049]), wherein the prediction information generation model (OHKUMA: ‘the future-image predicting device 1 is designed to have a learning function for calculating the “risk level of Metabolic syndrome” and the “change tendency in appearance”. In this case, the “risk level of Metabolic syndrome” and the “change tendency in appearance” are stored each time they are calculated.’ [0068]; wherein ‘a learning function’ reads on the claimed ‘model’) is configured to estimate the first prediction information from the subject image of the subject and the basic information of the subject (OHKUMA: Fig. 5 flowchart ‘control unit 4 generates a future-image X, based on both the appearance data entered into the input unit 2 and the “change tendency in appearance” determined in S2 (S3). At this time, it is preferable that the “relation between aging and change tendency in appearance” stored in the storing unit 3 is also taken into account.’ [0049]).

OHKUMA does not expressly disclose acquire basic information comprising at least one selected from the group consisting of a sex, an age, a height, a weight of the subject. Itu discloses acquire basic information comprising at least one selected from the group consisting of a sex, an age, a height, a weight of the subject (Itu: Fig.
1 flowchart ‘At step 105, non-invasive patient data is received such as, for example, demographics and patient history (e.g., age, ethnicity, sex, weight, height, fracture history, family history, smoking, alcohol, glucocorticoids, rheumatoid arthritis, etc.)’ [0070]).

OHKUMA in view of Itu are combinable because they are from the same field of endeavor of image processing; e.g., both disclose methods of using medical images and data to aid in diagnosis and treatment of patient(s). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to disclose acquire basic information comprising at least one selected from the group consisting of a sex, an age, a height, a weight of the subject. The suggestion/motivation for doing so is to estimate the risk of, e.g., bone fracture, as disclosed by Itu at ¶ [0023]. Itu further discloses that even if some patient data is in the range of normal values, it can still represent an indicator for osteoporosis. Therefore, it would have been obvious to combine OHKUMA with Itu to obtain the invention as specified in claim 14.

26. Claim 18 is rejected under 35 U.S.C. 103 as being unpatentable over OHKUMA et al. (US 2019/0380659) in view of Baker et al. (US 2019/0200915).

Regarding Claim 18: OHKUMA further discloses the prediction system according to claim 16, but does not expressly disclose wherein the method for the intervention comprises at least one selected from the group consisting of dietetic therapy, exercise therapy, drug therapy, orthotic therapy, rehabilitation, and surgical therapy.
Baker discloses wherein the method for the intervention comprises at least one selected from the group consisting of dietetic therapy, exercise therapy, drug therapy, orthotic therapy, rehabilitation, and surgical therapy (Baker: ‘wherein the therapy comprises one or more of the following: drug-based therapies, surgery, psychotherapy, physical therapy, life-style recommendations, rehabilitation measures, nutritional diets.’ Claim 17; ‘Subjects are allowed to wear regular footwear and an assistive device and/or orthotic as needed. The test is typically performed daily.’ [0249]; wherein ‘physical therapy and life-style recommendations’ reads on the claimed ‘exercise therapy’).

OHKUMA in view of Baker are combinable because they are from the same field of endeavor of image processing; e.g., both disclose methods of pointing out to a patient how they can benefit from a therapy, as disclosed by Baker in the Abstract and Background of Invention. Therefore, it would have been obvious to combine OHKUMA with Baker to obtain the invention as specified in claim 18.

Conclusion

27. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Kurasawa et al. (WO 2020/044824) relates to estimating a target value of a health state for bringing a human health state closer to an ideal health state. Kurasawa further discloses measuring the health status of the past multiple days and the health status presented to the user on the same multiple days in order to bring the user's health status closer to the ideal health status in the future. Then, an intervention content estimation model that outputs a target value of a health condition to be recommended next to the user and an expected value of achieving the target when the user inputs the target value is generated by deep reinforcement learning.
Thereafter, using the intervention content estimation model, the target value of the next health condition to be recommended and the target achievement expectation value are output and presented to the user as the intervention content.

28. Any inquiry concerning this communication or earlier communications from the examiner should be directed to NEIL R MCLEAN whose telephone number is (571) 270-1679. The examiner can normally be reached Monday-Thursday, 6 AM - 4 PM, PST.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Akwasi M Sarpong, can be reached at (571) 270-3438. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/NEIL R MCLEAN/
Primary Examiner, Art Unit 2681
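The Kurasawa reference cited above trains its intervention content estimation model by deep reinforcement learning. As a loose illustration of that general idea (not Kurasawa's actual method), a toy tabular Q-learning agent that learns which intervention target to recommend at each health level could be sketched as follows; the states, actions, and dynamics here are entirely hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: health states 0..4 (4 = ideal); actions = how
# ambitious a target to recommend (0 = none, 1 = moderate, 2 = aggressive).
n_states, n_actions = 5, 3
Q = np.zeros((n_states, n_actions))

def step(state, action):
    # Toy dynamics: more ambitious targets raise health faster,
    # but the user is less likely to achieve them.
    achieved = rng.random() < (1.0 - 0.3 * action)
    next_state = min(state + action, n_states - 1) if achieved else state
    reward = 1.0 if next_state == n_states - 1 else 0.0
    return next_state, reward

alpha, gamma, eps = 0.1, 0.9, 0.1   # learning rate, discount, exploration
for _ in range(5000):
    s = int(rng.integers(0, n_states - 1))       # start from a non-ideal state
    for _ in range(20):
        # epsilon-greedy action selection
        a = int(rng.integers(n_actions)) if rng.random() < eps else int(np.argmax(Q[s]))
        s2, r = step(s, a)
        # standard Q-learning update
        Q[s, a] += alpha * (r + gamma * np.max(Q[s2]) - Q[s, a])
        s = s2
        if s == n_states - 1:                    # ideal health state reached
            break

recommended = int(np.argmax(Q[0]))  # target recommended from the worst state
```

Kurasawa's model additionally outputs an expected value of achieving the recommended target; in this sketch that role is loosely played by the learned Q-values themselves.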

Prosecution Timeline

Jul 19, 2023
Application Filed
Oct 30, 2025
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586172
STRUCTURE DIAGNOSTIC CASE PRESENTATION DEVICE, METHOD, AND PROGRAM
2y 5m to grant Granted Mar 24, 2026
Patent 12587845
ULTRASONIC DIAGNOSTIC APPARATUS
2y 5m to grant Granted Mar 24, 2026
Patent 12580071
SYSTEMS AND METHODS TO PROCESS ELECTRONIC IMAGES WITH AUTOMATIC PROTOCOL REVISIONS
2y 5m to grant Granted Mar 17, 2026
Patent 12566270
APPARATUS FOR ASSISTING DRIVING OF VEHICLE AND METHOD THEREOF
2y 5m to grant Granted Mar 03, 2026
Patent 12568181
METHOD AND DEVICE OF VIDEO VIRTUAL BACKGROUND IMAGE PROCESSING AND COMPUTER APPARATUS
2y 5m to grant Granted Mar 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
79%
Grant Probability
90%
With Interview (+10.5%)
2y 6m
Median Time to Grant
Low
PTA Risk
Based on 686 resolved cases by this examiner. Grant probability derived from career allow rate.
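For reference, the headline projections above can be recomputed from the inputs the page reports. A minimal sketch, assuming the "with interview" probability is simply the career allow rate plus the stated +10.5 percentage-point interview lift:

```python
# Recompute the dashboard's headline numbers from its stated inputs:
# 545 grants out of 686 resolved cases, +10.5 pt lift with an interview.
granted, resolved = 545, 686
allow_rate = granted / resolved * 100        # career allow rate, percent
interview_lift = 10.5                        # percentage points
with_interview = allow_rate + interview_lift

print(round(allow_rate))       # 79  (matches "Grant Probability")
print(round(with_interview))   # 90  (matches "With Interview")
```

The exact adjustment the tool applies for interviews is not specified here, so the simple additive model is an assumption; it happens to reproduce the displayed figures.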
