Prosecution Insights
Last updated: April 19, 2026
Application No. 18/673,322

INFORMATION PROCESSING APPARATUS, OPERATION METHOD OF INFORMATION PROCESSING APPARATUS, OPERATION PROGRAM OF INFORMATION PROCESSING APPARATUS, PREDICTION MODEL, LEARNING APPARATUS, AND LEARNING METHOD

Status: Non-Final OA (§102, §103)
Filed: May 24, 2024
Examiner: LEE, JONATHAN S
Art Unit: 2677
Tech Center: 2600 (Communications)
Assignee: Fujifilm Corporation
OA Round: 1 (Non-Final)

Grant Probability: 84% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 4m
Grant Probability with Interview: 94%

Examiner Intelligence

Career Allow Rate: 84% (above average; 493 granted / 585 resolved; +22.3% vs TC avg)
Interview Lift: +9.5% (moderate, roughly +10%, for resolved cases with interview)
Avg Prosecution: 2y 4m (typical timeline)
Total Applications: 604 across all art units (19 currently pending)

Statute-Specific Performance

§101: 7.8% (-32.2% vs TC avg)
§102: 28.1% (-11.9% vs TC avg)
§103: 41.9% (+1.9% vs TC avg)
§112: 10.3% (-29.7% vs TC avg)

Tech Center averages are estimates. Based on career data from 585 resolved cases.

Office Action

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1 and 4-11 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Liu et al. (Joint Classification and Regression via Deep Multi-Task Multi-Channel Learning for Alzheimer’s Disease Diagnosis, May 2019, IEEE Transactions on Biomedical Engineering, Vol. 66, No. 5, Pages 1195-1206), hereinafter “Liu”.

Regarding claim 1, Liu teaches: An information processing apparatus comprising (See the Abstract.): a processor, wherein the processor acquires a medical image showing an organ of a subject (See input MR image of a brain in Figs. 1 and 4.) and disease-related data of the subject (See the demographic information (age, gender, education) in Figs. 1 and 4.), subdivides the medical image into a plurality of patch images (See the patch extraction in Fig. 1.), uses a prediction model (See Fig. 4.) including a feature amount extraction unit that extracts a feature amount from the patch images (See page 1196, left column: “First, we propose to automatically extract discriminative image patches from MR images, based on the anatomical landmarks identified in a data-driven manner.”) and the disease-related data (See page 1196, right column: “Finally, we can take advantage of multiple demographic factors of studied subjects via the proposed framework, with the demographic information (i.e., age, gender, and education) embedded into the process of model training.”) and a correlation information extraction unit that extracts at least correlation information between the plurality of patch images (See page 1200, left column, regarding obtaining global information from local information of L patches/landmarks: “To model the global information of MRI, we concatenate the outputs of L FC8 layers and further add two additional FC layers (i.e., FC9, and FC10) to the network.” The examiner asserts that correlations between patches are obtained with the fully connected layers FC9 and FC10 to model the global information.) and correlation information between the plurality of patch images and the disease-related data (See page 1200, left column: “Moreover, we feed a concatenated representation comprising the output of FC10 and three demographic factors (i.e., age, gender, and education) into two FC layers (i.e., FC11, and FC12).”), and inputs the patch images and the disease-related data to the prediction model and outputs a prediction result regarding a disease from the prediction model (See the input of patch images and demographic information to the model and output of disease class and clinical score in Fig. 4.).

Regarding claim 4, Liu teaches: The information processing apparatus according to claim 1, wherein the disease is dementia (See the classes in Fig. 4, Alzheimer’s disease.), the medical image is an image showing a brain of the subject (See Figs. 1 and 4.), and the processor extracts a first region image including a hippocampus, an amygdala, and an entorhinal cortex and a second region image including a temporal lobe and a frontal lobe from the medical image, and subdivides the first region image and the second region image into the plurality of patch images (See Fig. 3 anatomy and selection of landmarks for patch extraction.).

Regarding claim 5, Liu teaches: The information processing apparatus according to claim 1, wherein the disease is dementia (See the classes in Fig. 4, Alzheimer’s disease.), the medical image is morphological image test data (See MRI images in Figs. 1-4.), and the disease-related data includes at least one of an age, a sex, blood/cerebrospinal fluid test data, genetic test data, or cognitive function test data of the subject (See age/gender in Figs. 1 and 4.).

Regarding claim 6, Liu teaches: The information processing apparatus according to claim 5, wherein the morphological image test data is a tomographic image obtained by a nuclear magnetic resonance imaging method (See the MR image in Figs. 1-4.).

Regarding claim 7, Liu teaches: An operation method of an information processing apparatus, the operation method comprising (See the Abstract.): acquiring a medical image showing an organ of a subject (See input MR image of a brain in Figs. 1 and 4.) and disease-related data of the subject (See the demographic information (age, gender, education) in Figs. 1 and 4.); subdividing the medical image into a plurality of patch images (See the patch extraction in Fig. 1.); using a prediction model (See Fig. 4.) including a feature amount extraction unit that extracts a feature amount from the patch images (See page 1196, left column: “First, we propose to automatically extract discriminative image patches from MR images, based on the anatomical landmarks identified in a data-driven manner.”) and the disease-related data (See page 1196, right column: “Finally, we can take advantage of multiple demographic factors of studied subjects via the proposed framework, with the demographic information (i.e., age, gender, and education) embedded into the process of model training.”) and a correlation information extraction unit that extracts at least correlation information between the plurality of patch images (See page 1200, left column, regarding obtaining global information from local information of L patches/landmarks: “To model the global information of MRI, we concatenate the outputs of L FC8 layers and further add two additional FC layers (i.e., FC9, and FC10) to the network.” The examiner asserts that correlations between patches are obtained with the fully connected layers FC9 and FC10 to model the global information.) and correlation information between the plurality of patch images and the disease-related data (See page 1200, left column: “Moreover, we feed a concatenated representation comprising the output of FC10 and three demographic factors (i.e., age, gender, and education) into two FC layers (i.e., FC11, and FC12).”); and inputting the patch images and the disease-related data to the prediction model and outputting a prediction result regarding a disease from the prediction model (See the input of patch images and demographic information to the model and output of disease class and clinical score in Fig. 4.).

Regarding claim 8, Liu teaches: A non-transitory computer-readable storage medium storing an operation program of an information processing apparatus, the program causing a computer to execute (See the Abstract.): acquiring a medical image showing an organ of a subject (See input MR image of a brain in Figs. 1 and 4.) and disease-related data of the subject (See the demographic information (age, gender, education) in Figs. 1 and 4.); subdividing the medical image into a plurality of patch images (See the patch extraction in Fig. 1.); using a prediction model (See Fig. 4.) including a feature amount extraction unit that extracts a feature amount from the patch images (See page 1196, left column: “First, we propose to automatically extract discriminative image patches from MR images, based on the anatomical landmarks identified in a data-driven manner.”) and the disease-related data (See page 1196, right column: “Finally, we can take advantage of multiple demographic factors of studied subjects via the proposed framework, with the demographic information (i.e., age, gender, and education) embedded into the process of model training.”) and a correlation information extraction unit that extracts at least correlation information between the plurality of patch images (See page 1200, left column, regarding obtaining global information from local information of L patches/landmarks: “To model the global information of MRI, we concatenate the outputs of L FC8 layers and further add two additional FC layers (i.e., FC9, and FC10) to the network.” The examiner asserts that correlations between patches are obtained with the fully connected layers FC9 and FC10 to model the global information.) and correlation information between the plurality of patch images and the disease-related data (See page 1200, left column: “Moreover, we feed a concatenated representation comprising the output of FC10 and three demographic factors (i.e., age, gender, and education) into two FC layers (i.e., FC11, and FC12).”); and inputting the patch images and the disease-related data to the prediction model and outputting a prediction result regarding a disease from the prediction model (See the input of patch images and demographic information to the model and output of disease class and clinical score in Fig. 4.).

Regarding claim 9, Liu teaches: A non-transitory computer-readable storage medium storing a prediction model for causing a computer to function to output a prediction result regarding a disease in response to an input of a plurality of patch images obtained by subdividing a medical image showing an organ of a subject and disease-related data of the subject, the prediction model comprising (See the Abstract.): a feature amount extraction unit that extracts a feature amount from the patch images (See page 1196, left column: “First, we propose to automatically extract discriminative image patches from MR images, based on the anatomical landmarks identified in a data-driven manner.”) and the disease-related data (See page 1196, right column: “Finally, we can take advantage of multiple demographic factors of studied subjects via the proposed framework, with the demographic information (i.e., age, gender, and education) embedded into the process of model training.”); and a correlation information extraction unit that extracts at least correlation information between the plurality of patch images (See page 1200, left column, regarding obtaining global information from local information of L patches/landmarks: “To model the global information of MRI, we concatenate the outputs of L FC8 layers and further add two additional FC layers (i.e., FC9, and FC10) to the network.” The examiner asserts that correlations between patches are obtained with the fully connected layers FC9 and FC10 to model the global information.) and correlation information between the plurality of patch images and the disease-related data (See page 1200, left column: “Moreover, we feed a concatenated representation comprising the output of FC10 and three demographic factors (i.e., age, gender, and education) into two FC layers (i.e., FC11, and FC12).”).

Regarding claim 10, Liu teaches: A learning apparatus that provides a prediction model with a learning medical image and learning disease-related data as learning data, and trains the prediction model to obtain a prediction result regarding a disease as an output in response to an input of a plurality of patch images obtained by subdividing a medical image showing an organ of a subject and disease-related data of the subject (See the Abstract.), wherein the prediction model includes a feature amount extraction unit that extracts a feature amount from the patch images (See page 1196, left column: “First, we propose to automatically extract discriminative image patches from MR images, based on the anatomical landmarks identified in a data-driven manner.”) and the disease-related data (See page 1196, right column: “Finally, we can take advantage of multiple demographic factors of studied subjects via the proposed framework, with the demographic information (i.e., age, gender, and education) embedded into the process of model training.”), and a correlation information extraction unit that extracts at least correlation information between the plurality of patch images (See page 1200, left column, regarding obtaining global information from local information of L patches/landmarks: “To model the global information of MRI, we concatenate the outputs of L FC8 layers and further add two additional FC layers (i.e., FC9, and FC10) to the network.” The examiner asserts that correlations between patches are obtained with the fully connected layers FC9 and FC10 to model the global information.) and correlation information between the plurality of patch images and the disease-related data (See page 1200, left column: “Moreover, we feed a concatenated representation comprising the output of FC10 and three demographic factors (i.e., age, gender, and education) into two FC layers (i.e., FC11, and FC12).”).

Regarding claim 11, Liu teaches: A learning method of providing a prediction model with a learning medical image and learning disease-related data as learning data, and training the prediction model to obtain a prediction result regarding a disease as an output in response to an input of a plurality of patch images obtained by subdividing a medical image showing an organ of a subject and disease-related data of the subject (See the Abstract.), wherein the prediction model includes a feature amount extraction unit that extracts a feature amount from the patch images (See page 1196, left column: “First, we propose to automatically extract discriminative image patches from MR images, based on the anatomical landmarks identified in a data-driven manner.”) and the disease-related data (See page 1196, right column: “Finally, we can take advantage of multiple demographic factors of studied subjects via the proposed framework, with the demographic information (i.e., age, gender, and education) embedded into the process of model training.”), and a correlation information extraction unit that extracts at least correlation information between the plurality of patch images (See page 1200, left column, regarding obtaining global information from local information of L patches/landmarks: “To model the global information of MRI, we concatenate the outputs of L FC8 layers and further add two additional FC layers (i.e., FC9, and FC10) to the network.” The examiner asserts that correlations between patches are obtained with the fully connected layers FC9 and FC10 to model the global information.) and correlation information between the plurality of patch images and the disease-related data (See page 1200, left column: “Moreover, we feed a concatenated representation comprising the output of FC10 and three demographic factors (i.e., age, gender, and education) into two FC layers (i.e., FC11, and FC12).”).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 2 and 3 are rejected under 35 U.S.C. 103 as being unpatentable over Liu (Joint Classification and Regression via Deep Multi-Task Multi-Channel Learning for Alzheimer’s Disease Diagnosis, May 2019, IEEE Transactions on Biomedical Engineering, Vol. 66, No. 5, Pages 1195-1206) in view of Li et al. (A Method for Predicting Alzheimer’s Disease Based on the Fusion of Single Nucleotide Polymorphisms and Magnetic Resonance Feature Extraction, 2021, ML-CDS 2021, Pages 105-115), hereinafter “Li”.
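For orientation, the Liu pipeline cited throughout the §102 rejections above (the L per-patch FC8 outputs concatenated through FC9/FC10, then fused with the three demographic factors through FC11/FC12 into a class output and a clinical score) can be sketched roughly as follows. This is a minimal illustration of the fusion pattern only: all dimensions, weights, and demographic values are hypothetical and untrained, not taken from Liu.

```python
import numpy as np

rng = np.random.default_rng(0)

def fc_relu(x, out_dim):
    """One fully connected layer with random (untrained) weights and ReLU."""
    W = rng.standard_normal((x.shape[-1], out_dim)) * 0.01
    return np.maximum(x @ W, 0.0)

L, d_patch = 40, 64                         # L landmark patches, each as a 64-d FC8 output
fc8_outputs = [rng.standard_normal(d_patch) for _ in range(L)]
demographics = np.array([72.0, 1.0, 16.0])  # age, gender, education (made-up values)

# Global MRI representation: concatenate the L FC8 outputs, pass through FC9 -> FC10
global_feat = fc_relu(fc_relu(np.concatenate(fc8_outputs), 128), 64)

# Fuse demographics: concatenate FC10 output with the three factors, pass FC11 -> FC12
fused = fc_relu(fc_relu(np.concatenate([global_feat, demographics]), 32), 16)

# Multi-task heads: disease class scores and a regression clinical score
class_logits = fused @ (rng.standard_normal((16, 3)) * 0.01)
clinical_score = float(fused @ rng.standard_normal(16))
```

The key point for the anticipation mapping is where correlation could arise: the FC9/FC10 layers see all patches jointly, and the FC11/FC12 layers see image features and demographics jointly.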
Regarding claim 2, the combination of Liu and Li teaches: The information processing apparatus according to claim 1. Liu does not disclose the following; however, Li discloses: the prediction model includes a transformer encoder that takes in input data in which the patch images and the disease-related data are mixed and extracts the feature amount (See Fig. 1, concatenation of genetic data and MRI data, and page 106: “The current research has not yet found a suitable paradigm for extracting features from genetic data, and our study provides a novel way of extracting features from genetic data, i.e., using the multi-headed attention mechanism of transformer [17] to extract features from genetic data. In addition, in our experiments, we also apply the soft thresholding to extract features from MRI data.”).

Liu and Li together disclose the limitations of claim 2. Li is directed to a similar field of art (multimodal Alzheimer’s disease prediction model). Therefore, Liu and Li are combinable. Modifying the system and method of Liu by adding “a transformer encoder that takes in input data in which the patch images and the disease-related data are mixed and extracts the feature amount”, as taught by Li, would yield the expected and predictable result of improved capture of global information by the transformer. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine Liu and Li in this way.

Regarding claim 3, the combination of Liu and Li teaches: The information processing apparatus according to claim 2. Li further discloses: the feature amount extraction unit includes a self-attention mechanism layer of the transformer encoder (See page 106: “The current research has not yet found a suitable paradigm for extracting features from genetic data, and our study provides a novel way of extracting features from genetic data, i.e., using the multi-headed attention mechanism of transformer [17] to extract features from genetic data.”), and the correlation information extraction unit includes a linear transformation layer that linearly transforms the input data to the self-attention mechanism layer to obtain first transformation data, an activation function application layer that applies an activation function to the first transformation data to obtain second transformation data (See Fig. 2.), and a calculation unit that calculates a product of output data from the self-attention mechanism layer and the second transformation data for each element as the correlation information (See Fig. 1.). See the motivation to combine in the treatment of claim 2.

Contact

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JONATHAN S LEE whose telephone number is (571) 272-1981. The examiner can normally be reached 11:30 AM - 7:30 PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Andrew Bee, can be reached at (571) 270-5183. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Jonathan S Lee/
Primary Examiner, Art Unit 2677
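The claim 3 structure recited in the rejection (a self-attention layer whose output is multiplied element-wise by an activated linear transform of the same input) follows a gating pattern. A minimal sketch of that recited structure, with all dimensions, weights, and the choice of sigmoid activation assumed for illustration rather than taken from Li:

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

n_tokens, d = 10, 32                    # tokens for patch images and disease-related data
x = rng.standard_normal((n_tokens, d))  # input data to the self-attention mechanism layer

# Self-attention branch (single head, untrained weights)
Wq, Wk, Wv = (rng.standard_normal((d, d)) * 0.1 for _ in range(3))
attn_out = softmax((x @ Wq) @ (x @ Wk).T / np.sqrt(d)) @ (x @ Wv)

# Linear transformation layer -> first transformation data
first_transform = x @ (rng.standard_normal((d, d)) * 0.1)
# Activation function application layer -> second transformation data
second_transform = 1.0 / (1.0 + np.exp(-first_transform))   # sigmoid, one possible choice
# Calculation unit: element-wise product of the two branches -> correlation information
correlation_info = attn_out * second_transform
```

Under this reading, the activated linear branch acts as a per-element gate on the attention output, which is how the rejection maps the claimed "calculation unit" onto Li's figures.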

Prosecution Timeline

May 24, 2024: Application Filed
Mar 06, 2026: Non-Final Rejection under §102 and §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602807: METHOD FOR SUBPIXEL DISPARITY CALCULATION (granted Apr 14, 2026; 2y 5m to grant)
Patent 12602785: TRAINING A MACHINE LEARNING MODEL TO ASSESS EMBRYO CHARACTERISTICS FROM VIDEO IMAGE DATA (granted Apr 14, 2026; 2y 5m to grant)
Patent 12597108: METHOD AND APPARATUS TO PERFORM A WIRELINE CABLE INSPECTION (granted Apr 07, 2026; 2y 5m to grant)
Patent 12597110: IMAGE RECOGNITION METHOD, APPARATUS AND DEVICE (granted Apr 07, 2026; 2y 5m to grant)
Patent 12584727: DIMENSION MEASUREMENT METHOD AND DIMENSION MEASUREMENT DEVICE (granted Mar 24, 2026; 2y 5m to grant)

Study what changed to get past this examiner; based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 84%
With Interview: 94% (+9.5%)
Median Time to Grant: 2y 4m
PTA Risk: Low

Based on 585 resolved cases by this examiner. Grant probability derived from career allow rate.
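The displayed figures are consistent with a direct derivation from the career statistics above; the dashboard's actual model is not disclosed, so this reconstruction is an assumption:

```python
# Assumed reconstruction of the projection figures from the examiner's career stats.
granted, resolved = 493, 585
career_allow_rate = 100 * granted / resolved         # ~84.3 -> shown as 84%
interview_lift = 9.5                                 # percentage points
with_interview = career_allow_rate + interview_lift  # ~93.8 -> shown as 94%
```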
