Prosecution Insights
Last updated: April 18, 2026
Application No. 18/179,324

LEARNING APPARATUS, LEARNING METHOD, IMAGE PROCESSING APPARATUS, ENDOSCOPE SYSTEM, AND PROGRAM

Status: Final Rejection — §103
Filed: Mar 06, 2023
Examiner: HUYNH, VAN D
Art Unit: 2665
Tech Center: 2600 — Communications
Assignee: Fujifilm Corporation
OA Round: 2 (Final)

Grant Probability: 87% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 6m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 87%, above average (630 granted / 721 resolved; +25.4% vs TC avg)
Interview Lift: +13.4%, moderate (allow rate among resolved cases with vs. without an interview)
Typical Timeline: 2y 6m average prosecution; 25 applications currently pending
Career History: 746 total applications across all art units
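The interview-lift figure is a simple rate comparison. As a minimal sketch in Python (the record structure and field names here are invented; the dashboard does not expose its raw data, which would come from PAIR/Patent Center records):

```python
# Hypothetical sketch of an interview-lift computation. Field names and the
# toy records are invented for illustration.

def allow_rate(cases):
    """Fraction of resolved cases that ended in allowance."""
    if not cases:
        return 0.0
    return sum(c["allowed"] for c in cases) / len(cases)

def interview_lift(cases):
    """Allow-rate gap between cases with and without an examiner interview."""
    with_iv = [c for c in cases if c["interview"]]
    without_iv = [c for c in cases if not c["interview"]]
    return allow_rate(with_iv) - allow_rate(without_iv)

# Toy data: four resolved cases.
cases = [
    {"allowed": True,  "interview": True},
    {"allowed": True,  "interview": True},
    {"allowed": True,  "interview": False},
    {"allowed": False, "interview": False},
]
print(f"Interview lift: {interview_lift(cases):+.1%}")  # +50.0% on toy data
```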

Statute-Specific Performance

§101: 8.8% (-31.2% vs TC avg)
§103: 32.0% (-8.0% vs TC avg)
§102: 30.9% (-9.1% vs TC avg)
§112: 14.2% (-25.8% vs TC avg)

Tech Center averages are estimates. Based on career data from 721 resolved cases.
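A quick consistency check on the table, assuming the "vs TC avg" figures are simple percentage-point differences (an assumption; the dashboard does not define the underlying metric): every row implies the same Tech Center baseline.

```python
# Back-compute the implied Tech Center average from each row of the table
# above, assuming the deltas are percentage-point differences.
examiner_rate = {"101": 8.8, "103": 32.0, "102": 30.9, "112": 14.2}
delta_vs_tc = {"101": -31.2, "103": -8.0, "102": -9.1, "112": -25.8}

for statute, rate in examiner_rate.items():
    implied_tc_avg = rate - delta_vs_tc[statute]
    print(f"§{statute}: examiner {rate:.1f}%, implied TC avg {implied_tc_avg:.1f}%")
# Every row implies a 40.0% TC average, consistent with a single baseline estimate.
```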

Office Action

Final Rejection — §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

Claims 1-2, 5-6, 8, 14-15, 17, and 19-20 are amended. Claims 1-20 are pending in this application.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-7 and 9-20 are rejected under 35 U.S.C. 103 as being unpatentable over Lee et al., US 2020/0226752, in view of Gregson et al., US 2019/0164287.

Regarding claim 1, Lee discloses a learning apparatus (fig. 2; para 0072-0073; a medical image processing apparatus) comprising at least one processor (fig. 2, element 220; para 0073 and 0081; one or more processors), the processor being configured to: generate a first learning model (fig. 6, element 630; para 0127; a second neural network, i.e., a first learning model) by performing first learning using normality data (fig. 6, element 320; para 0127; a normal medical image) as learning data (fig. 6; para 0128; the second neural network may be trained by using, as training data, a normal medical image), or by performing first learning using only normality mask data as learning data, wherein the normality mask data is generated by performing masking on a part of the normality data (the Examiner rejects the "normality data" alternative of the "or" condition; the "normality mask data" alternative is therefore not addressed); and generate second training data (fig. 6, element 632; para 0127; a first medical image) to be applied to a second learning model (fig. 6, element 662; para 0130; a third neural network, i.e., a second learning model) that identifies identification target data (fig. 6; para 0130; the third neural network may perform processing for extracting characteristics of a lesion region in the first medical image), by using output data output from the first learning model (fig. 6, element 632; para 0127 and 0129; the second neural network generates and outputs the first medical image) in response to input of abnormality data to the first learning model (fig. 6, element 622; para 0127; the second neural network may receive the virtual lesion image, i.e., abnormality data, to generate the first medical image).

Lee discloses claim 1 as enumerated above, but does not explicitly disclose using only normality data as claimed. However, Gregson discloses an anomaly detection system trained on a plurality of training images, each representing a tissue sample that is substantially free of abnormalities, so that the system is trained to represent a normal model from only abnormality-free tissue samples (fig. 3; para 0013, 0016, and 0037). Therefore, taking the combined disclosures of Lee and Gregson as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Gregson's normal-model training into the invention of Lee, for the benefit of providing automated screening of histopathology tissue samples via analysis of a normal model (Gregson: para 0002).

Regarding claim 2 (the learning apparatus according to claim 1), Lee in the combination further discloses wherein the processor is configured to generate the first learning model that outputs, for input data having a masked part, output data in which the masked part is compensated for (fig. 4, element 318; para 0114-0116).

Regarding claim 3 (the learning apparatus according to claim 1), Lee in the combination further discloses wherein the processor is configured to generate the first learning model that reduces a dimension of input data and outputs output data for which the reduced dimension is restored (para 0098).

Regarding claim 4 (the learning apparatus according to claim 1), Lee in the combination further discloses wherein the processor is configured to generate the first learning model that outputs output data having a size the same as a size of input data (para 0104).

Regarding claim 5 (the learning apparatus according to claim 1), Lee in the combination further discloses wherein the processor is configured to generate the first learning model to which a generative adversarial network is applied (para 0128); as in claim 1, the "normality mask data" alternative is not addressed because the Examiner rejects the "normality data" alternative of the "or" condition.

Regarding claim 6 (the learning apparatus according to claim 1), Lee in the combination further discloses wherein the processor is configured to generate the first learning model to which an autoencoder is applied, by performing the first learning using only the normality data as learning data (fig. 6, element 630; para 0128; an autoencoder is a type of neural network).
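For orientation, the technique recited in claims 1-6 (a first learning model trained only on normality data, reducing dimension and then restoring it so the output matches the input size) is the classic reconstruction-based anomaly-detection setup. Below is a minimal, hypothetical PyTorch sketch of such a model; the architecture, sizes, and training loop are invented for illustration and are neither Lee's nor the applicant's implementation.

```python
# Hypothetical sketch only: an autoencoder trained exclusively on normal
# images, so abnormal regions of later inputs reconstruct poorly.
import torch
import torch.nn as nn

class FirstLearningModel(nn.Module):
    """Convolutional autoencoder: reduces dimension, then restores it,
    producing output the same size as its input."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = FirstLearningModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# "First learning" uses only normality data; a toy stand-in batch here.
normal_images = torch.rand(8, 3, 64, 64)
for _ in range(5):  # a real training loop would iterate over a dataset
    recon = model(normal_images)
    loss = loss_fn(recon, normal_images)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```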
Regarding claim 7 (the learning apparatus according to claim 1), Lee in the combination further discloses wherein the processor is configured to generate the second training data by using a difference between input data and output data for the first learning model (fig. 6, element 632; para 0127).

Regarding claim 9 (the learning apparatus according to claim 1), Lee in the combination further discloses wherein the processor is configured to generate the second learning model (fig. 6, element 662; para 0130) by performing second learning using a set of the abnormality data (fig. 3, element 342 and fig. 6, element 654; para 0100 and 0129) and the second training data (fig. 3, element 318 and fig. 6, element 632; para 0099 and 0129) as learning data (fig. 6; para 0129-0132).

Regarding claim 10 (the learning apparatus according to claim 9), Lee in the combination further discloses wherein the processor is configured to perform the second learning using a set of the normality data (figs. 3 and 6, element 320; para 0100 and 0127) and first training data (fig. 6, elements 602, 604, and 622; para 0125-0128; first input and virtual lesion image) corresponding to the normality data as learning data (fig. 6; para 0127-0128).

Regarding claim 11 (the learning apparatus according to claim 10), Lee in the combination further discloses wherein the processor is configured to perform the second learning for the second learning model (fig. 6, element 662; para 0130) by using, as the second training data (fig. 3, element 318 and fig. 6, element 632; para 0100 and 0129), a hard label that has discrete training values indicating the normality data (figs. 3 and 6, element 320; para 0100 and 0127) and the abnormality data (fig. 3, element 344 and fig. 6, element 654; para 0102 and 0129) and that is applied to the first learning, and a soft label that has continuous training values indicating an abnormality-likeness and that is generated by using output data from the first learning model (figs. 3 and 6; para 0106-0108 and 0127-0132).

Regarding claim 12 (the learning apparatus according to claim 11), Lee in the combination further discloses wherein the processor is configured to perform the second learning a plurality of times (figs. 4 and 6; para 0113), and not increase a weight used for the hard label and not decrease a weight used for the soft label as the number of times the second learning is performed increases (para 0114).

Regarding claim 13 (the learning apparatus according to claim 8), Lee in the combination further discloses wherein the processor is configured to generate the second learning model to which a convolutional neural network is applied (fig. 6, element 662; para 0130).

Regarding claim 14, this claim recites substantially the same limitations as claim 1 above and is rejected for the same reasons.

Regarding claim 15, Lee discloses an image processing apparatus (fig. 2; para 0072-0073; a medical image processing apparatus) comprising at least one processor (fig. 2, element 220; para 0073 and 0081; one or more processors), the processor being configured to: generate a second learning model (fig. 6, element 662; para 0130; a third neural network, i.e., a second learning model) by performing second learning using a set of second training data (fig. 6, element 632; para 0129; a first medical image) and an abnormality image (fig. 6, element 654; para 0129; abnormal medical image) as learning data (fig. 6; para 0129-0132; the third neural network is trained using at least one or a combination of the first medical image, the abnormal medical image, or a determination result value from the discriminator), the second training data being generated by using an output image output from a first learning model (fig. 6, element 632; para 0127 and 0129; the second neural network, i.e., a first learning model, generates and outputs the first medical image) in response to input of an abnormality image to the first learning model (fig. 6, element 622; para 0127; the second neural network may receive the virtual lesion image, i.e., abnormality data, to generate the first medical image), the second training data being applied to the second learning model that identifies presence or absence of an abnormality in an identification target image (fig. 6; para 0130; the third neural network may perform processing for extracting characteristics of a lesion region, i.e., presence of an abnormality, in the first medical image), the first learning model being generated by performing first learning using a normality image (fig. 6, element 320; para 0127; a normal medical image) as learning data (fig. 6; para 0128; the second neural network may be trained by using, as training data, a normal medical image), or by performing first learning using only a normality mask image as learning data, wherein the normality mask image is generated by performing masking on a part of the normality image (as in claim 1, the Examiner rejects the "normality image" alternative of the "or" condition, so the "normality mask image" alternative is not addressed); and determine whether an identification target image is a normality image by using the second learning model (para 0130-0132; the third neural network may output a determination result value indicating a result of determining whether the first medical image is a real medical image; for example, the determination result value may correspond to the probability that the first medical image is a real medical image, or a value representing 'true' or 'false').

Lee discloses claim 15 as enumerated above, but does not explicitly disclose using only normality data as claimed. However, Gregson discloses an anomaly detection system trained on a plurality of training images, each representing a tissue sample that is substantially free of abnormalities, so that the system is trained to represent a normal model from only abnormality-free tissue samples (fig. 3; para 0013, 0016, and 0037). Therefore, as with claim 1, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Gregson's normal-model training into the invention of Lee, for the benefit of providing automated screening of histopathology tissue samples via analysis of a normal model (Gregson: para 0002).

Regarding claim 16 (the image processing apparatus according to claim 15), Lee in the combination further discloses wherein the second learning model (fig. 3, element 330 and fig. 6, element 662) performs segmentation of an abnormal part for the identification target image (figs. 3 and 6; para 0103-0107 and 0130).

Regarding claim 17, Lee discloses a system (fig. 2; para 0072-0073; a medical image processing system) comprising at least one processor (fig. 2, element 220; para 0073 and 0081; one or more processors), the processor being configured to: generate a second learning model (fig. 6, element 662; para 0130; a third neural network, i.e., a second learning model) by performing second learning using a set of second training data (fig. 6, element 632; para 0129; a first medical image) and an abnormality image (fig. 6, element 654; para 0129; abnormal medical image) as learning data (fig. 6; para 0129-0132; the third neural network is trained using at least one or a combination of the first medical image, the abnormal medical image, or a determination result value from the discriminator), the second training data being generated by using an output image output from a first learning model (fig. 6, element 632; para 0127 and 0129) in response to input of an abnormality image to the first learning model (fig. 6, element 622; para 0127; the second neural network may receive the virtual lesion image to generate the first medical image), the second training data being applied to the second learning model that identifies presence or absence of an abnormality in an identification target image (fig. 6; para 0130), the first learning model being generated by performing first learning using a normality image (fig. 6, element 320; para 0127; a normal medical image) as learning data (fig. 6; para 0128), or by performing first learning using only a normality mask image as learning data, wherein the normality mask image is generated by performing masking on a part of a normality image (as in claim 1, the "normality mask image" alternative is not addressed); and determine presence or absence of an abnormality in an endoscopic image acquired from the endoscope, by using the second learning model (fig. 6; para 0130; the third neural network may perform processing for extracting characteristics of a lesion region, i.e., presence of an abnormality, in the first medical image).

Lee discloses claim 17 as enumerated above, but does not explicitly disclose an endoscope system or using only normality data as claimed. However, Gregson discloses that the images can be whole slide images, single frame capture images from a microscope mounted camera, or images taken during endoscopic procedures (endoscopic procedures implying the use of an endoscope system), and discloses an anomaly detection system trained on a plurality of training images, each representing a tissue sample that is substantially free of abnormalities, so that the system is trained to represent a normal model from only abnormality-free tissue samples (fig. 3; para 0013, 0015-0016, and 0037).
Therefore, taking the combined disclosures of Lee and Gregson as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Gregson's endoscopic imaging and normal-model training into the invention of Lee (Gregson: para 0002).

Regarding claim 18 (the endoscope system according to claim 17), Lee in the combination further discloses wherein the processor is configured to perform the second learning by applying the second training data (fig. 6, element 632; para 0129; a first medical image) that is generated by using the first learning model for which the first learning is performed by applying an endoscopic image that is a normal mucous membrane image as the normality image (fig. 6, element 320; para 0127; a normal medical image), and by applying an endoscopic image that includes a lesion region as the abnormality image (fig. 6, element 654; para 0129; abnormal medical image).

Regarding claim 19 (the endoscope system according to claim 18), Lee in the combination further discloses wherein the processor is configured to perform the second learning by using, as learning data, a set of the second training data (fig. 6, element 632; para 0129; a first medical image) and the abnormality image (fig. 6, element 654; para 0129; abnormal medical image) and a set of a normality image (fig. 6, element 320; para 0127) and first training data (fig. 6, elements 602, 604, and 622; para 0125-0128; first input and virtual lesion image) corresponding to the normality image (fig. 6; para 0127-0128), and to generate the second learning model that performs segmentation of an abnormal part in an identification target image (figs. 3 and 6; para 0103-0107 and 0130), the second training data corresponding to the abnormality image (fig. 6, element 654; para 0129) and being generated by normalizing difference data that is a difference between the abnormality image and an output image output from the first learning model (figs. 3 and 6; para 0103-0107 and 0130) in response to input, to the first learning model, of an abnormality mask image generated by performing masking on an abnormal part of the abnormality image (figs. 3-4, element 314 and fig. 6, element 622; para 0098, 0113, and 0127; a virtual lesion image), the first learning performed for the first learning model being learning for restoring the normal mucous membrane image from a normal mucous membrane mask image generated by performing masking on a part of the normal mucous membrane image, and for generating a normality restoration image (figs. 3-4 and 6; para 0098-0099, 0113-0116, and 0127).

Regarding claim 20, this claim recites substantially the same limitations as claim 1 above and is rejected for the same reasons.

Allowable Subject Matter

Claim 8 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter: the prior art made of record and considered pertinent to the applicant's disclosure, taken individually or in combination, does not teach the claimed invention having the following limitations, in combination with the remaining claimed limitations. Regarding dependent claim 8, the prior art does not teach or suggest the claimed invention having "generate abnormality mask data that is generated by performing masking on an abnormal part of the abnormality data, and generate the second training data by performing normalization on difference data, wherein the difference data is a difference between the abnormality data input to the first learning model and output data output in response to input of the abnormality mask data to the first learning model", and a combination of other limitations thereof as recited in the claims.

Response to Arguments

Applicant's arguments with respect to claims 1-20 have been considered but are moot in view of the new ground(s) of rejection.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to VAN D HUYNH, whose telephone number is (571) 270-1937. The examiner can normally be reached 8AM-6PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Stephen R Koziol, can be reached at (408) 918-7630. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/VAN D HUYNH/
Primary Examiner, Art Unit 2665
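The claim 8 limitation indicated as allowable chains three steps: mask the abnormal part of the abnormality data, run the masked data through the first learning model, then normalize the difference between the original abnormality data and the model's output. Below is a hedged sketch of that pipeline, reusing the hypothetical FirstLearningModel from the earlier sketch; the masking and min-max normalization choices here are assumptions, not the application's disclosed method.

```python
import torch

def second_training_data(model, abnormal_img, abnormal_mask):
    """abnormal_img: (3, H, W) float tensor; abnormal_mask: (1, H, W) binary
    tensor with 1 marking the abnormal part."""
    masked = abnormal_img * (1 - abnormal_mask)  # abnormality mask data
    with torch.no_grad():
        # The first learning model compensates for the masked part using
        # what it learned from normality data only.
        restored = model(masked.unsqueeze(0)).squeeze(0)
    diff = (abnormal_img - restored).abs().mean(dim=0)  # per-pixel difference
    # Normalization (min-max, as one plausible choice) yields a continuous
    # "soft label" of abnormality-likeness per pixel.
    return (diff - diff.min()) / (diff.max() - diff.min() + 1e-8)
```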

Prosecution Timeline

Mar 06, 2023 — Application Filed
Nov 21, 2025 — Non-Final Rejection — §103
Feb 26, 2026 — Response Filed
Apr 08, 2026 — Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602798: METHOD AND APPARATUS FOR GENERATING SUBJECT-SPECIFIC MAGNETIC RESONANCE ANGIOGRAPHY IMAGES FROM OTHER MULTI-CONTRAST MAGNETIC RESONANCE IMAGES
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12602784: MEDICAL DEVICE FOR TRANSCRIPTION OF APPEARANCES IN AN IMAGE TO TEXT WITH MACHINE LEARNING
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12594046: METHOD AND APPARATUS FOR ASSISTING DIAGNOSIS OF CARDIOEMBOLIC STROKE BY USING CHEST RADIOGRAPHIC IMAGES
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12586186: JAUNDICE ANALYSIS SYSTEM AND METHOD THEREOF
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12582345: Systems and Methods for Identifying Progression of Hypoxic-Ischemic Brain Injury
Granted Mar 24, 2026 (2y 5m to grant)
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 87%
With Interview: 99% (+13.4%)
Median Time to Grant: 2y 6m
PTA Risk: Moderate

Based on 721 resolved cases by this examiner. Grant probability is derived from the career allow rate.
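The with-interview figure follows from the numbers above, although the exact formula is not stated. One hedged reconstruction: 630/721 gives the 87% base rate, and applying the +13.4% lift (read either as percentage points or as a relative multiplier) reaches or exceeds 99%, which the dashboard appears to cap. The cap and the additive-vs-relative reading are both assumptions.

```python
# Hedged reconstruction of the projection arithmetic shown above.
granted, resolved = 630, 721
base = granted / resolved          # ~0.874, displayed as 87%
lift = 0.134                       # the +13.4% interview lift

additive = min(base + lift, 0.99)          # percentage-point reading
relative = min(base * (1 + lift), 0.99)    # relative-multiplier reading
print(f"base {base:.0%}; with interview {additive:.0%} (additive) "
      f"/ {relative:.0%} (relative)")      # both round to 99%
```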
