Prosecution Insights
Last updated: April 19, 2026
Application No. 18/410,215

IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM

Non-Final OA — §102, §103, §112, §DP
Filed: Jan 11, 2024
Examiner: FITZPATRICK, ATIBA O
Art Unit: 2677
Tech Center: 2600 — Communications
Assignee: NEC Corporation
OA Round: 1 (Non-Final)
Grant Probability: 88% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 8m
Grant Probability with Interview: 93%

Examiner Intelligence

Career Allow Rate: 88% (above average; 775 granted / 881 resolved; +26.0% vs TC avg)
Interview Lift: +4.9% across resolved cases with an interview (minimal, about +5%)
Typical Timeline: 2y 8m average prosecution; 27 applications currently pending
Career History: 908 total applications across all art units

Statute-Specific Performance

§101: 12.3% (-27.7% vs TC avg)
§103: 34.9% (-5.1% vs TC avg)
§102: 22.8% (-17.2% vs TC avg)
§112: 20.1% (-19.9% vs TC avg)
Tech Center averages are estimates. Based on career data from 881 resolved cases.

Office Action

§102, §103, §112, §DP
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claim 1 is provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claim 5 of copending Application No. 18/410,187 (reference application). Although the claims at issue are not identical, they are not patentably distinct from each other because the limitations of instant application claim 1 are all present in copending application claim 5 (understanding that claim 5 also includes all limitations of base claim 1 and intervening claim 4). This is a provisional nonstatutory double patenting rejection because the patentably indistinct claims have not in fact been patented.

Claim 1 is provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claim 5 of copending Application No. 18/574,109 (reference application). Although the claims at issue are not identical, they are not patentably distinct from each other because the limitations of instant application claim 1 are all present in copending application claim 5 (understanding that claim 5 also includes all limitations of base claim 1 and intervening claim 4). This is a provisional nonstatutory double patenting rejection because the patentably indistinct claims have not in fact been patented.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 4-8 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Independent claim 1 recites, “generate plural inference results outputted from plural inference models by inputting the endoscopic image into the plural inference models regarding an attention region of the examination target in the endoscopic image, wherein the plural inference models are different from one another in lesion types of the model”. Thus, the “plural inference results” are derived from “plural inference models”. However, dependent claim 4 recites, “the plural inference results outputted from an inference model”. This references the same “plural inference results” recited in claim 1, but now recites that they are output from a singular “an inference model”, which is a discrepancy. One of ordinary skill in the art cannot know whether “the plural inference results” are output from a single inference model or plural inference models. Dependent claims 6, 7, and 8 have similar discrepancies. Dependent claim 5 does not remedy these deficiencies.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claim(s) 1-6 and 9-13 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by the machine translation of KR 20200080626 A (Lee).

As per claim 1, Lee teaches an image processing device comprising:

at least one memory configured to store instructions; and at least one processor configured to execute the instructions to (Lee: Fig. 1, mainly 140, 150; para 55; para 59: “The storage unit 140 may be configured to store the medical image received through the receiving unit 110, and to store an instruction of the device 100 for providing information on lesion diagnosis set through the input unit 120.”; para 60: “the processor 150 may use a prediction model configured to extract features from the medical image and classify the lesions based on the features. In particular, the processor 150 may be based on a plurality of prediction models that have been retrained based on the medical image for selective learning to provide more reliable information”; [Lee Fig. 1 image omitted]):

acquire an endoscopic image obtained by photographing an examination target (Lee: para 6: “an endoscopy image”; para 18: “an endoscope image of a suspected disease site”; para 56: “At this time, the receiver 110 may receive a medical image of an X-ray image, an ultrasound image, a computed tomography image, a magnetic resonance image, a positron tomography image, and an endoscope image of a suspected disease site”; para 67: “the medical image receiving unit 210 may be configured to receive a medical image for a target site. In this case, the medical image receiving unit 210 may receive a medical image of an X-ray image, an ultrasound image, a computed tomography image, a magnetic resonance image, a positron tomography image, and an endoscope image of a suspected disease site.”; para 68: “receiving unit 210 may be an ultrasonic probe or an endoscopic probe”; para 103: “endoscope image”);

generate plural inference results outputted from plural inference models by inputting the endoscopic image into the plural inference models regarding an attention region of the examination target in the endoscopic image, wherein the plural inference models are different from one another in lesion types of the model training data used for machine learning of the plural inference models (Lee: para 47: “a combination of two or more models selected from the above models can be used”; para 50: “the term “ensemble classifier” may mean a classifier having a configuration in which two or more classifiers selected from the above-described classifiers are combined in series or in parallel.”; para 51: “a plurality of ensemble classifiers can be used to classify lesions. In this case, the ensemble classifier may be configured to classify lesions for different features for each classifier.”: “Lesions for different features” is interpreted as different lesion types. Para 61: “the processor 150 may use a plurality of feature extraction models configured to extract features for a medical image and a plurality of classifiers configured to stochastically classify lesions based on the features”; Para 93: “the lesion classification unit 230 may be composed of a combination of two or more models selected from the models”; para 95: “According to another feature of the present invention, the lesion classifier 230 may be configured as an ensemble classifier in which two or more classifiers selected from the above-described classifiers are combined in series or in parallel”; para 96: “the lesion classifying unit 230 may be configured with a plurality of ensemble classifiers configured to classify lesions for different features for each classifier.”; Para 105: “each of the plurality of feature extraction models in the step of extracting features (S320) may be configured to extract at least one feature different from each other for each feature extraction model.”; Para 111: “in step S330 of classifying the lesions, each of the plurality of classifiers is configured to classify the lesions based on the combined features in which the features output by the plurality of feature extraction models are combined”; Para 115: “in step S330 of classifying lesions, lesions based on a plurality of features are determined by an ensemble classifier composed of a plurality of classifiers.”; Para 116: “in the step of classifying the lesion (S330), a plurality of combined features 326 obtained as a result of the step of extracting the feature (S320) are respectively provided to the plurality of ensemble classifiers 342. Can be entered. Then, a lesion probability 334 and an average lesion probability 335 are calculated by each of the plurality of ensemble classifiers 342, and classification lesions 336 for each ensemble classifier 342 may be determined”; Para 117: “predetermined plurality of lesions is determined by the plurality of ensemble classifiers.”; Para 118: “in step S330 of classifying the lesions, the probability of each of the plurality of lesions for the feature is calculated by the plurality of ensemble classifiers, and based on the probability of each of the plurality of lesions calculated by the plurality of ensemble classifiers. The average lesion probability for each lesion may be calculated. The target site can then be determined to have one of a plurality of lesions based on the average lesion probability.”; Figs. 3a, 3d, and 3e [images omitted]); and

integrate plural inference results (Lee: abstract: “a step of predicting a lesion for the individual by using a classifier configured to predict the lesion based on the medical image; a step of generating a suspicious lesion expression image indicating a degree of interest in predicting the lesion of the classifier during a process of predicting the lesion in the medical image; and a step of providing the predicted lesion and the suspicious lesion expression image”; para 48: “when a plurality of classifiers are applied in classifying lesions, an average of a result value output by each classifier is calculated, and the lesion may be finally determined based on the average value.”; para 52: “when a plurality of ensemble classifiers are applied in classifying lesions, an average of a result value output by each ensemble classifier is calculated, and a lesion may be finally determined based on the average value”; para 94: “when the lesion classifying unit 230 is composed of a plurality of classifiers, an average for a result value output by each classifier is calculated, and the lesion can be finally determined based on the average value”; para 97: “when the lesion classifying unit 230 is composed of a plurality of ensemble classifiers, an average for a result value output by each ensemble classifier is calculated, and the lesion can be finally determined based on the average value”; para 112: “in the step of classifying the lesion (S330), one combined feature 326 obtained as a result of the step of extracting the feature (S320) is input to each of the plurality of classifiers 332 Can be. Thereafter, a probability that the lesion is a predetermined lesion, that is, a lesion probability 334, may be calculated for the combined feature 326 by each of the plurality of classifiers 332. Then, the average lesion probability 335 for the plurality of lesion probabilities 334 calculated by each of the plurality of classifiers 332 is calculated”; para 114: “in the step of classifying the lesion (S330), the probability of each of the plurality of lesions for the feature is calculated by the plurality of classifiers, and based on the probability of each of the plurality of lesions calculated by the plurality of classifiers. The average lesion probability for each lesion can be calculated. The target site can then be determined to have one of a plurality of lesions based on the average lesion probability.”; para 116: “Then, the average classification lesion probability 337 for the classification lesion 336 for each ensemble classifier 342 may be calculated. Finally, if the average classification lesion probability 337 for the lesion is greater than or equal to a predetermined level, it may be finally classified as having a predetermined lesion for a target site. That is, information on the final lesion 338 determined as a result of classifying the lesion (S330) may be provided. At this time, the probability for the final lesion 338 may also be provided as information regarding lesion diagnosis.”; Para 118 (as referenced above); [Lee figure images omitted]).
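The integration Lee is cited for comes down to averaging per-model lesion probabilities and thresholding the average. A minimal Python sketch of that scheme follows; the model callables, the function name, and the 0.5 decision threshold are illustrative assumptions rather than details taken from Lee or from the claims.

from typing import Callable, Sequence

def integrate_inferences(
    image,                       # a preprocessed endoscopic image (e.g., an array)
    models: Sequence[Callable],  # each model maps an image to a lesion probability in [0, 1]
    threshold: float = 0.5,      # assumed decision level, not a value taken from Lee
) -> tuple[float, bool]:
    # Generate plural inference results by running every model on the same image.
    probabilities = [model(image) for model in models]
    # Integrate the plural results by averaging (cf. Lee paras 48, 52, 94, 97).
    average = sum(probabilities) / len(probabilities)
    # Report a lesion when the averaged probability clears the threshold.
    return average, average >= threshold

With two models returning 0.7 and 0.9, for example, the integrated result is 0.8 and a lesion is reported.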
As per claim 2, Lee teaches the image processing device according to claim 1, wherein the at least one processor is configured to further execute the instructions to convert the endoscopic image into plural images by data augmentation, and wherein the at least one processor is configured to execute the instructions to generate an inference result regarding the attention region from each of the plural images (Lee: Para 63: “the processor 150 uses at least two feature extraction models selected from AlexNet, OverFeat, VGG, VGG-verydeep, ResNet, GoogleNet, and Inception, and supports Vector Machine (SVM), Random Forests (RF), linear (LDA) Among discriminant analysis (QDA), quadratic discriminant analysis (QDA), decision tree, XG Boost (extreme gradient boosting), logistic regression, logistic regression, NN (nearest neighbor) and GMM (Gaussian mixture model)”: Of the listed models, AlexNet, OverFeat, VGG, VGG-verydeep, ResNet, GoogleNet, and Inception are all CNNs that generate feature maps after each convolutional layer. Feature maps are augmented images according to the convolution kernels. Para 71: “According to a feature of the present invention, the feature extraction unit 220 may be based on a feature extraction model configured to extract features for a medical image. In this case, the feature extraction model may be a model modified to extract features before final prediction of the lesion”; Para 104: “each of the plurality of feature extraction models may extract the feature with respect to the medical image received as a result of the step of receiving a medical image (S310)”; Para 105: “each of the plurality of feature extraction models in the step of extracting features (S320) may be configured to extract at least one feature different from each other for each feature extraction model.”; Fig. 3b [image omitted]).

As per claim 3, Lee teaches the image processing device according to claim 2, wherein the at least one processor is configured to execute the instructions to acquire the inference result outputted from an inference model by inputting each of the plural images into the inference model, and wherein the inference model is a model obtained through machine learning of a relation between an image to be inputted to the inference model and the attention region in the image (Lee: See arguments and citations offered in rejecting claim 2 above: The attention region is the image region considered in classification for a lesion or not a lesion).

As per claim 4, Lee teaches the image processing device according to claim 1, wherein the at least one processor is configured to execute the instructions to acquire the plural inference results outputted from an inference model by inputting the endoscopic image into the inference model by plural times while changing setting conditions of the inference model, and wherein the inference model is a model obtained through machine learning of a relation between an image to be inputted to the model and the attention region in the image (Lee: See arguments and citations offered in rejecting claim 1 above; Para 110: “According to a feature of the present invention, in the step of classifying the lesion (S330), the lesion probability for the feature is calculated by the plurality of classifiers, and the lesion for the target site is classified based on the probability of the lesion”; Para 112: “Finally, if the average lesion probability for a lesion is 335 or higher, a predetermined lesion may be classified as a target site. That is, information on the classification lesion 336 may be provided as a result of classifying the lesion (S330). At this time, the probability for the classified lesion 336 may be provided together as information regarding lesion diagnosis”; Para 114: “More specifically, in the step of classifying the lesion (S330), the probability of each of the plurality of lesions for the feature is calculated by the plurality of classifiers, and based on the probability of each of the plurality of lesions calculated by the plurality of classifiers. The average lesion probability for each lesion can be calculated. The target site can then be determined to have one of a plurality of lesions based on the average lesion probability”; The changing setting conditions are the different decision hyperplanes and thresholds for the different models).

As per claim 5, Lee teaches the image processing device according to claim 4, wherein the setting condition is a threshold parameter for determining whether or not the attention region is present (Lee: See arguments and citations offered in rejecting claim 4 above).

As per claim 6, Lee teaches the image processing device according to claim 5, wherein the at least one processor is configured to execute the instructions to at least acquire the inference results obtained from the inference model when the threshold parameter in which a recall is prioritized and the threshold parameter in which a precision is prioritized are respectively set to the inference model (Lee: See arguments and citations offered in rejecting claim 5 above; Para 98: “For example, the device 100, 200 for providing information on lesion diagnosis of the present invention is configured as a single predictive model according to a configuration feature using a plurality of feature extraction models and a plurality of classifiers, so that lesions without feature extraction It is possible to classify lesions with higher accuracy, precision and specificity than devices configured to classify.”; Para 132: “Particularly, the feature is extracted from the feature adjustment models of fine-tuned AlexNet, OverFeat, and VGG, and the result of classifying the lesions (With fine-tuning) shows that accuracy, precision, and sensitivity are greatly improved”; Para 156: “More specifically, features (or combination features) extracted from at least one selected from the six models were applied to an ensemble classifier combined with SVM and RF, from which accuracy, precision, and specificity for lesion classification results Was evaluated”; Para 160: “Further, in the case of [Vv][R], which is a combination of two feature extraction models of VGG-verydeep ([Vv]) with output layer of fc2 and ResNet ([R]) with one output layer, and ensemble classifier applied, Sensitivity is 92%, which is significantly higher than when using a single feature extraction model and a single classifier.”; recall = sensitivity; precision = sensitivity. Different ones of the individual classifiers of the ensemble classifiers have different specificity and sensitivity.).

As per claim 9, Lee teaches the image processing device according to claim 1, wherein the at least one processor is configured to further execute the instructions to detect the attention region based on an image into which the plural inference results are integrated (Lee: See arguments and citations offered in rejecting claim 1 above: The attention region is detected in the image that exhibited features resulting in the inference results.).
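Claims 4-6 concern re-running an inference model under changed setting conditions, specifically threshold parameters that trade recall against precision. The short Python sketch below illustrates that idea; the 0.2 and 0.8 thresholds and the function name are illustrative assumptions, not values from the application or from Lee.

def detect_with_two_settings(lesion_probability: float) -> dict[str, bool]:
    # Recall-prioritized setting: a low threshold flags anything remotely
    # suspicious, minimizing missed lesions at the cost of false positives.
    recall_first = lesion_probability >= 0.2
    # Precision-prioritized setting: a high threshold flags only confident
    # detections, minimizing false positives at the cost of missed lesions.
    precision_first = lesion_probability >= 0.8
    return {"recall_prioritized": recall_first, "precision_prioritized": precision_first}

print(detect_with_two_settings(0.6))  # {'recall_prioritized': True, 'precision_prioritized': False}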
As per claim 10, Lee teaches the image processing device according to claim 9, wherein the at least one processor is configured to execute the instructions to display or output, by audio, information regarding a result of the detection (Lee: See arguments and citations offered in rejecting claim 1 above; Para 58: “the output unit 130 may visually display the medical image received by the reception unit 110. Furthermore, the output unit 130 is configured to output features extracted by the processor 150 to be described later, or to visually display information about lesions classified by the classifier, to provide information on diagnosis of lesions to medical personnel”).

As per claim 11, Lee teaches the image processing device according to claim 10, wherein the at least one processor is configured to execute the instructions to output information regarding the result of the detection (the following is recited as intended use and is not required) to assist examiner's decision making (Lee: See arguments and citations offered in rejecting claim 10 above; Para 58: “the output unit 130 may visually display the medical image received by the reception unit 110. Furthermore, the output unit 130 is configured to output features extracted by the processor 150 to be described later, or to visually display information about lesions classified by the classifier, to provide information on diagnosis of lesions to medical personnel”).

As per claim(s) 12, arguments made in rejecting claim(s) 1 are analogous.

As per claim(s) 13, arguments made in rejecting claim(s) 1 are analogous. Lee also teaches a non-transitory computer readable storage medium storing a program executed by a computer, the program causing the computer (Lee: Fig. 1 (referenced above), mainly 140, 150; para 55; para 59: “The storage unit 140 may be configured to store the medical image received through the receiving unit 110, and to store an instruction of the device 100 for providing information on lesion diagnosis set through the input unit 120.”; para 60: “the processor 150 may use a prediction model configured to extract features from the medical image and classify the lesions based on the features. In particular, the processor 150 may be based on a plurality of prediction models that have been retrained based on the medical image for selective learning to provide more reliable information”).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 7 and 8 are rejected under 35 U.S.C. 103 as being unpatentable over Lee as applied to claim 3 above, and further in view of US 20220301153 A1 (Chen).

As per claim 7, Lee teaches the image processing device according to claim 3, wherein the at least one processor is configured to execute the instructions to integrate the plural inference results (Lee: See arguments and citations offered in rejecting claim 3 above). Lee does not teach while weighting each of the plural inference results based on a degree of similarity between each of the plural images and a training image used for machine learning of the inference model.

Chen teaches while weighting each of the plural inference results based on a degree of similarity between each of the plural images and a training image used for machine learning of the inference model (Chen: Para 7: “The actions further include providing a plurality of ML models, each ML model including an input layer and a pooling layer; generating, by the plurality of ML models, a plurality of respective outputs; and generating a final output by calculating an ensemble of the plurality of the respective outputs. The final output is one of a weighted average of the plurality of the respective outputs and a root mean-squared value of the plurality of the respective outputs.”; Para 50: “the two or more machine learning models can be trained using the same training examples (training data). For example, model 1 and model 2 can use the same modified Inception V4 architecture and can be trained using the same training examples (training data). Note that even though model 1 and model 2 have the same architecture and are trained using the same training examples, the parameters of the two models are likely to be different”; Para 51: “the final prediction result can be the mean value, the maximum value, the medium value, the weighted average value, the root mean squared value of the multiple prediction results, or other representative value of the multiple prediction results. For example, when model 1 (210) generates a prediction score of 2.0 and model 2 (220) generates a prediction score of 3.0, the ensemble module 204 can generate a final DR prediction 206 with an average score of 2.5.”; Para 83: “The system generates a final output by calculating an ensemble of the plurality of the respective outputs (512). In some implementations, the final output can be a weighted average of the plurality of the respective outputs, or the final output can be a root mean-squared value of the plurality of the respective outputs… By using the ensemble of the multiple ML models, the system can generate more accurate prediction results (e.g., higher sensitivity and/or higher specificity for detecting DR in fundus images)”; Para 84: “The system can train each ML model using a plurality of training examples. The system can receive a plurality of training examples, each training example includes a medical image and a label indicating one or more medical conditions in the medical image. For example, each training example can include a fundus image, a label indicating whether the fundus image has a DR condition depicted therein.”; Para 85: “The fundus images can be labeled by ophthalmologists.”; Para 86: “The system can use the training examples to train the ML model. The system can generate, for each medical image in the training example, a prediction result (e.g., a likelihood of DR being depicted in a fundus image). The system can compare the prediction results to the labels in the training examples. The system can calculate a loss which can measure the difference between the prediction results and the labels in the training examples.”; Para 91: “the system can train multiple ML models using the same training examples. The system can further determine, using the training examples, an ensemble method to generate a final output from respective outputs generated from the multiple ML models. For example, the system can determine the weights of a weighted average ensemble using the training examples”; The weighting is according to the similarity to the label of the training image, as a part of the supervised training process; [Chen figure images omitted]).

Thus, it would have been obvious for one of ordinary skill in the art, prior to filing, to implement the teachings of Chen into Lee: both Lee and Chen share a field of endeavor, ensemble learning for detecting disease or lesions in medical images, in which the results of the individual models making up the ensemble are averaged, and Chen additionally teaches that the average may be weighted according to the training examples. One of ordinary skill in the art would recognize the advantage of prioritizing component models with higher performance, leading to improved overall accuracy and better generalization. Furthermore, one of ordinary skill in the art could have combined the elements as claimed by known methods and, in combination, each component functions the same as it does separately. One of ordinary skill in the art would have recognized that the results of the combination would be predictable.

As per claim 8, Lee teaches the image processing device according to claim 3, wherein the at least one processor is configured to execute the instructions to integrate the plural inference results (Lee: See arguments and citations offered in rejecting claim 3 above). Lee does not teach while weighting each of the plural inference results based on a degree of similarity between each of the plural inference results and correct answer data used for machine learning of the inference model. Chen teaches while weighting each of the plural inference results based on a degree of similarity between each of the plural inference results and correct answer data used for machine learning of the inference model (Chen: See arguments and citations offered in rejecting claim 7 above: The label of the training image data is the correct answer).

Thus, it would have been obvious for one of ordinary skill in the art, prior to filing, to implement the teachings of Chen into Lee: both Lee and Chen share a field of endeavor, ensemble learning for detecting disease or lesions in medical images, in which the results of the individual models making up the ensemble are averaged, and Chen additionally teaches that the average may be weighted according to the training examples. One of ordinary skill in the art would recognize the advantage of prioritizing component models with higher performance, leading to improved overall accuracy and better generalization. Furthermore, one of ordinary skill in the art could have combined the elements as claimed by known methods and, in combination, each component functions the same as it does separately. One of ordinary skill in the art would have recognized that the results of the combination would be predictable.
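The weighted-average ensemble Chen is cited for (paras 7, 51, 91) can be sketched in a few lines of Python. The weights below are placeholders; Chen describes determining them from the training examples.

def weighted_ensemble(outputs: list[float], weights: list[float]) -> float:
    # Each model's inference result contributes in proportion to its weight.
    return sum(o * w for o, w in zip(outputs, weights)) / sum(weights)

# With equal weights this reduces to the plain average in Chen para 51:
# model 1 scores 2.0, model 2 scores 3.0, and the ensemble reports 2.5.
print(weighted_ensemble([2.0, 3.0], [1.0, 1.0]))  # 2.5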
Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Atiba Fitzpatrick, whose telephone number is (571) 270-5255. The examiner can normally be reached M-F, 10:00 am - 6:00 pm. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Andrew Bee, can be reached at (571) 270-5183. The fax phone number for Atiba Fitzpatrick is (571) 270-6255.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

Atiba Fitzpatrick
/ATIBA O FITZPATRICK/
Primary Examiner, Art Unit 2677

Prosecution Timeline

Jan 11, 2024
Application Filed
Nov 26, 2025
Non-Final Rejection — §102, §103, §112, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602854
SYSTEM AND METHOD FOR MEDICAL IMAGING
2y 5m to grant; granted Apr 14, 2026
Patent 12586195
OPHTHALMIC INFORMATION PROCESSING APPARATUS, OPHTHALMIC APPARATUS, OPHTHALMIC INFORMATION PROCESSING METHOD, AND RECORDING MEDIUM
2y 5m to grant; granted Mar 24, 2026
Patent 12579649
RADIATION IMAGE PROCESSING APPARATUS AND OPERATION METHOD THEREOF
2y 5m to grant; granted Mar 17, 2026
Patent 12555237
CLOSEUP IMAGE LINKING
2y 5m to grant; granted Feb 17, 2026
Patent 12548221
SYSTEMS AND METHODS FOR AUTOMATIC QUALITY CONTROL OF IMAGE RECONSTRUCTION
2y 5m to grant; granted Feb 10, 2026
Study what changed to get past this examiner, based on the 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 88%
With Interview: 93% (+4.9%)
Median Time to Grant: 2y 8m
PTA Risk: Low
Based on 881 resolved cases by this examiner. Grant probability is derived from the career allow rate.
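As a sanity check, the headline figures follow from the career data above. A minimal Python sketch, assuming the with-interview number is simply the baseline allow rate plus the observed lift (the tool's exact model is not disclosed):

granted, resolved = 775, 881
allow_rate = granted / resolved               # 0.8797..., reported as 88%
interview_lift = 0.049                        # the +4.9% lift observed with interviews
with_interview = allow_rate + interview_lift  # 0.9287..., reported as 93%
print(f"{allow_rate:.0%} baseline, {with_interview:.0%} with interview")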

Free tier: 3 strategy analyses per month