Prosecution Insights
Last updated: April 19, 2026
Application No. 18/008,407

NEURAL NETWORK LEARNING METHOD USING AUTO ENCODER AND MULTIPLE INSTANCE LEARNING AND COMPUTING SYSTEM PERFORMING THE SAME

Non-Final OA: §101, §103, §112
Filed: Dec 05, 2022
Examiner: KIM, HARRISON CHAN YOUNG
Art Unit: 2145
Tech Center: 2100 (Computer Architecture & Software)
Assignee: Deep Bio Inc.
OA Round: 1 (Non-Final)
Grant Probability: 50% (Moderate)
Expected OA Rounds: 1-2
Time to Grant: 3y 3m
With Interview: 83%

Examiner Intelligence

Career Allow Rate: 50% (grants 50% of resolved cases; 3 granted / 6 resolved; -5.0% vs TC avg)
Interview Lift: +33.3% (strong lift among resolved cases with interview)
Avg Prosecution: 3y 3m (typical timeline)
Career History: 39 total applications across all art units; 33 currently pending

Statute-Specific Performance

§101: 37.9% (-2.1% vs TC avg)
§103: 50.5% (+10.5% vs TC avg)
§102: 4.9% (-35.1% vs TC avg)
§112: 5.8% (-34.2% vs TC avg)

TC averages are estimates. Based on career data from 6 resolved cases.

Office Action

§101 §103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. The preliminary amendments filed 12/05/2022 and 12/14/2022 have been entered. This action is made non-final. Claims 1-17 are pending. Claims 1, 6, 13 and 16 are independent claims.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitations use a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier.
Such claim limitations are: a storage module configured to store the neural network… a patch unit determination module configured to input each of a plurality of diagnosis patches… and obtain a determination result… an output module configured to output a heat map in claim 9; a storage module configured to store an auto-encoder… an extraction module configured to… extract an instance… a training module of training the neural network in claims 13 and 14; the extraction module is configured to input data in claim 15; and a storage module which stores… an extraction module configured to… extract… a training module configured to train in claims 16 and 17.

Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. Each module will be interpreted as a functional and structural combination of hardware and software as described in the specification (paragraph 60).

If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim 15 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. Claim 15 recites the limitation "the extraction module" in line 7. There is insufficient antecedent basis for this limitation in the claim.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-9 and 12-17 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Regarding claim 1:

Step 1: This part of the eligibility analysis evaluates whether the claim falls within any statutory category. See MPEP 2106.03. Claim 1 is directed to a process (Step 1: YES).

Step 2A prong 1: Does the claim recite a judicial exception?
Claim 1 recites: A neural network training method… wherein the neural network training method comprises: an extracting step… and a training step… wherein the extracting step includes:… calculating probabilities for each data instance included in the data bag (calculating probabilities for data instances is a mathematical calculation or mental process, i.e., estimating the likelihood of the image patch being of one class versus another); and a step of determining part of each data instance included in the data bag as an instance for training based on probabilities for each data instance included in the data bag (determining a data instance as an instance for training based on probabilities is a mental process, i.e., selecting instances with high probabilities) and a determination result… with respect to at least part of each data instance included in the data bag (considering a determination result in the training instance determination step is a mental process). These steps can be performed mentally or are mathematical calculations (Step 2A prong 1: YES).

Step 2A prong 2: Does the claim recite additional elements? Do those additional elements, considered individually and in combination, integrate the judicial exception into a practical application?
Claim 1 recites: performed in a computing system which includes an auto-encoder for determining whether an inputted data instance is in a first state or a second state and a neural network which outputs probabilities where the inputted data instance is in the first state or the second state, … of, for each of a plurality of data bags labeled with any one of the first state or the second state, extracting an instance for training which is part of data instances included in the data bag; … of training the neural network based on an instance for training corresponding to each of the plurality of data bags … a step of inputting each data instance included in the data bag to the neural network in training and… of the auto-encoder. Performing the method in a computing system with an auto-encoder and neural network is recited at a high level of generality and an attempt to use the neural network models without placing any limits on how they operate. Extracting an instance for training and inputting data into a network are extra-solution activity of data gathering that does not add a meaningful limitation to the classification method. Training a neural network based on an instance for training is recited at a high level of generality and an attempt to use the neural network model without placing any limits on how it operates. Obtaining a determination result from an autoencoder is extra-solution activity of data gathering that also does not add a meaningful limitation to the classification method (Step 2A prong 2: NO).

Step 2B: These elements are recited at such a high level of generality that they fail to integrate the abstract idea into a practical application, since they provide nothing more than mere instructions to implement an abstract idea on a generic computer (MPEP 2106.05(f)) or only amount to data gathering or outputting without significantly more (MPEP 2106.05(g)).
These limitations, taken either alone or in combination, fail to provide an inventive concept (Step 2B: NO). Thus, the claim is not patent eligible.

Regarding claims 2-5, they recite limitations which further narrow the abstract idea by specifying more details of the mental and mathematical process that occurs (Claim 2, determining the state of the data instance based on a difference between autoencoder input and output is a mental process, i.e., judging or evaluating the difference between input and output images; Claim 3, inputting data instances in a particular order is still insignificant extra-solution activity, and determining “top part” of instances is a mental process; Claim 4, repeating the combination of abstract ideas and additional elements is still directed to the abstract ideas; Claim 5, using images as data bags and image patches as data instances in the method is an additional element(s) specifying a field of use without significantly more).

Regarding claim 6:

Step 1: This part of the eligibility analysis evaluates whether the claim falls within any statutory category. See MPEP 2106.03. Claim 6 is directed to a process (Step 1: YES).

Step 2A prong 1: Does the claim recite a judicial exception?
Claim 6 recites: A neural network training method… wherein the neural network training method comprises: an extracting step… and a training step…, wherein the extracting step includes:… and calculating probabilities for each patch forming the image for training (calculating probabilities for image patches is a mathematical calculation or mental process); and a step of determining part of each patch forming the image for training as a patch for training based on probabilities for each patch forming the image for training (determining an image patch as a patch for training based on probabilities is a mental process, i.e., selecting patches with high probabilities of being relevant) and a determination result of the auto-encoder with respect to at least part of each patch forming the image for training (considering a determination result in the training patch determination step is a mental process). These steps can be performed mentally or are mathematical calculations (Step 2A prong 1: YES).

Step 2A prong 2: Does the claim recite additional elements? Do those additional elements, considered individually and in combination, integrate the judicial exception into a practical application?

Claim 6 recites: performed in a computing system which includes an auto-encoder for determining whether an inputted patch is in a first state or a second state-here, the patch is one of those into which an image is segmented in a certain size-; and a neural network which outputs probabilities where the inputted patch is in the first state or the second state… of, for each of a plurality of images for training labeled with any one of the first state or the second state, extracting a patch for training which is part of patches forming the image for training… of training the neural network based on the patch for training corresponding to each of the plurality of images for training… a step of inputting each patch forming the image for training to the neural network in training.
Applying the method to images and image patches is an additional element(s) specifying a field of use without significantly more. Performing the method in a computing system with an auto-encoder and neural network is recited at a high level of generality and an attempt to use the neural network models without placing any limits on how they operate. Extracting an image patch for training and inputting patches into a network are extra-solution activity of data gathering/inputting that does not add a meaningful limitation to the classification method. Training a neural network based on a patch for training is recited at a high level of generality and an attempt to use the neural network model without placing any limits on how it operates. Obtaining a determination result from an autoencoder is extra-solution activity of data gathering that also does not add a meaningful limitation to the classification method (Step 2A prong 2: NO).

Step 2B: These elements are recited at such a high level of generality that they fail to integrate the abstract idea into a practical application, since they provide nothing more than mere instructions to implement an abstract idea on a generic computer (MPEP 2106.05(f)), amount to data gathering or outputting without significantly more (MPEP 2106.05(g)), or limit the field of use without significantly more (MPEP 2106.05(h)). These limitations, taken either alone or in combination, fail to provide an inventive concept (Step 2B: NO). Thus, the claim is not patent eligible.

Regarding claim 7, it recites similar limitations to claim 3 and is rejected on the same grounds – see above.

Regarding claim 8, it recites limitations which further narrow the abstract idea by specifying more details of the mental and mathematical process that occurs (using the training method on lesion or non-lesion images is limiting the field of use without significantly more).
Regarding claim 9:

Step 1: This part of the eligibility analysis evaluates whether the claim falls within any statutory category. See MPEP 2106.03. Claim 9 is directed to an apparatus (Step 1: YES).

Step 2A prong 1: Does the claim recite a judicial exception?

Claim 9 recites: A determination system using a neural network, comprising:… obtain a determination result corresponding to each of the plurality of diagnosis patches (obtaining a determination result for diagnosis patches is a mental process – i.e., inspecting the patch for irregularities). These steps can be performed mentally or are mathematical calculations (Step 2A prong 1: YES).

Step 2A prong 2: Does the claim recite additional elements? Do those additional elements, considered individually and in combination, integrate the judicial exception into a practical application?

Claim 9 recites: a storage module configured to store the neural network trained by the neural network training method described in claim 6; a patch unit determination module configured to input each of a plurality of diagnosis patches into which a given determination target image is segmented to the neural network and… and an output module configured to output a heat map of a determination target image based on a determination result of each of the plurality of diagnosis patches obtained by the patch unit diagnosis module. Storing data is well understood, routine and conventional activity that does not add a meaningful limitation to the determination system (see MPEP 2106.05(d)(II)). Inputting and outputting data are insignificant extra-solution activity of data gathering/outputting that does not add a meaningful limitation to the determination system.
Step 2B: These elements are recited at such a high level of generality that they fail to integrate the abstract idea into a practical application, since they only amount to data gathering or outputting without significantly more (MPEP 2106.05(g)), or well-understood, routine, and conventional functions claimed in a generic manner (MPEP 2106.05(d)). These limitations, taken either alone or in combination, fail to provide an inventive concept (Step 2B: NO). Thus, the claim is not patent eligible.

Regarding claim 12, it is an apparatus implementing the method of claim 1 and is rejected on the same grounds – see above. Regarding claims 13 and 16, they recite apparatuses implementing the methods of claims 1 and 6 respectively and are rejected on the same grounds – see above. Regarding claim 14, it recites similar limitations to claim 2 and is rejected on the same grounds – see above. Regarding claim 15, it recites similar limitations to claim 3 and is rejected on the same grounds – see above. Regarding claim 17, it recites similar limitations to claim 7 and is rejected on the same grounds – see above.

Claim 10 is rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claim does not fall within at least one of the four categories of patent eligible subject matter because the broadest reasonable interpretation of the “computer program” encompasses software per se. The specification discloses that “Examples of the program instruction include not only machine language code made by a compiler but also high-level language code which may be run by a device” (specification, paragraph 122). A claim whose BRI covers both statutory and non-statutory embodiments embraces subject matter that is not eligible for patent protection and therefore is directed to non-statutory subject matter. See MPEP 2106.03(II).

Claim 11 is rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter.
The claim does not fall within at least one of the four categories of patent eligible subject matter because the broadest reasonable interpretation of the “computer readable recording medium” encompasses signals per se. The specification discloses that “Examples of the computer-readable recording medium include magnetic media such as a hard disk, a floppy disk, and a magnetic tape, optical media such as CD-ROM and DVD, magneto-optical media such as a floptical disk, a hardware device which is specially configured to store and perform program instructions such as ROM, RAM, and a flash memory” (specification, paragraph 121). A claim whose BRI covers both statutory and non-statutory embodiments embraces subject matter that is not eligible for patent protection and therefore is directed to non-statutory subject matter. See MPEP 2106.03(II).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1, 2, 4-6, 8-14, and 16 is/are rejected under 35 U.S.C. 103 as being unpatentable over Li et al. (“Discriminative Pattern Mining for Breast Cancer Histopathology Image Classification via Fully Convolutional Autoencoder”, 2019), herein Li, in view of AlRegib et al. (US 20220222817 A1), herein AlRegib.
Regarding claim 1, Li teaches: A neural network training method performed in a computing system which includes an auto-encoder for determining whether an inputted data instance (pg. 4, III. Methodology, A. System Overview, Specifically, for each training image… we extract… overlapping image patches) is in a first state or a second state (pg. 1, Abstract, a fully convolutional autoencoder is used to learn the dominant structural patterns among normal image patches) and a neural network which outputs probabilities where the inputted data instance is in the first state or the second state (pg. 1, Abstract, the proposed method mines contrast patterns between normal and malignant images in a weak-supervised manner and generate a probability map of abnormalities), wherein the neural network training method comprises: an extracting step of, for each of a plurality of data bags labeled with any one of the first state or the second state, extracting an instance for training which is part of data instances included in the data bag (pg. 4, III. Methodology, A. System Overview, Specifically, for each training image… we extract… overlapping image patches… Patches from normal image… are assigned label yi,j = −1. However, since patches from malignant image… may contain normal tissues only, patch labels yi,j are unknown with a positive constraint that at least one patch contains cancerous cells); and a training step of training the neural network based on an instance for training corresponding to each of the plurality of data bags (pg. 8, IV, System Implementation and Training Details, In this way, the AE network almost never sees two exactly identical training patches, because at each epoch training patches are randomly transformed), wherein the extracting step includes: a step of inputting each data instance included in the data bag to the neural network in training and calculating probabilities for each data instance included in the data bag (pg. 4, III. Methodology, The learnt mapping function F, achieved by the trained autoencoder and the one-class SVM with the 1-layer NN, is operated on each patch, generating a patch label and a value between [0, 1] indicating the probability that the patch contains malignant tumor); and a step of determining part of each data instance included in the data bag as an instance for training based on… a determination result of the auto-encoder with respect to at least part of each data instance included in the data bag (pg. 6, Pattern Learning with True-Normal Patches, In other words, the discriminative and contrast patterns in this problem are embedded in autoencoder’s residues Δx, which is quantified by the absolute value of the difference between AE’s input and output – the residues are used for training as described on pg. 5, Fig. 2, caption, Based on AE’s reconstruction residues, we propose to use one-class SVM to learn the regions taken by true-normal patches).

Li fails to teach: a step of determining part of each data instance included in the data bag as an instance for training based on probabilities for each data instance included in the data bag.

However, in the same field of endeavor, AlRegib teaches: a step of determining part of each data instance included in the data bag as an instance for training based on probabilities for each data instance included in the data bag (paragraph 32, In the training, all the convolutional layers were frozen, and the fully connected layer was trained with image patches labeled as pupil and no pupil – and – paragraph 33, After obtaining a class for each patch, the pupil patches were sorted based on their classification confidence and the median location of the top-5 pupil patches was computed with the highest confidence – i.e., the training uses the patches with the highest probability of one classification).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use data instance probabilities to determine instances for training as disclosed by AlRegib in the method disclosed by Li in order to more efficiently train a network (paragraph 31, One can quickly transfer learned features to a new task using a smaller number of training images).

Regarding claim 2, Li teaches: The neural network training method of claim 1, wherein the computing system is configured to determine whether the data instance inputted to the auto-encoder is in the first state or the second state, based on a difference between the data instance inputted to the auto-encoder and output data outputted from the auto-encoder (pg. 6, Pattern Learning with True-Normal Patches, In other words, the discriminative and contrast patterns in this problem are embedded in autoencoder’s residues Δx, which is quantified by the absolute value of the difference between AE’s input and output).

Regarding claim 4, Li teaches: The neural network training method of claim 1, further comprising a step of repetitively performing 1 epoch or more of the extracting step and the training step (pg. 7-8, IV, System Implementation and Training Details, B. Training Data Augmentation, At each learning epoch, transformations with randomly selected parameters among the augmentation operations are generated and applied to original training patches. Then the augmented patches are feed to the network).

Regarding claim 5, Li teaches: The neural network training method of claim 1, wherein each of the plurality of data bags is a whole image, and a data instance included in each of the plurality of data bags is each image patch into which the whole image corresponding to the data bag is segmented in a certain size (pg. 2, I. Introduction, Note that though we do not know whether patches from images labeled as malignant really contain malignant cells/structures, patches from normal images do not contain cancerous cells certainly).

Regarding claim 6, Li teaches: A neural network training method performed in a computing system which includes an auto-encoder for determining whether an inputted patch (pg. 4, III. Methodology, A. System Overview, Specifically, for each training image… we extract… overlapping image patches) is in a first state or a second state-here, the patch is one of those into which an image is segmented in a certain size- (pg. 1, Abstract, a fully convolutional autoencoder is used to learn the dominant structural patterns among normal image patches); and a neural network which outputs probabilities where the inputted patch is in the first state or the second state (pg. 1, Abstract, the proposed method mines contrast patterns between normal and malignant images in a weak-supervised manner and generate a probability map of abnormalities), wherein the neural network training method comprises: an extracting step of, for each of a plurality of images for training labeled with any one of the first state or the second state, extracting a patch for training which is part of patches forming the image for training (pg. 4, III. Methodology, A. System Overview, Specifically, for each training image… we extract… overlapping image patches… Patches from normal image… are assigned label yi,j = −1. However, since patches from malignant image… may contain normal tissues only, patch labels yi,j are unknown with a positive constraint that at least one patch contains cancerous cells); and a training step of training the neural network based on the patch for training corresponding to each of the plurality of images for training (pg. 8, IV, System Implementation and Training Details, In this way, the AE network almost never sees two exactly identical training patches, because at each epoch training patches are randomly transformed), wherein the extracting step includes: a step of inputting each patch forming the image for training to the neural network in training, and calculating probabilities for each patch forming the image for training (pg. 4, III. Methodology, The learnt mapping function F, achieved by the trained autoencoder and the one-class SVM with the 1-layer NN, is operated on each patch, generating a patch label and a value between [0, 1] indicating the probability that the patch contains malignant tumor); and a step of determining part of each patch forming the image for training as a patch for training based on… a determination result of the auto-encoder with respect to at least part of each patch forming the image for training (pg. 6, Pattern Learning with True-Normal Patches, In other words, the discriminative and contrast patterns in this problem are embedded in autoencoder’s residues Δx, which is quantified by the absolute value of the difference between AE’s input and output – the residues are used for training as described on pg. 5, Fig. 2, caption, Based on AE’s reconstruction residues, we propose to use one-class SVM to learn the regions taken by true-normal patches).

Li fails to teach: determining part of each patch forming the image for training as a patch for training based on probabilities for each patch forming the image for training.
However, in the same field of endeavor, AlRegib teaches: determining part of each patch forming the image for training as a patch for training based on probabilities for each patch forming the image for training (paragraph 32, In the training, all the convolutional layers were frozen, and the fully connected layer was trained with image patches labeled as pupil and no pupil – and – paragraph 33, After obtaining a class for each patch, the pupil patches were sorted based on their classification confidence and the median location of the top-5 pupil patches was computed with the highest confidence. In recognition tasks, top-5 accuracy is a commonly used).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use data instance probabilities to determine instances for training as disclosed by AlRegib in the method disclosed by Li in order to more efficiently train a network (paragraph 31, One can quickly transfer learned features to a new task using a smaller number of training images).

Regarding claim 8, Li further teaches: The neural network training method of claim 6, wherein each of images for training is any one of an image including a lesion due to a certain disease or an image not including the lesion, the first state is a normal state where the lesion is not present, and the second state is an abnormal state where the lesion is present (pg. 7, III. Methodology, However, in clinical practice, any abnormalities, suspect lesions in particular, should trigger an alarm. Based on this belief, instead of using majority voting, we propose a much stricter rule to combine patch diagnosis results, that is, an image is labeled as benign when all patches are classified as normal).
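Editor's note: the "much stricter rule" Li is quoted for in the claim-8 mapping above fits in a few lines of code. This is an illustrative sketch only; the function names and string labels are invented here, not taken from Li or the application.

```python
def classify_image_strict(patch_labels):
    """Li's strict combining rule, as quoted in the claim-8 mapping:
    an image is labeled benign only when ALL of its patches are
    classified as normal; any abnormal patch flags the whole image."""
    return "benign" if all(label == "normal" for label in patch_labels) else "malignant"


def classify_image_majority(patch_labels):
    """Majority-vote baseline, shown only for contrast."""
    abnormal = sum(1 for label in patch_labels if label != "normal")
    return "malignant" if abnormal > len(patch_labels) / 2 else "benign"
```

Under the strict rule a single abnormal patch out of ten flips the image to malignant, while the majority-vote baseline would still call it benign.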
Regarding claim 9, Li further teaches: A determination system using a neural network, comprising: a storage module configured to store the neural network trained by the neural network training method described in claim 6; a patch unit determination module (pg. 7, IV. System Implementation and Training Details, The proposed method is implemented using python 3.6.6 – the method disclosed by Li is computer implemented, which includes code instructions, memory, processors, etc.) configured to input each of a plurality of diagnosis patches into which a given determination target image is segmented to the neural network and obtain a determination result corresponding to each of the plurality of diagnosis patches; and an output module configured to output a heat map of a determination target image based on a determination result of each of the plurality of diagnosis patches obtained by the patch unit diagnosis module (pg. 7, III. Methodology, Finally, image classification and a probability map are inferred from obtained patch labels, also see pg. 5, Fig. 2).

Regarding claim 10, Li further teaches: A computer program installed in a data processing device and recorded on a medium for performing the method described in claim 1 (pg. 7, IV. System Implementation and Training Details, The proposed method is implemented using python 3.6.6).

Regarding claim 11, Li further teaches: A computer readable recording medium on which a computer program for performing the method described in claim 1 is recorded (pg. 7, IV. System Implementation and Training Details, The proposed method is implemented using python 3.6.6).

Regarding claim 12, Li further teaches: A computing system, comprising: a processor and a memory, wherein if the memory is performed by the processor, the computing system performs the method described in claim 1 (pg. 7, IV. System Implementation and Training Details, The proposed method is implemented using python 3.6.6).
Regarding claim 13, it is a system implementing the method of claim 1 and is rejected on the same grounds – see above. Regarding claim 14, it recites similar limitations to claim 2 and is rejected on the same grounds – see above. Regarding claim 16, it is a system implementing the method of claim 6 and is rejected on the same grounds – see above.

Allowable Subject Matter

Claims 3, 7, 15 and 17 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Kim et al. (US 20170236271 A1), which discloses patch-based lesion classification, and Tizhoosh et al. (US 20200176102 A1), which discloses selecting a representative patch for an image.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to HARRISON CHAN YOUNG KIM whose telephone number is (571) 272-0713. The examiner can normally be reached Monday - Thursday, 9:00 am - 6:00 pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, CESAR PAULA, can be reached at (571) 272-4128. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/HARRISON C KIM/
Examiner, Art Unit 2145

/CESAR B PAULA/
Supervisory Patent Examiner, Art Unit 2145
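Editor's note: for readers parsing the dense claim-1 language in the office action, the claimed extracting step selects part of a labeled bag's instances using two signals: the in-training network's per-instance probabilities and the auto-encoder's determination. A minimal caricature of that selection follows; the `top_k` cutoff, the residue threshold, and the scalar stand-ins for image patches are all assumptions of this sketch, not the applicant's or the cited references' actual implementation.

```python
def extract_training_instances(bag, predict_prob, ae_reconstruct,
                               top_k=2, residue_threshold=0.5):
    """Pick part of a labeled bag's instances as training instances.

    bag: data instances (plain floats stand in for image patches here).
    predict_prob: the neural network in training; maps an instance to a
        probability of the second (e.g. abnormal) state.
    ae_reconstruct: the auto-encoder; maps an instance to its
        reconstruction. The residue |x - AE(x)| serves as the
        auto-encoder's state determination.
    Keep an instance only if it ranks in the top_k by probability AND
    its auto-encoder residue exceeds the threshold.
    """
    candidates = sorted(bag, key=predict_prob, reverse=True)[:top_k]
    return [x for x in candidates if abs(x - ae_reconstruct(x)) > residue_threshold]


# Stub models: the "network" echoes the value as a probability, and the
# "auto-encoder" reconstructs everything to 0.0, so the residue is |x|.
selected = extract_training_instances(
    [0.1, 0.9, 0.6, 0.3],
    predict_prob=lambda x: x,
    ae_reconstruct=lambda x: 0.0,
)
# selected == [0.9, 0.6]
```

The two-signal gate is the point of contention in the §103 rejection: the examiner reads Li as supplying the auto-encoder-residue signal and AlRegib as supplying the probability-ranking signal.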

Prosecution Timeline

Dec 05, 2022: Application Filed
Oct 02, 2025: Non-Final Rejection under §101, §103, §112 (current)


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 50%
With Interview: 83% (+33.3%)
Median Time to Grant: 3y 3m
PTA Risk: Low

Based on 6 resolved cases by this examiner. Grant probability derived from career allow rate.
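A sketch of how the projection figures above appear to combine. That the interview lift is additive in percentage points on top of the career allow rate is an assumption of this sketch, not something the page states explicitly.

```python
granted, resolved = 3, 6                     # examiner's resolved career cases
base_rate = granted / resolved               # 0.50 -> "Grant Probability: 50%"
interview_lift = 1 / 3                       # the "+33.3%" interview lift
with_interview = base_rate + interview_lift  # ~0.833 -> shown as "83%"
```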
