Prosecution Insights
Last updated: April 19, 2026
Application No. 18/138,688

SYSTEM AND METHOD FOR CLASSIFYING IMAGES USING CONTRAPOSITIVE MACHINE LEARNING

Final Rejection (§101, §103)
Filed: Apr 24, 2023
Examiner: TSAI, TSUNG YIN
Art Unit: 2656
Tech Center: 2600 — Communications
Assignee: 2692873 Ontario Inc.
OA Round: 2 (Final)
Grant Probability: 82% (Favorable)
OA Rounds: 3-4
To Grant: 2y 11m
With Interview: 93%

Examiner Intelligence

Career Allow Rate: 82% (804 granted / 984 resolved; +19.7% vs TC avg; above average)
Interview Lift: +10.9% (moderate), comparing resolved cases with vs. without interview
Avg Prosecution: 2y 11m (typical timeline)
Currently Pending: 31
Total Applications: 1,015 across all art units (career history)

Statute-Specific Performance

§101: 3.6% (-36.4% vs TC avg)
§103: 58.5% (+18.5% vs TC avg)
§102: 22.8% (-17.2% vs TC avg)
§112: 4.3% (-35.7% vs TC avg)
Tech Center averages are estimates • Based on career data from 984 resolved cases

Office Action

§101, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of claims: claims 1-40 are examined below.

Response to Arguments

Applicant's arguments filed 10/16/2025 have been fully considered but they are not persuasive.

Applicant's argument (pages 9-11): Applicant argued that the cited art does not teach the new claim amendment regarding first and second classification scores that are probability complements and that are output by parallel neural networks. Please see the Remarks for further detail.

Examiner's response: Examiner respectfully disagrees. An updated search found that Powell et al. (US 2019/0042933) teaches, in paragraphs 0024 and 0066, that multiple neural networks may operate simultaneously/in parallel and output complementary probabilities. The combined teaching of Colley et al. (US 2021/0090694) in view of Powell et al. (US 2019/0042933) therefore teaches the new claim concept and language. Please see the Office Action below for more detail.

Claim Rejections - 35 USC § 101

Applicant's arguments, see Claims, filed 10/16/2025, with respect to non-statutory subject matter for claims 38 and 40 have been fully considered and are persuasive. The 35 USC 101 rejection has been withdrawn.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-5, 7-23 and 28-40 are rejected under 35 U.S.C. 103 as being unpatentable over Colley et al. (US 2021/0090694) in view of Powell et al. (US 2019/0042933).

Claim 1: Colley et al. (US 2021/0090694) teaches the following subject matter: A method of classifying, comprising: receiving a sample (0011-0013 teach a medical image of a tissue sample, where 0013 details a sample to diagnose conditions such as skin cancer); generating, by a first neural network using the sample (1543 details a system structure that includes machine learning features with advanced parallelization to reduce inefficiencies; paragraph 2996 also details parallel or substantially concurrent processing of a flowchart, flow diagram, structure, or block implemented in hardware, software, or both): a first classification of a positive label versus a negative label (paragraph 2870 teaches classifying a label as positive or negative; paragraph 2905 details that both positive and negative are classified as equivocal); generating, by a second neural network using the sample (paragraph 1543 above teaches parallel machine learning, where the other instance will be the second machine learning model (second neural network)): a second classification of the negative label versus not the negative label (paragraph 2870 teaches classifying a label as negative, positive, or equivocal (viewed as "not the negative")); and generating, by a category classification module using the first classification and the second classification: a category of the sample (0013 and 0018 teach determining the condition of a "cancer state" from tumor characteristics; paragraphs 1828-1830 detail predicting the MSI ("microsatellite instability") state of skin cancer, with 1830 further detailing use of the neural network to predict the MSI).
Colley et al. teaches the first and second scores above and parallel processing (paragraph 1543 teaches machine learning features with advanced parallelization to reduce inefficiencies at each bottleneck), but the following is taught by Powell et al. (US 2019/0042933): wherein the first neural network and the second neural network are in parallel to process the received sample to generate the first classification and the second classification respectively (0024 teaches that multiple neural networks may operate simultaneously, whereas in other embodiments the outputs are all from classifier-type networks; 0024 further details the outcome of a variable that has two or more classes (e.g., pass/fail, positive/negative/neutral, or complementary probabilities) or a numerical value; 0066 further details complementary probabilities) to determine that the first classification score is a probability complement to the second classification score (0024 further details the outcome of a variable that has two or more classes (e.g., pass/fail, positive/negative/neutral, or complementary probabilities) or a numerical value; 0066 further details complementary probabilities). Colley et al. and Powell et al. are both in the field of image analysis, particularly the use of parallel/simultaneous/concurrent neural network processing for classification scores, such that the combined outcome is predictable. Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date to modify Colley et al. with Powell et al., using parallel processing to improve upon neural-network predictive modeling by incorporating multiple specialized neural networks into a larger neural network that, in aggregate, is capable of analyzing large amounts of structured and unstructured data, as disclosed by Powell et al. in 0024.
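As an annotation on the disputed limitation: the amended claim language (two networks processing the same received sample in parallel, with the first classification score being a probability complement of the second) can be sketched as follows. The function names and the toy scoring logic are hypothetical, taken from neither the application nor the cited references.

```python
from concurrent.futures import ThreadPoolExecutor

def first_network(sample):
    """Hypothetical stand-in for the first neural network:
    a score for 'positive label versus negative label'."""
    return sum(sample) / len(sample)  # pretend this is P(positive)

def second_network(sample):
    """Hypothetical stand-in for the second neural network:
    a score for 'negative label versus not the negative label'."""
    return 1.0 - sum(sample) / len(sample)  # pretend this is P(negative)

sample = [0.2, 0.7, 0.4, 0.9]

# The two networks process the same received sample in parallel.
with ThreadPoolExecutor(max_workers=2) as pool:
    f1 = pool.submit(first_network, sample)
    f2 = pool.submit(second_network, sample)
first_score, second_score = f1.result(), f2.result()

# The amended limitation: the first classification score is a probability
# complement of the second, i.e. the two scores sum to 1 (exact here by
# construction; a real system would check within a tolerance).
assert abs(first_score + second_score - 1.0) < 1e-9
```

In this toy the complement holds by construction; the dispute in prosecution is whether Powell's complementary-probability outputs (paragraphs 0024, 0066) supply this property for Colley's parallel networks.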
Claim 2: Colley et al. teaches: The method as claimed in claim 1, wherein, when the first classification is the positive label and the second classification is not the negative label, the category of the sample by the category classification module is the positive label (paragraphs 2870 and 2905 teach classifying the label as negative, equivocal, or positive based on first and second threshold values, where one output combination is positive and equivocal (not the negative label)).

Claim 3: Colley et al. teaches: The method as claimed in claim 1, wherein, when the first classification is the negative label and the second classification is the negative label, the category of the sample by the category classification module is the negative label (paragraphs 2870 and 2905 teach classifying the label as negative, equivocal, or positive based on first and second threshold values, where one output combination is negative and negative).

Claim 4: Colley et al. teaches: The method as claimed in claim 1, wherein, when the first classification is the negative label and the second classification is not the negative label, the category of the sample by the category classification module is outside a distribution (paragraphs 2870 and 2905 teach classifying the label as negative, equivocal, or positive based on first and second threshold values, where one output combination is negative and equivocal (not the negative label), and anything outside the thresholds is outside the distribution).

Claim 5: Colley et al. teaches: The method as claimed in claim 1, wherein, when the first classification is the positive label and the second classification is the negative label, the category of the sample by the category classification module is ambiguous (paragraphs 2870 and 2905 teach classifying the label as negative, equivocal, or positive based on first and second threshold values, where one output combination is positive and negative, and equivocal according to 2905 is viewed as ambiguous).
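Claims 2-5 together recite a small decision table for the category classification module. A rules-based reading of that table can be sketched as below; the string labels and function name are hypothetical illustrations of the claim logic, not text from the application.

```python
def category_classification(first, second):
    """Rules-based category module per the mapping recited in claims 2-5.
    `first` is the first network's classification ('positive' or 'negative');
    `second` is the second network's ('negative' or 'not_negative')."""
    table = {
        ("positive", "not_negative"): "positive",              # claim 2
        ("negative", "negative"):     "negative",              # claim 3
        ("negative", "not_negative"): "outside_distribution",  # claim 4
        ("positive", "negative"):     "ambiguous",             # claim 5
    }
    return table[(first, second)]

assert category_classification("positive", "not_negative") == "positive"
assert category_classification("positive", "negative") == "ambiguous"
```

This is only one reading: the module need not be a lookup table, since claim 10 separately recites that the category classification module may be a machine learning model.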
Claim 9: Colley et al. teaches: The method as claimed in claim 1, wherein the category classification module is rules based (0285 details machine learning with hard and soft rules; 1670 teaches a clear set of rules to find a unique solution).

Claim 10: Colley et al. teaches: The method as claimed in claim 1, wherein the category classification module is a machine learning model (in addition to 0285, which details machine learning with hard and soft rules, paragraph 1611 teaches the machine learning/neural network/classifier with a rule set).

Claim 11: Colley et al. teaches: The method as claimed in claim 1, wherein the first neural network and the second neural network each comprise at least one of: a support vector machine (SVM), linear regression, or a convolutional neural network (CNN) (figure 13 and 1857 teach an array using a support vector machine (SVM) as well as a convolutional neural network (CNN); paragraph 2177 further details machine learning techniques that may be used in place of clustering, including support vector machine learning, decision tree learning, association rule learning, Bayesian techniques, and rule-based machine learning, which are all rule based).

Claim 12: Colley et al. teaches: The method as claimed in claim 1, wherein the sample includes a medical image (0011-0013 teach a medical image of a tissue sample, where 0013 details a sample to diagnose conditions such as skin cancer).

Claim 13: Colley et al. teaches: The method as claimed in claim 12, wherein the medical image is a dermatological image (0011-0013 teach a medical image of a tissue sample, where 0013 details a sample to diagnose conditions such as those of the skin (dermatological)).
Claim 14: Colley et al. teaches: The method as claimed in claim 13, wherein the positive label is a diagnosis, a likely diagnosis, suitability for diagnosis, testing being required, or a recommended treatment (paragraph 0006 teaches obtaining and employing data related to physical and genomic patient characteristics, as well as diagnosis, treatments, and treatment efficacy, to provide a suite of tools to healthcare providers, researchers, and other interested parties, enabling those entities to develop new cancer state-treatment-results insights and/or improve overall patient healthcare and treatment plans for specific patients).

Claim 15: Colley et al. teaches: The method as claimed in claim 1, further comprising: performing, by a machine learning model, segmentation of the sample to identify morphological segments in the sample, wherein the category is generated for at least one of the morphological segments (paragraph 1840 details segmentation that further identifies the patch for analysis by the deep learning model to analyze changes to topology and morphology in the medical image).

Claim 16: Colley et al. teaches: The method as claimed in claim 15, wherein the category is generated for all of the morphological segments (paragraph 1840 details segmentation and topology and morphology in the medical image).

Claim 17: Colley et al. teaches: The method as claimed in claim 1, wherein the method is performed by a processing device (1543 details a system structure that includes machine learning features with advanced parallelization to reduce inefficiencies; paragraph 2996 also details parallel or substantially concurrent processing of a flowchart, flow diagram, structure, or block implemented in hardware, software, or both; paragraph 0266 details hardware and systems such as a processor; 0324 teaches a processor for classification labeling and prediction).
Claim 18: Colley et al. teaches: The method as claimed in claim 1, further comprising receiving the sample from a video conference software application (paragraph 1804 teaches video analysis of the captured single image; paragraph 1947 teaches videomics, a collection of features comprising the study of a video analysis capture of a single image evolving through time with mutations).

Claim 19: Colley et al. (US 2021/0090694) teaches the following subject matter: A method for a machine learning model including a first neural network and a second neural network, the method comprising (0176 teaches training the neural nodes by iteratively updating the weights of each node): receiving a dataset comprising a first set of samples each having a positive label and a second set of samples each having a negative label (paragraph 0178 teaches training data sets of images with each pixel labeled as a tissue class, which requires too much annotation time and processing time to be practical; 0011-0013 teach a medical image of a tissue sample, where 0013 details a sample to diagnose conditions such as skin cancer); training the first neural network using the first set of samples and the second set of samples to perform a first classification of the positive label versus the negative label (1543 details a system structure that includes machine learning features with advanced parallelization to reduce inefficiencies; paragraph 2996 also details parallel or substantially concurrent processing of a flowchart, flow diagram, structure, or block implemented in hardware, software, or both; paragraph 2870 teaches classifying a label as positive or negative; paragraph 2905 details that both positive and negative are classified as equivocal); training the second neural network using the first set of samples and the second set of samples to perform a second classification of the negative label versus not the negative label (paragraph 1543 above teaches parallel machine learning, where the other instance will be the second machine learning model (second neural network); paragraph 2870 teaches classifying a label as negative, positive, or equivocal (viewed as "not the negative")); and providing a category classification module which is configured to generate a category using the first classification and the second classification (0013 and 0018 teach determining the condition of a "cancer state" from tumor characteristics; paragraphs 1828-1830 detail predicting the MSI ("microsatellite instability") state of skin cancer, with 1830 further detailing use of the neural network to predict the MSI).

Colley et al. teaches the first and second scores above and parallel processing (paragraph 1543 teaches machine learning features with advanced parallelization to reduce inefficiencies at each bottleneck), but the following is taught by Powell et al. (US 2019/0042933): wherein the first neural network and the second neural network are in parallel to process the received sample to generate the first classification and the second classification respectively (0024 teaches that multiple neural networks may operate simultaneously, whereas in other embodiments the outputs are all from classifier-type networks; 0024 further details the outcome of a variable that has two or more classes (e.g., pass/fail, positive/negative/neutral, or complementary probabilities) or a numerical value; 0066 further details complementary probabilities) to determine that the first classification score is a probability complement to the second classification score (0024 further details the outcome of a variable that has two or more classes (e.g., pass/fail, positive/negative/neutral, or complementary probabilities) or a numerical value; 0066 further details complementary probabilities). Colley et al. and Powell et al. are both in the field of image analysis, particularly the use of parallel/simultaneous/concurrent neural network processing for classification scores, such that the combined outcome is predictable.
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date to modify Colley et al. with Powell et al., using parallel processing to improve upon neural-network predictive modeling by incorporating multiple specialized neural networks into a larger neural network that, in aggregate, is capable of analyzing large amounts of structured and unstructured data, as disclosed by Powell et al. in 0024.

Claim 20: Colley et al. teaches: The method as claimed in claim 19, wherein, when the first classification is the positive label and the second classification is not the negative label, the category classification module is configured to generate the category as being the positive label (paragraphs 2870 and 2905 teach classifying the label as negative, equivocal, or positive based on first and second threshold values, where one output combination is positive and equivocal (not the negative label)).

Claim 21: Colley et al. teaches: The method as claimed in claim 19, wherein, when the first classification is the negative label and the second classification is the negative label, the category classification module is configured to generate the category as being the negative label (paragraphs 2870 and 2905 teach classifying the label as negative, equivocal, or positive based on first and second threshold values, where one output combination is negative and negative).

Claim 22: Colley et al. teaches: The method as claimed in claim 19, wherein, when the first classification is the negative label and the second classification is not the negative label, the category classification module is configured to generate the category as being outside a distribution (paragraphs 2870 and 2905 teach classifying the label as negative, equivocal, or positive based on first and second threshold values, where one output combination is negative and equivocal (not the negative label), and anything outside the thresholds is outside the distribution).
Claim 23: Colley et al. teaches: The method as claimed in claim 19, wherein, when the first classification is the positive label and the second classification is the negative label, the category classification module is configured to generate the category as being ambiguous (paragraphs 2870 and 2905 teach classifying the label as negative, equivocal, or positive based on first and second threshold values, where one output combination is positive and negative, and equivocal according to 2905 is viewed as ambiguous).

Claim 28: Colley et al. teaches: The method as claimed in claim 19, wherein the category classification module is rules based (0285 details machine learning with hard and soft rules; 1670 teaches a clear set of rules to find a unique solution).

Claim 29: Colley et al. teaches: The method as claimed in claim 19, further comprising training the category classification module to generate the category using the first classification and the second classification (in addition to 0285, which details machine learning with hard and soft rules, paragraph 1611 teaches the machine learning/neural network/classifier with a rule set).

Claim 30: Colley et al. teaches: The method as claimed in claim 19, wherein the machine learning model comprises at least one of: a support vector machine (SVM), linear regression, or a convolutional neural network (CNN) (figure 13 and 1857 teach an array using a support vector machine (SVM) as well as a convolutional neural network (CNN); paragraph 2177 further details machine learning techniques that may be used in place of clustering, including support vector machine learning, decision tree learning, association rule learning, Bayesian techniques, and rule-based machine learning, which are all rule based).

Claim 31: Colley et al. teaches: The method as claimed in claim 19, wherein the dataset includes medical images (0011-0013 teach a medical image of a tissue sample, where 0013 details a sample to diagnose conditions such as skin cancer; the above teaches use of a database as part of training).
Claim 32: Colley et al. teaches: The method as claimed in claim 31, wherein the medical images are dermatological images (0011-0013 teach a medical image of a tissue sample, where 0013 details a sample to diagnose conditions such as those of the skin (dermatological)).

Claim 33: Colley et al. teaches: The method as claimed in claim 32, wherein each of the dermatological images is labelled with a diagnosis, a likely diagnosis, suitability for diagnosis, testing being required, or a recommended treatment (paragraph 0006 teaches obtaining and employing data related to physical and genomic patient characteristics, as well as diagnosis, treatments, and treatment efficacy, to provide a suite of tools to healthcare providers, researchers, and other interested parties, enabling those entities to develop new cancer state-treatment-results insights and/or improve overall patient healthcare and treatment plans for specific patients).

Claim 34: Colley et al. teaches: The method as claimed in claim 19, the method further comprising: training the machine learning model to perform segmentation to identify morphological segments, wherein the category classification module is configured to generate the category of at least one of the morphological segments (paragraph 1840 details segmentation that further identifies the patch for analysis by the deep learning model to analyze changes to topology and morphology in the medical image).

Claim 35: Colley et al. teaches: The method as claimed in claim 34, wherein the category classification module is configured to generate the category of all of the morphological segments (paragraph 1840 details segmentation and topology and morphology in the medical image).
Claim 36: Colley et al. teaches: The method as claimed in claim 19, wherein the method is performed by a processing device (1543 details a system structure that includes machine learning features with advanced parallelization to reduce inefficiencies; paragraph 2996 also details parallel or substantially concurrent processing of a flowchart, flow diagram, structure, or block implemented in hardware, software, or both; paragraph 0266 details hardware and systems such as a processor; 0324 teaches a processor for classification labeling and prediction).

Claim 37: Colley et al. (US 2021/0090694) teaches the following subject matter: A system for training a machine learning model including a first neural network and a second neural network, the system comprising (figure 1 teaches a system): a processing device; and a memory accessible by the processing device, the memory storing machine-executable instructions that, when executed by the processing device, cause the processing device to (paragraph 0266 details hardware and systems such as a processor; 0324 teaches a processor for classification labeling and prediction with memory): receive a dataset comprising a first set of samples each having a positive label and a second set of samples each having a negative label (paragraph 0178 teaches training data sets of images with each pixel labeled as a tissue class, which requires too much annotation time and processing time to be practical; 0011-0013 teach a medical image of a tissue sample, where 0013 details a sample to diagnose conditions such as skin cancer); train the first neural network using the first set of samples and the second set of samples to perform a first classification of the positive label versus the negative label (1543 details a system structure that includes machine learning features with advanced parallelization to reduce inefficiencies; paragraph 2996 also details parallel or substantially concurrent processing of a flowchart, flow diagram, structure, or block implemented in hardware, software, or both; paragraph 2870 teaches classifying a label as positive or negative; paragraph 2905 details that both positive and negative are classified as equivocal); train the second neural network using the first set of samples and the second set of samples to perform a second classification of the negative label versus not the negative label (paragraph 1543 above teaches parallel machine learning, where the other instance will be the second machine learning model (second neural network); paragraph 2870 teaches classifying a label as negative, positive, or equivocal (viewed as "not the negative")); and provide a category classification module which is configured to generate a category using the first classification and the second classification (0013 and 0018 teach determining the condition of a "cancer state" from tumor characteristics; paragraphs 1828-1830 detail predicting the MSI ("microsatellite instability") state of skin cancer, with 1830 further detailing use of the neural network to predict the MSI).

Colley et al. teaches the first and second scores above and parallel processing (paragraph 1543 teaches machine learning features with advanced parallelization to reduce inefficiencies at each bottleneck), but the following is taught by Powell et al. (US 2019/0042933): wherein the first neural network and the second neural network are in parallel to process the received sample to generate the first classification and the second classification respectively (0024 teaches that multiple neural networks may operate simultaneously, whereas in other embodiments the outputs are all from classifier-type networks; 0024 further details the outcome of a variable that has two or more classes (e.g., pass/fail, positive/negative/neutral, or complementary probabilities) or a numerical value; 0066 further details complementary probabilities) to determine that the first classification score is a probability complement to the second classification score (0024 further details the outcome of a variable that has two or more classes (e.g., pass/fail, positive/negative/neutral, or complementary probabilities) or a numerical value; 0066 further details complementary probabilities). Colley et al. and Powell et al. are both in the field of image analysis, particularly the use of parallel/simultaneous/concurrent neural network processing for classification scores, such that the combined outcome is predictable. Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date to modify Colley et al. with Powell et al., using parallel processing to improve upon neural-network predictive modeling by incorporating multiple specialized neural networks into a larger neural network that, in aggregate, is capable of analyzing large amounts of structured and unstructured data, as disclosed by Powell et al. in 0024.

Claim 38: Colley et al. (US 2021/0090694) teaches the following subject matter: A non-transitory computer readable medium containing instructions for causing a processing device to perform a method for a machine learning model including a first neural network and a second neural network, the instructions comprising (paragraph 0266 details hardware and systems such as a processor; 0324 teaches a processor for classification labeling and prediction, where paragraph 1277 details non-transitory computer-readable media): instructions for receiving a dataset comprising a first set of samples each having a positive label and a second set of samples each having a negative label (paragraph 0178 teaches training data sets of images with each pixel labeled as a tissue class, which requires too much annotation time and processing time to be practical; 0011-0013 teach a medical image of a tissue sample, where 0013 details a sample to diagnose conditions such as skin cancer); instructions for training the first neural network using the first set of samples and the second set of samples to perform a first classification of the positive label versus the negative label (1543 details a system structure that includes machine learning features with advanced parallelization to reduce inefficiencies; paragraph 2996 also details parallel or substantially concurrent processing of a flowchart, flow diagram, structure, or block implemented in hardware, software, or both; paragraph 2870 teaches classifying a label as positive or negative; paragraph 2905 details that both positive and negative are classified as equivocal); instructions for training the second neural network using the first set of samples and the second set of samples to perform a second classification of the negative label versus not the negative label (paragraph 1543 above teaches parallel machine learning, where the other instance will be the second machine learning model (second neural network); paragraph 2870 teaches classifying a label as negative, positive, or equivocal (viewed as "not the negative")); and instructions for providing a category classification module which is configured to generate a category using the first classification and the second classification (0013 and 0018 teach determining the condition of a "cancer state" from tumor characteristics; paragraphs 1828-1830 detail predicting the MSI ("microsatellite instability") state of skin cancer, with 1830 further detailing use of the neural network to predict the MSI).
Colley et al. teaches the first and second scores above and parallel processing (paragraph 1543 teaches machine learning features with advanced parallelization to reduce inefficiencies at each bottleneck), but the following is taught by Powell et al. (US 2019/0042933): wherein the first neural network and the second neural network are in parallel to process the received sample to generate the first classification and the second classification respectively (0024 teaches that multiple neural networks may operate simultaneously, whereas in other embodiments the outputs are all from classifier-type networks; 0024 further details the outcome of a variable that has two or more classes (e.g., pass/fail, positive/negative/neutral, or complementary probabilities) or a numerical value; 0066 further details complementary probabilities) to determine that the first classification score is a probability complement to the second classification score (0024 further details the outcome of a variable that has two or more classes (e.g., pass/fail, positive/negative/neutral, or complementary probabilities) or a numerical value; 0066 further details complementary probabilities). Colley et al. and Powell et al. are both in the field of image analysis, particularly the use of parallel/simultaneous/concurrent neural network processing for classification scores, such that the combined outcome is predictable. Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date to modify Colley et al. with Powell et al., using parallel processing to improve upon neural-network predictive modeling by incorporating multiple specialized neural networks into a larger neural network that, in aggregate, is capable of analyzing large amounts of structured and unstructured data, as disclosed by Powell et al. in 0024.
Claim 39: Colley et al. (US 2021/0090694) teaches the following subject matter: A system for classifying, the system comprising (figure 1 teaches a system): a processing device; and a memory accessible by the processing device, the memory storing machine-executable instructions that, when executed by the processing device, cause the processing device to (paragraph 0266 details hardware and systems such as a processor; 0324 teaches a processor for classification labeling and prediction, where paragraph 1277 details non-transitory computer-readable media): receive a sample (0011-0013 teach a medical image of a tissue sample, where 0013 details a sample to diagnose conditions such as skin cancer); generate, by a first neural network using the sample: a first classification of a positive label versus a negative label (paragraph 0178 teaches training data sets of images with each pixel labeled as a tissue class, which requires too much annotation time and processing time to be practical; 0011-0013 teach a medical image of a tissue sample, where 0013 details a sample to diagnose conditions such as skin cancer); generate, by a second neural network using the sample: a second classification of the negative label versus not the negative label (paragraph 1543 above teaches parallel machine learning, where the other instance will be the second machine learning model (second neural network); paragraph 2870 teaches classifying a label as negative, positive, or equivocal (viewed as "not the negative")); and generate, by a category classification module using the first classification and the second classification: a category of the sample (0013 and 0018 teach determining the condition of a "cancer state" from tumor characteristics; paragraphs 1828-1830 detail predicting the MSI ("microsatellite instability") state of skin cancer, with 1830 further detailing use of the neural network to predict the MSI).
Colley et al. teaches the first and second scores above, as well as parallel processing (paragraph 1543 teaches machine learning features with advanced parallelization to reduce inefficiencies at each bottleneck), but the following is taught by Powell et al. (US 2019/0042933): wherein the first neural network and the second neural network are in parallel to process the received sample to generate the first classification and the second classification, respectively (0024 teaches that multiple neural networks may operate simultaneously, and that in other embodiments the networks are all classifier-type networks; 0024 further details the outcome of a variable that has two or more classes (e.g., pass/fail, positive/negative/neutral, or complementary probabilities) or a numerical value; 0066 further details complementary probabilities), to determine that the first classification score is a probability complement to the second classification score (0024, as above; 0066 further details complementary probabilities). Colley et al. and Powell et al. are both in image analysis, particularly the use of parallel/simultaneous/concurrent neural network processing for classification scores, such that the combined outcome is predictable. Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date to modify Colley et al. with Powell et al., using parallelism to improve upon neural-network predictive modeling by incorporating multiple specialized neural networks into a larger neural network that, in aggregate, is capable of analyzing large amounts of structured data and unstructured data, as disclosed by Powell et al. in 0024.
Claim 40: Colley et al. (US 2021/0090694) teaches the following subject matter: A non-transitory computer readable medium containing instructions for causing a processing device to perform a method, the instructions comprising (paragraph 0266 details hardware and system components such as a processor; 0324 teaches a processor for classification labels and prediction, with memory): instructions for receiving a sample (0011-0013 teach a medical image of a tissue sample, where 0013 details a sample to diagnose, such as skin cancer); instructions for generating, by a first neural network using the sample: a first classification of a positive label versus a negative label (paragraph 0178 teaches training data sets of images with each pixel labeled as a tissue class, which requires too much annotation time and processing time to be practical; 0011-0013 teach a medical image of a tissue sample, where 0013 details a sample to diagnose, such as skin cancer); instructions for generating, by a second neural network using the sample: a second classification of the negative label versus not the negative label (paragraph 1543, above, teaches parallel machine learning, where the other instance serves as the second machine learning model (second neural network); paragraph 2870 teaches classifying a label as negative, positive, or equivocal (viewed as "not the negative")); and instructions for using the first classification and the second classification to generate a category of the sample (0013 and 0018 teach the condition of a "cancer state" from tumor characteristics; paragraphs 1828-1830 detail predicting the MSI ("microsatellite instability") state of skin cancer, with 1830 further detailing use of the neural network to predict the MSI).
Colley et al. teaches the first and second scores above, as well as parallel processing (paragraph 1543 teaches machine learning features with advanced parallelization to reduce inefficiencies at each bottleneck), but the following is taught by Powell et al. (US 2019/0042933): wherein the first neural network and the second neural network are in parallel to process the received sample to generate the first classification and the second classification, respectively (0024 teaches that multiple neural networks may operate simultaneously, and that in other embodiments the networks are all classifier-type networks; 0024 further details the outcome of a variable that has two or more classes (e.g., pass/fail, positive/negative/neutral, or complementary probabilities) or a numerical value; 0066 further details complementary probabilities), to determine that the first classification score is a probability complement to the second classification score (0024, as above; 0066 further details complementary probabilities). Colley et al. and Powell et al. are both in image analysis, particularly the use of parallel/simultaneous/concurrent neural network processing for classification scores, such that the combined outcome is predictable. Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date to modify Colley et al. with Powell et al., using parallelism to improve upon neural-network predictive modeling by incorporating multiple specialized neural networks into a larger neural network that, in aggregate, is capable of analyzing large amounts of structured data and unstructured data, as disclosed by Powell et al. in 0024.

Claim 27 is rejected under 35 U.S.C. 103 as being unpatentable over Colley et al. (US 2021/0090694) in view of Powell et al. (US 2019/0042933), as applied to claim 19 above, and further in view of Montag et al. (US 2022/0188087).

Claim 27: Colley et al. and Powell et al. teach all of the subject matter above, but the following is taught by Montag et al.: The method as claimed in claim 19, wherein a first number of the first set of samples is at least ten times less than a second number of the second set of samples (0066 teaches training data with a ten-fold difference in size and a huge and diverse training database, where 0051 details the training of parallel neural networks). Colley et al., Powell et al., and Montag et al. are all in the field of image analysis, especially the use of parallel neural networks for biometrics, such that the combined outcome is predictable. Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date to modify Colley et al. and Powell et al. with Montag et al., where such a huge and diverse training database results in an ML model that is robust at inference despite noise and missing data, as disclosed by Montag et al. in 0066.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
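The claim 27 limitation mapped to Montag above — a first training set at least ten times smaller than the second — reduces to a simple count comparison. A hypothetical sketch (the function and data-set names are illustrative, not from any reference):

```python
def satisfies_ten_times_limit(first_set, second_set):
    # Claim 27: the first number of samples is at least ten
    # times less than the second number of samples.
    return len(first_set) * 10 <= len(second_set)

small = list(range(100))    # first set: 100 samples
large = list(range(1000))   # second set: 1,000 samples
print(satisfies_ten_times_limit(small, large))  # -> True
```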
ARMAITIS et al. (US 2020/0151873), "METHODS, SYSTEMS AND USE FOR DETECTING IRREGULARITIES IN MEDICAL IMAGES BY MEANS OF A MACHINE LEARNING MODEL," teaches a machine learning model for detecting irregularities in medical images, the method including: identifying at least one predetermined type of body region (14) depicted in a medical image (10), said body region (14) having a depicted irregularity (12); defining a plurality of image segments (20) each including at least part of the depicted body region (14), wherein a resolution of the image segments (20) is maintained or not reduced by more than 20% compared to the medical image (10); and using said image segments (20) to train a machine learning model to detect similar irregularities (12) in other medical images (10).

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to TSUNG-YIN TSAI, whose telephone number is (571) 270-1671. The examiner can normally be reached 7am-4pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Bhavesh Mehta, can be reached at (571) 272-7453. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/TSUNG YIN TSAI/
Primary Examiner, Art Unit 2656

Prosecution Timeline

Apr 24, 2023
Application Filed
Oct 10, 2024
Response after Non-Final Action
Jul 15, 2025
Non-Final Rejection — §101, §103
Oct 16, 2025
Response Filed
Nov 04, 2025
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597118
IMAGE INSPECTION APPARATUS, IMAGE INSPECTION METHOD, AND IMAGE INSPECTION PROGRAM
2y 5m to grant Granted Apr 07, 2026
Patent 12597237
INFERENCE LEARNING DEVICE AND INFERENCE LEARNING METHOD
2y 5m to grant Granted Apr 07, 2026
Patent 12579797
VIDEO PROCESSING METHOD AND APPARATUS
2y 5m to grant Granted Mar 17, 2026
Patent 12573029
IMAGE ANNOTATION USING ONE OR MORE NEURAL NETWORKS
2y 5m to grant Granted Mar 10, 2026
Patent 12567235
Visual Explanation of Classification
2y 5m to grant Granted Mar 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
82%
Grant Probability
93%
With Interview (+10.9%)
2y 11m
Median Time to Grant
Moderate
PTA Risk
Based on 984 resolved cases by this examiner. Grant probability derived from career allow rate.
