DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant’s arguments with respect to claims 1-8, 10-11 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
In regard to the arguments concerning the 101 rejection, applicant argues that claim 1 does not fall within the mental processes grouping of abstract ideas in view of the memorandum to patent examiners dated August 4, 2025. Examiner agrees in part. While the examiner agrees in view of the memorandum to patent examiners dated August 4, 2025, the rejection under 35 U.S.C. 101 is upheld in view of the Desjardins Memorandum dated December 5, 2025. The Desjardins Memorandum added subject matter eligibility guidance to the MPEP, specifically at the end of 2106.04(d), subsection III, which found training a machine learning model to be an abstract idea, particularly a mathematical concept.
In the instant case, the current application comprises a model training device that obtains a learning model, evaluates the model by using a global metric to obtain error data sets having an outlier from among a plurality of data sets used in the evaluation, and groups the error data sets using a local metric. The evaluation, including the global metric, the outlier determination, and the local metric, describes mathematical concepts as identified in the above-cited Desjardins Memorandum.
The amended subject matter acquires particular learning data based on specified model training information and further re-learns the learning data. The amended subject matter emphasizes the mathematical concepts utilized in the model training, necessitating the rejection under 35 U.S.C. 101 as directed to mathematical concepts consistent with the Desjardins Memorandum.
Applicant may overcome the current 101 rejection by clearly claiming the improvements to computer functionality and integrating the judicial exception into a practical application that is more than mere instructions to apply the exception (MPEP 2106.05(f)).
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-8, 10-11 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Claims 1 and 11 recite a model training device that obtains an initial learning model by learning a data set including medical images as learning data, evaluates the initial learning model by using a global metric…, obtains a plurality of error data set groups by grouping the plurality of error data sets while using a local metric…, specifies model training information with respect to each of the error data set groups, and further trains the initial learning model… by further re-learning the learning data. The limitations of evaluating the initial learning model using a global metric and obtaining a plurality of error data set groups by grouping the plurality of error data sets while using a local metric recite an abstract idea in the mathematical concepts grouping.
This judicial exception is not integrated into a practical application because the remaining limitations, such as “obtain an initial learning model by learning a data set including medical images as learning data”, “specify model training information with respect to each of the error data set groups”, and “processing circuitry is further configured to train the initial learning model by supplementarily acquiring particular learning data based on the specified model training information and further re-learning the learning data”, recite, at a high level of generality, training a machine learning model with previously determined data and merely use a computer as a tool to perform the abstract idea. See MPEP 2106.05(f).
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements, as disclosed, do not integrate the judicial exception into a practical application; they are mere insignificant extra-solution activity in combination with generic computer functions implemented on generic computer elements at a high level of generality to perform the disclosed abstract idea.
Claim 2 is dependent on claim 1 and includes all the limitations of claim 1. Therefore, claim 2 recites the same abstract idea of claim 1. The claim recites the additional limitation of “the local metric including one of a local contour matching metric and a spatial distance metric”, which merely elaborates on the abstract idea by adding mathematical concepts and, therefore, does not amount to significantly more than the abstract idea.
Claim 3 is dependent on claim 1 and includes all the limitations of claim 1. Therefore, claim 3 recites the same abstract idea of claim 1. The claim recites the additional limitation of “segmenting a medical image corresponding to the error data sets, separating boundary region, calculating the local metric and grouping the plurality of error data sets”, which merely elaborates on the abstract idea by further adding mathematical calculations and grouping the results and, therefore, does not amount to significantly more than the abstract idea.
Claim 4 is dependent on claim 3 and includes all the limitations of claims 1 and 3. Therefore, claim 4 recites the same abstract idea of claim 1. The claim recites the additional limitation of “comparing data and organizing accordingly”, which merely elaborates on the abstract idea by further specifying an additional mathematical calculation and data gathering and, therefore, does not amount to significantly more than the abstract idea.
Claim 5 is dependent on claim 1 and includes all the limitations of claim 1. Therefore, claim 5 recites the same abstract idea of claim 1. The claim recites the additional limitation of “segmenting, specifying, calculating and grouping”, which merely elaborates on the abstract idea by further specifying an additional mathematical calculation and, therefore, does not amount to significantly more than the abstract idea.
Claim 6 is dependent on claim 5 and includes all the limitations of claims 1 and 5. Therefore, claim 6 recites the same abstract idea of claim 1. The claim recites the additional limitation of “analyze, and organize data”, which merely elaborates on the abstract idea by further specifying an additional mathematical calculation and, therefore, does not amount to significantly more than the abstract idea.
Claim 7 is dependent on claim 1 and includes all the limitations of claim 1. Therefore, claim 7 recites the same abstract idea of claim 1. The claim recites the additional limitation of “selecting a data information from each data set”, which merely elaborates on the abstract idea by further specifying additional data gathering and, therefore, does not amount to significantly more than the abstract idea.
Claim 8 is dependent on claim 1 and includes all the limitations of claim 1. Therefore, claim 8 recites the same abstract idea of claim 1. The claim recites the additional limitation of “perform a learning curve fitting process and specify the model training information”, which merely elaborates on the abstract idea by further specifying an additional mathematical calculation and, therefore, does not amount to significantly more than the abstract idea.
Claim 10 is dependent on claim 1 and includes all the limitations of claim 1. Therefore, claim 10 recites the same abstract idea of claim 1. The claim recites the additional limitation of “acquiring learning data and generate plurality of learning models”, which merely elaborates on the abstract idea by further specifying an additional mathematical calculation and, therefore, does not amount to significantly more than the abstract idea.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3, 5-7, 10-11 are rejected under 35 U.S.C. 103 as being unpatentable over Jha et al. (US 2022/0284643 A1) in view of Wu et al. (US 10872599 B1).
With respect to Claim 1, Jha’643 shows a model training device, comprising: processing circuitry configured (Figure 1 paragraphs [0064]-[0065], [0080], and [0082] computing device 300 with configuration 400 for trained machine learning) to:
obtain an initial learning model by learning a data set including medical images as learning data (Figure 2 paragraphs [0066] and [0080]-[0084] algorithm 420 for defining analysis of medical images by a machine learning model);
evaluate the initial learning model by using a global metric (Figure 5 paragraph [0105] reconstructed medical image segmented into regions that are labeled, labeling inherently includes an evaluation performed by a metric (global) in order to perform the function of labeling), so as to obtain, as error data sets, data sets (Paragraphs [0105] and [0122] the issue/difficulty of segmentation is unable to account for tissue-fraction effect (errors) and provides estimating the fractional volume in each of the regions of the reconstructed image, the estimation regarding data sets of the tissue-fraction effect (errors) and allowing the model to learn from prior segmentation (evaluating initial learning model)) each having an outlier from among a plurality of data sets used in the evaluation (Paragraph [0191] comparing/evaluating the segmentation method with a SUV-max thresholding method (values outside of the threshold considered outliers));
obtain a plurality of error data set groups by grouping the plurality of error data sets while using a local metric (Figure 17 paragraphs [0128], [0134], [0185], [0188]-[0191], and [0200] an active contour and spatial distance metrics (local metrics) based technique for segmenting and evaluating SPECT/medical images in accordance with voxel groupings, paragraph [0134] emphasizes the boundary between different regions is normally blurred (tissue-fraction effect/an erroneous data set)); and [ ]
Jha’643 does not specifically show specify model training information with respect to each of the error data set groups, wherein the processing circuitry is further configured to train the initial learning model by supplementarily acquiring particular learning data based on the specified model training information and further re-learning the learning data.
Wu’599 shows specify model training information with respect to each of the error data set groups, wherein the processing circuitry is further configured to train the initial learning model by supplementarily acquiring particular learning data based on the specified model training information and further re-learning the learning data (Column 4, lines 8-17, using model-update data (like error data) to update the model; Column 9, lines 44-67, obtaining error data and selecting only a portion of the error data based on its quality, similarity, or other such metric, such as grouping the error data by region of origin, country of origin, or other such geographic metric, and creating an updated model specific to the desired metric).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Jha’643 to specify model training information with respect to each of the error data set groups, wherein the processing circuitry is further configured to train the initial learning model by supplementarily acquiring particular learning data based on the specified model training information and further re-learning the learning data, as taught by Wu’599. The suggestion/motivation for doing so would have been to improve the system’s ability to update a trained model and reduce the number of future false positives and/or false negatives according to the desired data used to train the model (column 3, lines 5-7).
With respect to Claim 2, the combination of Jha’643 and Wu’599 shows the model training device according to claim 1, wherein the local metric includes one of a local contour matching metric and a spatial distance metric (in Jha’643: Figure 17 paragraphs [0128], [0134], [0185], [0188]-[0191], and [0200] an active contour and spatial distance metrics (local metrics) based technique for segmenting and evaluating SPECT/medical images).
With respect to Claim 3, the combination of Jha’643 and Wu’599 shows the model training device according to claim 1, wherein the processing circuitry is further configured to: segment a medical image corresponding to the error data sets so as to make it possible to distinguish a positional relationship between a detection target site serving as a detection target and another site positioned adjacent to the detection target site in the medical image (in Jha’643: Paragraphs [0100]-[0104] segmenting SPECT/medical images for determining boundaries between regions in view of tissue fraction effects (errors), the reconstruction of the image is over a finite-sized voxel grid (voxels inherently adjacent to one another to form the grid), figure 17 and paragraph [0220] the disclosed method with identified boundaries for a detection target of a tumor and other sites adjacent such as the background); separate a boundary region of the detection target site into a plurality of subregions, in accordance with two or more other adjacent sites including said another adjacent site (in Jha’643: Paragraph [0059] generally describes segmentation regarding pixel/voxel/subregions which may comprise a portion of a boundary, paragraph [0103] a reconstruction of the image is over a finite-sized voxel grid (voxels inherently adjacent to one another to form the grid), paragraph [0146] describes determining whether each voxel belongs to region of interest ROI); calculate the local metric with respect to each of the subregions (in Jha’643: Figure 17 paragraphs [0128], [0134], [0185], [0188]-[0191], and [0200] an active contour and spatial distance metrics (local metrics) based technique for segmenting and evaluating SPECT/medical images); and group the plurality of error data sets on a basis of the subregions and the local metrics (in Jha’643: Figures 6-7 paragraph [0134] grouping on the basis of voxel subregions utilizing contouring metrics).
With respect to Claim 5, the combination of Jha’643 and Wu’599 shows the model training device according to claim 1, wherein the processing circuitry is further configured to: segment a medical image corresponding to the error data sets so as to make it possible to distinguish regions having mutually-different image characteristics within a detection target site serving as a detection target (in Jha’643: Paragraph [0130] describes estimate tumor fraction area from segmentation); specify, from among the regions, regions each having a specific image characteristic as special regions (in Jha’643: paragraph [0130] the pixels/voxels assigned a number between 0 and 1 as an estimate of the tumor fraction area); calculate the local metric with respect to each of the special regions (in Jha’643: paragraph [0134] describes a 40% threshold standardized uptake value SUV with active contours (local metrics) methods segmentation, the 40% threshold regarding set level for determining likelihood of abnormal tissue); and group the plurality of error data sets on a basis of the special regions and the local metrics (in Jha’643: Paragraphs [0204]-[0205] describes training to be performed more accurately from estimated tumor contour (special regions and local metrics)).
With respect to Claim 6, the combination of Jha’643 and Wu’599 shows the model training device according to claim 5, wherein the local metrics are local contour matching metrics (in Jha’643: Figure 17 paragraphs [0128], [0134], [0185], [0188]-[0191], and [0200] describes an active contour and spatial distance metrics (local metrics) based technique for segmenting and evaluating SPECT/medical images), and the processing circuitry is further configured to: analyze at least one selected from among an image-related characteristic, an anatomical characteristic, and a pathological characteristic of the special regions, on a basis of the local metrics of the special regions (in Jha’643: paragraph [0134] describes a 40% threshold standardized uptake value SUV with active contours (local metrics) methods segmentation, the 40% threshold regarding set level for determining likelihood of abnormal tissue (pathological characteristics) in striatal and palidal regions (anatomical characteristics)); and organize error data sets of the special regions having a mutually same image-related, anatomical, or pathological characteristic into a group (in Jha’643: Paragraphs [0134] and [0193] describes segmenting images of striatal and pallidal regions (anatomical grouping) and in accordance with tumor size (pathological characteristics)).
With respect to Claim 7, the combination of Jha’643 and Wu’599 shows the model training device according to claim 1, wherein the processing circuitry is further configured to specify at least one selected from among an image-related characteristic, an anatomical characteristic, and a pathological characteristic of each of the error data set groups as the model training information (in Jha’643: Paragraphs [0127]-[0128], [0134], and [0193] describes segmenting images of striatal and pallidal regions (anatomical grouping) and in accordance with tumor size (pathological characteristics) for training).
With respect to Claim 10, the combination of Jha’643 and Wu’599 shows the model training device according to claim 1, wherein, with respect to each of the plurality of error data set groups, the processing circuitry is configured to acquire learning data corresponding to a characteristic of the error data set group on a basis of the model training information and to further generate a plurality of learning models respectively corresponding to the characteristics of the plurality of error data set groups (in Jha’643: Paragraphs [0127]-[0128] describes segmenting images with characteristics (tumor boundary for specific regions) for training).
With respect to Claim 11, rejections analogous to those presented for claim 1 are applicable.
Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Jha et al. (US 2022/0284643 A1) in view of Wu et al. (US 10872599 B1) and further in view of Saltz et al. (US 2020/0126207 A1).
With respect to Claim 4, Jha’643 shows the model training device according to claim 3, wherein the local metrics are local contour matching metrics (Figure 17 paragraphs [0128], [0134], [0185], [0188]-[0191], and [0200] an active contour and spatial distance metrics (local metrics) based technique for segmenting and evaluating SPECT/medical images), and [ ].
Jha’643 and Wu’599 do not specifically show the processing circuitry is further configured to: determine, with respect to each of the subregions, whether the subregion is oversegmented or undersegmented by comparing the local metric of the subregion with a threshold value; and organize, with respect to each of the subregions, error data sets each including an oversegmented subregion into a group and error data sets each including an undersegmented subregion into another group.
Saltz’207 shows the processing circuitry is further configured to: determine, with respect to each of the subregions, whether the subregion is oversegmented or undersegmented by comparing the local metric of the subregion with a threshold value (Figure 3 and paragraph [0075] describe classifying/grouping the regions of interest, partitioned into equal-sized patches, as good, bad, under-, or over-segmented; paragraph [0106] describes a user-defined threshold value that determines the contour of each nucleus); and organize, with respect to each of the subregions, error data sets each including an oversegmented subregion into a group and error data sets each including an undersegmented subregion into another group (paragraph [0072] describes classification models trained using bad and under-segmented patches and another trained using good and over-segmented patches; together the models predict the label and train the segmentation algorithm).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Jha’643 and Wu’599 to determine, with respect to each of the subregions, whether the subregion is oversegmented or undersegmented by comparing the local metric of the subregion with a threshold value, and to organize, with respect to each of the subregions, error data sets each including an oversegmented subregion into a group and error data sets each including an undersegmented subregion into another group, as taught by Saltz’207. The suggestion/motivation for doing so would have been to improve the system’s ability to increase prediction quality with suggested threshold values for segmentation (paragraph [0052]).
Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Jha et al. (US 2022/0284643 A1) in view of Wu et al. (US 10872599 B1) and further in view of Do (US 2020/0286614 A1).
With respect to Claim 8, Jha’643 and Wu’599 do not specifically show the model training device according to claim 1, wherein the processing circuitry is further configured to: perform a learning curve fitting process on the model, by testing the initial learning model with respect to each of the error data set groups, while using a set made up of data sets included in the error data set group as a test set; and specify the model training information by predicting a quantity of pieces of learning data to be acquired or a precision level of the model required to construct a model corresponding to characteristics of the error data set groups on a basis of a learning curve resulting from the fitting process.
Do’614 shows the processing circuitry is further configured to: perform a learning curve fitting process on the model, by testing the initial learning model with respect to each of the error data set groups, while using a set made up of data sets included in the error data set group as a test set (Figures 8-9 and paragraphs [0078]-[0079] describe a fitted learning curve with regard to training data); and specify the model training information by predicting a quantity of pieces of learning data to be acquired or a precision level of the model required to construct a model corresponding to characteristics of the error data set groups on a basis of a learning curve resulting from the fitting process (Figures 8-9 and paragraphs [0078]-[0079] exemplify that seed sizes 5 to 50 increase accuracy significantly as compared to seed sizes 100 to 200, describing a predicted quantity of pieces of learning data of less than 100).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Jha’643 and Wu’599 such that the processing circuitry is further configured to: perform a learning curve fitting process on the model, by testing the initial learning model with respect to each of the error data set groups, while using a set made up of data sets included in the error data set group as a test set; and specify the model training information by predicting a quantity of pieces of learning data to be acquired or a precision level of the model required to construct a model corresponding to characteristics of the error data set groups on a basis of a learning curve resulting from the fitting process, as taught by Do’614. The suggestion/motivation for doing so would have been to improve the system’s ability to fine-tune the model with increased accuracy without additional training data (paragraph [0079]).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Lyman et al. (US 2022/0051114 A1): paragraphs [0049], [0100] and figure 13A shows training a model with medical scans and using subsets for training wherein the training parameters can also include training error data that indicates a training error associated with the medical scan analysis function, for example, based on applying cross validation indicated in testing data.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to IRIANA CRUZ whose telephone number is (571)270-3246. The examiner can normally be reached 10-6.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Akwasi M. Sarpong can be reached at (571) 270-3438. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/IRIANA CRUZ/ Primary Examiner, Art Unit 2681