Prosecution Insights
Last updated: April 19, 2026
Application No. 17/953,504

APPARATUS AND COMPUTER-IMPLEMENTED METHOD FOR TRAINING A MACHINE LEARNING SYSTEM FOR MAPPING A SCAN STUDY TO A STANDARDIZED IDENTIFIER CODE

Status: Final Rejection (§103)
Filed: Sep 27, 2022
Examiner: MEYER, JACQUELINE CHRISTINE
Art Unit: 2144
Tech Center: 2100 — Computer Architecture & Software
Assignee: Siemens Healthineers AG
OA Round: 2 (Final)
Grant Probability: 62% (Moderate)
Expected OA Rounds: 3-4
Expected Time to Grant: 4y 3m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 62% (8 granted / 13 resolved; +6.5% vs TC avg)
Interview Lift: +67.5% (strong), comparing resolved cases with an interview vs. without
Avg Prosecution: 4y 3m (typical timeline)
Currently Pending: 24
Total Applications: 37 (career history, across all art units)

Statute-Specific Performance

§101: 30.1% (-9.9% vs TC avg)
§103: 44.5% (+4.5% vs TC avg)
§102: 9.9% (-30.1% vs TC avg)
§112: 12.7% (-27.3% vs TC avg)
Tech Center averages are estimates. Based on career data from 13 resolved cases.

Office Action

§103
DETAILED ACTION

This final Office action is responsive to the amendment filed on December 29, 2025. Claims 1-13 and 15-20 are pending. Claims 1, 4, and 20 are independent. Claim 14 is canceled.

The rejection of claim 14 under 35 U.S.C. §101, for being directed to non-statutory subject matter, is moot in light of applicant’s cancellation of claim 14. The rejections of claims 1-20 under 35 U.S.C. §101 are withdrawn in light of applicant’s arguments; see the Response to Arguments section below. The rejections of claims 1-20 under 35 U.S.C. §103 are withdrawn in light of applicant’s amendment and arguments; however, a new ground of rejection has been made. See the Claim Rejections – 35 USC §103 and Response to Arguments sections below.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art.
The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked. As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. That presumption is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. That presumption is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitations use a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitations are: an active learning module configured to train, a labeling task determining module configured to select, a labeling module configured to obtain, and a machine learning system training module configured to train, in claim 1; and a human interaction module configured to display and obtain, in claim 2. Each of these modules is being interpreted under 35 U.S.C. 112(f) as a set of instructions, and the broadest reasonable interpretation would include software, a computer program, or a computer function as described in the specification.
(Paragraph 0019: “in cases where one or more modules are provided as software, the modules may be implemented by program code sections or program code snippets”; paragraph 0020: “similarly, in cases where one or more modules are provided as hardware, the functions of one or more modules may be provided by one and the same hardware component…”; and paragraph 00149: “the term ‘module’ may refer to, be part of, or include processor hardware (shared, dedicated, or group) that executes code and memory hardware (shared, executed, or group) that stores code executed by the processor hardware.”)

Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitations to avoid such interpretation (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C.
103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 2, 4, 5, 13, 15, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Ghose et al. (US20220114389), hereinafter Ghose, in view of Ionasec et al. (US20190221304), hereinafter Ionasec, further in view of Wigness et al. (Efficient Label Collection for Unlabeled Image Datasets), hereinafter Wigness.

Regarding claim 1, Ghose teaches:

An apparatus for training a machine learning system for mapping a scan study to a standardized identifier code of a standardized identifier code dictionary, the apparatus comprising: (Ghose, paragraph 0046: “In the exemplary embodiment, the automatic labeling system 101 includes an automatic labeling computing device 103 (FIG. 1D).” and paragraph 0027: “The suboptimal labels are optimized using the representative templates. The methods described herein may be referred to as anatomy-guided registration. The automatically-generated ground truth may be used to generate labels of images of an increased accuracy, optimize machine learning models, and ensure quality of classification and segmentation systems.” – The computing device is analogous to the apparatus, with the automatic labeling system being analogous to the machine learning system for mapping a scan study to a standardized identifier code. The representative templates being used for the labels are analogous to a standardized identifier code dictionary.)
an input interface configured to obtain a base set of scan studies; (Ghose, paragraph 0032: “In the exemplary embodiment, a training dataset 115 is started with a relatively small number of training images” and paragraph 0065: “In the exemplary embodiment, the computing device 800 includes a user interface 804 that receives at least one input from a user. The user interface 804 may include a keyboard 806 that enables the user to input pertinent information. The user interface 804 may also include, for example, a pointing device, a mouse, a stylus, a touch sensitive panel (e.g., a touch pad and a touch screen), a gyroscope, an accelerometer, a position detector, and/or an audio input interface (e.g., including a microphone).” – The user interface, including a touch sensitive panel and/or an audio input interface, is analogous to the input interface. The pertinent information being input is analogous to the system receiving the base set of scan studies, i.e., training images.)

a computing device configured to implement at least a clustering module to classify, using a clustering algorithm, scan studies in the base set of scan studies into a plurality of clusters; and (Ghose, paragraph 0032: “During clustering 102, a similarity metric is measured for each of the training images against another training image in the training dataset. A similarity metric measures the similarity between two images 104. In some embodiments, the similarity metric, such as normalized cross correlation and mutual information, is based on a comparison of the entire images. In other embodiments, the similarity metric, such as Dice similarity coefficient and surface distance, is based on the labeled structures or the ROI images. The similarity metric may also be based on features in a deep learning or traditional machine learning framework.” – Clustering the training images is analogous to the clustering module classifying the scan studies into a plurality of clusters.)
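Editorial note (not part of the Office action): the clustering step Ghose describes, grouping studies by a pairwise similarity metric, is a conventional unsupervised-learning operation. The sketch below is a minimal k-means-style illustration under assumed inputs; the per-study feature vectors and the Euclidean distance stand in for whichever similarity basis is used (normalized cross correlation, Dice coefficient, deep features, etc.), and the function name is our own.

```python
import numpy as np

def cluster_scan_studies(features: np.ndarray, n_clusters: int = 3,
                         n_iter: int = 50) -> np.ndarray:
    """Toy k-means over per-study feature vectors.

    `features` has shape (n_studies, n_features); each row is a stand-in
    for a scan study's similarity representation. Returns one cluster
    index per study. Illustrative only; not the reference's algorithm.
    """
    # Deterministic greedy farthest-point initialization of the centers.
    centers = [features[0]]
    for _ in range(n_clusters - 1):
        d = np.min([np.linalg.norm(features - c, axis=1) for c in centers], axis=0)
        centers.append(features[int(d.argmax())])
    centers = np.array(centers)

    labels = np.zeros(len(features), dtype=int)
    for _ in range(n_iter):
        # Assign each study to its nearest center (Euclidean distance).
        dists = np.linalg.norm(features[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute centers; keep the old center if a cluster emptied out.
        for k in range(n_clusters):
            if (labels == k).any():
                centers[k] = features[labels == k].mean(axis=0)
    return labels
```

On this view, "selecting at least one scan study from each cluster" then reduces to picking, per cluster index, the study nearest its center as a representative.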
an active learning module configured to train the machine learning system, the active learning module including (Ghose, paragraph 0062: “Additionally or alternatively, the machine learning programs may be trained by inputting sample data sets or certain data into the programs, such as images, object statistics, and information.” – The machine learning program is analogous to the active learning module.)

a labelling task determining module configured to select at least one scan study from each cluster among the plurality of clusters, (Ghose, paragraph 0034: “The representative template 120 includes a representative image 104-rep and a corresponding label 110-rep of the representative image 104-rep.” – The representative template including a representative image is analogous to selecting one scan study, as the representative template is based on the clusters (paragraph 0027).)

a machine learning system training module configured to train the machine learning system based on the training set of labelled scan studies; (Ghose, paragraph 0037: “In the exemplary embodiments, the method 100 includes training 128 the neural network model with a training dataset. The training dataset includes training images 104-tn and their training labels 110-tn.” – The method including training the neural network model is analogous to the module configured to train the machine learning system, while the training dataset is analogous to the training set of labelled scan studies.)

wherein the active learning module is further configured to re-train the machine learning system by performing at least one refinement loop, the at least one refinement loop including re-training the machine learning system using at least the enlarged training set of labelled scan studies.
(Ghose, paragraph 0044: “The neural network model 126 may be retrained using the updated training dataset.”)

Ghose does not explicitly teach:

a labelling module configured to obtain standardized identifier code labels for the selected scan studies in order to generate a training set of labelled scan studies, and

determining, from the base set of scan studies, an additional set of scan studies based on evaluation metrics associated with scan studies in the additional set of scan studies, wherein an evaluation metric associated with a scan study is based on at least one of an entropy of the scan study or a position of a data point representing the scan study in a data point space used for classifying the scan study into a cluster,

obtaining standardized identifier code labels for scan studies in the additional set of scan studies in order to enlarge the training set of labelled scan studies, and

However, Ionasec teaches:

a labelling module configured to obtain standardized identifier code labels for the selected scan studies in order to generate a training set of labelled scan studies, and (Ionasec, paragraph 0085: “For a given training data set of radiological examination reports and image data sets, the second analysis algorithms for extraction of information from text may be used to generate labels in form of activated medical concepts by analysing an examination report associated with the image data set.” – The activated medical concepts are analogous to the standardized identifier code labels for the scan studies, which are used to create labels for the training data, while the second analysis algorithm is analogous to the labelling module.)
obtaining standardized identifier code labels for scan studies in the additional set of scan studies in order to enlarge the training set of labelled scan studies, and (Ionasec, paragraph 0085: “Preferably, training data sets are determined by applying the at least one second analysis algorithm to a training examination report associated with a training image data set, wherein the results of the second analysis algorithm at least partly form the ground truth for the training image data set. For a given training data set of radiological examination reports and image data sets, the second analysis algorithms for extraction of information from text may be used to generate labels in form of activated medical concepts by analysing an examination report associated with the image data set.” – The algorithm being able to apply multiple training data sets indicates that it can use the updated training set as input. The activated medical concepts are analogous to the standardized identifier code labels for the scan studies, which are used to create labels for the training data.)

Ionasec is considered analogous to the claimed invention as it is in the same field of endeavor, machine learning. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to have modified Ghose, which already teaches a machine learning system for mapping a scan study to a standardized code but does not explicitly teach obtaining standardized identifier code labels for the training sets, to include the teachings of Ionasec, which does teach obtaining standardized identifier code labels for the training sets, in order to provide "a foundation for the development and use of evaluation applications, rendering these more robust and providing a larger scope of applicability, in particular for large fields of radiology and medicine and/or large numbers of cases."
(Ionasec, paragraph 0008)

Ghose and Ionasec do not explicitly teach:

determining, from the base set of scan studies, an additional set of scan studies based on evaluation metrics associated with scan studies in the additional set of scan studies, wherein an evaluation metric associated with a scan study is based on at least one of an entropy of the scan study or a position of a data point representing the scan study in a data point space used for classifying the scan study into a cluster,

However, Wigness teaches:

determining, from the base set of scan studies, an additional set of scan studies based on evaluation metrics associated with scan studies in the additional set of scan studies, wherein an evaluation metric associated with a scan study is based on at least one of an entropy of the scan study or a position of a data point representing the scan study in a data point space used for classifying the scan study into a cluster, (Wigness, section 3.1, paragraph 2: “Local structural change is found by comparing the internal structure of a group, c, to one of its ancestors. In this paper, the comparison is modeled as the angle between c, and its parent, p (relationship seen in Figure 3).” and paragraph 3: “Most groups in H have at least some structural difference from their parent, but S should represent only the splits that are likely to result from a change of concept. To detect these transitions, HCGL looks for large changes in structure followed by a lack of structural change in local neighborhoods of H. In other words, if the structural change of c is a local peak with respect to p and its children, cr and cl (relationship illustrated in Figure 3), it is added to S.” – The groups represent the additional scan studies already taught by Ghose, and the structural changes represent the position of a data point in the scan study that is used as an evaluation metric to determine whether the additional groups are added to the set S.
Algorithm 1 in section 3.2 shows that this process is iterated until a stopping condition is met and, therefore, is performed in a refinement loop.)

Wigness is considered analogous to the claimed invention as it is in the same field of endeavor, machine learning. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to have modified Ghose and Ionasec, which already teach an active learning model to re-train the machine learning system but do not explicitly teach that part of the re-training includes determining an additional set of scan studies based on evaluation metrics associated with scan studies, to include the teachings of Wigness, which does teach this, since “this approach is more efficient than existing labeling techniques, and achieves higher classification accuracy.” (Wigness, abstract)

Regarding claim 2, Ghose, Ionasec, and Wigness teach the apparatus of claim 1, as cited above. Ghose further teaches:

display the scan studies selected by the labelling task determining module to a user as labelling tasks using a graphical user interface, and (Ghose, paragraph 0045: “The label of the unlabeled image generated by the retrained neural network model may be output to a user interface. The label may be displayed by itself, or as being overlaid over the image.” and paragraph 0065: “Moreover, in the exemplary embodiment, computing device 800 includes a display interface 817 that presents information, such as input events and/or validation results, to the user.” – The label being output to a user interface and displayed is analogous to the scan study being displayed to a user using a graphical user interface.)

obtain labels for the selected and displayed scan studies as responses by the user to respective labelling tasks.
(Ghose, paragraph 0032: “In the exemplary embodiment, a training dataset 115 is started with a relatively small number of training images 104-tn, e.g., 75 images, which are manually labeled by a rater such as a radiologist or technologist to generate training labels 110-tn for the training images 104-tn.” – The images being manually labeled by a rater to generate training labels for the training images is analogous to obtaining the labels as responses by the user.)

Regarding claim 4, Ghose teaches the claimed computer-implemented method:

obtaining a base set of scan studies; (Ghose, paragraph 0032: “In the exemplary embodiment, a training dataset 115 is started with a relatively small number of training images” – Starting the dataset is analogous to receiving the base set of scan studies, i.e., training images.)

classifying, using a clustering algorithm, scan studies in the base set of scan studies into a plurality of clusters; (Ghose, paragraph 0032: “During clustering 102, a similarity metric is measured for each of the training images against another training image in the training dataset. A similarity metric measures the similarity between two images 104. In some embodiments, the similarity metric, such as normalized cross correlation and mutual information, is based on a comparison of the entire images. In other embodiments, the similarity metric, such as Dice similarity coefficient and surface distance, is based on the labeled structures or the ROI images. The similarity metric may also be based on features in a deep learning or traditional machine learning framework.” – Clustering the training images is analogous to classifying the scan studies into a plurality of clusters.)
selecting at least one scan study from each cluster among the plurality of clusters; (Ghose, paragraph 0034: “The representative template 120 includes a representative image 104-rep and a corresponding label 110-rep of the representative image 104-rep.” – The representative template including a representative image is analogous to selecting one scan study, as the representative template is based on the clusters (paragraph 0027).)

training a machine learning system, using the labelled scan studies, to map individual scan studies to a corresponding standardized identifier code of the standardized identifier code dictionary; (Ghose, paragraph 0037: “In the exemplary embodiments, the method 100 includes training 128 the neural network model with a training dataset. The training dataset includes training images 104-tn and their training labels 110-tn.” and paragraph 0027: “The suboptimal labels are optimized using the representative templates. The methods described herein may be referred to as anatomy-guided registration. The automatically-generated ground truth may be used to generate labels of images of an increased accuracy, optimize machine learning models, and ensure quality of classification and segmentation systems.” – The method including training the neural network model is analogous to training the machine learning system, while the training dataset is analogous to the training set of labelled scan studies, and the representative templates being used for the labels are analogous to a standardized identifier code dictionary.)

performing at least one refinement loop including re-training the machine learning system using at least the enlarged training set of labelled scan studies.
(Ghose, paragraph 0044: “The neural network model 126 may be retrained using the updated training dataset.”)

Ghose does not explicitly teach:

obtaining standardized identifier code labels for the selected scan studies in order to generate a training set of labelled scan studies;

determining, from the base set of scan studies, an additional set of scan studies based on evaluation metrics associated with scan studies in the additional set of scan studies, wherein an evaluation metric associated with a scan study is based on at least one of an entropy of the scan study or a position of a data point representing the scan study in a data point space used for classifying the scan study into a cluster,

obtaining standardized identifier code labels for scan studies in the additional set of scan studies in order to enlarge the training set of labelled scan studies, and

However, Ionasec teaches:

obtaining standardized identifier code labels for the selected scan studies in order to generate a training set of labelled scan studies; (Ionasec, paragraph 0085: “For a given training data set of radiological examination reports and image data sets, the second analysis algorithms for extraction of information from text may be used to generate labels in form of activated medical concepts by analysing an examination report associated with the image data set.” – The activated medical concepts are analogous to the standardized identifier code labels for the scan studies, which are used to create labels for the training data.)
obtaining standardized identifier code labels for scan studies in the additional set of scan studies in order to enlarge the training set of labelled scan studies, and (Ionasec, paragraph 0085: “Preferably, training data sets are determined by applying the at least one second analysis algorithm to a training examination report associated with a training image data set, wherein the results of the second analysis algorithm at least partly form the ground truth for the training image data set. For a given training data set of radiological examination reports and image data sets, the second analysis algorithms for extraction of information from text may be used to generate labels in form of activated medical concepts by analysing an examination report associated with the image data set.” – The algorithm being able to apply multiple training data sets indicates that it can use as input the updated training set taught by Ghose above. The activated medical concepts are analogous to the standardized identifier code labels for the scan studies, which are used to create labels for the training data.)

Ionasec is considered analogous to the claimed invention as it is in the same field of endeavor, machine learning. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to have modified Ghose, which already teaches a machine learning system for mapping a scan study to a standardized identifier code but does not explicitly teach obtaining standardized identifier code labels for the training sets, to include the teachings of Ionasec, which does teach obtaining standardized identifier code labels for the training sets, in order to provide "a foundation for the development and use of evaluation applications, rendering these more robust and providing a larger scope of applicability, in particular for large fields of radiology and medicine and/or large numbers of cases."
(Ionasec, paragraph 0008)

Ghose and Ionasec do not explicitly teach:

determining, from the base set of scan studies, an additional set of scan studies based on evaluation metrics associated with scan studies in the additional set of scan studies, wherein an evaluation metric associated with a scan study is based on at least one of an entropy of the scan study or a position of a data point representing the scan study in a data point space used for classifying the scan study into a cluster,

However, Wigness teaches:

determining, from the base set of scan studies, an additional set of scan studies based on evaluation metrics associated with scan studies in the additional set of scan studies, wherein an evaluation metric associated with a scan study is based on at least one of an entropy of the scan study or a position of a data point representing the scan study in a data point space used for classifying the scan study into a cluster, (Wigness, section 3.1, paragraph 2: “Local structural change is found by comparing the internal structure of a group, c, to one of its ancestors. In this paper, the comparison is modeled as the angle between c, and its parent, p (relationship seen in Figure 3).” and paragraph 3: “Most groups in H have at least some structural difference from their parent, but S should represent only the splits that are likely to result from a change of concept. To detect these transitions, HCGL looks for large changes in structure followed by a lack of structural change in local neighborhoods of H. In other words, if the structural change of c is a local peak with respect to p and its children, cr and cl (relationship illustrated in Figure 3), it is added to S.” – The groups represent the additional scan studies already taught by Ghose, and the structural changes represent the position of a data point in the scan study that is used as an evaluation metric to determine whether the additional groups are added to the set S.
Algorithm 1 in section 3.2 shows that this process is iterated until a stopping condition is met and, therefore, is performed in a refinement loop.)

Wigness is considered analogous to the claimed invention as it is in the same field of endeavor, machine learning. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to have modified Ghose and Ionasec, which already teach an active learning model to re-train the machine learning system but do not explicitly teach that part of the re-training includes determining an additional set of scan studies based on evaluation metrics associated with scan studies, to include the teachings of Wigness, which does teach this, since “this approach is more efficient than existing labeling techniques, and achieves higher classification accuracy.” (Wigness, abstract)

Regarding claim 5, Ghose, Ionasec, and Wigness teach the computer-implemented method of claim 4, as cited above. Ghose further teaches:

wherein the standardized identifier code labels are obtained by presenting, using a graphical user interface, a user with labelling tasks for the selected scan studies (Ghose, paragraph 0045: “The label of the unlabeled image generated by the retrained neural network model may be output to a user interface. The label may be displayed by itself, or as being overlaid over the image.” and paragraph 0065: “Moreover, in the exemplary embodiment, computing device 800 includes a display interface 817 that presents information, such as input events and/or validation results, to the user.” – The label being output to a user interface and displayed is analogous to the scan study being displayed to a user using a graphical user interface.)

and receiving user input as labels for the selected scan studies.
(Ghose, paragraph 0032: “In the exemplary embodiment, a training dataset 115 is started with a relatively small number of training images 104-tn, e.g., 75 images, which are manually labeled by a rater such as a radiologist or technologist to generate training labels 110-tn for the training images 104-tn.” – The images being manually labeled by a rater to generate training labels for the training images is analogous to obtaining the labels as responses by the user.)

Regarding claim 13, Ghose, Ionasec, and Wigness teach the computer-implemented method of claim 4, as cited above. Ghose further teaches: using a machine learning system trained using the method according to claim 4 to map the scan study to the standardized identifier code of the standardized identifier code dictionary. (Ghose, paragraph 0045: “In the exemplary embodiment, the method 100 may further include segmenting anatomical structures of interest of an unlabeled image using the retrained neural network model. The label of the unlabeled image generated by the retrained neural network model may be output to a user interface. The label may be displayed by itself, or as being overlaid over the image. The labels may also be used to classify images, for example, classifying the images into categories such as cervical images or lumbar images based on the labels.” – The machine learning system being trained to map the scan study to the standardized identifier code of the standardized identifier code dictionary is taught by Ghose, Ionasec, and Wigness in claim 4. The method using the retrained model to label and classify the images is analogous to using the method to map the scan studies.)

Regarding claim 15, Ghose, Ionasec, and Wigness teach the computer-implemented method of claim 4, as cited above.
Ghose further teaches: A non-transitory computer-readable storage medium including executable program code that, when executed by at least one processor, causes the at least one processor to perform the computer-implemented method according to claim 4. (Ghose, paragraph 0067: “In the exemplary embodiment, the memory device 818 includes one or more devices that enable information, such as executable instructions and/or other data, to be stored and retrieved. Moreover, the memory device 818 includes one or more computer readable media, such as, without limitation, dynamic random access memory (DRAM), static random access memory (SRAM), a solid state disk, and/or a hard disk.”)

Regarding claim 20, claim 20 recites all the same limitations as claim 4, which are taught by Ghose, Ionasec, and Wigness – see claim 4 above. Ghose additionally teaches: a memory storing computer-readable instructions; and at least one processor configured to execute the computer-readable instructions to cause the apparatus to (Ghose, paragraph 0067: “In the exemplary embodiment, the memory device 818 includes one or more devices that enable information, such as executable instructions and/or other data, to be stored and retrieved. Moreover, the memory device 818 includes one or more computer readable media, such as, without limitation, dynamic random access memory (DRAM), static random access memory (SRAM), a solid state disk, and/or a hard disk.”)

Claims 3, 11, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Ghose in view of Ionasec in view of Wigness, and further in view of Amthor et al. (US20230343449), hereinafter Amthor.

Regarding claim 3, Ghose, Ionasec, and Wigness teach the apparatus of claim 1, as cited above. Ghose, Ionasec, and Wigness do not explicitly teach: a protocol determining artificial neural network configured to determine, for a scan study, a protocol name with which the scan study is to be designated.
However, Amthor teaches: a protocol determining artificial neural network configured to determine, for a scan study, a protocol name with which the scan study is to be designated. (Amthor, paragraph 0048: “At an operation 104, the extracted image features are converted into a representation 43 (i.e., the abstract representation) of a current status of the medical imaging examination. The operation 104 is performed by the abstract representation module 42. To generate the representation 43, the extracted image features are input into a generic imaging examination workflow model that is independent of a format of the image features displayed on the display device 24′ of the medical imaging device controller 10. The representation 43 includes one or more of: a number of scans, a remaining scan time, a weight value of a patient to be scanned, a time elapsed since a start of the medical imaging examination, a number of rescans, a name of a scan protocol, a progress of a current medical imaging examination, a heart rate of the patient to be scanned, and a breathing rate of the patient to be scanned.” – The generic imaging examination workflow model is analogous to the protocol determining artificial neural network. The representation including a name of a scan protocol indicates that generating the representation is analogous to determining a protocol name.)

Amthor is considered analogous to the claimed invention as it is in the same field of endeavor, machine learning.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to have modified Ghose, Ionasec, and Wigness, which already teach the machine learning system to map scan studies to a standardized identifier code but do not explicitly teach determining a protocol name with which the scan study is to be designated, to include the teachings of Amthor, which does teach determining a protocol name with which the scan study is to be designated, so that “the information can be displayed in a generic way that allows the remote expert RE to quickly understand the status of the medical imaging examination.” (Amthor, paragraph 0041)

Regarding claim 11, Ghose, Ionasec, and Wigness teach the computer-implemented method of claim 4, as cited above. Ghose does not explicitly teach: wherein the machine learning system includes a protocol determining artificial neural network configured to determine, for a scan study, a protocol name with which the scan study is to be designated, and wherein mapping of the scan study to the standardized identifier code by the machine learning system is partially, and at least indirectly, based on an output of the protocol determining artificial neural network.

However, Ionasec further teaches: and wherein the mapping of the scan study to the standardized identifier code by the machine learning system is partially, and at least indirectly, based on an output of the protocol determining artificial neural network. (Ionasec, paragraph 0075: “In an embodiment, the input of the evaluation method/system comprises both of unstructured text, in form of radiological examination reports (including/extendible to electronic health records, historical records, other records such as pathology, lab reports, etc.)
and image data sets, in form of radiological images (this includes any image modality, in particular x-ray, MR and/or ultrasound, historical acquisitions, multi-modal images, pathology images, invasive images, etc.).” – The input for the model including information such as historical records, lab reports and image data sets from different imaging techniques indicates that the input is capable of being based on the output of the protocol determining artificial neural network, namely the protocol name that is taught by Amthor below.)

Ghose, Ionasec, and Wigness do not explicitly teach: wherein the machine learning system includes a protocol determining artificial neural network configured to determine, for a scan study, a protocol name with which the scan study is to be designated.

However, Amthor teaches: wherein the machine learning system includes a protocol determining artificial neural network configured to determine, for a scan study, a protocol name with which the scan study is to be designated. (Amthor, paragraph 0048: “At an operation 104, the extracted image features are converted into a representation 43 (i.e., the abstract representation) of a current status of the medical imaging examination. The operation 104 is performed by the abstract representation module 42. To generate the representation 43, the extracted image features are input into a generic imaging examination workflow model that is independent of a format of the image features displayed on the display device 24′ of the medical imaging device controller 10.
The representation 43 includes one or more of: a number of scans, a remaining scan time, a weight value of a patient to be scanned, a time elapsed since a start of the medical imaging examination, a number of rescans, a name of a scan protocol, a progress of a current medical imaging examination, a heart rate of the patient to be scanned, and a breathing rate of the patient to be scanned.” – The generic imaging examination workflow model is analogous to the protocol determining artificial neural network. The representation including a name of a scan protocol indicates that generating the representation is analogous to determining a protocol name.)

Amthor is considered analogous to the claimed invention as it is in the same field of endeavor, machine learning.

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to have modified Ghose, Ionasec, and Wigness, which already teach the machine learning system to map scan studies to a standardized identifier code but do not explicitly teach determining a protocol name with which the scan study is to be designated, to include the teachings of Amthor, which does teach determining a protocol name with which the scan study is to be designated, so that “the information can be displayed in a generic way that allows the remote expert RE to quickly understand the status of the medical imaging examination.” (Amthor, paragraph 0041)

Regarding claim 16, Ghose, Ionasec, and Wigness teach the apparatus of claim 2, as cited above. Ghose, Ionasec, and Wigness do not explicitly teach: a protocol determining artificial neural network configured to determine, for a scan study, a protocol name with which the scan study is to be designated.

However, Amthor teaches: a protocol determining artificial neural network configured to determine, for a scan study, a protocol name with which the scan study is to be designated.
(Amthor, paragraph 0048: “At an operation 104, the extracted image features are converted into a representation 43 (i.e., the abstract representation) of a current status of the medical imaging examination. The operation 104 is performed by the abstract representation module 42. To generate the representation 43, the extracted image features are input into a generic imaging examination workflow model that is independent of a format of the image features displayed on the display device 24′ of the medical imaging device controller 10. The representation 43 includes one or more of: a number of scans, a remaining scan time, a weight value of a patient to be scanned, a time elapsed since a start of the medical imaging examination, a number of rescans, a name of a scan protocol, a progress of a current medical imaging examination, a heart rate of the patient to be scanned, and a breathing rate of the patient to be scanned.” – The generic imaging examination workflow model is analogous to the protocol determining artificial neural network. The representation including a name of a scan protocol indicates that generating the representation is analogous to determining a protocol name.)

Amthor is considered analogous to the claimed invention as it is in the same field of endeavor, machine learning.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to have modified Ghose, Ionasec, and Wigness, which already teach the machine learning system to map scan studies to a standardized identifier code but do not explicitly teach determining a protocol name with which the scan study is to be designated, to include the teachings of Amthor, which does teach determining a protocol name with which the scan study is to be designated, so that “the information can be displayed in a generic way that allows the remote expert RE to quickly understand the status of the medical imaging examination.” (Amthor, paragraph 0041)

Claims 6-10, 12, and 17-19 are rejected under 35 U.S.C. 103 as being unpatentable over Ghose in view of Ionasec in view of Wigness, and further in view of Wang et al. (US20220343638), hereinafter Wang.

Regarding claim 6, Ghose, Ionasec, and Wigness teach the computer-implemented method of claim 4, as cited above. Ghose, Ionasec, and Wigness do not explicitly teach: wherein additional virtual scan studies, or features thereof, are generated for training the machine learning system based on vectorize operations performed on scan studies of the enlarged training set of labelled scan studies, and wherein at least a final re-training of the machine learning system is performed using the enlarged training set of labelled scan studies and the additional virtual scan studies or the features thereof.
However, Wang teaches: wherein additional virtual scan studies, or features thereof, are generated for training the machine learning system based on vectorize operations performed on scan studies of the enlarged training set of labelled scan studies, (Wang, paragraph 0133: “During the training, the composite images provided with classification labels are generated based on the preset classification labels, the one-dimensional Gaussian random vectors and the preset generator model, and the composite image label pairs are finally generated; the sample image label pairs corresponding to the sample images are determined based on the sample images in the training data and the preset classifier model; the sample image label pairs, the preset real image label pairs and the composite image label pairs are input into the preset discriminator model for discrimination to obtain the first discrimination result corresponding to the sample image label pairs, the second discrimination result corresponding to the preset real image label pairs and the third discrimination result corresponding to the composite image label pairs;” – The composite images being generated based on the labels and Gaussian random vectors is analogous to the virtual scans generated based on vectorize operations. These being added to the sample and real data is analogous to enlarging the training set of labelled scan studies.) and wherein at least a final re-training of the machine learning system is performed using the enlarged training set of labelled scan studies and the additional virtual scan studies or the features thereof.
(Wang, paragraph 0133: “the first loss function corresponding to the preset generator model, the second loss function corresponding to the preset discriminator model and the third loss function corresponding to the preset classifier model are calculated based on the first discrimination result, the second discrimination result and the third discrimination result; the network parameters respectively corresponding to the preset generator model, the preset discriminator model and the preset classifier model are updated through gradient descent of the back-propagation algorithm based on the first loss function, the second loss function and the third loss function; and the training is stopped when the first loss function, the second loss function and the third loss function are all converged to obtain the ternary generative adversarial network, that is, to obtain the trained classification model.” – The loss functions converging to obtain the trained classification model is analogous to a final re-training, as the training would continue until the loss functions converged. The loss functions being based on the discrimination results indicates that the training is performed using the labelled scan studies and the virtual scan studies, as the discrimination results are based on those, as shown above.)

Wang is considered analogous to the claimed invention as it is in the same field of endeavor, machine learning.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to have modified Ghose, Ionasec, and Wigness, which already teach a method of training a machine learning system to map a scan study to a standardized identifier code but do not explicitly teach generating additional virtual scan studies based on vectorize operations and re-training the model using the virtual scan studies, to include the teachings of Wang, which does teach generating additional virtual scan studies based on vectorize operations and re-training the model using the virtual scan studies, in order to "solve the problem of the inaccurate classification result due to that a lot of internal structure information and internal correlation information of the image are lost when a medical image is processed by a traditional deep network model." (Wang, paragraph 0004)

Regarding claim 7, Ghose, Ionasec, and Wigness teach the computer-implemented method of claim 4, as cited above. Ghose, Ionasec, and Wigness do not explicitly teach: wherein additional virtual scan studies are generated by adding noise to scan studies for which labels have been obtained, and wherein at least a final re-training of the machine learning system is performed using the enlarged training set of labelled scan studies and the additional virtual scan studies.

However, Wang teaches: wherein additional virtual scan studies are generated by adding noise to scan studies for which labels have been obtained, (Wang, paragraph 0133: “During the training, the composite images provided with classification labels are generated based on the preset classification labels, the one-dimensional Gaussian random vectors and the preset generator model, and the composite image label pairs are finally generated;” and Fig. 5: “noise vector” – The noise vector shown in figure 5 shows that noise is added to the scan studies and, therefore, the composite images (virtual scan studies) are generated by adding noise.) and wherein at least a final re-training of the machine learning system is performed using the enlarged training set of labelled scan studies and the additional virtual scan studies. (Wang, paragraph 0133: “the first loss function corresponding to the preset generator model, the second loss function corresponding to the preset discriminator model and the third loss function corresponding to the preset classifier model are calculated based on the first discrimination result, the second discrimination result and the third discrimination result; the network parameters respectively corresponding to the preset generator model, the preset discriminator model and the preset classifier model are updated through gradient descent of the back-propagation algorithm based on the first loss function, the second loss function and the third loss function; and the training is stopped when the first loss function, the second loss function and the third loss function are all converged to obtain the ternary generative adversarial network, that is, to obtain the trained classification model.” – The loss functions converging to obtain the trained classification model is analogous to a final re-training, as the training would continue until the loss functions converged. The loss functions being based on the discrimination results indicates that the training is performed using the labelled scan studies and the virtual scan studies, as the discrimination results are based on those, as shown in paragraph 0133.)

Wang is considered analogous to the claimed invention as it is in the same field of endeavor, machine learning.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to have modified Ghose, Ionasec, and Wigness, which already teach a method of training a machine learning system to map a scan study to a standardized identifier code but do not explicitly teach generating additional virtual scan studies by adding noise to the scan studies and re-training the model using the virtual scan studies, to include the teachings of Wang, which does teach generating additional virtual scan studies by adding noise to the scan studies and re-training the model using the virtual scan studies, in order to "solve the problem of the inaccurate classification result due to that a lot of internal structure information and internal correlation information of the image are lost when a medical image is processed by a traditional deep network model." (Wang, paragraph 0004)

Regarding claim 8, Ghose, Ionasec, and Wigness teach the computer-implemented method of claim 4, as cited above. Ghose, Ionasec, and Wigness do not explicitly teach: generating representations for standardized identifier codes based on weighted unigrams.

However, Wang teaches: generating representations for standardized identifier codes based on weighted unigrams. (Wang, paragraph 0117: “The terminal weights the first feature map based on the calculated weight vector, so that a weight of an important channel in the first feature map is larger and a weight of an unimportant channel is smaller, so as to obtain a more representative global high-order feature map.” – The feature map is analogous to the representations for standardized identifier code as it is also used to determine the weights, see e.g. paragraphs 0034-0037; therefore, the weight vector is analogous to the weighted unigrams.)

Wang is considered analogous to the claimed invention as it is in the same field of endeavor, machine learning.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to have modified Ghose, Ionasec, and Wigness, which already teach a method of training a machine learning system to map a scan study to a standardized identifier code but do not explicitly teach using weighted unigrams to generate the representations for standardized identifier codes, to include the teachings of Wang, which does teach using weighted unigrams to generate the representations for standardized identifier codes, in order to "solve the problem of the inaccurate classification result due to that a lot of internal structure information and internal correlation information of the image are lost when a medical image is processed by a traditional deep network model." (Wang, paragraph 0004)

Regarding claim 9, Ghose, Ionasec, Wigness, and Wang teach the computer-implemented method of claim 8, as cited above. Ghose, Ionasec, and Wigness do not explicitly teach: wherein the representations for the standardized identifier codes are updated at least once based on the standardized identifier code labels.

However, Wang further teaches: wherein the representations for the standardized identifier codes are updated at least once based on the standardized identifier code labels. (Wang, paragraph 0117: “Specifically, the trained classifier model uses the back-propagation algorithm to make the weight of the important channel in the first feature map larger and the weight of the unimportant channel in the first feature map smaller, so as to extract more representative feature information and thus obtain the global high-order feature map.” – The feature maps being updated using the back-propagation algorithm is analogous to the representations being updated at least once.)

Regarding claim 10, Ghose, Ionasec, Wigness, and Wang teach the computer-implemented method of claim 9, as cited above.
Ghose, Ionasec, and Wigness do not explicitly teach: wherein the representations for the standardized identifier codes are updated by changing weights of the weighted unigrams within the representations based on a determination of how impactful at least one of an addition or a deletion of each weighted unigram is for deciding whether a specific scan study is classified into a particular standardized identifier code.

However, Wang further teaches: wherein the representations for the standardized identifier codes are updated by changing weights of the weighted unigrams within the representations based on a determination of how impactful at least one of an addition or a deletion of each weighted unigram is for deciding whether a specific scan study is classified into a particular standardized identifier code. (Wang, paragraph 0117: “Specifically, the trained classifier model uses the back-propagation algorithm to make the weight of the important channel in the first feature map larger and the weight of the unimportant channel in the first feature map smaller, so as to extract more representative feature information and thus obtain the global high-order feature map.” – The weights being made larger and smaller by the back-propagation algorithm is analogous to updating the representations by changing the weights. The weight of the important channel being larger and the weight of the unimportant channel being smaller is analogous to a determination of how impactful it is on the scan study, as the addition of the important channel would have a bigger impact on the classification while the deletion of the unimportant channel would have a smaller impact on the classification. Thus, making the weight larger would indicate an addition of the weighted unigram, while making the weight smaller, e.g., to zero, would indicate deletion of the weighted unigram.)

Regarding claim 12, Ghose, Ionasec, and Wigness teach the computer-implemented method of claim 4, as cited above.
Ghose, Ionasec, and Wigness do not explicitly teach: wherein the at least one refinement loop is iterated until an abort criterion is fulfilled, and wherein the abort criterion includes at least one of a threshold number of labels has been obtained, a threshold number of iterations has been performed, or performing of the re-training of the machine learning system no longer improves significantly above a certain threshold or remains constant after a certain threshold.

However, Wang teaches: wherein the at least one refinement loop is iterated until an abort criterion is fulfilled, and wherein the abort criterion includes at least one of a threshold number of labels has been obtained, a threshold number of iterations has been performed, or performing of the re-training of the machine learning system no longer improves significantly above a certain threshold or remains constant after a certain threshold. (Wang, paragraph 0028: “stopping training when the first loss function, the second loss function and the third loss function all converge to obtain the ternary generative adversarial network.” – Stopping training when the loss functions all converge is analogous to the system remaining constant after a certain threshold (convergence) is reached.)

Wang is considered analogous to the claimed invention as it is in the same field of endeavor, machine learning.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to have modified Ghose, Ionasec, and Wigness, which already teach a refinement loop for re-training the machine learning system but do not explicitly teach an abort criterion for terminating the iteration of the refinement loop, to include the teachings of Wang, which does teach an abort criterion for terminating the iteration of the refinement loop, in order to "solve the problem of the inaccurate classification result due to that a lot of internal structure information and internal correlation information of the image are lost when a medical image is processed by a traditional deep network model." (Wang, paragraph 0004)

Regarding claim 17, Ghose, Ionasec, Wigness, and Wang teach the computer-implemented method of claim 6, as cited above. Ghose, Ionasec, and Wigness do not explicitly teach: generating representations for the standardized identifier codes based on weighted unigrams.

However, Wang further teaches: generating representations for the standardized identifier codes based on weighted unigrams. (Wang, paragraph 0117: “The terminal weights the first feature map based on the calculated weight vector, so that a weight of an important channel in the first feature map is larger and a weight of an unimportant channel is smaller, so as to obtain a more representative global high-order feature map.” – The feature map is analogous to the representations for standardized identifier code as it is also used to determine the weights, see e.g. paragraphs 0034-0037; therefore, the weight vector is analogous to the weighted unigrams.)

Regarding claim 18, Ghose, Ionasec, Wigness, and Wang teach the computer-implemented method of claim 7, as cited above. Ghose, Ionasec, and Wigness do not explicitly teach: generating representations for the standardized identifier codes based on weighted unigrams.
However, Wang further teaches: generating representations for the standardized identifier codes based on weighted unigrams. (Wang, paragraph 0117: “The terminal weights the first feature map based on the calculated weight vector, so that a weight of an important channel in the first feature map is larger and a weight of an unimportant channel is smaller, so as to obtain a more representative global high-order feature map.” – The feature map is analogous to the representations for standardized identifier code as it is also used to determine the weights, see e.g. paragraphs 0034-0037; therefore, the weight vector is analogous to the weighted unigrams.)

Regarding claim 19, Ghose, Ionasec, Wigness, and Wang teach the computer-implemented method of claim 8, as cited above. Ghose, Ionasec, and Wigness do not explicitly teach: wherein the refinement loop is iterated until an abort criterion is fulfilled, and wherein the abort criterion includes at least one of a threshold number of labels has been obtained, a threshold number of iterations has been performed, or performing of the re-training of the machine learning system no longer improves significantly above a certain threshold or remains constant after a certain threshold.

However, Wang further teaches: wherein the refinement loop is iterated until an abort criterion is fulfilled, and wherein the abort criterion includes at least one of a threshold number of labels has been obtained, a threshold number of iterations has been performed, or performing of the re-training of the machine learning system no longer improves significantly above a certain threshold or remains constant after a certain threshold.
(Wang, paragraph 0028: “stopping training when the first loss function, the second loss function and the third loss function all converge to obtain the ternary generative adversarial network.” – Stopping training when the loss functions all converge is analogous to the system remaining constant after a certain threshold (convergence) is reached.)

Response to Arguments

Applicant’s arguments on pages 14 and 15 of Applicant’s Remarks, filed December 29, 2025, with respect to claim rejections of claims 1-20 under 35 USC §101 have been fully considered and are persuasive. In particular, Applicant persuasively argues that the claims improve “the functioning of conventional machine learning systems (e.g., by at least reducing the number of labels used) in a specific technical field (medical imaging), thereby integrating any alleged abstract idea into a practical application.” The rejection of September 25, 2025 of claims 1-20 under 35 USC § 101 has been withdrawn.

Applicant’s arguments with respect to the rejection(s) of claim(s) 1 under 35 USC §103 have been fully considered and are persuasive. In particular, the argument is persuasive that none of the cited prior art teaches the amended limitation “determining, from the base set of scan studies, an additional set of scan studies based on evaluation metrics associated with scan studies in the additional set of scan studies, wherein an evaluation metric associated with a scan study is based on at least one of an entropy of the scan study or a position of a data point representing the scan study in a data point space used for classifying the scan study into a cluster.” Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Wigness, which does teach the amended limitation. See section Claim Rejections – 35 USC §103 above.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL.
See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JACQUELINE MEYER, whose telephone number is (703) 756-5676. The examiner can normally be reached M-F, 8:00 am - 4:30 pm EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Tamara Kyle, can be reached at 571-272-4241. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/J.C.M./
Examiner, Art Unit 2144

/TAMARA T KYLE/
Supervisory Patent Examiner, Art Unit 2144

Prosecution Timeline

Sep 27, 2022: Application Filed
Sep 22, 2025: Non-Final Rejection (§103)
Dec 29, 2025: Response Filed
Mar 18, 2026: Final Rejection (§103), current action

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12585981: MANAGING AN INSTALLED BASE OF ARTIFICIAL INTELLIGENCE MODULES (granted Mar 24, 2026; 2y 5m to grant)
Patent 12468941: SYSTEMS AND METHODS FOR DYNAMICS-AWARE COMPARISON OF REWARD FUNCTIONS (granted Nov 11, 2025; 2y 5m to grant)

Study what changed to get past this examiner. Based on the 2 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 62%
With Interview: 99% (+67.5% interview lift)
Median Time to Grant: 4y 3m
PTA Risk: Moderate
Based on 13 resolved cases by this examiner. Grant probability derived from career allow rate.
