Prosecution Insights
Last updated: April 19, 2026
Application No. 18/182,163

TRAINING MACHINE LEARNING MODELS USING VARYING MULTI-MODALITY TRAINING DATA

Status: Non-Final Office Action (§103)
Filed: Mar 10, 2023
Examiner: BENOURAIDA, AMINA MORENO
Art Unit: 2129
Tech Center: 2100 — Computer Architecture & Software
Assignee: The ADT Security Corporation
OA Round: 1 (Non-Final)

Grant Probability: 0% (At Risk)
Estimated OA Rounds: 1-2
Estimated Time to Grant: 3y 3m
Grant Probability With Interview: 0%

Examiner Intelligence

Career Allow Rate: 0% (0 granted / 2 resolved; -55.0% vs TC avg). This examiner has granted 0% of resolved cases.
Interview Lift: +0.0% (minimal lift, based on resolved cases with interview)
Avg Prosecution: 3y 3m (typical timeline); 16 applications currently pending
Total Applications: 18 across all art units (career history)

Statute-Specific Performance

§101: 28.1% (-11.9% vs TC avg)
§103: 51.7% (+11.7% vs TC avg)
§102: 13.5% (-26.5% vs TC avg)
§112: 6.7% (-33.3% vs TC avg)

Tech Center averages are estimates. Based on career data from 2 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Liao et al. (US11270446B2) in view of Kumar et al. (US20200242395A1), further in view of Ngiam et al., Non-Patent Literature (“Multimodal Deep Learning”).
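Before the claim-by-claim mapping, the overall method recited in claim 1 (train a first and a second ML model on multi-modality image data, test both against an accuracy threshold, and select the more accurate model) can be illustrated with a small self-contained sketch. Everything below is hypothetical: the toy data, the threshold classifier, and all names are illustrative stand-ins, not taken from the application or the cited references.

```python
# Illustrative sketch (hypothetical names and synthetic data): train two simple
# models on features drawn from two "image modalities", test both against an
# accuracy threshold, and select whichever model is more accurate.
import random

random.seed(0)

def make_samples(n, offset):
    # Toy stand-in for one image modality: a 1-D "pixel" feature plus a label.
    return [(random.gauss(offset * label, 1.0), label)
            for label in (0, 1) for _ in range(n)]

# Two modalities with different modality parameters (here, a separation offset).
modality_a = make_samples(100, offset=2.0)
modality_b = make_samples(100, offset=0.5)
train_data = modality_a + modality_b          # multi-modality training data
test_data = make_samples(50, offset=2.0)

def train_threshold_model(data):
    # "Training": place a decision threshold midway between the class means.
    mean0 = sum(x for x, y in data if y == 0) / sum(1 for _, y in data if y == 0)
    mean1 = sum(x for x, y in data if y == 1) / sum(1 for _, y in data if y == 1)
    cut = (mean0 + mean1) / 2.0
    return lambda x: int(x > cut)

def accuracy(model, data):
    return sum(model(x) == y for x, y in data) / len(data)

model_1 = train_threshold_model(train_data)   # first ML model (all modalities)
model_2 = train_threshold_model(modality_a)   # second ML model (one modality)

ACCURACY_THRESHOLD = 0.6
results = {name: accuracy(m, test_data)
           for name, m in [("model_1", model_1), ("model_2", model_2)]}
# Testing based on an accuracy threshold, then selecting the most accurate model.
candidates = {n: a for n, a in results.items() if a >= ACCURACY_THRESHOLD}
selected = max(candidates, key=candidates.get)
print(selected, results[selected])
```

The selected model would then be the one used to perform further actions (e.g., inference or retraining), per the claim language.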
Regarding claim 1: Liao teaches: training a first ML model using a plurality of image modalities as training data, the plurality of image modalities comprising a first image modality and a second image modality different from the first image modality, (Col 3, lines 54-59, “The training process may include generating the trained machine learning model by training a preliminary machine learning model [training a first ML model] using the plurality of groups of training samples [plurality of image modalities as training data]. Each group of the plurality of groups of training samples may include a sample input image and a reference image”…Col 9, line 61, “A first image of a first modality may be obtained [first image modality]”…Col 9, line 64, “The second modality [second image modality] may be different from the first modality.”) the first image modality having a first image modality parameter, the second image modality having a second image modality parameter, (Col 22, line 5, “a modality of a specific image (e.g., the first image) (i.e., wherein a first image modality parameter) of a specific subject may be defined by an imaging device acquiring the specific image, one or more scanning parameters used by the imaging device scanning the specific subject,”…Col 22, line 29, “Different images generated using a same imaging device but based on different scanning parameters may correspond to different modalities (i.e., wherein different images and modalities under the broadest reasonable interpretation (BRI) is interpreted to include the second image modality parameter)”) the first image modality corresponding to a first plurality of images, the second image modality corresponding to a second plurality of images, one or more images of the first plurality of images being different from one or more images of the second plurality of images, (Col 22, line 5, “a modality of a specific image (e.g., the first image) of a specific subject may be defined by an imaging device 
acquiring the specific image, one or more scanning parameters used by the imaging device scanning the specific subject, an image reconstruction technique for generating the specific image, or the like, or any combination thereof (i.e., wherein the first image modality is interpreted to be from images acquired by the image device) The subject may be biological or non-biological. For example, the subject may include a patient, a man-made object, or the like, as described elsewhere in the present disclosure (e.g., FIG. 1 and the descriptions thereof). Different images of a same subject acquired by different imaging devices may correspond to different modalities. For example, an MR image of a specific subject obtained by an MRI device may be considered a different modality than a PET image of the specific subject obtained by a PET device (i.e., wherein a second image modality is interpreted to be from another image device (i.e., PET device)). Different images of a same subject generated using different image reconstruction techniques based on same imaging data (e.g., projection data) may correspond to different modalities (i.e., wherein the first and second plurality of images are from different set of images)”) each one of the first and second plurality of images comprising one of: color visible light images; monochromatic visible light images; color infrared images; or monochromatic infrared images; (Col 31, line 47, “Exemplary color information may include values associated with red, green, blue (RGB), hue, saturation, value (HSV), luminance-bandwidth-chrominance (YUV), luminance [color visible light images] (i.e., wherein color visible light images under the broadest reasonable interpretation are interpreted as the images used include RGB, hue, saturation etc..)”) modifying one or both of the first image modality parameter and the second image modality parameter, the modification of one or both of the first image modality parameter and the second image modality 
parameter being based on a degree of similarity between a first image of the first plurality of images and a second image of the second plurality of images, (Col 31, line 11, “The trained discriminative model may generate the value of the cost function based on a comparison of the sample intermediate image [first image modality] with the reference image [second image modality] and/or a comparison of the sample intermediate image with the sample input image. The value of the cost function may indicate a degree of similarity [degree of similarity] or difference between the sample intermediate image of the trained generative model and the reference image inputted into the trained discriminative model, and/or a degree of similarity or difference between the sample intermediate image of the trained generative model and the sample input image inputted into the trained discriminative model.”…Col 33, line 39, “In 1050, the processing device 120 (e.g., the updating unit 540) may update the preliminary generative model or the intermediate generative model generated in the prior iteration by updating [modifying] at least some of the parameter values of the preliminary generative model or the intermediate generative model”) the second image being derived from the first image; (Col 9, line 62, “A second image of a second modality may be generated by processing, based on a trained machine learning model, the first image.”) the modified one or both of the first image modality parameter and the second image modality parameter; (Col 33, line 39, “In 1050, the processing device 120 (e.g., the updating unit 540) may update the preliminary generative model or the intermediate generative model generated in the prior iteration by updating at least some of the parameter values of the preliminary generative model or the intermediate generative model (i.e., wherein the modified modality parameter)”)

Liao does not explicitly teach: A method implemented in a management node configured for
training and testing one or more machine learning (ML) models, the method comprising: testing the first ML model and the second ML model based on an accuracy threshold, the testing of the first ML model and the second ML model comprising comparing the first and second ML models to identify which model has the greatest accuracy; and selecting one of the first and second ML models to perform one or more actions, the selection being based on a result of testing the first ML model and the second ML model.

Ngiam teaches: A method implemented in a management node configured for training and testing one or more machine learning (ML) models, the method comprising: (Introduction, paragraph 5, “In cross modality learning, data from multiple modalities is available only during feature learning; during the supervised training and testing phase, only data from a single modality is provided. For this setting, the aim is to learn better single modality representations given unlabeled data from multiple modalities. Last, we consider a shared representation learning setting, which is unique in that different modalities are presented for supervised training and testing [training and testing one or more machine learning models] (i.e., wherein ‘management node’ under the broadest reasonable interpretation (BRI) is interpreted as a system that performs the training and testing). This setting allows us to evaluate if the feature representations can capture correlations across different modalities. Specifically, studying this setting allows us to assess whether the learned representations are modality-invariant”)

Ngiam and Liao are both related to the same field of endeavor (i.e., multimodality machine learning).
In view of the teachings of Ngiam, it would have been obvious to a person of ordinary skill in the art to apply the teachings of Ngiam to Liao before the effective filing date of the claimed invention in order to improve the efficiency of training machine learning models using multimodal data (Ngiam, Abstract, “we propose a novel application of deep networks to learn features over multiple modalities. We present a series of tasks for multimodal learning and show how to train deep networks that learn features to address these tasks. In particular, we demonstrate cross modality feature learning, where better features for one modality (e.g., video) can be learned if multiple modalities (e.g., audio and video) are present at feature learning time.”)

Kumar teaches: training a second ML model using the plurality of image modalities and ([0034], “a second trained machine learning model may have been trained using a first subset of the features (i.e., wherein training a second model)”…[0037], “The CNN may be able to receive images as training, validation, or testing input. The trained CNN may be able to receive images as input”…[0020], “applicable to multiple image modalities (i.e., wherein using the plurality of image modalities for training)(e.g., scanning electron microscope (SEM), cross-sectional SEM (XSEM), transmission electron microscope (TEM), top-down imaging, cross-section imaging, etc.)”) testing the first ML model and the second ML model based on an accuracy threshold, the testing of the first ML model and the second ML model comprising comparing the first and second ML models to identify which model has the greatest accuracy; and ([0036], “The testing engine 186 may be capable of testing a trained machine learning model 190 using a corresponding set of features of a testing set from data set generator 172.
For example, a first trained machine learning model 190 [first ML model] that was trained using a first set of features of the training set may be tested using the first set of features of the testing set. The testing engine 186 may determine a trained machine learning model 190 that has the highest accuracy [comparing] of all of the trained machine learning models based on the testing sets (i.e., wherein the determining which model has the highest accuracy under the broadest reasonable interpretation (BRI) is interpreted as comparing first and second ml model to identify which has the highest accuracy)”…[0035], “The validation engine 184 may determine an accuracy of each of the trained machine learning models 190 based on the corresponding sets of features of the validation set. The validation engine 184 may discard trained machine learning models 190 that have an accuracy that does not meet a threshold accuracy [an accuracy threshold] (i.e., wherein the accuracy threshold of the first and second ml model)”… [0034], “The training engine 182 may generate multiple trained machine learning models 190, where each trained machine learning model 190 corresponds to a distinct set of features (i.e., wherein the second model under the broadest reasonable interpretation (BRI) is interpreted to be included in the multiple machine learning models)”)

selecting one of the first and second ML models to perform one or more actions, the selection being based on a result of testing the first ML model and the second ML model ([0035], “the selection engine 185 may be capable of selecting [selecting] one or more trained machine learning models 190 that have an accuracy that meets a threshold accuracy.
In some embodiments, the selection engine 185 may be capable of selecting the trained machine learning model 190 that has the highest accuracy of the trained machine learning models 190”…[0060], “system 300 uses the trained model (e.g., selected model 308) to receive images 348 [to perform one or more actions]”…[0062], “responsive to receiving additional data (e.g., additional historical images, corresponding manufacturing process attributes, and corresponding image classifications), flow may continue to block 310 to re-train the trained machine learning model based on the additional data and the original data (i.e., wherein the model with the highest accuracy is selected to further perform an action ‘retrain’ the model)”)

Kumar and Liao are both related to the same field of endeavor (i.e., multimodality machine learning). In view of the teachings of Kumar, it would have been obvious to a person of ordinary skill in the art to apply the teachings of Kumar to Liao before the effective filing date of the claimed invention in order to improve the efficiency of training machine learning models using multimodal data (Kumar, [0002], “thousands of images may be generated every month in a semiconductor laboratory during process development. To perform image measurements, a setup (e.g., recipe setup) may be created for measuring attributes (e.g., product width, height, etc.) of a first image. The setup may be run on the remaining images to measure attributes of the remaining images. If the remaining images have variations (e.g., variation in structure of the product, variations due to change in process and imaging conditions, etc.), the setup created based on the first image may not apply and a system using the setup may fail to measure the attributes of the remaining images.
Due to this failure, the remaining images may be manually measured by process engineers.”)

Regarding claim 2 and analogous claim 12: Liao teaches: training a first ML model using a plurality of image modalities as training data, the plurality of image modalities comprising a first image modality and a second image modality different from the first image modality, (Col 3, lines 54-59, “The training process may include generating the trained machine learning model by training a preliminary machine learning model [training a first ML model] using the plurality of groups of training samples [plurality of image modalities as training data]. Each group of the plurality of groups of training samples may include a sample input image and a reference image”…Col 9, line 61, “A first image of a first modality may be obtained [first image modality]”…Col 9, line 64, “The second modality [second image modality] may be different from the first modality.”) the first image modality having a first image modality parameter and the second image modality having a second image modality parameter; (Col 22, line 5, “a modality of a specific image (e.g., the first image) (i.e., wherein a first image modality parameter) of a specific subject may be defined by an imaging device acquiring the specific image, one or more scanning parameters used by the imaging device scanning the specific subject,”…Col 22, line 29, “Different images generated using a same imaging device but based on different scanning parameters may correspond to different modalities (i.e., wherein different images and modalities under the broadest reasonable interpretation (BRI) is interpreted to include the second image modality parameter)”) modifying one or both of the first image modality parameter and the second image modality parameter; (Col 33, line 39, “In 1050, the processing device 120 (e.g., the updating unit 540) may update the preliminary generative model or the intermediate generative model generated in the prior
iteration by updating [modifying] at least some of the parameter values of the preliminary generative model or the intermediate generative model”… Col 31, line 11, “The trained discriminative model may generate the value of the cost function based on a comparison of the sample intermediate image [first image modality] with the reference image [second image modality] and/or a comparison of the sample intermediate image with the sample input image.”) the modified one or both of the first image modality parameter and the second image modality parameter; and (Col 33, line 39, “In 1050, the processing device 120 (e.g., the updating unit 540) may update the preliminary generative model or the intermediate generative model generated in the prior iteration by updating at least some of the parameter values of the preliminary generative model or the intermediate generative model (i.e., wherein the modified modality parameter)”)

Liao does not explicitly teach: A method implemented in a management node configured for training and testing one or more machine learning (ML) models, the method comprising: training a second ML model using the plurality of image modalities and testing the first ML model and the second ML model based on an accuracy threshold.

Ngiam teaches: A method implemented in a management node configured for training and testing one or more machine learning (ML) models, the method comprising: (Introduction, paragraph 5, “In cross modality learning, data from multiple modalities is available only during feature learning; during the supervised training and testing phase, only data from a single modality is provided. For this setting, the aim is to learn better single modality representations given unlabeled data from multiple modalities.
Last, we consider a shared representation learning setting, which is unique in that different modalities are presented for supervised training and testing [training and testing one or more machine learning models] (i.e., wherein ‘management node’ under the broadest reasonable interpretation (BRI) is interpreted as a system that performs the training and testing). This setting allows us to evaluate if the feature representations can capture correlations across different modalities. Specifically, studying this setting allows us to assess whether the learned representations are modality-invariant”)

Kumar teaches: training a second ML model using the plurality of image modalities and ([0034], “a second trained machine learning model may have been trained using a first subset of the features (i.e., wherein training a second model)”…[0037], “The CNN may be able to receive images as training, validation, or testing input. The trained CNN may be able to receive images as input”…[0020], “applicable to multiple image modalities (i.e., wherein using the plurality of image modalities for training)(e.g., scanning electron microscope (SEM), cross-sectional SEM (XSEM), transmission electron microscope (TEM), top-down imaging, cross-section imaging, etc.)”) testing the first ML model and the second ML model based on an accuracy threshold ([0036], “The testing engine 186 may be capable of testing a trained machine learning model 190 using a corresponding set of features of a testing set from data set generator 172. For example, a first trained machine learning model 190 [first ML model] that was trained using a first set of features of the training set may be tested using the first set of features of the testing set”…[0035], “The validation engine 184 may determine an accuracy of each of the trained machine learning models 190 based on the corresponding sets of features of the validation set.
The validation engine 184 may discard trained machine learning models 190 that have an accuracy that does not meet a threshold accuracy [an accuracy threshold] (i.e., wherein the accuracy threshold of the first and second ml model)”… [0034], “The training engine 182 may generate multiple trained machine learning models 190, where each trained machine learning model 190 corresponds to a distinct set of features (i.e., wherein the second model under the broadest reasonable interpretation (BRI) is interpreted to be included in the multiple machine learning models)”)

The motivation for claim 2 is the same motivation for claim 1.

Regarding claim 3 and analogous claim 13: Liao, as modified by Kumar and Ngiam, teaches the method of claim 2. Liao further teaches: wherein the first image modality corresponds to a first plurality of images, and the second image modality corresponds to a second plurality of images (Col 3, lines 54-59, “The training process may include generating the trained machine learning model by training a preliminary machine learning model using the plurality of groups of training samples [plurality of images]. Each group of the plurality of groups of training samples may include a sample input image and a reference image”…Col 9, line 61, “A first image of a first modality may be obtained [first image modality] (i.e., wherein the first image modality is using plurality of training samples ‘plurality of images’)”…Col 9, line 64, “The second modality [second image modality] (i.e., wherein the second image modality is using plurality of training samples ‘plurality of images’) may be different from the first modality.”)

The motivation for claim 3 is the same motivation for claim 1.

Regarding claim 4 and analogous claim 14: Liao, as modified by Kumar and Ngiam, teaches the method of claim 3.
Liao further teaches: wherein one or more images of the first plurality of images are different from one or more images of the second plurality of images, and (Col 22, line 5, “a modality of a specific image (e.g., the first image) of a specific subject may be defined by an imaging device acquiring the specific image, one or more scanning parameters used by the imaging device scanning the specific subject, an image reconstruction technique for generating the specific image, or the like, or any combination thereof (i.e., wherein the first image modality is interpreted to be from images acquired by the image device) The subject may be biological or non-biological. For example, the subject may include a patient, a man-made object, or the like, as described elsewhere in the present disclosure (e.g., FIG. 1 and the descriptions thereof). Different images of a same subject acquired by different imaging devices may correspond to different modalities. For example, an MR image of a specific subject obtained by an MRI device may be considered a different modality than a PET image of the specific subject obtained by a PET device (i.e., wherein a second image modality is interpreted to be from another image device (i.e., PET device)). 
Different images of a same subject generated using different image reconstruction techniques based on same imaging data (e.g., projection data) may correspond to different modalities (i.e., wherein the first and second plurality of images are from different set of images)”) each one of the first and second plurality of images comprise one of: color visible light images; monochromatic visible light images; color infrared images; or monochromatic infrared images (Col 31, line 47, “Exemplary color information may include values associated with red, green, blue (RGB), hue, saturation, value (HSV), luminance-bandwidth-chrominance (YUV), luminance [color visible light images] (i.e., wherein color visible light images under the broadest reasonable interpretation are interpreted as the images used include RGB, hue, saturation etc.)”)

The motivation for claim 4 is the same motivation for claim 1.

Regarding claim 5 and analogous claim 15: Liao, as modified by Kumar and Ngiam, teaches the method of claim 3. Liao further teaches: wherein the modification of one or both of the first image modality parameter and the second image modality parameter is based on a degree of similarity between a first image of the first plurality of images and second image of the second plurality of images, (Col 31, line 11, “The trained discriminative model may generate the value of the cost function based on a comparison of the sample intermediate image [first image modality] with the reference image [second image modality] and/or a comparison of the sample intermediate image with the sample input image.
The value of the cost function may indicate a degree of similarity [degree of similarity] or difference between the sample intermediate image of the trained generative model and the reference image inputted into the trained discriminative model, and/or a degree of similarity or difference between the sample intermediate image of the trained generative model and the sample input image inputted into the trained discriminative model.”…Col 33, line 39, “In 1050, the processing device 120 (e.g., the updating unit 540) may update the preliminary generative model or the intermediate generative model generated in the prior iteration by updating [modifying] at least some of the parameter values of the preliminary generative model or the intermediate generative model”) the second image being derived from the first image (Col 9, line 62, “A second image of a second modality may be generated by processing, based on a trained machine learning model, the first image.”)

The motivation for claim 5 is the same motivation for claim 1.

Regarding claim 6 and analogous claim 16: Liao, as modified by Kumar and Ngiam, teaches the method of claim 3.
Liao further teaches: wherein the first image modality parameter is a first image parameter of one or more images of the first plurality of images, and the second image modality parameter is a second image parameter of one or more images of the second plurality of images (Col 22, line 5, “a modality of a specific image (e.g., the first image) (i.e., wherein a first image modality parameter) of a specific subject may be defined by an imaging device acquiring the specific image, one or more scanning parameters used by the imaging device scanning the specific subject,”…Col 22, line 29, “Different images generated using a same imaging device but based on different scanning parameters may correspond to different modalities (i.e., wherein different images and modalities under the broadest reasonable interpretation (BRI) is interpreted to include the second image modality parameter)”)

The motivation for claim 6 is the same motivation for claim 1.

Regarding claim 7 and analogous claim 17: Liao, as modified by Kumar and Ngiam, teaches the method of claim 6.
Liao further teaches: wherein each one of the first image parameter and the second image parameter is a weight factor assigned to the corresponding one or more images (Col 22, line 5, “a modality of a specific image (e.g., the first image) (i.e., wherein a first image modality parameter) of a specific subject may be defined by an imaging device acquiring the specific image, one or more scanning parameters used by the imaging device scanning the specific subject,”…Col 22, line 29, “Different images generated using a same imaging device but based on different scanning parameters may correspond to different modalities (i.e., wherein different images and modalities under the broadest reasonable interpretation (BRI) is interpreted to include the second image modality parameter)”…Col 23, line 18, “Exemplary parameters of the preliminary machine learning model may include a size of a kernel of a layer, a total count (or number) of layers, a count (or number) of nodes in each layer, a learning rate, a batch size, an epoch, a connected weight [parameter is a weight factor] between two connected nodes, a bias vector relating to a node, or the like. One or more parameter values of the plurality of parameters may be altered during the training of the preliminary machine learning model using a plurality of groups of training samples.”)

The motivation for claim 7 is the same motivation for claim 1.

Regarding claim 8 and analogous claim 18: Liao, as modified by Kumar and Ngiam, teaches the method of claim 7.
Liao further teaches: further comprising determining the weight factor based on a lack of available images comprised in one or both of the first and second plurality of images that can be used to train one or both of the first ML model and the second ML model (Col, line 37, “the trained machine learning model may be updated from time to time, e.g., periodically or not, based on a sample set that is at least partially different [lack of available images] (i.e., wherein the lack of images under the broadest reasonable interpretation (BRI) is interpreted as the partially different samples) from an original sample set from which an original trained machine learning model is determined. For instance, the trained machine learning model may be updated based on a sample set including new samples that are not in the original sample set”…Col 22, line 5, “a modality of a specific image (e.g., the first image) of a specific subject may be defined by an imaging device acquiring the specific image, one or more scanning parameters used by the imaging device scanning the specific subject,”…Col 22, line 29, “Different images generated using a same imaging device but based on different scanning parameters may correspond to different modalities”…Col 23, line 18, “Exemplary parameters of the preliminary machine learning model may include a size of a kernel of a layer, a total count (or number) of layers, a count (or number) of nodes in each layer, a learning rate, a batch size, an epoch, a connected weight [weight factor] between two connected nodes, a bias vector relating to a node, or the like. One or more parameter values of the plurality of parameters may be altered during the training of the preliminary machine learning model using a plurality of groups of training samples.”)

The motivation for claim 8 is the same motivation for claim 1.

Regarding claim 9 and analogous claim 19: Liao, as modified by Kumar and Ngiam, teaches the method of claim 3.
Liao further teaches: wherein the first image modality parameter is a first quantity of images comprised in the first plurality of images, and the second image modality parameter is a second quantity of images comprised in the second plurality of images (Col 3, lines 54-59, “The training process may include generating the trained machine learning model by training a preliminary machine learning model using the plurality of groups of training samples [plurality of images]. Each group [quantity] of the plurality of groups of training samples (i.e., wherein the quantity of images under the broadest reasonable interpretation (BRI) is interpreted as a group of images from the plurality of images) may include a sample input image and a reference image”…Col 9, line 61, “A first image of a first modality may be obtained [first image modality] (i.e., wherein the first image modality is using plurality of training samples ‘plurality of images’)”…Col 9, line 64, “The second modality [second image modality] (i.e., wherein the second image modality is using plurality of training samples ‘plurality of images’) may be different from the first modality”…Col 22, line 5, “a modality of a specific image (e.g., the first image) (i.e., wherein a first image modality parameter) of a specific subject may be defined by an imaging device acquiring the specific image, one or more scanning parameters used by the imaging device scanning the specific subject,”…Col 22, line 29, “Different images generated using a same imaging device but based on different scanning parameters may correspond to different modalities (i.e., wherein different images and modalities under the broadest reasonable interpretation (BRI) is interpreted to include the second image modality parameter)”)

The motivation for claim 9 is the same motivation for claim 1.

Regarding claim 10: Liao, as modified by Kumar and Ngiam, teaches the method of claim 2.
Liao and Ngiam do not explicitly teach: wherein the testing of the first ML model and the second ML model comprises comparing the first and second ML models to identify which model has the greatest accuracy.

Kumar further teaches: wherein the testing of the first ML model and the second ML model comprises comparing the first and second ML models to identify which model has the greatest accuracy ([0036], “The testing engine 186 may be capable of testing a trained machine learning model 190 using a corresponding set of features of a testing set from data set generator 172. For example, a first trained machine learning model 190 [first ML model] that was trained using a first set of features of the training set may be tested using the first set of features of the testing set. The testing engine 186 may determine a trained machine learning model 190 that has the highest accuracy [comparing] of all of the trained machine learning models based on the testing sets (i.e., wherein determining which model has the highest accuracy under the broadest reasonable interpretation (BRI) is interpreted as comparing the first and second ML models to identify which has the highest accuracy; the second ML model is interpreted to be one of the trained ML models)”)

The motivation for claim 10 is the same motivation for claim 1.

Regarding claim 11: Liao, as modified by Kumar and Ngiam, teaches the method of claim 2.

Liao and Ngiam do not explicitly teach: further comprising selecting one of the first and second ML models to perform one or more actions, the selection being based on a result of testing the first ML model and the second ML model.
Kumar further teaches: further comprising selecting one of the first and second ML models to perform one or more actions, the selection being based on a result of testing the first ML model and the second ML model ([0035], “the selection engine 185 may be capable of selecting [selecting] one or more trained machine learning models 190 that have an accuracy that meets a threshold accuracy. In some embodiments, the selection engine 185 may be capable of selecting the trained machine learning model 190 that has the highest accuracy of the trained machine learning models 190”…[0060], “system 300 uses the trained model (e.g., selected model 308) to receive images 348 [to perform one or more actions]”…[0062], “responsive to receiving additional data (e.g., additional historical images, corresponding manufacturing process attributes, and corresponding image classifications), flow may continue to block 310 to re-train the trained machine learning model based on the additional data and the original data (i.e., wherein the model with the highest accuracy is selected to further perform an action, ‘retrain’ the model)”)

The motivation for claim 11 is the same motivation for claim 1.

Regarding claim 20: Liao, as modified by Kumar and Ngiam, teaches the method of claim 12.

Liao and Ngiam do not explicitly teach: wherein: the testing of the first ML model and the second ML model comprises comparing the first and second ML models to identify which model has the greatest accuracy; or the at least one memory stores additional computer instructions that, when executed by the at least one processor, further cause the at least one processor to select one of the first and second ML models to perform one or more actions, the selection being based on a result of testing the first ML model and the second ML model.
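The test-then-select pattern Kumar's paragraphs [0035]-[0036] describe (score every trained model on a common testing set, then pick the one with the highest accuracy) can be sketched minimally as follows; the two threshold classifiers and the testing set are invented stand-ins for the first and second ML models, not anything from the references:

```python
from typing import Callable, Dict, Sequence

def accuracy(model: Callable[[float], int],
             inputs: Sequence[float], labels: Sequence[int]) -> float:
    """Fraction of the testing set the model classifies correctly."""
    hits = sum(1 for x, y in zip(inputs, labels) if model(x) == y)
    return hits / len(labels)

def select_best(models: Dict[str, Callable[[float], int]],
                inputs: Sequence[float], labels: Sequence[int]) -> str:
    """Test every candidate on the same testing set and return the name
    of the model with the highest accuracy (compare, then select)."""
    return max(models, key=lambda name: accuracy(models[name], inputs, labels))

def first_model(x: float) -> int:   # stand-in for the "first ML model"
    return int(x > 0.5)

def second_model(x: float) -> int:  # stand-in for the "second ML model"
    return int(x > 0.8)

# Invented testing set: inputs paired with ground-truth labels.
xs = [0.1, 0.4, 0.6, 0.9]
ys = [0, 0, 1, 1]
best = select_best({"first": first_model, "second": second_model}, xs, ys)
```

The selected model would then be the one used to "perform one or more actions" (e.g., classify new images or be retrained on additional data), which is the selection step the claims recite.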
Kumar further teaches: wherein: the testing of the first ML model and the second ML model comprises comparing the first and second ML models to identify which model has the greatest accuracy; or ([0036], “The testing engine 186 may be capable of testing a trained machine learning model 190 using a corresponding set of features of a testing set from data set generator 172. For example, a first trained machine learning model 190 [first ML model] that was trained using a first set of features of the training set may be tested using the first set of features of the testing set. The testing engine 186 may determine a trained machine learning model 190 that has the highest accuracy [comparing] of all of the trained machine learning models based on the testing sets (i.e., wherein determining which model has the highest accuracy under the broadest reasonable interpretation (BRI) is interpreted as comparing the first and second ML models to identify which has the highest accuracy; the second ML model is interpreted to be one of the trained ML models)”)

the at least one memory stores additional computer instructions that, when executed by the at least one processor, further cause the at least one processor ([0029], “Data store 140 may be a memory (e.g., random access memory), a drive (e.g., a hard drive, a flash drive), a database system, or another type of component or device capable of storing data. Data store 140 may include multiple storage components (e.g., multiple drives or multiple databases) that may span multiple computing devices (e.g., multiple server computers)”…[0066], “Methods 500-600 may be performed by processing logic that may include hardware (e.g., circuitry, dedicated logic, programmable logic, microcode, processing device, etc.), software (such as instructions run on a processing device, a general purpose computer system, or a dedicated machine)”)

to select one of the first and second ML models to perform one or more actions, the selection being based on a result of testing the first ML model and the second ML model ([0035], “the selection engine 185 may be capable of selecting [selecting] one or more trained machine learning models 190 that have an accuracy that meets a threshold accuracy. In some embodiments, the selection engine 185 may be capable of selecting the trained machine learning model 190 that has the highest accuracy of the trained machine learning models 190”…[0060], “system 300 uses the trained model (e.g., selected model 308) to receive images 348 [to perform one or more actions]”…[0062], “responsive to receiving additional data (e.g., additional historical images, corresponding manufacturing process attributes, and corresponding image classifications), flow may continue to block 310 to re-train the trained machine learning model based on the additional data and the original data (i.e., wherein the model with the highest accuracy is selected to further perform an action, ‘retrain’ the model)”)

The motivation for claim 20 is the same motivation for claim 1.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to AMINA BENOURAIDA, whose telephone number is (571) 272-4340. The examiner can normally be reached Monday-Friday, 8:30am-5pm ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Michael J. Huntley, can be reached at (303) 297-4307. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/AMINA MORENO BENOURAIDA/
Examiner, Art Unit 2129

/MICHAEL J HUNTLEY/
Supervisory Patent Examiner, Art Unit 2129

Prosecution Timeline

Mar 10, 2023
Application Filed
Jan 07, 2026
Non-Final Rejection — §103 (current)

Prosecution Projections

1-2
Expected OA Rounds
0%
Grant Probability
0%
With Interview (+0.0%)
3y 3m
Median Time to Grant
Low
PTA Risk
Based on 2 resolved cases by this examiner. Grant probability derived from career allow rate.
