Prosecution Insights
Last updated: April 19, 2026
Application No. 18/161,464

ARTIFICIAL INTELLIGENCE ASSISTED CONFLICT SCENARIO DETECTION WITH ADDITION OF CLASSES

Non-Final OA: §101, §103, §112
Filed: Jan 30, 2023
Examiner: HASSAN, YASIN ABDULLAH
Art Unit: 2127
Tech Center: 2100 — Computer Architecture & Software
Assignee: Volvo Car Corporation
OA Round: 1 (Non-Final)
Grant Probability: Favorable
OA Rounds: 1-2
Time to Grant: 3y 3m
Examiner Intelligence

Career Allow Rate: 0% (grants only 0% of cases; 0 granted / 0 resolved; -55.0% vs TC avg)
Interview Lift: +0.0% (minimal +0% lift, with vs. without interview, among resolved cases with interview)
Avg Prosecution: 3y 3m (typical timeline)
Total Applications: 1 across all art units (career history; 1 currently pending)

Statute-Specific Performance

§101: 33.3% (-6.7% vs TC avg)
§103: 33.3% (-6.7% vs TC avg)
§112: 33.3% (-6.7% vs TC avg)

Black line = Tech Center average estimate • Based on career data from 0 resolved cases

Office Action

§101 §103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim 16 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. Claim 16 recites a “second defined threshold”; however, there is no recitation of any previous threshold, rendering the limitation unclear and indefinite. For purposes of examination, the examiner will interpret the “second defined threshold” as a defined threshold. Clarification is required.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1: Under the first part of the analysis, claims 1-9 are system claims, claims 10-17 are method claims, and claims 18-20 are program product claims.
Therefore, claims 1-20 fall within one of the four statutory categories (a process, machine, manufacture, or composition of matter).

Regarding Claim 1:

2A Prong 1: processes signal data generated by a vehicle to detect at least one or more non-collision scenarios (This step of processing signal data to detect non-collision scenarios is practically implementable in the human mind and is understood to be a recitation of a mental process of evaluation/judgement; a human can perform data analysis mentally to make a judgement/evaluation, based on received data, as to whether the received data is a non-collision scenario.); processes the signal data to detect at least one or more near-collision scenarios such that data from the at least one or more near-collision scenarios is used to train a second AI model to define one or more rules that enable the second AI model to detect one or more new near-collision scenarios with a level of accuracy above that of the first AI model (This step of processing signal data to detect near-collision scenarios is practically implementable in the human mind and is understood to be a recitation of a mental process of evaluation/judgement; a human can perform data analysis mentally to make a judgement/evaluation, based on received data, as to whether the received data is a near-collision scenario.);

2A Prong 2: This judicial exception is not integrated into a practical application.
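The claim 1 steps mapped above describe a two-stage flow: a first AI model screens vehicle signal data for near-collision scenarios, and the flagged data becomes training data for a second model. A minimal sketch of that flow follows; every name, type, and threshold here is hypothetical and invented for illustration, not taken from the application:

```python
from dataclasses import dataclass
from typing import Callable, List

# Illustrative sketch only: hypothetical stand-ins for the claimed components.

@dataclass
class SignalFrame:
    """One sample of vehicle signal data (e.g. speed and gap to the lead vehicle)."""
    speed_mps: float
    gap_m: float

def detect_near_collision(frame: SignalFrame, gap_threshold_m: float = 5.0) -> bool:
    """Stand-in for the first AI model: flag a frame as a near-collision scenario."""
    return frame.gap_m < gap_threshold_m and frame.speed_mps > 0.0

def collect_training_data(frames: List[SignalFrame],
                          detector: Callable[[SignalFrame], bool]) -> List[SignalFrame]:
    """Near-collision frames flagged by the first model become training data
    for a second model, which per the claims learns rules intended to detect
    new near-collision scenarios more accurately than the first model."""
    return [f for f in frames if detector(f)]

frames = [SignalFrame(20.0, 50.0),   # non-collision: large gap
          SignalFrame(15.0, 3.0),    # near-collision: small gap at speed
          SignalFrame(0.0, 1.0)]     # stationary: not flagged here
training_set = collect_training_data(frames, detect_near_collision)
```

Only the middle frame is flagged, so the second model's training set holds one sample.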
Additional elements: A computer implemented system comprising (This step is mere instructions to apply the exception using a generic computer component as a tool to perform the abstract idea, see MPEP 2106.05(f).); a memory that stores computer executable components (This step is mere instructions to apply the exception using a generic computer component as a tool to perform the abstract idea, see MPEP 2106.05(f).); a processor that executes the computer executable components stored in the memory, wherein the computer executable components comprise (This step is mere instructions to apply the exception using a generic computer component as a tool to perform the abstract idea, see MPEP 2106.05(f).); a detection component (This step is mere instructions to apply the exception using a generic computer component as a tool to perform the abstract idea, see MPEP 2106.05(f).); a first artificial intelligence (AI) model (This step is mere instructions to apply the exception using a generic learning model as a tool to perform the abstract idea, see MPEP 2106.05(f).); The additional elements as disclosed above, in combination with the abstract idea, do not integrate the judicial exception into a practical application, as they are mere instructions to apply the exception using the generic computer component as a tool to perform the disclosed abstract idea above.

2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.

Additional Elements: A computer implemented system comprising (This step is mere instructions to apply the exception using a generic computer component as a tool to perform the abstract idea, see MPEP 2106.05(f).); a memory that stores computer executable components (This step is mere instructions to apply the exception using a generic computer component as a tool to perform the abstract idea, see MPEP 2106.05(f).); a processor that executes the computer executable components stored in the memory (This step is mere instructions to apply the exception using a generic computer component as a tool to perform the abstract idea, see MPEP 2106.05(f).); a detection component (This step is mere instructions to apply the exception using a generic computer component as a tool to perform the abstract idea, see MPEP 2106.05(f).); a first artificial intelligence (AI) model (This step is mere instructions to apply the exception using a generic learning model as a tool to perform the abstract idea, see MPEP 2106.05(f).); The additional elements as disclosed above, in combination with the abstract idea, do not add anything significantly more, as they are mere instructions to apply the exception using the generic computer component as a tool to perform the disclosed abstract idea above.

Regarding Claim 2:

2A Prong 1: There are no elements.

2A Prong 2: This judicial exception is not integrated into a practical application.
Additional elements: a data collection component (This step is mere instructions to apply the exception using a generic computer component as a tool to perform the abstract idea, see MPEP 2106.05(f).); that collects the signal data from one or more sensors of the vehicle (This step of collecting signal data is understood to be an insignificant extra-solution activity of data gathering, see MPEP 2106.05(g).); The additional elements as disclosed above, in combination with the abstract idea, do not integrate the judicial exception into a practical application, as they are mere instructions to apply the exception using the generic computer component as a tool, and insignificant extra-solution activities, to perform the disclosed abstract idea above.

2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Additional elements: a data collection component (This step is mere instructions to apply the exception using a generic computer component as a tool to perform the abstract idea, see MPEP 2106.05(f).); that collects the signal data from one or more sensors of the vehicle (This step of collecting data is sending and transmitting data and is therefore well-understood, routine, conventional, see MPEP 2106.05(d).); The additional elements as disclosed above, in combination with the abstract idea, do not add anything significantly more, as they are mere instructions to apply the exception using the generic computer component as a tool, and well-understood, routine, conventional activity, to perform the disclosed abstract idea above.

Regarding Claim 3:

2A Prong 1: There are no elements.

2A Prong 2: This judicial exception is not integrated into a practical application.
Additional elements: wherein the signal data comprising the at least one or more near-collision scenarios forms inlier data required for training the second AI model (This step is for near-collision scenario data forming inlier data to train the 2nd AI model, which is understood to be descriptive information in the field of use limiting the abstract idea, see MPEP 2106.05(h).); The additional elements as disclosed above, in combination with the abstract idea, do not integrate the judicial exception into a practical application, as they are a field of use merely limiting the abstract idea to perform the disclosed abstract idea above.

2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Additional elements: wherein the signal data comprising the at least one or more near-collision scenarios forms inlier data required for training the second AI model (This step is for near-collision scenario data forming inlier data to train the 2nd AI model, which is understood to be descriptive information in the field of use for limiting the abstract idea towards collision avoidance of autonomous vehicles, see MPEP 2106.05(h).); The additional elements as disclosed above, in combination with the abstract idea, do not add anything significantly more, as they are a field of use merely limiting the abstract idea to perform the disclosed abstract idea above.

Regarding Claim 4:

2A Prong 1: identify from the inlier data, one or more classes of data required for the training of the second AI model, wherein the one or more classes of data are annotated to generate annotated data for the training of the second AI model (This step of analyzing inlier data to identify classes is practically implementable in the human mind and is understood to be a recitation of a mental process of evaluation/judgement; a human can analyze data and annotate it into the classes required for training.);

2A Prong 2: This judicial exception is not integrated into a practical application.
Additional elements: a sample balancer that analyzes the inlier data (This step is mere instructions to apply the exception using a generic computer component as a tool to perform the abstract idea, see MPEP 2106.05(f).); The additional elements as disclosed above, in combination with the abstract idea, do not integrate the judicial exception into a practical application, as they are mere instructions to apply the exception using the generic computer component as a tool to perform the disclosed abstract idea above.

2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Additional elements: a sample balancer that analyzes the inlier data (This step is mere instructions to apply the exception using a generic computer component as a tool to perform the abstract idea, see MPEP 2106.05(f).); The additional elements as disclosed above, in combination with the abstract idea, do not add anything significantly more, as they are mere instructions to apply the exception using the generic computer component as a tool to perform the disclosed abstract idea above.

Regarding Claim 5:

2A Prong 1: wherein the annotated data is used as an existing class of data to generate training data, validation data, and test data for the training, validation and testing of the second AI model (This step of generating training, testing, and validation data is practically implementable in the human mind and is understood to be a recitation of a mental process of evaluation/judgement; a human can use annotated data to generate training, testing, and validation data for another AI model.);

2A Prong 2: This judicial exception is not integrated into a practical application. There are no additional elements.

2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Additional elements: There are no additional elements.
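Claim 5's limitation amounts to partitioning annotated class data into training, validation, and test sets for the second AI model. A toy illustration of such a split; the function name, fractions, and sample labels are hypothetical, not from the application:

```python
import random

# Illustrative sketch: splitting annotated samples into train/validation/test.
def split_annotated_data(samples, train_frac=0.7, val_frac=0.15, seed=0):
    """Shuffle annotated samples deterministically, then slice into three sets."""
    rng = random.Random(seed)
    shuffled = samples[:]
    rng.shuffle(shuffled)
    n_train = int(len(shuffled) * train_frac)
    n_val = int(len(shuffled) * val_frac)
    train = shuffled[:n_train]
    val = shuffled[n_train:n_train + n_val]
    test = shuffled[n_train + n_val:]
    return train, val, test

# Hypothetical annotated data: 20 samples all labeled with an existing class.
annotated = [("scenario_%02d" % i, "near_collision") for i in range(20)]
train, val, test = split_annotated_data(annotated)
```

With 20 samples and the default fractions, the split is 14/3/3, and every sample lands in exactly one set.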
Regarding Claim 6:

2A Prong 1: wherein the signal data comprising the at least one or more near-collision scenarios forms outlier data previously unseen by the first AI model, wherein the outlier data is annotated to generate annotated data for training of the second AI model (This step of annotating outlier data is practically implementable in the human mind and is understood to be a recitation of a mental process of evaluation/judgment; a human can identify unfamiliar data.);

2A Prong 2: This judicial exception is not integrated into a practical application. There are no additional elements.

2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Additional elements: There are no additional elements.

Regarding Claim 7:

2A Prong 1: wherein the annotated data is used as a new class of data to generate training data, validation data, and test data for the training, validation and testing of the second AI model (This step of generating training, testing, and validation data is practically implementable in the human mind and is understood to be a recitation of a mental process of evaluation/judgement; a human can use annotated data to generate training, testing, and validation data for another AI model.);

2A Prong 2: This judicial exception is not integrated into a practical application. There are no additional elements.

2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Additional elements: There are no additional elements.

Regarding Claim 8:

2A Prong 1: There are no elements.

2A Prong 2: This judicial exception is not integrated into a practical application. Additional elements: wherein the annotated data is stockpiled until a quantity of the annotated data exceeds a defined threshold for the annotated data to be used as the new class of data (This step of stockpiling data is an insignificant extra-solution activity, see MPEP 2106.05(g).); The additional elements as disclosed above, in combination with the abstract idea, do not integrate the judicial exception into a practical application, as they are insignificant extra-solution activities to perform the disclosed abstract idea above.

2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Additional elements: wherein the annotated data is stockpiled until a quantity of the annotated data exceeds a defined threshold for the annotated data to be used as the new class of data (This step of stockpiling data is electronic recordkeeping, which is well-understood, routine, conventional, see MPEP 2106.05(d)(II)(iii).); The additional elements as disclosed above, in combination with the abstract idea, do not add anything significantly more, as they are well-understood, routine, conventional activity to perform the disclosed abstract idea above.

Regarding Claim 9:

2A Prong 1: There are no elements.

2A Prong 2: This judicial exception is not integrated into a practical application. Additional elements: wherein the second AI model combines two or more signals from the signal data to define the one or more rules (This step is mere instructions to apply the exception using a generic learning model as a tool to perform the abstract idea, see MPEP 2106.05(f).); The additional elements as disclosed above, in combination with the abstract idea, do not integrate the judicial exception into a practical application, as they are mere instructions to apply the exception using the generic learning model as a tool to perform the disclosed abstract idea above.

2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
Additional elements: wherein the second AI model combines two or more signals from the signal data to define the one or more rules (This step is mere instructions to apply the exception using a generic learning model as a tool to perform the abstract idea, see MPEP 2106.05(f).); The additional elements as disclosed above, in combination with the abstract idea, do not add anything significantly more, as they are mere instructions to apply the exception using the generic learning model as a tool to perform the disclosed abstract idea above.

Regarding Claim 10:

2A Prong 1: Processing, …, signal data generated by a vehicle to detect at least one or more non-collision scenarios (This step of processing signal data to detect non-collision scenarios is practically implementable in the human mind and is understood to be a recitation of a mental process of evaluation/judgement; a human can perform data analysis mentally to make a judgement/evaluation, based on received data, as to whether the received data is a non-collision scenario.); Processing, …, the signal data to detect at least one or more near-collision scenarios, such that data from the at least one or more near-collision scenarios is used to train a second AI model to define one or more rules that enable the second AI model to detect one or more new near-collision scenarios with a level of accuracy above that of a first AI model (This step of processing signal data to detect near-collision scenarios is practically implementable in the human mind and is understood to be a recitation of a mental process of evaluation/judgement; a human can perform data analysis mentally to make a judgement/evaluation, based on received data, as to whether the received data is a near-collision scenario.);

2A Prong 2: This judicial exception is not integrated into a practical application.

Additional elements: by a system operatively coupled to a processor (This step is mere instructions to apply the exception using a generic computer component as a tool to perform the abstract idea, see MPEP 2106.05(f).); The additional elements as disclosed above, in combination with the abstract idea, do not integrate the judicial exception into a practical application, as they are mere instructions to apply the exception using the generic computer component as a tool to perform the disclosed abstract idea above.

2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Additional elements: by a system operatively coupled to a processor (This step is mere instructions to apply the exception using a generic computer component as a tool to perform the abstract idea, see MPEP 2106.05(f).); The additional elements as disclosed above, in combination with the abstract idea, do not add anything significantly more, as they are mere instructions to apply the exception using the generic computer component as a tool to perform the disclosed abstract idea above.

Regarding Claim 11:

2A Prong 1: There are no elements.

2A Prong 2: This judicial exception is not integrated into a practical application.
Additional elements: by the system (This step is mere instructions to apply the exception using a generic computer component as a tool to perform the abstract idea, see MPEP 2106.05(f).); Collecting, …, the signal data from one or more sensors of the vehicle, wherein the signal data comprising the at least one or more near-collision scenarios forms inlier data required for training the second AI model (This step of collecting near-collision scenarios to form inlier data is an insignificant extra-solution activity, see MPEP 2106.05(g).); The additional elements as disclosed above, in combination with the abstract idea, do not integrate the judicial exception into a practical application, as they are mere instructions to apply the exception using the generic computer component as a tool, and insignificant extra-solution activities, to perform the disclosed abstract idea above.

2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Additional elements: by the system (This step is mere instructions to apply the exception using a generic computer component as a tool to perform the abstract idea, see MPEP 2106.05(f).); Collecting, …, the signal data from one or more sensors of the vehicle, wherein the signal data comprising the at least one or more near-collision scenarios forms inlier data required for training the second AI model (This step is collecting signal data, which is well-understood, routine, conventional receiving and transmitting of data, see MPEP 2106.05(d).); The additional elements as disclosed above, in combination with the abstract idea, do not add anything significantly more, as they are mere instructions to apply the exception using the generic computer component as a tool, and well-understood, routine, conventional activity, to perform the disclosed abstract idea above.
Regarding Claim 12:

2A Prong 1: Analyzing, …, the inlier data to identify, from the inlier data, one or more classes of data required for the training of the second AI model, wherein the one or more classes of data are annotated to generate annotated data for the training of the second AI model (This step of annotating inlier data to train the 2nd AI model is practically implementable in the human mind and is understood to be a recitation of a mental process of evaluation/judgement; a human can analyze data and annotate it into the classes required for training.);

2A Prong 2: This judicial exception is not integrated into a practical application. Additional elements: by the system (This step is mere instructions to apply the exception using a generic computer component as a tool to perform the abstract idea, see MPEP 2106.05(f).); The additional elements as disclosed above, in combination with the abstract idea, do not integrate the judicial exception into a practical application, as they are mere instructions to apply the exception using the generic computer component as a tool to perform the disclosed abstract idea above.

2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Additional elements: by the system (This step is mere instructions to apply the exception using a generic computer component as a tool to perform the abstract idea, see MPEP 2106.05(f).); The additional elements as disclosed above, in combination with the abstract idea, do not add anything significantly more, as they are mere instructions to apply the exception using the generic computer component as a tool to perform the disclosed abstract idea above.

Regarding Claim 13:

2A Prong 1: Using, …, the annotated data as an existing class of data to generate training data, validation data, and test data for the training, validation and testing of the second AI model (This step of using the annotated data to generate training, testing, and validation data for the 2nd AI model is a mental process; a human can use data to generate training data, testing data, and validation data to train a 2nd AI model, see MPEP 2106.05(f).);

2A Prong 2: This judicial exception is not integrated into a practical application. Additional elements: by the system (This step is mere instructions to apply the exception using a generic computer component as a tool to perform the abstract idea, see MPEP 2106.05(f).); The additional elements as disclosed above, in combination with the abstract idea, do not integrate the judicial exception into a practical application, as they are mere instructions to apply the exception using the generic computer component as a tool to perform the disclosed abstract idea above.

2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Additional elements: by the system (This step is mere instructions to apply the exception using a generic computer component as a tool to perform the abstract idea, see MPEP 2106.05(f).); The additional elements as disclosed above, in combination with the abstract idea, do not add anything significantly more, as they are mere instructions to apply the exception using the generic computer component as a tool to perform the disclosed abstract idea above.
Regarding Claim 14:

2A Prong 1: wherein the signal data comprising the at least one or more near-collision scenarios forms outlier data previously unseen by the first AI model, wherein the outlier data is annotated to generate annotated data for training of the second AI model (This step of annotating outlier data is practically implementable in the human mind and is understood to be a recitation of a mental process of evaluation/judgment; a human can identify unfamiliar data.);

2A Prong 2: This judicial exception is not integrated into a practical application. There are no additional elements.

2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Additional elements: There are no additional elements.

Regarding Claim 15:

2A Prong 1: Using, …, the annotated data as a new class of data to generate training data, validation data, and test data for the training, validation and testing of the second AI model (This step of generating training, testing, and validation data is practically implementable in the human mind and is understood to be a recitation of a mental process of evaluation/judgement; a human can annotate data to test, train, and validate an AI model.);

2A Prong 2: This judicial exception is not integrated into a practical application. Additional elements: There are no additional elements.

2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Additional elements: There are no additional elements.
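Claims 6 and 14 both treat near-collision data previously unseen by the first AI model as outlier data, which is then annotated as a new class. A minimal sketch of that inlier/outlier partition; the scenario names, scores, and label are invented for illustration only:

```python
# Illustrative sketch of the inlier/outlier distinction: samples whose
# signature the first model has already seen are inliers; the rest are
# outliers to be annotated as a new class. All names are hypothetical.

def partition_by_familiarity(samples, known_signatures):
    """Split (signature, payload) samples into inliers (seen) and outliers (unseen)."""
    inliers, outliers = [], []
    for signature, payload in samples:
        bucket = inliers if signature in known_signatures else outliers
        bucket.append((signature, payload))
    return inliers, outliers

# Hypothetical scenario signatures the first model was trained on.
known = {"cut_in", "hard_brake"}
samples = [("cut_in", 0.91), ("hard_brake", 0.84), ("road_debris", 0.77)]
inliers, outliers = partition_by_familiarity(samples, known)

# Per the claims, outliers are annotated to form a new class of training data.
new_class = [(sig, payload, "new_class") for sig, payload in outliers]
```

Here "road_debris" is the only unseen signature, so it alone is annotated into the new class.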
Regarding Claim 16:

2A Prong 1: Stockpiling, …, the annotated data until a quantity of the annotated data exceeds a second defined threshold for the annotated data to be used as the new class of data (This step of stockpiling data is practically implementable in the human mind and is understood to be a recitation of a mental process of evaluation/judgement; a human can stockpile data for later use.);

2A Prong 2: This judicial exception is not integrated into a practical application. There are no additional elements.

2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Additional elements: There are no additional elements.

Regarding Claim 17:

2A Prong 1: There are no elements.

2A Prong 2: This judicial exception is not integrated into a practical application. Additional elements: wherein the second AI model combines two or more signals from the signal data to define the one or more rules (This step is mere instructions to apply the exception using a generic learning model as a tool to perform the abstract idea, see MPEP 2106.05(f).); The additional elements as disclosed above, in combination with the abstract idea, do not integrate the judicial exception into a practical application, as they are mere instructions to apply the exception using the generic learning model as a tool to perform the disclosed abstract idea above.

2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
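The stockpiling limitation of claims 8 and 16 reduces to accumulating annotated samples until their quantity exceeds a defined threshold, then releasing them for use as a new class. A sketch under those assumptions; the class and method names are hypothetical, not from the application:

```python
# Illustrative sketch of stockpiling annotated data until a count threshold
# is exceeded, then releasing the pile as a new class. Hypothetical names.

class Stockpile:
    def __init__(self, threshold: int):
        self.threshold = threshold
        self._items = []

    def add(self, item):
        """Accumulate one annotated sample."""
        self._items.append(item)

    def ready(self) -> bool:
        """True once the quantity of annotated data exceeds the threshold."""
        return len(self._items) > self.threshold

    def release(self):
        """Hand the stockpiled data off as a new class and empty the pile."""
        items, self._items = self._items, []
        return items

pile = Stockpile(threshold=3)
for sample in ["s1", "s2", "s3"]:
    pile.add(sample)
not_yet = pile.ready()        # three items does not exceed a threshold of three
pile.add("s4")
now_ready = pile.ready()      # four items exceeds the threshold
new_class_data = pile.release() if now_ready else []
```

Note that the claim language says "exceeds", so the gate uses a strict comparison: the pile is not ready at exactly the threshold count.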
Additional elements: wherein the second AI model combines two or more signals from the signal data to define the one or more rules (This step is mere instructions to apply the exception using a generic learning model as a tool to perform the abstract idea, see MPEP 2106.05(f).); The additional elements as disclosed above, in combination with the abstract idea, do not add anything significantly more, as they are mere instructions to apply the exception using the generic learning model as a tool to perform the disclosed abstract idea above.

Regarding Claim 18:

2A Prong 1: Processing, …, signal data generated by a vehicle to detect at least one or more non-collision scenarios (This step of processing signal data to detect non-collision scenarios is practically implementable in the human mind and is understood to be a recitation of a mental process of evaluation/judgement; based on received data, a human can mentally make a determination as to whether the received data is a non-collision scenario.); Processing, …, the signal data to detect at least one or more near-collision scenarios, such that data from the at least one or more near-collision scenarios is used to train a second AI model to define one or more rules that enable the second AI model to detect one or more new near-collision scenarios with a level of accuracy above that of a first AI model (This step of processing signal data to detect near-collision scenarios is practically implementable in the human mind and is understood to be a recitation of a mental process of evaluation/judgement; a human can perform data analysis mentally to make a judgement/evaluation, based on received data, as to whether the received data is a near-collision scenario.);

2A Prong 2: This judicial exception is not integrated into a practical application.

Additional elements: A computer program product for using an AI model to detect conflict scenarios for vehicles, the computer program product comprising a computer readable storage medium having program instructions embodied therewith, the program instructions executable by a processor to cause the processor to: (This step is mere instructions to apply the exception using a generic computer component as a tool to perform the abstract idea, see MPEP 2106.05(f).); The additional elements as disclosed above, in combination with the abstract idea, do not integrate the judicial exception into a practical application, as they are mere instructions to apply the exception using the generic computer component as a tool to perform the disclosed abstract idea above.

2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Additional elements: A computer program product for using an AI model to detect conflict scenarios for vehicles, the computer program product comprising a computer readable storage medium having program instructions embodied therewith, the program instructions executable by a processor to cause the processor to: (This step is mere instructions to apply the exception using a generic computer component as a tool to perform the abstract idea, see MPEP 2106.05(f).); The additional elements as disclosed above, in combination with the abstract idea, do not add anything significantly more, as they are mere instructions to apply the exception using the generic computer component as a tool to perform the disclosed abstract idea above.

Regarding Claim 19:

2A Prong 1: There are no elements.

2A Prong 2: This judicial exception is not integrated into a practical application.
Additional elements: wherein the program instructions are further executable by the processor to cause the processor to ,.., by the processor (This step is mere instructions to apply the exception using a generic computer component as a tool to perform the abstract idea, see MPEP 2106.05(f).); Collect,…, the signal data from one or more sensors of the vehicle (This step for collecting signal data is mere data gathering and therefore understood to be an insignificant extra solution activity, see MPEP 2106.05(g).); wherein the signal data comprising the at least one or more near-collision scenarios forms inlier data required for training the second AI model (This step is for near-collision scenario data forming inlier data to train the 2nd AI model; The additional elements as disclosed above in combination of the abstract idea do not integrate the judicial exception into practical application as they are mere instructions to apply the exception using the generic computer component as a tool, insignificant, extra-solution activities, and descriptive information in the field of use to perform the disclosed abstract idea above. 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. 
Additional elements: wherein the program instructions are further executable by the processor to cause the processor to …, by the processor (This step is mere instructions to apply the exception using a generic computer component as a tool to perform the abstract idea, see MPEP 2106.05(f).); Collect, …, the signal data from one or more sensors of the vehicle (This step is collecting signal data, which is electronic recordkeeping and therefore well-understood, routine, and conventional, see MPEP 2106.05(d).); wherein the signal data comprising the at least one or more near-collision scenarios forms inlier data required for training the second AI model (This step is for near-collision scenario data forming inlier data to train the 2nd AI model.); The additional elements as disclosed above in combination with the abstract idea do not add anything significantly more, as they are mere instructions to apply the exception using the generic computer component as a tool, well-understood, routine, conventional activities, and descriptive information in the field of use to perform the disclosed abstract idea above.

Regarding Claim 20: 2A Prong 1: Analyze, …, the signal data from one or more sensors of the vehicle, the inlier data to identify, from the inlier data, one or more classes of data required for the training of the second AI model (This step of analyzing inlier data to identify classes is practically implementable in the human mind and is understood to be a recitation of a mental process of evaluation/judgement; a human can analyze data and annotate it into the classes required for training.); 2A Prong 2: This judicial exception is not integrated into a practical application.
Additional elements: wherein the program instructions are further executable by the processor to cause the processor to …, by the processor (This step is mere instructions to apply the exception using a generic computer component as a tool to perform the abstract idea, see MPEP 2106.05(f).); wherein the one or more classes of data are annotated to generate annotated data for training the second AI model (This step is for annotating classes of data to generate annotated data for training the 2nd AI model.); The additional elements as disclosed above in combination with the abstract idea do not integrate the judicial exception into a practical application, as they are mere instructions to apply the exception using the generic computer component as a tool and descriptive information in the field of use to perform the disclosed abstract idea above.

2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Additional elements: wherein the program instructions are further executable by the processor to cause the processor to …, by the processor (This step is mere instructions to apply the exception using a generic computer component as a tool to perform the abstract idea, see MPEP 2106.05(f).); wherein the one or more classes of data are annotated to generate annotated data for training the second AI model (This step is for annotating classes of data to generate annotated data for training the 2nd AI model.); The additional elements as disclosed above in combination with the abstract idea do not add anything significantly more, as they are mere instructions to apply the exception using the generic computer component as a tool and descriptive information in the field of use to perform the disclosed abstract idea above.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C.
102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

5. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or non-obviousness.

Claim(s) 1, 2, 3, 9, 10, 11, 17, 18, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Qi (US 20220017032 A1) in view of Sagi (US 11568181 B2).

Regarding claim 1, Qi teaches a computer-implemented system comprising: a memory that stores computer executable components (See Paragraph 0130, Qi recites “Each component may include one or more processors (not shown) and memory (not shown).
Instructions stored in the memory of a component may be executed by the one or more processors”, which is interpreted as the memory storing instructions, or the computer executable components.); a processor that executes the computer executable components stored in the memory (See Paragraph 0130. Qi recites “Alternatively, one or more processors of electronic device 1004 (not shown) may execute instructions stored in a central memory”, which is interpreted as the processors executing the computer executable components stored in the memory.); a detection component that processes signal data generated by a vehicle to detect at least one or more non-collision scenarios (See Paragraph 0031. Qi recites “total loss module 201 obtains input from driving sensors 202”, where the “Driving sensors 202” can include “sensors embedded into the vehicle.” In addition, Qi recites “In some instances, the total loss module may determine that a rapid acceleration corresponds to a loss event that is not a collision”, see Paragraph 0046. This is interpreted as the total loss module, or the detection component, processing non-collision scenarios from vehicles.); a first artificial intelligence (AI) model that processes the signal data to detect at least one or more near-collision scenarios such that data from the at least one or more near-collision scenarios is used to train a second AI model (See Paragraph 0003, Qi recites “Embodiments of the present invention generally relate to predicting a total loss event, and more particularly, to detecting a crash event and predicting a confidence of a total loss event associated with the crash event”. Next, Qi recites “the process 400 may proceed to block 408 when the binary output indicates that crash event has occurred, while proceeding to block 406 when the binary output indicates that crash event has not occurred”, see Paragraph 0058. 
Then, Qi recites “At block 406, the process 400 involves generating a crash feature vector using an output of the crash prediction model. The crash feature vector, for example, may include crash features extracted from some or all of the data received from driving sensors and include a crash prediction (e.g., a statistical likelihood that a crash event occurred)”, where the crash prediction model is the 1st AI model processing events where crashes haven’t occurred at block 406, see Paragraph 0059. Furthermore, Qi recites “The first machine-learning model may be trained” with the “output from a crash prediction model”, where “the first machine-learning model may be a decision tree that outputs a confidence that the vehicle has been involved in a total loss event. The first machine-learning model may use some or all of the data values of the crash feature vector to predict a confidence of a total loss event.”, where the decision tree is the first machine-learning model, or the 2nd AI model, predicting a confidence of total loss events, see Paragraph 0077. Finally, Qi recites that the “first machine learning model” is used to improve the “accuracy” of the prediction of the total loss event confidence, see Paragraph 0021, which is interpreted as the first machine-learning model, or the 2nd AI model, predicting total loss event confidence, or near-collision scenarios, with higher accuracy than the crash prediction model, or the 1st AI model.
This is interpreted as the crash prediction model, or the 1st AI model, processing the sensor data, or signal data, for any likelihood or confidence of crash events or total loss events, or near-collision scenarios, to generate a crash feature vector with a crash prediction, where the first machine-learning model, or the 2nd AI model, uses the crash feature vector with the crash prediction to predict a total loss event with a decision tree, or the first machine-learning model using the decision tree for predicting near-collision scenarios with a higher accuracy than the crash prediction model, the 1st AI model). However, Qi fails to disclose the training of a second AI model that defines one or more rules. Sagi teaches the training of a second AI model that defines one or more rules (See Col. 1 lines 35-46, Sagi recites a method of “extracting features from the integrated anomaly analysis data that correlate with an indication of an anomaly, based on predefined correlation criteria; training a plurality of machine learning models using the extracted features, wherein each of the plurality of machine learning models is trained using different combinations of the extracted features; evaluating a performance of the plurality of trained machine learning models; and extracting one or more rules from one or more of the trained machine learning models based on the performance, wherein the extracted one or more rules are used to classify transactions as anomalous”, which is interpreted as the extraction of one or more rules from trained machine learning models being a model defining one or more rules. Furthermore, Sagi recites “One justification for using machine learning techniques for rule extraction is that rules can be updated by re-training the underlying machine learning model. Frequent updates, for example, help in decreasing concept drift issues. Concept drift is the case where the predictive performance of a machine learning model is degraded over time due to changes in the real world”, see Col.
4 lines 7-13, which is interpreted as the rules being defined during the model training.). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to implement the function of Sagi into the method of Qi to train a model that would define one or more rules during the training of a model. The modification would have been obvious because one of ordinary skill in the art would be motivated to utilize the feature of Sagi, as all the references are in the field of classification, to have a model that defines one or more rules. A person of ordinary skill in the art would have been motivated to perform the combination for being able to decrease concept drift (“One justification for using machine learning techniques for rule extraction is that rules can be updated by re-training the underlying machine learning model. Frequent updates, for example, help in decreasing concept drift issues. Concept drift is the case where the predictive performance of a machine learning model is degraded over time due to changes in the real world” as suggested by Sagi at Col. 4 lines 7-13.).

Regarding claim 2, Qi teaches the computer-implemented system of claim 1, further comprising: a data collection component that collects the signal data from one or more sensors of the vehicle (See Paragraph 0031, Qi recites “Total loss module 201 obtains input from driving sensors 202. Driving sensors 202 may” include “sensors embedded into the vehicle”, which is interpreted as the total loss module, or the data collection component, collecting signal data, or sensor data, from the vehicle).
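The two-stage arrangement the rejection maps onto Qi (a crash prediction model screening raw sensor signals, then a decision tree scoring total-loss confidence from a crash feature vector) can be sketched as follows. This is an illustrative reconstruction, not code from Qi or the application; every name, feature, and threshold is hypothetical.

```python
# Illustrative sketch of the two-stage pipeline the rejection reads into Qi:
# a 1st model screens sensor data for likely crash events, and a 2nd model
# scores total-loss confidence from a crash feature vector.

from dataclasses import dataclass
from typing import List

CRASH_LIKELIHOOD_THRESHOLD = 0.5  # "threshold crash likelihood" (Qi para. 0058)

@dataclass
class SensorReading:
    accel_g: float        # peak acceleration, in g (hypothetical feature)
    speed_delta: float    # change in speed over the window, km/h (hypothetical)

def crash_prediction_model(reading: SensorReading) -> float:
    """1st AI model: returns a likelihood that a crash event occurred."""
    # Toy heuristic standing in for a trained model.
    score = 0.1
    if reading.accel_g > 4.0:
        score += 0.5
    if reading.speed_delta < -30.0:
        score += 0.3
    return min(score, 1.0)

def crash_feature_vector(reading: SensorReading, likelihood: float) -> List[float]:
    """Crash features plus the crash prediction itself (Qi para. 0059)."""
    return [reading.accel_g, reading.speed_delta, likelihood]

def total_loss_model(features: List[float]) -> float:
    """2nd AI model: decision-tree-like rules scoring total-loss confidence."""
    accel_g, speed_delta, likelihood = features
    if likelihood < CRASH_LIKELIHOOD_THRESHOLD:
        return 0.0
    return 0.9 if accel_g > 6.0 else 0.4

reading = SensorReading(accel_g=7.2, speed_delta=-45.0)
likelihood = crash_prediction_model(reading)
if likelihood > CRASH_LIKELIHOOD_THRESHOLD:
    confidence = total_loss_model(crash_feature_vector(reading, likelihood))
```

The first-stage thresholding is the step the examiner reads as filtering near-collision scenarios into the "inlier data" that trains the second model.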
Regarding claim 3, Qi teaches the computer-implemented system of claim 1, wherein the signal data comprising the at least one or more near-collision scenarios forms inlier data required for training the second AI model (See Paragraph 0057, Qi recites “The crash prediction model may be a trained model to output a likelihood that a vehicle crash has occurred based on the sensor data.” Furthermore, Qi recites “For instance, the crash prediction model may generate an output only when the likelihood of the crash event is greater than a threshold crash likelihood” and “In these cases, the process 400 may proceed to block 408 when the binary output indicates that crash event has occurred”, where the crash prediction model outputs when the likelihood of the crash event, or near-collision scenario, is greater than a threshold crash likelihood, which is interpreted as the near-collision scenarios forming the inlier data, see Paragraph 0058. Also, Qi recites “The first machine-learning model may be trained using the sensor data (e.g., from sensors of the mobile device or an output from a crash prediction model)”, where the first machine-learning model, or the 2nd AI model, receives the inlier data as training data from the crash prediction model, or the 1st AI model, see Paragraph 0077. This is interpreted as the likelihood that a vehicle crash has occurred based on the sensor data, or the signal data, forming near-collision scenarios, where the likelihood is greater than a threshold crash likelihood, or forming inlier data, to train the first machine-learning model, or the 2nd AI model).

Regarding claim 9, Qi teaches the computer-implemented system of claim 1, wherein the second AI model combines two or more signals from the signal data (See Paragraph 0004, Qi recites “A mobile device can detect a crash event using one or more sensors.
The mobile device records a first set of data from the one or more sensors” that generates a “first feature vector” to be used by the “first machine-learning model” to generate a “confidence value”. Then, Qi recites “crash prediction model 206 may derive a set of crash features from various acceleration measurements, GPS location positions, vehicle movements measured by driving sensors 202” and that “Crash prediction model 206 can additionally output a crash feature vector that includes crash features associated with the crash event 204”, where the crash prediction model is the 1st AI model, see Paragraph 0037. Then, Qi recites “For example, the crash prediction may be generated based on a sensor of the mobile device determining a measurement of the accelerometer that is above a threshold value, a variation in lateral vehicle position (e.g., indicating the vehicle is going off of the road), rumble strip detection (e.g., to determine if a vehicle is going off of the road), frequent and/or hard braking (e.g., indicative of heavy congestion and/or not keeping the proper distance from vehicles in front of the driver), distracted driving (e.g., sensed driver interaction with the mobile device while the vehicle is in motion), and the like. The crash prediction model may be a trained model to output a likelihood that a vehicle crash has occurred based on the sensor data and contextual information such as the factors listed above”, see Paragraph 0057. Furthermore, Qi recites “The first machine-learning model may be a decision tree that outputs a confidence that the vehicle has been involved in a total loss event. The first machine-learning model may use some or all of the data values of the crash feature vector to predict a confidence of a total loss event”, where the first machine-learning model, or the 2nd AI model, uses the signal data from the crash feature vector to define the one or more rules from the decision tree, see Paragraph 0062. 
This is interpreted as the first machine-learning model, or the 2nd AI model, combining all of the signal data, or sensor data, from the mobile device, including the signal data associated with the crash prediction model, or the 1st AI model.). However, Qi fails to disclose the second AI model defining the one or more rules. Sagi teaches the second AI model defining the one or more rules (See Col. 1 lines 35-46, Sagi recites a method of “extracting features from the integrated anomaly analysis data that correlate with an indication of an anomaly, based on predefined correlation criteria; training a plurality of machine learning models using the extracted features, wherein each of the plurality of machine learning models is trained using different combinations of the extracted features; evaluating a performance of the plurality of trained machine learning models; and extracting one or more rules from one or more of the trained machine learning models based on the performance, wherein the extracted one or more rules are used to classify transactions as anomalous”, which is interpreted as the extraction of one or more rules from trained machine learning models being a model defining one or more rules. Then, Sagi recites “As shown in FIG. 3, the exemplary data-driven anomaly rule extraction process 300 initially obtains anomaly analysis data integrated from multiple data sources of an organization”, which is interpreted as the combination of multiple sources of data, or the combination of signal data, see Col. 6 lines 14-17. In addition, Sagi recites “Thereafter, the exemplary data-driven anomaly rule extraction process 300 extracts features from the integrated anomaly analysis data during step 320 that correlate with an indication of an anomaly, based on predefined correlation criteria (e.g., being identified as a feature by a domain expert). The multiple machine learning models are trained during step 330 using the extracted features.
Each machine learning model is trained using a different combination of the extracted features.”, where the model is trained using a combination of multiple sources of data, or a combination of multiple signals, because process 300, step 320, and step 330 are all a part of Fig. 3, see Col. 6 lines 20-29. Furthermore, Sagi recites “One justification for using machine learning techniques for rule extraction is that rules can be updated by re-training the underlying machine learning model. Frequent updates, for example, help in decreasing concept drift issues. Concept drift is the case where the predictive performance of a machine learning model is degraded over time due to changes in the real world”, which is interpreted as the rules being defined during the model training, see Col. 4 lines 7-13. Therefore, the models are trained from a combination of data sources to define one or more rules.). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to implement the function of Sagi into the method of Qi to add a model that would define one or more rules during the training of a model. The modification would have been obvious because one of ordinary skill in the art would be motivated to utilize the feature of Sagi, as all the references are in the field of classification, to have a model that defines one or more rules. A person of ordinary skill in the art would have been motivated to perform the combination for being able to decrease concept drift (“One justification for using machine learning techniques for rule extraction is that rules can be updated by re-training the underlying machine learning model. Frequent updates, for example, help in decreasing concept drift issues. Concept drift is the case where the predictive performance of a machine learning model is degraded over time due to changes in the real world” as suggested by Sagi at Col. 4 lines 7-13.).
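The Sagi-style rule extraction relied on above can be illustrated with a minimal, hypothetical sketch: train a simple model (here a one-level decision stump) on labeled data, then read the learned split back out as a human-readable rule. Sagi trains many models over different feature combinations; this only illustrates the quoted point that rules can be regenerated by re-training the underlying model. All names and data are invented.

```python
# Hypothetical sketch of rule extraction from a trained model: the learned
# split of a decision stump is emitted as a reusable IF/THEN rule.

from typing import List, Tuple

def train_stump(samples: List[Tuple[float, int]]) -> float:
    """Learn the threshold that best separates label 0 from label 1."""
    best_thr, best_errors = 0.0, len(samples) + 1
    for candidate, _ in samples:
        # Count misclassifications if we predict 1 whenever value > candidate.
        errors = sum(1 for v, y in samples if (v > candidate) != (y == 1))
        if errors < best_errors:
            best_thr, best_errors = candidate, errors
    return best_thr

def extract_rule(threshold: float, feature: str = "accel_g") -> str:
    """Turn the trained model's split into a human-readable rule string."""
    return f"IF {feature} > {threshold} THEN anomalous"

# Hypothetical training data: (feature value, anomaly label).
data = [(1.0, 0), (2.0, 0), (3.0, 0), (6.0, 1), (7.0, 1)]
rule = extract_rule(train_stump(data))
```

Re-running `train_stump` on fresh data regenerates the rule, which is the mechanism Sagi cites for keeping extracted rules current.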
Regarding claim 10, Qi teaches a computer-implemented method, comprising: processing, by a system operatively coupled to a processor, signal data generated by a vehicle to detect at least one or more non-collision scenarios (See Paragraph 0030, Qi recites “Total loss module 201 may be executed by data processing block 144 of mobile device 104.” Furthermore, Qi recites “total loss module 201 obtains input from driving sensors 202”, where the “Driving sensors 202” can include “sensors embedded into the vehicle”, see Paragraph 0031. In addition, Qi recites “In some instances, the total loss module may determine that a rapid acceleration corresponds to a loss event that is not a collision”, see Paragraph 0046. This is interpreted as the total loss module, or the detection component, processing non-collision scenarios from vehicles.); processing, by the system, the signal data to detect at least one or more near-collision scenarios, such that data from the at least one or more near-collision scenarios is used to train a second AI model (See Paragraph 0003, Qi recites “Embodiments of the present invention generally relate to predicting a total loss event, and more particularly, to detecting a crash event and predicting a confidence of a total loss event associated with the crash event”. Next, Qi recites “the process 400 may proceed to block 408 when the binary output indicates that crash event has occurred, while proceeding to block 406 when the binary output indicates that crash event has not occurred”, see Paragraph 0058. Then, Qi recites “At block 406, the process 400 involves generating a crash feature vector using an output of the crash prediction model. The crash feature vector, for example, may include crash features extracted from some or all of the data received from driving sensors and include a crash prediction (e.g., a statistical likelihood that a crash event occurred)”, where the crash prediction model is the 1st AI model, see Paragraph 0059.
Furthermore, Qi recites “The first machine-learning model may be trained” with the “output from a crash prediction model”, where “the first machine-learning model may be a decision tree that outputs a confidence that the vehicle has been involved in a total loss event. The first machine-learning model may use some or all of the data values of the crash feature vector to predict a confidence of a total loss event.”, where the decision tree is the first machine-learning model, or the 2nd AI model, predicting a confidence of total loss events, see Paragraph 0077. Finally, Qi recites that the “first machine learning model” is used to improve the “accuracy” of the prediction of the total loss event confidence, which is interpreted as the first machine-learning model, or the 2nd AI model, predicting total loss event confidence, or near-collision scenarios, with higher accuracy than the crash prediction model, or the 1st AI model, see Paragraph 0021. This is interpreted as the crash prediction model, or the 1st AI model, processing the sensor data, or signal data, for any likelihood or confidence of crash events or total loss events, or near-collision scenarios, to generate a crash feature vector with a crash prediction, where the first machine-learning model, or the 2nd AI model, uses the crash feature vector with the crash prediction to predict a total loss event with a decision tree, or the first machine-learning model using the decision tree for predicting near-collision scenarios with a higher accuracy than the crash prediction model, the 1st AI model). However, Qi fails to disclose the training of a second AI model that defines one or more rules. Sagi teaches the training of a second AI model that defines one or more rules (See Col.
1 lines 35-46, Sagi recites a method of “extracting features from the integrated anomaly analysis data that correlate with an indication of an anomaly, based on predefined correlation criteria; training a plurality of machine learning models using the extracted features, wherein each of the plurality of machine learning models is trained using different combinations of the extracted features; evaluating a performance of the plurality of trained machine learning models; and extracting one or more rules from one or more of the trained machine learning models based on the performance, wherein the extracted one or more rules are used to classify transactions as anomalous”, which is interpreted as the extraction of one or more rules from trained machine learning models being a model defining one or more rules. Furthermore, Sagi recites “One justification for using machine learning techniques for rule extraction is that rules can be updated by re-training the underlying machine learning model. Frequent updates, for example, help in decreasing concept drift issues. Concept drift is the case where the predictive performance of a machine learning model is degraded over time due to changes in the real world”, see Col. 4 lines 7-13, which is interpreted as the rules being defined during the model training.). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to implement the function of Sagi into the method of Qi to add a model that would define one or more rules during the training of a model. The modification would have been obvious because one of ordinary skill in the art would be motivated to utilize the feature of Sagi, as all the references are in the field of classification, to have a model that defines one or more rules.
A person of ordinary skill in the art would have been motivated to perform the combination for being able to decrease concept drift (“One justification for using machine learning techniques for rule extraction is that rules can be updated by re-training the underlying machine learning model. Frequent updates, for example, help in decreasing concept drift issues. Concept drift is the case where the predictive performance of a machine learning model is degraded over time due to changes in the real world” as suggested by Sagi at Col. 4 lines 7-13.).

Regarding claim 11, Qi teaches the computer-implemented method of claim 10, further comprising: collecting, by the system, the signal data from one or more sensors of the vehicle, wherein the signal data comprising the at least one or more near-collision scenarios forms inlier data required for training the second AI model (See Paragraph 0031, Qi recites “Total loss module 201 obtains input from driving sensors 202. Driving sensors 202 may” include “sensors embedded into the vehicle”, which is interpreted as the total loss module, or the data collection component, collecting signal data, or sensor data, from the vehicle. Then, Qi recites “The crash prediction model may be a trained model to output a likelihood that a vehicle crash has occurred based on the sensor data”, see Paragraph 0057.
Furthermore, Qi recites “For instance, the crash prediction model may generate an output only when the likelihood of the crash event is greater than a threshold crash likelihood” and “In these cases, the process 400 may proceed to block 408 when the binary output indicates that crash event has occurred, while proceeding to block 406 when the binary output indicates that crash event has not occurred”, where the crash prediction model outputs when the likelihood of the crash event, or near-collision scenario, is greater than a threshold crash likelihood, which is interpreted as the near-collision scenarios forming the inlier data, see Paragraph 0058. Also, Qi recites “The first machine-learning model may be trained using the sensor data (e.g., from sensors of the mobile device or an output from a crash prediction model)”, where the first machine-learning model, or the 2nd AI model, receives the inlier data as training data from the crash prediction model, or the 1st AI model, see Paragraph 0077. This is interpreted as the likelihood that a vehicle crash has occurred based on the sensor data, or the signal data, forming near-collision scenarios, where the likelihood is greater than a threshold crash likelihood, or forming inlier data, to train the first machine-learning model, or the 2nd AI model).

Regarding claim 17, Qi teaches the computer-implemented method of claim 10, wherein the second AI model combines two or more signals from the signal data (See Paragraph 0004, Qi recites “A mobile device can detect a crash event using one or more sensors. The mobile device records a first set of data from the one or more sensors” that generates a “first feature vector” to be used by the “first machine-learning model” to generate a “confidence value”.
Then, Qi recites “crash prediction model 206 may derive a set of crash features from various acceleration measurements, GPS location positions, vehicle movements measured by driving sensors 202” and that “Crash prediction model 206 can additionally output a crash feature vector that includes crash features associated with the crash event 204”, where the crash prediction model is the 1st AI model, see Paragraph 0037. Then, Qi recites “For example, the crash prediction may be generated based on a sensor of the mobile device determining a measurement of the accelerometer that is above a threshold value, a variation in lateral vehicle position (e.g., indicating the vehicle is going off of the road), rumble strip detection (e.g., to determine if a vehicle is going off of the road), frequent and/or hard braking (e.g., indicative of heavy congestion and/or not keeping the proper distance from vehicles in front of the driver), distracted driving (e.g., sensed driver interaction with the mobile device while the vehicle is in motion), and the like. The crash prediction model may be a trained model to output a likelihood that a vehicle crash has occurred based on the sensor data and contextual information such as the factors listed above”, see Paragraph 0057. Furthermore, Qi recites “The first machine-learning model may be a decision tree that outputs a confidence that the vehicle has been involved in a total loss event. The first machine-learning model may use some or all of the data values of the crash feature vector to predict a confidence of a total loss event”, where the first machine-learning model, or the 2nd AI model, combines the signal data from the crash feature vector, see Paragraph 0062. This is interpreted as the first machine-learning model, or the 2nd AI model, combining all of the signal data, or sensor data, from the mobile device, including the signal data associated with the crash prediction model, or the 1st AI model.).
However, Qi fails to disclose the second AI model defining one or more rules. Sagi teaches the second AI model defining one or more rules (See Col. 1 lines 35-46, Sagi recites a method of “extracting features from the integrated anomaly analysis data that correlate with an indication of an anomaly, based on predefined correlation criteria; training a plurality of machine learning models using the extracted features, wherein each of the plurality of machine learning models is trained using different combinations of the extracted features; evaluating a performance of the plurality of trained machine learning models; and extracting one or more rules from one or more of the trained machine learning models based on the performance, wherein the extracted one or more rules are used to classify transactions as anomalous”, which is interpreted as the extraction of one or more rules from trained machine learning models being a model defining one or more rules. Then, Sagi recites “As shown in FIG. 3, the exemplary data-driven anomaly rule extraction process 300 initially obtains anomaly analysis data integrated from multiple data sources of an organization”, which is interpreted as the combination of multiple sources of data, or the combination of signal data, see Col. 6 lines 14-17. In addition, Sagi recites “Thereafter, the exemplary data-driven anomaly rule extraction process 300 extracts features from the integrated anomaly analysis data during step 320 that correlate with an indication of an anomaly, based on predefined correlation criteria (e.g., being identified as a feature by a domain expert). The multiple machine learning models are trained during step 330 using the extracted features. Each machine learning model is trained using a different combination of the extracted features.”, where the model is trained using a combination of multiple sources of data, or a combination of multiple signals, because process 300, step 320, and step 330 are all a part of Fig. 3, see Col. 6 lines 20-29.
Furthermore, Sagi recites “One justification for using machine learning techniques for rule extraction is that rules can be updated by re-training the underlying machine learning model. Frequent updates, for example, help in decreasing concept drift issues. Concept drift is the case where the predictive performance of a machine learning model is degraded over time due to changes in the real world”, which is interpreted as the rules being defined during the model training, see Col. 4 lines 7-13. Therefore, the models are trained from a combination of data sources to define one or more rules.). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to implement the function of Sagi into the method of Qi to train a model that would define one or more rules. The modification would have been obvious because one of ordinary skill in the art would be motivated to utilize the feature of Sagi, as all the references are in the field of classification, to have a model that defines one or more rules. A person of ordinary skill in the art would have been motivated to make the combination to decrease concept drift (“One justification for using machine learning techniques for rule extraction is that rules can be updated by re-training the underlying machine learning model. Frequent updates, for example, help in decreasing concept drift issues. Concept drift is the case where the predictive performance of a machine learning model is degraded over time due to changes in the real world” as suggested by Sagi at Col. 4 lines 7-13.). 
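For illustration only, the rule-extraction idea attributed to Sagi (deriving explicit if-then rules from a trained model) can be sketched with a toy decision tree; the tree structure, feature names, and labels below are hypothetical and are not drawn from Sagi:

```python
# A toy "trained" decision tree represented as nested tuples:
# (feature, threshold, left_subtree, right_subtree), or a leaf label string.
TREE = ("amount", 1000.0,
        "normal",
        ("velocity", 5.0, "normal", "anomalous"))

def extract_rules(node, conditions=()):
    """Walk the trained tree and emit one if-then rule per leaf,
    mirroring the extraction of rules from a trained model."""
    if isinstance(node, str):  # leaf: the accumulated path is the rule
        return [(conditions, node)]
    feature, threshold, left, right = node
    return (extract_rules(left, conditions + ((feature, "<=", threshold),)) +
            extract_rules(right, conditions + ((feature, ">", threshold),)))

rules = extract_rules(TREE)
# One rule per leaf; e.g. the last rule classifies a transaction as
# anomalous when amount > 1000.0 and velocity > 5.0.
```

Re-training the tree and re-running `extract_rules` would yield an updated rule set, which is the mechanism Sagi offers for mitigating concept drift.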
Regarding claim 18, Qi teaches a computer program product for using an AI model to detect conflict scenarios for vehicles, the computer program product comprising a computer readable storage medium having program instructions embodied therewith, the program instructions executable by a processor to cause the processor to: process, by the processor, signal data generated by a vehicle to detect at least one or more non-collision scenarios (See Paragraph 0031, Qi recites “total loss module 201 obtains input from driving sensors 202”, where the “Driving sensors 202” can include “sensors embedded into the vehicle.” In addition, Qi recites “In some instances, the total loss module may determine that a rapid acceleration corresponds to a loss event that is not a collision”, see Paragraph 0046. This is interpreted as processing non-collision scenarios from vehicles.); process, by the processor, the signal data to detect at least one or more near-collision scenarios, such that data from the at least one or more near-collision scenarios is used to train a second AI model (See Paragraph 0003, Qi recites “Embodiments of the present invention generally relate to predicting a total loss event, and more particularly, to detecting a crash event and predicting a confidence of a total loss event associated with the crash event”. Next, Qi recites “the process 400 may proceed to block 408 when the binary output indicates that crash event has occurred, while proceeding to block 406 when the binary output indicates that crash event has not occurred”, see Paragraph 0058. Then, Qi recites “At block 406, the process 400 involves generating a crash feature vector using an output of the crash prediction model. The crash feature vector, for example, may include crash features extracted from some or all of the data received from driving sensors and include a crash prediction (e.g., a statistical likelihood that a crash event occurred)”, see Paragraph 0059. 
Furthermore, Qi recites “The first machine-learning model may be a decision tree that outputs a confidence that the vehicle has been involved in a total loss event. The first machine-learning model may use some or all of the data values of the crash feature vector to predict a confidence of a total loss event”, where the decision tree is the first machine-learning model, or the 2nd AI model predicting a confidence of total loss events, see Paragraph 0062. Finally, Qi recites that the “first machine learning model” is used to improve the “accuracy” of the prediction of the total loss event confidence, which is interpreted as the first machine-learning model, or the 2nd AI model, predicting total loss event confidence, or near-collision scenarios, with higher accuracy than the crash prediction model, or the 1st AI model, see Paragraph 0021. This is interpreted as the crash prediction model, or the 1st AI model, processing the sensor data, or signal data, for any likelihood or confidence of crash events or total loss events, or near-collision scenarios, to generate a crash feature vector with a crash prediction, where the first machine-learning model, or the 2nd AI model, uses the crash feature vector with the crash prediction to predict a total loss event with a decision tree, or the first machine-learning model using the decision tree, for predicting near-collision scenarios with a higher accuracy than the crash prediction model, the 1st AI model). However, Qi fails to disclose the training of a second AI model that defines one or more rules. Sagi teaches the training of a second AI model that defines one or more rules (See Col. 
1 lines 35-46, Sagi recites a method of “extracting features from the integrated anomaly analysis data that correlate with an indication of an anomaly, based on predefined correlation criteria; training a plurality of machine learning models using the extracted features, wherein each of the plurality of machine learning models is trained using different combinations of the extracted features; evaluating a performance of the plurality of trained machine learning models; and extracting one or more rules from one or more of the trained machine learning models based on the performance, wherein the extracted one or more rules are used to classify transactions as anomalous”, which is interpreted as meaning that the extraction of one or more rules from trained machine learning models is a model defining one or more rules. Furthermore, Sagi recites “One justification for using machine learning techniques for rule extraction is that rules can be updated by re-training the underlying machine learning model. Frequent updates, for example, help in decreasing concept drift issues. Concept drift is the case where the predictive performance of a machine learning model is degraded over time due to changes in the real world”, see Col. 4 lines 7-13, which is interpreted as the rules being defined during the model training.). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to implement the function of Sagi into the method of Qi to add a model that would define one or more rules. The modification would have been obvious because one of ordinary skill in the art would be motivated to utilize the feature of Sagi, as all the references are in the field of classification, to have a model that defines one or more rules. 
A person of ordinary skill in the art would have been motivated to make the combination to decrease concept drift (“One justification for using machine learning techniques for rule extraction is that rules can be updated by re-training the underlying machine learning model. Frequent updates, for example, help in decreasing concept drift issues. Concept drift is the case where the predictive performance of a machine learning model is degraded over time due to changes in the real world” as suggested by Sagi at Col. 4 lines 7-13.). Regarding claim 19, Qi teaches the computer program product of claim 18, wherein the program instructions are further executable by the processor to cause the processor to: collect, by the processor, the signal data from one or more sensors of the vehicle, wherein the signal data comprising the at least one or more near-collision scenarios forms inlier data required for training the second AI model (See Paragraph 0057, Qi recites “The crash prediction model may be a trained model to output a likelihood that a vehicle crash has occurred based on the sensor data.” Furthermore, Qi recites “For instance, the crash prediction model may generate an output only when the likelihood of the crash event is greater than a threshold crash likelihood” and “In these cases, the process 400 may proceed to block 408 when the binary output indicates that crash event has occurred”, where the crash prediction model generates an output when the likelihood of the crash event, or a near-collision scenario, is greater than a threshold crash likelihood, which is interpreted as the near-collision scenarios forming the inlier data, see Paragraph 0058. 
Also, Qi recites “The first machine-learning model may be trained using the sensor data (e.g., from sensors of the mobile device or an output from a crash prediction model)”, where the first machine-learning model, or the 2nd AI model, receives the inlier data as training data from the crash prediction model, or the 1st AI model, see Paragraph 0077. This is interpreted as the likelihood that a vehicle crash has occurred based on the sensor data, or the signal data, forming near-collision scenarios, where the likelihood is greater than a threshold crash likelihood, or forming inlier data, to train the first machine-learning model, or the 2nd AI model). Claim(s) 4, 12, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Qi (US 20220017032 A1) in view of Sagi (US 11568181 B2) further in view of Afrasiabi (US 20240249500 A1). Regarding claim 4, while Qi discloses the computer-implemented system of claim 3, defining the inlier data from near-collision scenarios (See Paragraph 0057, Qi recites “The crash prediction model may be a trained model to output a likelihood that a vehicle crash has occurred based on the sensor data.” Furthermore, Qi recites “For instance, the crash prediction model may generate an output only when the likelihood of the crash event is greater than a threshold crash likelihood” and “In these cases, the process 400 may proceed to block 408 when the binary output indicates that crash event has occurred”, where the crash prediction model generates an output when the likelihood of the crash event, or a near-collision scenario, is greater than a threshold crash likelihood, which is interpreted as the near-collision scenarios forming the inlier data, see Paragraphs 0058-0059. 
Also, Qi recites “The first machine-learning model may be trained using the sensor data (e.g., from sensors of the mobile device or an output from a crash prediction model)”, where the first machine-learning model, or the 2nd AI model, receives the inlier data as training data from the crash prediction model, or the 1st AI model, see Paragraph 0077. This is interpreted as the likelihood that a vehicle crash has occurred based on the sensor data, or the signal data, forming near-collision scenarios, where the likelihood is greater than a threshold crash likelihood, or forming inlier data, to train the first machine-learning model, or the 2nd AI model), Qi and Sagi fail to teach a sample balancer that analyzes the data to identify one or more classes of data required for the training of the second AI model, wherein the one or more classes of data are annotated to generate annotated data for the training of the second AI model. However, Afrasiabi teaches a sample balancer that analyzes the data to identify one or more classes of data required for the training of the second AI model, wherein the one or more classes of data are annotated to generate annotated data for the training of the second AI model (See Paragraph 0004, Afrasiabi recites a “feature extractor” that extracts “features for a plurality of data elements of the input data”, a “clustering model configured to cluster the plurality of data elements of the input data into a plurality of feature clusters based on similarities of the extracted features to each other, label a plurality of target clusters of the plurality of feature clusters and a plurality of data elements of the plurality of target clusters with respective predetermined labels, generate a training dataset including the plurality of data elements of the plurality of target clusters”, where the “training dataset” is used to train a “machine learning model”. 
This is interpreted as the feature extractor, or the 1st AI model, providing data to the clustering model, or the sample balancer, that labels and clusters, or annotates and sorts into classes, the training data that is used to train a machine learning model, or the 2nd AI model.). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to implement the function of Afrasiabi into the method of Qi and Sagi to add a sample balancer that would classify the inlier data into separate classes to train the 2nd AI model. The modification would have been obvious because one of ordinary skill in the art would be motivated to utilize the feature of Afrasiabi, as all the references are in the field of classification, to train the 2nd AI model. A person of ordinary skill in the art would have been motivated to improve the performance of the second AI model (“Filtered cropped images 44 of the scratches can be inputted into a clustering model 54, and features 48 of the filtered cropped images 44 can be clustered into feature clusters 56, which are further filtered to generate a training dataset 72, which is used to train a pre-training machine learning model 66 to improve its performance in detecting scratches on images of aircraft skin” as suggested by Afrasiabi at Paragraph 0038.). 
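For illustration only, the clustering-then-labeling flow attributed to Afrasiabi (cluster data elements by feature similarity, then label target clusters with predetermined labels to generate an annotated training dataset) can be sketched as follows; the one-dimensional features, centroids, and label names are hypothetical:

```python
def assign_clusters(features, centroids):
    """Group extracted feature values by nearest centroid
    (a toy stand-in for the clustering model)."""
    clusters = {i: [] for i in range(len(centroids))}
    for f in features:
        nearest = min(range(len(centroids)),
                      key=lambda i: abs(f - centroids[i]))
        clusters[nearest].append(f)
    return clusters

def build_training_set(clusters, cluster_labels):
    """Label every element of each target cluster with that cluster's
    predetermined label, yielding an annotated training dataset."""
    return [(f, cluster_labels[i])
            for i, members in clusters.items()
            for f in members]

features = [0.1, 0.2, 0.9, 1.1, 0.15]          # extracted feature values
clusters = assign_clusters(features, centroids=[0.0, 1.0])
dataset = build_training_set(clusters, {0: "no_conflict",
                                        1: "near_collision"})
```

The resulting `dataset` of (feature, label) pairs plays the role of the annotated data used to train the second model.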
Regarding claim 12, while Qi discloses the computer-implemented system of claim 11 further comprising: the inlier data required for training the second AI model (See Paragraph 0057, Qi recites “The crash prediction model may be a trained model to output a likelihood that a vehicle crash has occurred based on the sensor data.” Furthermore, Qi recites “For instance, the crash prediction model may generate an output only when the likelihood of the crash event is greater than a threshold crash likelihood” and “In these cases, the process 400 may proceed to block 408 when the binary output indicates that crash event has occurred”, where the crash prediction model generates an output when the likelihood of the crash event, or a near-collision scenario, is greater than a threshold crash likelihood, which is interpreted as the near-collision scenarios forming the inlier data, see Paragraph 0058. Also, Qi recites “The first machine-learning model may be trained using the sensor data (e.g., from sensors of the mobile device or an output from a crash prediction model)”, where the first machine-learning model, or the 2nd AI model, receives the inlier data as training data from the crash prediction model, or the 1st AI model, see Paragraph 0077. This is interpreted as the likelihood that a vehicle crash has occurred based on the sensor data, or the signal data, forming near-collision scenarios, where the likelihood is greater than a threshold crash likelihood, or forming inlier data, to train the first machine-learning model, or the 2nd AI model), Qi and Sagi fail to teach the analysis, by the system, of the data to identify one or more classes of data required for the training of the second AI model, wherein the one or more classes of data are annotated to generate annotated data for the training of the second AI model. 
However, Afrasiabi teaches the analysis of the data to identify one or more classes of data required for the training of the second AI model, wherein the one or more classes of data are annotated to generate annotated data for the training of the second AI model (See Paragraph 0004, Afrasiabi recites a “feature extractor” that extracts “features for a plurality of data elements of the input data”, a “clustering model configured to cluster the plurality of data elements of the input data into a plurality of feature clusters based on similarities of the extracted features to each other, label a plurality of target clusters of the plurality of feature clusters and a plurality of data elements of the plurality of target clusters with respective predetermined labels, generate a training dataset including the plurality of data elements of the plurality of target clusters”, where the “training dataset” is used to train a “machine learning model”. This is interpreted as the feature extractor, or the 1st AI model, providing data to the clustering model, or the sample balancer, that labels and clusters, or annotates and sorts into classes, the training data that is used to train a machine learning model, or the 2nd AI model.). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to implement the function of Afrasiabi into the method of Qi and Sagi to add a sample balancer that would classify the inlier data into separate classes to train the 2nd AI model. The modification would have been obvious because one of ordinary skill in the art would be motivated to utilize the feature of Afrasiabi, as all the references are in the field of classification, to train the 2nd AI model. 
A person of ordinary skill in the art would have been motivated to improve the performance of the second AI model (“Filtered cropped images 44 of the scratches can be inputted into a clustering model 54, and features 48 of the filtered cropped images 44 can be clustered into feature clusters 56, which are further filtered to generate a training dataset 72, which is used to train a pre-training machine learning model 66 to improve its performance in detecting scratches on images of aircraft skin” as suggested by Afrasiabi at Paragraph 0038.). Regarding claim 20, while Qi discloses the computer program product of claim 19 comprising the inlier data required for training the second AI model (See Paragraph 0057, Qi recites “The crash prediction model may be a trained model to output a likelihood that a vehicle crash has occurred based on the sensor data.” Furthermore, Qi recites “For instance, the crash prediction model may generate an output only when the likelihood of the crash event is greater than a threshold crash likelihood” and “In these cases, the process 400 may proceed to block 408 when the binary output indicates that crash event has occurred”, where the crash prediction model generates an output when the likelihood of the crash event, or a near-collision scenario, is greater than a threshold crash likelihood, which is interpreted as the near-collision scenarios forming the inlier data, see Paragraph 0058. Also, Qi recites “The first machine-learning model may be trained using the sensor data (e.g., from sensors of the mobile device or an output from a crash prediction model)”, where the first machine-learning model, or the 2nd AI model, receives the inlier data as training data from the crash prediction model, or the 1st AI model, see Paragraph 0077. 
This is interpreted as the likelihood that a vehicle crash has occurred based on the sensor data, or the signal data, forming near-collision scenarios, where the likelihood is greater than a threshold crash likelihood, or forming inlier data, to train the first machine-learning model, or the 2nd AI model), Qi and Sagi fail to teach that the program instructions are further executable by the processor to cause the processor to: analyze, by the processor, the data to identify one or more classes of data required for the training of the second AI model, wherein the one or more classes of data are annotated to generate annotated data for the training of the second AI model. However, Afrasiabi teaches the processor analyzing the data to identify one or more classes of data required for the training of the second AI model, wherein the one or more classes of data are annotated to generate annotated data for the training of the second AI model (See Paragraph 0004, Afrasiabi recites a “feature extractor” that extracts “features for a plurality of data elements of the input data”, a “clustering model configured to cluster the plurality of data elements of the input data into a plurality of feature clusters based on similarities of the extracted features to each other, label a plurality of target clusters of the plurality of feature clusters and a plurality of data elements of the plurality of target clusters with respective predetermined labels, generate a training dataset including the plurality of data elements of the plurality of target clusters”, where the “training dataset” is used to train a “machine learning model”. This is interpreted as the feature extractor, or the 1st AI model, providing data to the clustering model, or the sample balancer, that labels and clusters, or annotates and sorts into classes, the training data that is used to train a machine learning model, or the 2nd AI model.). 
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to implement the function of Afrasiabi into the method of Qi and Sagi to add a sample balancer that would classify the inlier data into separate classes to train the 2nd AI model. The modification would have been obvious because one of ordinary skill in the art would be motivated to utilize the feature of Afrasiabi, as all the references are in the field of classification, to train the 2nd AI model. A person of ordinary skill in the art would have been motivated to improve the performance of the second AI model (“Filtered cropped images 44 of the scratches can be inputted into a clustering model 54, and features 48 of the filtered cropped images 44 can be clustered into feature clusters 56, which are further filtered to generate a training dataset 72, which is used to train a pre-training machine learning model 66 to improve its performance in detecting scratches on images of aircraft skin” as suggested by Afrasiabi at Paragraph 0038.). Claim(s) 6 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Qi (US 20220017032 A1) in view of Sagi (US 11568181 B2) further in view of Mahmud (US 20240096067 A1). Regarding Claim 6, while Qi discloses the computer system of claim 1, wherein the signal data comprising at least one or more near-collision scenarios (See Paragraph 0003, Qi recites “Embodiments of the present invention generally relate to predicting a total loss event, and more particularly, to detecting a crash event and predicting a confidence of a total loss event associated with the crash event”. Next, Qi recites “the process 400 may proceed to block 408 when the binary output indicates that crash event has occurred, while proceeding to block 406 when the binary output indicates that crash event has not occurred”, see Paragraph 0058. 
Then, Qi recites “At block 406, the process 400 involves generating a crash feature vector using an output of the crash prediction model. The crash feature vector, for example, may include crash features extracted from some or all of the data received from driving sensors and include a crash prediction (e.g., a statistical likelihood that a crash event occurred)”, where the crash prediction model is the 1st AI model processing events where crashes haven’t occurred at block 406, see Paragraph 0059. Furthermore, Qi recites “The first machine-learning model may be trained” with the “output from a crash prediction model”, where “the first machine-learning model may be a decision tree that outputs a confidence that the vehicle has been involved in a total loss event. The first machine-learning model may use some or all of the data values of the crash feature vector to predict a confidence of a total loss event.”, where the decision tree is the first machine-learning model, or the 2nd AI model predicting a confidence of total loss events, see Paragraph 0077. Finally, Qi recites that the “first machine learning model” is used to improve the “accuracy” of the prediction of the total loss event confidence, see Paragraph 0021, which is interpreted as the first machine-learning model, or the 2nd AI model, predicting total loss event confidence, or near-collision scenarios, with higher accuracy than the crash prediction model, or the 1st AI model. This is interpreted as the signal data being analyzed by the first machine-learning model, or the 2nd AI model, to predict crash events or total loss event confidence, or near-collision scenarios.), Qi and Sagi fail to disclose that data forms outlier data unseen by the first AI model that is annotated to generate annotated data for training of the second AI model. 
However, Mahmud teaches the data forming outlier data previously unseen by the first AI model that is annotated to generate annotated data for training of the second AI model (See Paragraph 0036, Mahmud recites “In particular, FIG. 3A illustrates the training of a classifier and backbone feature-extractor models. First, the training data is classified into a plurality of classes using the feature-extractor backbone model and classifier model. Then, as shown in FIG. 3B, the classes of data are grouped, with each group including a number of the classes. Then, as shown in FIG. 3C, a teacher model is assigned to a respective one of the groups. Each teacher model is trained with the backbone feature-extractor model and the data in its respective group. Then the output of each teacher model is aggregated or combined to create a trained class prediction model”, where an “uncertainty score” is assigned “to the output of each teacher model, and the final class prediction model classifies the input data based on the uncertainty scores. In FIG. 3E, the system trains a student model that can learn from the output of the teacher models”, where the teacher model is the first AI model providing training data with scores, or annotated data, to the second AI model, the student model is the 2nd AI model, and the annotator is the final class prediction model because it classifies the data based on the uncertainty scores. Furthermore, Mahmud recites “Each teacher model processes the input data and predicts the class, along with a confidence score. It follows that the teacher model that was previously trained with the data that is now being input in FIG. 3D should result with the lowest uncertainty score and thus should be weighed more heavily”, meaning that if a teacher model is given data it was not previously trained on, the uncertainty score should be higher, which is the outlier data previously unseen by the first AI model. 
Therefore, a teacher model with a high uncertainty score, or annotated outlier data previously unseen by the first AI model, is used to train the final class prediction model, or the second AI model, see Paragraph 0047.). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to implement the function of Mahmud into the method of Qi and Sagi to have outlier data previously unseen by the 1st AI model form annotated data to train the 2nd AI model. The modification would have been obvious because one of ordinary skill in the art would be motivated to utilize the feature of Mahmud, as all the references are in the field of classification, to train the 2nd AI model. A person of ordinary skill in the art would have been motivated to have an improved classification model (“The outputs of the teacher models are merged into a final class prediction model, which represents an improved classification model” as suggested by Mahmud at Paragraph 0049.). Regarding Claim 14, while Qi discloses the computer-implemented system of claim 10 where the signal data comprising at least one or more near-collision scenarios (See Paragraph 0003, Qi recites “Embodiments of the present invention generally relate to predicting a total loss event, and more particularly, to detecting a crash event and predicting a confidence of a total loss event associated with the crash event”. Next, Qi recites “the process 400 may proceed to block 408 when the binary output indicates that crash event has occurred, while proceeding to block 406 when the binary output indicates that crash event has not occurred”, see Paragraph 0058. Then, Qi recites “At block 406, the process 400 involves generating a crash feature vector using an output of the crash prediction model. 
The crash feature vector, for example, may include crash features extracted from some or all of the data received from driving sensors and include a crash prediction (e.g., a statistical likelihood that a crash event occurred)”, where the crash prediction model is the 1st AI model processing events where crashes haven’t occurred at block 406, see Paragraph 0059. Furthermore, Qi recites “The first machine-learning model may be trained” with the “output from a crash prediction model”, where “the first machine-learning model may be a decision tree that outputs a confidence that the vehicle has been involved in a total loss event. The first machine-learning model may use some or all of the data values of the crash feature vector to predict a confidence of a total loss event.”, where the decision tree is the first machine-learning model, or the 2nd AI model predicting a confidence of total loss events, see Paragraph 0077. Finally, Qi recites that the “first machine learning model” is used to improve the “accuracy” of the prediction of the total loss event confidence, see Paragraph 0021, which is interpreted as the first machine-learning model, or the 2nd AI model, predicting total loss event confidence, or near-collision scenarios, with higher accuracy than the crash prediction model, or the 1st AI model. This is interpreted as the signal data being analyzed by the first machine-learning model, or the 2nd AI model, to predict crash events or total loss event confidence, or near-collision scenarios.), Qi and Sagi fail to disclose that the data forms outlier data previously unseen by the first AI model that is annotated to generate annotated data for training of the second AI model. However, Mahmud teaches the data forming outlier data previously unseen by the first AI model that is annotated to generate annotated data for training of the second AI model (See Paragraph 0036, Mahmud recites “In particular, FIG. 
3A illustrates the training of a classifier and backbone feature-extractor models. First, the training data is classified into a plurality of classes using the feature-extractor backbone model and classifier model. Then, as shown in FIG. 3B, the classes of data are grouped, with each group including a number of the classes. Then, as shown in FIG. 3C, a teacher model is assigned to a respective one of the groups. Each teacher model is trained with the backbone feature-extractor model and the data in its respective group. Then the output of each teacher model is aggregated or combined to create a trained class prediction model”, where an “uncertainty score” is assigned “to the output of each teacher model, and the final class prediction model classifies the input data based on the uncertainty scores. In FIG. 3E, the system trains a student model that can learn from the output of the teacher models”, where the teacher model is the first AI model providing training data with scores, or annotated data, to the second AI model, the student model is the 2nd AI model, and the annotator is the final class prediction model because it classifies the data based on the uncertainty scores. Furthermore, Mahmud recites “Each teacher model processes the input data and predicts the class, along with a confidence score. It follows that the teacher model that was previously trained with the data that is now being input in FIG. 3D should result with the lowest uncertainty score and thus should be weighed more heavily”, meaning that if a teacher model is given data it was not previously trained on, the uncertainty score should be higher, which is the outlier data previously unseen by the first AI model. Therefore, a teacher model with a high uncertainty score, or annotated outlier data previously unseen by the first AI model, is used to train the final class prediction model, or the second AI model, see Paragraph 0047.). 
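For illustration only, the uncertainty-weighted aggregation of teacher-model outputs described in the Mahmud passages above can be sketched as follows; the inverse-uncertainty weighting and the class names are assumptions made for the sketch, not Mahmud's implementation:

```python
def aggregate_teachers(predictions):
    """Combine teacher-model outputs into a final class prediction,
    weighting each teacher inversely by its uncertainty score
    (lower uncertainty => more weight)."""
    weights = {}
    for cls, uncertainty in predictions:
        weights[cls] = weights.get(cls, 0.0) + 1.0 / uncertainty
    return max(weights, key=weights.get)

# The teacher previously trained on this kind of data reports the lowest
# uncertainty and therefore dominates the aggregated prediction.
final = aggregate_teachers([("crash", 0.1),
                            ("no_crash", 0.9),
                            ("no_crash", 0.8)])
```

Conversely, data that is outlier to every teacher produces uniformly high uncertainty scores, which is the condition the rejection maps to "outlier data previously unseen by the first AI model."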
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to implement the function of Mahmud into the method of Qi and Sagi to have outlier data to the 1st AI model form annotated data to train the 2nd AI model. The modification would have been obvious because one of ordinary skill in the art would be motivated to utilize the feature of Mahmud, as all the references are in the field of classification to train the 2nd AI model. A person of ordinary skill in the art would have been motivated to have an improved classification model (“The outputs of the teacher models are merged into a final class prediction model, which represents an improved classification model” as suggested by Mahmud at Paragraph 0049.). Claims 5 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Qi (US 20220017032 A1) in view of Sagi (US 11568181 B2) further in view of Afrasiabi (US 20240249500 A1) further in view of Mahmud (US 20240096067 A1) further in view of Zaman (US 20200050879 A1). Regarding Claim 5, Qi and Sagi do not specifically disclose the computer-implemented system of claim 4, wherein the annotated data is used as an existing class of data to generate training data, validation data, and testing data for the training of the second AI model.
Afrasiabi discloses the computer-implemented system of claim 4, wherein data is annotated to be used as a class of data (See Paragraph 0004, Afrasiabi recites a “feature extractor” that extracts “features for a plurality of data elements of the input data”, a “clustering model configured to cluster the plurality of data elements of the input data into a plurality of feature clusters based on similarities of the extracted features to each other, label a plurality of target clusters of the plurality of feature clusters and a plurality of data elements of the plurality of target clusters with respective predetermined labels, generate a training dataset including the plurality of data elements of the plurality of target clusters”, where the “training dataset” is used to train a “machine learning model”. This is interpreted as the feature extractor, or the 1st AI model, providing data to the clustering model, which labels and clusters the data, or annotates and sorts it into classes, to be used as training data to train a machine learning model, or the 2nd AI model.). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to implement the function of Afrasiabi into the method of Qi and Sagi to add a sample balancer that would classify the inlier data into separate classes to train the 2nd AI model. The modification would have been obvious because one of ordinary skill in the art would be motivated to utilize the feature of Afrasiabi, as all the references are in the field of classification to train the 2nd AI model.
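For illustration only (not part of the record; the feature extractor, the two-cluster threshold, and the label names are assumptions, not Afrasiabi's disclosure), the cluster-and-label pipeline mapped above can be sketched as: extract a feature per data element, group elements into feature clusters by similarity, assign each cluster a predetermined label, and emit the labeled elements as a training dataset.

```python
def extract_feature(sample):
    # Stand-in feature extractor (the 1st AI model in the mapping above):
    # here simply the mean of the raw signal values.
    return sum(sample) / len(sample)

def cluster_and_label(samples, threshold=0.5, labels=("inlier", "outlier")):
    """Group samples into two feature clusters by similarity to a threshold,
    label each cluster with a predetermined label, and return the labeled
    elements as a training dataset for a second model."""
    training_dataset = []
    for sample in samples:
        cluster = 0 if extract_feature(sample) < threshold else 1
        training_dataset.append((sample, labels[cluster]))
    return training_dataset

data = [[0.1, 0.2], [0.9, 0.8], [0.2, 0.1]]
for sample, label in cluster_and_label(data):
    print(label)  # inlier, outlier, inlier
```

A real implementation would use an unsupervised clustering model (e.g., k-means over extracted feature vectors) rather than a fixed threshold; the sketch only shows the data flow from extractor to labeled training set.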
A person of ordinary skill in the art would have been motivated to improve the performance of the second AI model (“Filtered cropped images 44 of the scratches can be inputted into a clustering model 54, and features 48 of the filtered cropped images 44 can be clustered into feature clusters 56, which are further filtered to generate a training dataset 72, which is used to train a pre-training machine learning model 66 to improve its performance in detecting scratches on images of aircraft skin” as suggested by Afrasiabi at Paragraph 0038.). However, Afrasiabi fails to disclose that the data can form an existing class of data that can be used for the training of the second AI model. Mahmud teaches the data forming an existing class of data for the training of the second AI model (See Paragraph 0036, Mahmud recites “In particular, FIG. 3A illustrates the training of a classifier and backbone feature-extractor models. First, the training data is classified into a plurality of classes using the feature-extractor backbone model and classifier model. Then, as shown in FIG. 3B, the classes of data are grouped, with each group including a number of the classes. Then, as shown in FIG. 3C, a teacher model is assigned to a respective one of the groups. Each teacher model is trained with the backbone feature-extractor model and the data in its respective group. Then the output of each teacher model is aggregated or combined to create a trained class prediction model” where an “uncertainty score” is assigned “to the output of each teacher model, and the final class prediction model classifies the input data based on the uncertainty scores. In FIG.
3E, the system trains a student model that can learn from the output of the teacher models”, where the teacher model is the first AI model providing training data with scores, or annotated data, to the second AI model, the student model is the 2nd AI model, and the annotator is the final class prediction model because it classifies the data based on the uncertainty scores. Furthermore, Mahmud recites “Each teacher model processes the input data and predicts the class, along with a confidence score. It follows that the teacher model that was previously trained with the data that is now being input in FIG. 3D should result with the lowest uncertainty score and thus should be weighed more heavily”, meaning that if a teacher model is trained with data it was previously trained on, then the familiar data forms an existing class of data. Therefore, a teacher model with a low uncertainty score, or an existing class of data, is used to train the final class prediction model, or the second AI model, see Paragraph 0047.). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to implement the function of Mahmud into the method of Qi, Sagi, and Afrasiabi to have an existing class of data from the 1st AI model to train the 2nd AI model. The modification would have been obvious because one of ordinary skill in the art would be motivated to utilize the feature of Mahmud, as all the references are in the field of classification to train the 2nd AI model. A person of ordinary skill in the art would have been motivated to have an improved classification model (“The outputs of the teacher models are merged into a final class prediction model, which represents an improved classification model” as suggested by Mahmud at Paragraph 0049.). However, Mahmud fails to disclose that the data can also be used for generating validation data and test data for the training of the second AI model.
Zaman teaches, in an analogous system, wherein the data is used to generate validation data and test data for the validation and testing of the second AI model (See Paragraph 0021, Zaman recites “For example, object identification platform 115 may obtain hundreds, thousands, millions, or billions of images and metadata associated with the images identifying features of the images” and the “object identification platform 115 may segment the images data set into an images training data set, an images testing data set, an images validation data set, and/or the like”, where the metadata associated with the images identifying the features is generated to create test data, training data, and validation data. Zaman recites that an “object identification platform” improves the “accuracy of object identification” in image processing for “improved safety for collision avoidance applications of object identification (e.g., autonomous vehicle navigation)”, see Paragraph 0013. Furthermore, Zaman recites that the “object identification platform 115 may train the image identification model using, for example, the images training data set (e.g., as training data to train the model) and the images testing data set (e.g., as testing data to test the model), and may validate the image identification model using, for example, the images validation data set (e.g., as validation data to validate the model)”, see Paragraph 0022. This is interpreted as the object identification platform, or the 1st AI model, using the data to form testing data, training data, and validation data for the training of the image identification model, or the 2nd AI model.). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to implement the function of Zaman into the methods of Afrasiabi, Mahmud, Sagi, and Qi for the data to form testing data and validation data for the 2nd AI model.
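For illustration only (not part of the record; the 80/10/10 split ratios and the function name are assumptions, not Zaman's disclosure), the segmentation of an annotated data set into training, testing, and validation subsets can be sketched as:

```python
import random

def split_dataset(annotated_data, train=0.8, test=0.1, seed=0):
    """Segment annotated data into training, testing, and validation sets,
    mirroring Zaman's segmentation of an images data set. The seed makes
    the shuffle reproducible; the input list is not modified."""
    shuffled = annotated_data[:]
    random.Random(seed).shuffle(shuffled)
    n = len(shuffled)
    n_train = int(n * train)
    n_test = int(n * test)
    return (shuffled[:n_train],                  # training data
            shuffled[n_train:n_train + n_test],  # testing data
            shuffled[n_train + n_test:])         # validation data

train_set, test_set, val_set = split_dataset(list(range(100)))
print(len(train_set), len(test_set), len(val_set))  # 80 10 10
```

The training set would then train the second AI model, the testing set would test it, and the validation set would validate it, as in Zaman's Paragraph 0022.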
The modification would have been obvious because one of ordinary skill in the art would be motivated to utilize the feature of Zaman, as all the references are in the field of annotation to form test data, training data, and validation data for the 2nd AI model. A person of ordinary skill in the art would have been motivated to perform the combination for being able to improve both the 1st AI model and the 2nd AI model (The “image generation model” has “improved image quality relative to the true image” and by improving the “image generation model” the “accuracy of object identification performed by object identification platform 115” is also improved). Regarding Claim 13, Qi and Sagi do not specifically disclose the computer-implemented system of claim 12, further comprising: using, by the system, the annotated data as an existing class of data to generate training data, validation data, and test data for the training, validation and testing of the second AI model. Afrasiabi discloses the computer-implemented system of claim 12, wherein the annotated data is used as a class of data (See Paragraph 0004, Afrasiabi recites a “feature extractor” that extracts “features for a plurality of data elements of the input data”, a “clustering model configured to cluster the plurality of data elements of the input data into a plurality of feature clusters based on similarities of the extracted features to each other, label a plurality of target clusters of the plurality of feature clusters and a plurality of data elements of the plurality of target clusters with respective predetermined labels, generate a training dataset including the plurality of data elements of the plurality of target clusters”, where the “training dataset” is used to train a “machine learning model”.
This is interpreted as the feature extractor, or the 1st AI model, providing data to the clustering model, or the sample balancer, which labels and clusters, or annotates and sorts into classes, the training data that is used to train a machine learning model, or the 2nd AI model.). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to implement the function of Afrasiabi into the method of Qi and Sagi to add a sample balancer that would classify the inlier data into separate classes to train the 2nd AI model. The modification would have been obvious because one of ordinary skill in the art would be motivated to utilize the feature of Afrasiabi, as all the references are in the field of classification to train the 2nd AI model. A person of ordinary skill in the art would have been motivated to improve the performance of the second AI model (“Filtered cropped images 44 of the scratches can be inputted into a clustering model 54, and features 48 of the filtered cropped images 44 can be clustered into feature clusters 56, which are further filtered to generate a training dataset 72, which is used to train a pre-training machine learning model 66 to improve its performance in detecting scratches on images of aircraft skin” as suggested by Afrasiabi at Paragraph 0038.). However, Afrasiabi fails to disclose that the annotated data can form an existing class of data that can be used for the training of the second AI model. Mahmud teaches the data forming an existing class of data for the training of the second AI model (See Paragraph 0036, Mahmud recites “In particular, FIG. 3A illustrates the training of a classifier and backbone feature-extractor models. First, the training data is classified into a plurality of classes using the feature-extractor backbone model and classifier model. Then, as shown in FIG. 3B, the classes of data are grouped, with each group including a number of the classes.
Then, as shown in FIG. 3C, a teacher model is assigned to a respective one of the groups. Each teacher model is trained with the backbone feature-extractor model and the data in its respective group. Then the output of each teacher model is aggregated or combined to create a trained class prediction model” where an “uncertainty score” is assigned “to the output of each teacher model, and the final class prediction model classifies the input data based on the uncertainty scores. In FIG. 3E, the system trains a student model that can learn from the output of the teacher models”, where the teacher model is the first AI model providing training data with scores, or annotated data, to the second AI model, the student model is the 2nd AI model, and the annotator is the final class prediction model because it classifies the data based on the uncertainty scores. Furthermore, Mahmud recites “Each teacher model processes the input data and predicts the class, along with a confidence score. It follows that the teacher model that was previously trained with the data that is now being input in FIG. 3D should result with the lowest uncertainty score and thus should be weighed more heavily”, meaning that if a teacher model is trained with data it was previously trained on, then the familiar data forms an existing class of data. Therefore, a teacher model with a low uncertainty score, or an existing class of data, is used to train the final class prediction model, or the second AI model, see Paragraph 0047.). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to implement the function of Mahmud into the method of Qi, Afrasiabi, and Sagi to have an existing class of data from the 1st AI model to train the 2nd AI model.
The modification would have been obvious because one of ordinary skill in the art would be motivated to utilize the feature of Mahmud, as all the references are in the field of classification to train the 2nd AI model. A person of ordinary skill in the art would have been motivated to have an improved classification model (“The outputs of the teacher models are merged into a final class prediction model, which represents an improved classification model” as suggested by Mahmud at Paragraph 0049.). However, Mahmud fails to disclose that the existing class of data can also be used for generating validation data and test data for the training of the second AI model. Zaman teaches, in an analogous system, wherein the data is used as an existing class of data to generate validation data and test data for the validation and testing of the second AI model (See Paragraph 0021, Zaman recites “For example, object identification platform 115 may obtain hundreds, thousands, millions, or billions of images and metadata associated with the images identifying features of the images” and the “object identification platform 115 may segment the images data set into an images training data set, an images testing data set, an images validation data set, and/or the like”, where the metadata associated with the images identifying the features is interpreted as the annotated data generated to create test data, training data, and validation data. Zaman recites that an “object identification platform” improves the “accuracy of object identification” in image processing for “improved safety for collision avoidance applications of object identification (e.g., autonomous vehicle navigation)”, see Paragraph 0013.
Furthermore, Zaman recites that the “object identification platform 115 may train the image identification model using, for example, the images training data set (e.g., as training data to train the model) and the images testing data set (e.g., as testing data to test the model), and may validate the image identification model using, for example, the images validation data set (e.g., as validation data to validate the model)”, see Paragraph 0022. This is interpreted as the object identification platform, or the 1st AI model, using the data to form testing data, training data, and validation data for the training of the image identification model, or the 2nd AI model.). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to implement the function of Zaman into the methods of Afrasiabi, Mahmud, Qi, and Sagi for the annotated data to form testing data and validation data for the 2nd AI model. The modification would have been obvious because one of ordinary skill in the art would be motivated to utilize the feature of Zaman, as all the references are in the field of annotation to form test data, training data, and validation data for the 2nd AI model. A person of ordinary skill in the art would have been motivated to perform the combination for being able to have an improved version of the 1st AI model (The “image generation model” has “improved image quality relative to the true image” and by improving the “image generation model” the “accuracy of object identification performed by object identification platform 115” is also improved). Claims 7 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Qi (US 20220017032 A1) in view of Sagi (US 11568181 B2) further in view of Mahmud (US 20240096067 A1) further in view of Zaman (US 20200050879 A1).
Regarding Claim 7, Qi and Sagi do not specifically disclose the computer-implemented system of claim 6, wherein the annotated data is used as a new class of data to generate training data, validation data, and test data for the training, validation and testing of the second AI model. Mahmud discloses that the annotated data forms a new class for training the second AI model (See Paragraph 0036, Mahmud recites “In particular, FIG. 3A illustrates the training of a classifier and backbone feature-extractor models. First, the training data is classified into a plurality of classes using the feature-extractor backbone model and classifier model. Then, as shown in FIG. 3B, the classes of data are grouped, with each group including a number of the classes. Then, as shown in FIG. 3C, a teacher model is assigned to a respective one of the groups. Each teacher model is trained with the backbone feature-extractor model and the data in its respective group. Then the output of each teacher model is aggregated or combined to create a trained class prediction model” where an “uncertainty score” is assigned “to the output of each teacher model, and the final class prediction model classifies the input data based on the uncertainty scores. In FIG. 3E, the system trains a student model that can learn from the output of the teacher models”, where the teacher model is the first AI model providing training data with scores, or annotated data, to the second AI model, the student model is the 2nd AI model, and the annotator is the final class prediction model because it classifies the data based on the uncertainty scores. Furthermore, Mahmud recites “Each teacher model processes the input data and predicts the class, along with a confidence score. It follows that the teacher model that was previously trained with the data that is now being input in FIG.
3D should result with the lowest uncertainty score and thus should be weighed more heavily”, meaning that if a teacher model receives data it was not previously trained on, the uncertainty score should be higher, which is the outlier data previously unseen by the first AI model. Therefore, a teacher model with a high uncertainty score, or annotated outlier data previously unseen by the first AI model, is used to train the final class prediction model, or the second AI model, see Paragraph 0047). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to implement the function of Mahmud into the method of Qi and Sagi to have outlier data to the 1st AI model form annotated data to train the 2nd AI model. The modification would have been obvious because one of ordinary skill in the art would be motivated to utilize the feature of Mahmud, as all the references are in the field of classification to train the 2nd AI model. A person of ordinary skill in the art would have been motivated to have an improved classification model (“The outputs of the teacher models are merged into a final class prediction model, which represents an improved classification model” as suggested by Mahmud at Paragraph 0049.). However, Mahmud fails to disclose that the data is used to generate validation data and test data for the validation and testing of the second AI model.
Zaman teaches, in an analogous system, that data is used to generate validation data and test data for the validation and testing of the second AI model (See Paragraph 0021, Zaman recites “For example, object identification platform 115 may obtain hundreds, thousands, millions, or billions of images and metadata associated with the images identifying features of the images.” and that the “object identification platform 115 may segment the images data set into an images training data set, an images testing data set, an images validation data set, and/or the like”, where the metadata associated with the images identifying the features is interpreted as the annotated data generated to create test data, training data, and validation data. Furthermore, Zaman recites that an “object identification platform” improves the “accuracy of object identification” in image processing for “improved safety for collision avoidance applications of object identification (e.g., autonomous vehicle navigation)”, see Paragraph 0013. Furthermore, Zaman recites that the “object identification platform 115 may train the image identification model using, for example, the images training data set (e.g., as training data to train the model) and the images testing data set (e.g., as testing data to test the model), and may validate the image identification model using, for example, the images validation data set (e.g., as validation data to validate the model)”, see Paragraph 0022. This is interpreted as the object identification platform, or the 1st AI model, using the data to form testing data, training data, and validation data for the training of the image identification model, or the 2nd AI model.). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to implement the function of Zaman into the methods of Mahmud and Qi for the annotated data to form testing data and validation data for the 2nd AI model.
The modification would have been obvious because one of ordinary skill in the art would be motivated to utilize the feature of Zaman, as all the references are in the field of annotation to form test data, training data, and validation data for the 2nd AI model. A person of ordinary skill in the art would have been motivated to perform the combination for being able to have an improved version of the 1st AI model (The “object identification platform 115” uses “the image generation model” more effectively, as suggested by Zaman at Paragraph 0024). Regarding Claim 15, Qi and Sagi do not specifically disclose the computer-implemented system of claim 14, wherein the annotated data is used as a new class of data to generate training data for the training of the second AI model. Mahmud discloses that the annotated data forms a new class for training the second AI model (See Paragraph 0036, Mahmud recites “In particular, FIG. 3A illustrates the training of a classifier and backbone feature-extractor models. First, the training data is classified into a plurality of classes using the feature-extractor backbone model and classifier model. Then, as shown in FIG. 3B, the classes of data are grouped, with each group including a number of the classes. Then, as shown in FIG. 3C, a teacher model is assigned to a respective one of the groups. Each teacher model is trained with the backbone feature-extractor model and the data in its respective group. Then the output of each teacher model is aggregated or combined to create a trained class prediction model” where an “uncertainty score” is assigned “to the output of each teacher model, and the final class prediction model classifies the input data based on the uncertainty scores. In FIG.
3E, the system trains a student model that can learn from the output of the teacher models”, where the teacher model is the first AI model providing training data to the second AI model, the student model is the 2nd AI model, and the annotator is the final class prediction model because it classifies the data based on the uncertainty scores, or the annotated data. Furthermore, Mahmud recites “Each teacher model processes the input data and predicts the class, along with a confidence score. It follows that the teacher model that was previously trained with the data that is now being input in FIG. 3D should result with the lowest uncertainty score and thus should be weighed more heavily”, meaning that if a teacher model receives data it was not previously trained on, the uncertainty score should be higher, which is the outlier data previously unseen by the first AI model. Therefore, a teacher model with a high uncertainty score, or annotated outlier data previously unseen by the first AI model, is used to train the final class prediction model, or the second AI model.). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to implement the function of Mahmud into the method of Qi and Sagi to have outlier data to the 1st AI model form annotated data to train the 2nd AI model. The modification would have been obvious because one of ordinary skill in the art would be motivated to utilize the feature of Mahmud, as all the references are in the field of classification to train the 2nd AI model. A person of ordinary skill in the art would have been motivated to have an improved classification model (“The outputs of the teacher models are merged into a final class prediction model, which represents an improved classification model” as suggested by Mahmud at Paragraph 0049.).
However, Mahmud fails to disclose that the data is used to generate validation data and test data for the validation and testing of the second AI model. Zaman teaches, in an analogous system, wherein the data is used as a new class of data to generate validation data and test data for the validation and testing of the second AI model (See Paragraph 0021, Zaman recites “For example, object identification platform 115 may obtain hundreds, thousands, millions, or billions of images and metadata associated with the images identifying features of the images.” and that the “object identification platform 115 may segment the images data set into an images training data set, an images testing data set, an images validation data set, and/or the like”, where the metadata associated with the images identifying the features is interpreted as the annotated data generated to create test data, training data, and validation data. Furthermore, Zaman recites that an “object identification platform” improves the “accuracy of object identification” in image processing for “improved safety for collision avoidance applications of object identification (e.g., autonomous vehicle navigation)”, see Paragraph 0013. Furthermore, Zaman recites that the “object identification platform 115 may train the image identification model using, for example, the images training data set (e.g., as training data to train the model) and the images testing data set (e.g., as testing data to test the model), and may validate the image identification model using, for example, the images validation data set (e.g., as validation data to validate the model)”, see Paragraph 0022. This is interpreted as the object identification platform, or the 1st AI model, using the data to form testing data, training data, and validation data for the training of the image identification model, or the 2nd AI model.).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to implement the function of Zaman into the methods of Mahmud and Qi for the annotated data to form testing data and validation data for the 2nd AI model. The modification would have been obvious because one of ordinary skill in the art would be motivated to utilize the feature of Zaman, as all the references are in the field of annotation to form test data, training data, and validation data for the 2nd AI model. A person of ordinary skill in the art would have been motivated to perform the combination for being able to have an improved version of the 1st AI model (The “object identification platform 115” uses “the image generation model” more effectively, as suggested by Zaman at Paragraph 0024). Claim(s) 8 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Qi (US 20220017032 A1) in view of Sagi (US 11568181 B2) further in view of Mahmud (US 20240096067 A1) further in view of Zaman (US 20200050879 A1) further in view of Mavroeidis (US 20240395025 A1). Regarding claim 8, Qi, Sagi, and Zaman do not specifically disclose the computer-implemented system of claim 7, wherein the annotated data is stockpiled until a quantity of the annotated data exceeds a defined threshold for the annotated data to be used as the new class of data. Mahmud discloses that the annotated data forms a new class for training the second AI model (See Paragraph 0036, Mahmud recites “In particular, FIG. 3A illustrates the training of a classifier and backbone feature-extractor models. First, the training data is classified into a plurality of classes using the feature-extractor backbone model and classifier model. Then, as shown in FIG. 3B, the classes of data are grouped, with each group including a number of the classes. Then, as shown in FIG. 3C, a teacher model is assigned to a respective one of the groups.
Each teacher model is trained with the backbone feature-extractor model and the data in its respective group. Then the output of each teacher model is aggregated or combined to create a trained class prediction model” where an “uncertainty score” is assigned “to the output of each teacher model, and the final class prediction model classifies the input data based on the uncertainty scores. In FIG. 3E, the system trains a student model that can learn from the output of the teacher models”, where the teacher model is the first AI model providing training data to the second AI model, the student model is the 2nd AI model, and the annotator is the final class prediction model because it classifies the data based on the uncertainty scores, or the annotated data. Furthermore, Mahmud recites “Each teacher model processes the input data and predicts the class, along with a confidence score. It follows that the teacher model that was previously trained with the data that is now being input in FIG. 3D should result with the lowest uncertainty score and thus should be weighed more heavily”, meaning that if a teacher model receives data it was not previously trained on, the uncertainty score should be higher, which is the outlier data previously unseen by the first AI model. Therefore, a teacher model with a high uncertainty score, or annotated outlier data previously unseen by the first AI model, is used to train the final class prediction model, or the second AI model.). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to implement the function of Mahmud into the method of Qi and Sagi to have outlier data to the 1st AI model form annotated data to train the 2nd AI model. The modification would have been obvious because one of ordinary skill in the art would be motivated to utilize the feature of Mahmud, as all the references are in the field of classification to train the 2nd AI model.
A person of ordinary skill in the art would have been motivated to have an improved classification model (“The outputs of the teacher models are merged into a final class prediction model, which represents an improved classification model”, as suggested by Mahmud at Paragraph 0049). However, Mahmud fails to disclose that the data is stockpiled until a quantity of data exceeds a defined threshold. Mavroeidis teaches the data being stockpiled until a quantity of data exceeds a defined threshold (see Paragraph 0103; Mavroeidis recites “Once a sufficient number of such admitted images (possibly re-labeled) has accumulated, that is, exceeds a pre-defined threshold, the next iteration learning cycle may be triggered and flow control passes to the training system TS”, which is interpreted as data being stockpiled until it exceeds a defined threshold). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to implement the function of Mavroeidis into the methods of Zaman, Mahmud, Qi, and Sagi for the stockpiling of data until it reaches a defined threshold before being used. The modification would have been obvious because one of ordinary skill in the art would be motivated to utilize the feature of Mavroeidis for the annotated data being formed as a new class of data. A person of ordinary skill in the art would have been motivated to have improved parameterization (“Training system TS use in particular the newly admitted training images (possibly with labels awarded by re-labeler RL) to re-train the model M. In retraining, the newly admitted cases are accessed and fed as training input data in the training system. The implemented training algorithm then uses their targets to further adjust the current parameters θ of model M to obtain an improved parameterization θ’”, as suggested by Mavroeidis at Paragraph 0104).
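The accumulate-until-threshold behavior quoted from Mavroeidis, which also underlies the "stockpiling" limitation of claims 8 and 16, can be illustrated with a minimal sketch. All names here are hypothetical and do not come from any cited reference; the retraining step is a placeholder.

```python
class AnnotatedDataStockpile:
    """Accumulate annotated samples and trigger a retraining cycle only
    once their count exceeds a defined threshold, mirroring the
    accumulation behavior quoted from Mavroeidis. Illustrative only."""

    def __init__(self, threshold: int):
        self.threshold = threshold
        self.samples: list = []

    def add(self, sample) -> bool:
        """Stockpile one annotated sample; return True when the stockpile
        exceeds the threshold and the next learning cycle fires."""
        self.samples.append(sample)
        if len(self.samples) > self.threshold:
            batch = self.samples
            self.samples = []      # reset the stockpile for the next cycle
            self.retrain(batch)
            return True
        return False

    def retrain(self, batch) -> None:
        # Placeholder for the next iteration learning cycle, e.g. forming
        # the stockpiled batch into a new class and updating the model.
        pass
```

With `threshold=3`, the first three `add()` calls return `False`; the fourth exceeds the threshold, empties the stockpile, and returns `True`.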
Regarding claim 16, Qi, Sagi, and Zaman do not specifically disclose the computer-implemented method of claim 15, further comprising: stockpiling, by the system, the annotated data until a quantity of the annotated data exceeds a second defined threshold for the annotated data to be used as the new class of data. Mahmud discloses that the annotated data forms a new class for training the second AI model (see Paragraph 0036; Mahmud recites “In particular, FIG. 3A illustrates the training of a classifier and backbone feature-extractor models. First, the training data is classified into a plurality of classes using the feature-extractor backbone model and classifier model. Then, as shown in FIG. 3B, the classes of data are grouped, with each group including a number of the classes. Then, as shown in FIG. 3C, a teacher model is assigned to a respective one of the groups. Each teacher model is trained with the backbone feature-extractor model and the data in its respective group. Then the output of each teacher model is aggregated or combined to create a trained class prediction model”, where an “uncertainty score” is assigned “to the output of each teacher model, and the final class prediction model classifies the input data based on the uncertainty scores. In FIG. 3E, the system trains a student model that can learn from the output of the teacher models”, where the teacher model is the first AI model providing training data to the student model, or the second AI model, and the annotator is the final class prediction model because it classifies the data based on the uncertainty scores, or the annotated data. Furthermore, Mahmud recites “Each teacher model processes the input data and predicts the class, along with a confidence score. It follows that the teacher model that was previously trained with the data that is now being input in FIG.
3D should result with the lowest uncertainty score and thus should be weighed more heavily”, meaning that if a teacher model receives data it was not previously trained on, the uncertainty score should be higher, which corresponds to the outlier data previously unseen by the first AI model. Therefore, a teacher model with a high uncertainty score, or annotated outlier data previously unseen by the first AI model, is used to train the final class prediction model, or the second AI model.). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to implement the function of Mahmud into the methods of Qi and Sagi so that outlier data to the first AI model forms annotated data for training the second AI model. The modification would have been obvious because one of ordinary skill in the art would be motivated to utilize the feature of Mahmud, as all the references are in the field of classification to train the second AI model. A person of ordinary skill in the art would have been motivated to have an improved classification model (“The outputs of the teacher models are merged into a final class prediction model, which represents an improved classification model”, as suggested by Mahmud at Paragraph 0049). However, Mahmud fails to disclose that the data is stockpiled until a quantity of data exceeds a defined threshold. Mavroeidis teaches the data being stockpiled until a quantity of data exceeds a defined threshold (see Paragraph 0103; Mavroeidis recites “Once a sufficient number of such admitted images (possibly re-labeled) has accumulated, that is, exceeds a pre-defined threshold, the next iteration learning cycle may be triggered and flow control passes to the training system TS”, which is interpreted as data being stockpiled until it exceeds a defined threshold).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to implement the function of Mavroeidis into the methods of Zaman, Mahmud, Qi, and Sagi for the stockpiling of data until it reaches a defined threshold before being used. The modification would have been obvious because one of ordinary skill in the art would be motivated to utilize the feature of Mavroeidis for the annotated data being formed as a new class of data. A person of ordinary skill in the art would have been motivated to have improved parameterization (“Training system TS use in particular the newly admitted training images (possibly with labels awarded by re-labeler RL) to re-train the model M. In retraining, the newly admitted cases are accessed and fed as training input data in the training system. The implemented training algorithm then uses their targets to further adjust the current parameters θ of model M to obtain an improved parameterization θ’”, as suggested by Mavroeidis at Paragraph 0104). Conclusion 9. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Qi (US 20220017032 A1) teaches a computer-implemented system of detecting near-collision scenarios with a vehicle. 10. Any inquiry concerning this communication or earlier communications from the examiner should be directed to YASIN A HASSAN whose telephone number is (571) 272-1567. The examiner can normally be reached Mon-Fri, 8am-5pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Abdullah Al Kawsar, can be reached at (571) 270-3169.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /YASIN ABDULLAH HASSAN/Examiner, Art Unit 2127 /ABDULLAH AL KAWSAR/Supervisory Patent Examiner, Art Unit 2127

Prosecution Timeline

- Jan 30, 2023: Application Filed
- Feb 17, 2026: Non-Final Rejection (§101, §103, §112)
- Apr 08, 2026: Interview Requested
- Apr 16, 2026: Applicant Interview (Telephonic)
- Apr 16, 2026: Examiner Interview Summary


Prosecution Projections

- Expected OA Rounds: 1-2
- Grant Probability
- Median Time to Grant: 3y 3m
- PTA Risk: Low

Based on 0 resolved cases by this examiner. Grant probability derived from career allow rate.
