Prosecution Insights
Last updated: April 19, 2026
Application No. 17/940,851

METHOD FOR GENERATING A DATA SET FOR TRAINING AND/OR TESTING A MACHINE LEARNING ALGORITHM ON THE BASIS OF AN ENSEMBLE OF DATA FILTERS

Non-Final Office Action — §101, §102, §103
Filed: Sep 08, 2022
Examiner: WOOLWINE, SHANE D
Art Unit: 2124
Tech Center: 2100 — Computer Architecture & Software
Assignee: Robert Bosch GmbH
OA Round: 1 (Non-Final)

Grant probability: 86% (Favorable)
Expected OA rounds: 1-2
Expected time to grant: 2y 11m
Grant probability with interview: 99%

Examiner Intelligence

Career allow rate: 86% (324 granted / 375 resolved), +31.4% vs Tech Center average (above average)
Interview lift: +21.0% in resolved cases with an interview vs without (a strong lift)
Typical timeline: 2y 11m average prosecution; 10 applications currently pending
Career history: 385 total applications across all art units

Statute-Specific Performance

§101: 13.6% (-26.4% vs TC avg)
§102: 17.9% (-22.1% vs TC avg)
§103: 46.3% (+6.3% vs TC avg)
§112: 12.2% (-27.8% vs TC avg)

Tech Center averages are estimates; figures are based on career data from 375 resolved cases.

Office Action

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-15 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a mental process without significantly more. In determining whether the claims are subject matter eligible, the Examiner applies the 2019 USPTO Patent Eligibility Guidelines. (2019 Revised Patent Subject Matter Eligibility Guidance, 84 Fed. Reg. 50, Jan. 7, 2019.)

Regarding claims 1 and 12, taking claim 1 as exemplary:

Step 1: Is the claim to a process, machine, manufacture, or composition of matter? Yes: claim 1 recites a method, and claim 12 a device.

Step 2A, prong one: Does the claim recite an abstract idea, law of nature, or natural phenomenon? Yes.
The claim recites “A method for generating a data set for training and/or testing a machine learning algorithm, the method comprising the following steps: providing a first data set, wherein the first data set includes data potentially relevant to the machine learning algorithm; providing an ensemble of data filters; configuring each data filter of the ensemble of data filters based on requirements of the machine learning algorithm; and selecting the first data set by filtering the first data set using at least a part of the configured data filters of the ensemble of data filters in order to obtain data for training and/or testing the machine learning algorithm, wherein the data form the data set for training and/or testing the machine learning algorithm.” This recites the mental process of organizing data and making judgments through testing, which can be performed in the human mind and/or with the aid of pen and paper, without significantly more.

Step 2A, prong two: Does the claim recite additional elements that integrate the judicial exception into a practical application? No. Although claims 1 and 12 recite a “machine learning algorithm,” an “ensemble,” “training,” and a “data filter,” these elements are recited at a high level of generality, such that they amount to no more than mere instructions to apply the exception using generic machines that gather data and feed it into readily available, off-the-shelf models, which is no more than extra-solution activity.

Step 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception? No. Although claims 1 and 12 recite a “machine learning algorithm,” an “ensemble,” “training,” and a “data filter,” these elements are recited at a high level of generality, such that they amount to no more than mere instructions to apply the exception using generic machines that gather data and feed it into readily available, off-the-shelf models, which is no more than extra-solution activity. See Alice Corp. Pty. Ltd. v. CLS Bank Int’l, 573 U.S. 208, 134 S. Ct. 2347, 2360 (2014).

For the reasons above, claims 1 and 12 are rejected as being directed to non-patentable subject matter under §101.

The additional limitations of the dependent claims are addressed briefly below:

Regarding dependent claim 2: “wherein the step of selecting the first data set by filtering the first data set using at least a part of the configured data filters of the ensemble of data filters includes: respectively filtering the first data set using at least a part of the configured data filters of the ensemble of data filters in order to obtain filtered data; classifying the filtered data based on the requirements of the machine learning algorithm in order to obtain classified data; and selecting data from the classified data based on the requirements of the machine learning algorithm, wherein the selected data form the data set for training and/or testing the machine learning algorithm.” This continues to recite the abstract idea of claim 1, a mental process of organizing data and making judgments through testing, which can be performed in the human mind and/or with the aid of pen and paper, without significantly more.
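As a purely illustrative aside, the data-selection pipeline recited in claims 1 and 2 (configure an ensemble of data filters from the model's requirements, then keep the records that pass the configured filters) can be sketched as below; every function name, record field, and requirement key is hypothetical, not drawn from the application or the prior art.

```python
# Illustrative sketch of the claims 1-2 pipeline: configure an ensemble of
# data filters from "requirements" of the model, then select training data
# by applying the configured filters. All names/fields are hypothetical.

from typing import Callable, Iterable

Record = dict
DataFilter = Callable[[Record], bool]

def configure_filters(requirements: dict) -> list[DataFilter]:
    """Build one predicate per requirement (hypothetical requirement keys)."""
    filters: list[DataFilter] = []
    if "min_speed" in requirements:
        filters.append(lambda r: r.get("speed", 0) >= requirements["min_speed"])
    if "sensor" in requirements:
        filters.append(lambda r: r.get("sensor") == requirements["sensor"])
    return filters

def select_training_data(first_data_set: Iterable[Record],
                         filters: list[DataFilter]) -> list[Record]:
    """Keep the records that pass every configured filter in the ensemble."""
    return [r for r in first_data_set if all(f(r) for f in filters)]

data = [{"speed": 80, "sensor": "lidar"},
        {"speed": 10, "sensor": "lidar"},
        {"speed": 90, "sensor": "radar"}]
ensemble = configure_filters({"min_speed": 50, "sensor": "lidar"})
print(select_training_data(data, ensemble))  # [{'speed': 80, 'sensor': 'lidar'}]
```

The selected records would then form the training and/or testing data set in the sense of claim 1.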
The “machine learning algorithm,” “ensemble,” “training,” and “data filter” of the method are recited at a high level of generality, such that they amount to no more than mere instructions to apply the exception using generic machines that gather data and feed it into readily available, off-the-shelf models, which is no more than extra-solution activity.

Regarding dependent claim 3: “wherein the step of selecting the first data set by filtering the first data set using at least a part of the configured data filters of the ensemble of data filters further includes fusing the filtered data of various data filters of the ensemble of data filters in order to obtain fused filtered data, and wherein the step of classifying the filtered data based on the requirements of the machine learning algorithm includes classifying the fused filtered data based on the requirements of the machine learning algorithm.” This continues to recite the abstract idea of claim 1, a mental process of organizing data and making judgments through testing, which can be performed in the human mind and/or with the aid of pen and paper, without significantly more. The fusing of data is still organizing data, which is a mental process. The “machine learning algorithm,” “ensemble,” “training,” and “data filter” of the method are recited at a high level of generality, such that they amount to no more than mere instructions to apply the exception using generic machines that gather data and feed it into readily available, off-the-shelf models, which is no more than extra-solution activity.

Regarding dependent claim 4: “wherein the data potentially relevant to the machine learning algorithm are sensor data.” This continues to recite the abstract idea of claim 1, a mental process of organizing data and making judgments through testing, which can be performed in the human mind and/or with the aid of pen and paper, without significantly more. The “sensor data” of the method are recited at a high level of generality, such that they amount to no more than mere instructions to apply the exception using a generic, off-the-shelf sensor that gathers data, which is no more than extra-solution activity.

Regarding dependent claim 5: “wherein the first data set includes metadata.” This continues to recite the abstract idea of claim 1, a mental process of organizing data and making judgments through testing, which can be performed in the human mind and/or with the aid of pen and paper, without significantly more.

Taken alone, the additional elements of the dependent claims above do not amount to significantly more than the above-identified judicial exception (the abstract idea). Looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely provide conventional computer implementation.

Regarding claims 6 and 13, taking claim 6 as exemplary:

Step 1: Is the claim to a process, machine, manufacture, or composition of matter? Yes: claim 6 recites a method, and claim 13 a device.

Step 2A, prong one: Does the claim recite an abstract idea, law of nature, or natural phenomenon? Yes.
The claim recites “A method for training a machine learning algorithm, comprising the following steps: generating a data set for training the machine learning algorithm by: providing a first data set, wherein the first data set includes data potentially relevant to the machine learning algorithm, providing an ensemble of data filters, configuring each data filter of the ensemble of data filters based on requirements of the machine learning algorithm, and selecting the first data set by filtering the first data set using at least a part of the configured data filters of the ensemble of data filters in order to obtain data for training the machine learning algorithm, wherein the data form the data set for training the machine learning algorithm; and training the machine learning algorithm based on the generated data set.” This recites the mental process of organizing data and making judgments through testing, which can be performed in the human mind and/or with the aid of pen and paper, without significantly more.

Step 2A, prong two: Does the claim recite additional elements that integrate the judicial exception into a practical application? No. Although claims 6 and 13 recite a “machine learning algorithm,” an “ensemble,” “training,” and a “data filter,” these elements are recited at a high level of generality, such that they amount to no more than mere instructions to apply the exception using generic machines that gather data and feed it into readily available, off-the-shelf models, which is no more than extra-solution activity.

Step 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception? No. Although claims 6 and 13 recite a “machine learning algorithm,” an “ensemble,” “training,” and a “data filter,” these elements are recited at a high level of generality, such that they amount to no more than mere instructions to apply the exception using generic machines that gather data and feed it into readily available, off-the-shelf models, which is no more than extra-solution activity. See Alice Corp. Pty. Ltd. v. CLS Bank Int’l, 573 U.S. 208, 134 S. Ct. 2347, 2360 (2014).

For the reasons above, claims 6 and 13 are rejected as being directed to non-patentable subject matter under §101.

Regarding claims 7 and 14, taking claim 7 as exemplary:

Step 1: Is the claim to a process, machine, manufacture, or composition of matter? Yes: claim 7 recites a method, and claim 14 a device.

Step 2A, prong one: Does the claim recite an abstract idea, law of nature, or natural phenomenon? Yes. The claim recites “A method for classifying image data, comprising: training a machine learning algorithm, the training including: generating a data set for training the machine learning algorithm by: providing a first data set, wherein the first data set includes data potentially relevant to the machine learning algorithm, providing an ensemble of data filters, configuring each data filter of the ensemble of data filters based on requirements of the machine learning algorithm, and selecting the first data set by filtering the first data set using at least a part of the configured data filters of the ensemble of data filters in order to obtain data for training the machine learning algorithm, wherein the data form the data set for training the machine learning algorithm, and training the machine learning algorithm based on the generated data set; and classifying image data using the trained machine learning algorithm.” This recites the mental process of organizing data and making judgments through testing, which can be performed in the human mind and/or with the aid of pen and paper, without significantly more.

Step 2A, prong two: Does the claim recite additional elements that integrate the judicial exception into a practical application? No. Although claims 7 and 14 recite a “machine learning algorithm,” an “ensemble,” “training,” and a “data filter,” these elements are recited at a high level of generality, such that they amount to no more than mere instructions to apply the exception using generic machines that gather data and feed it into readily available, off-the-shelf models, which is no more than extra-solution activity.

Step 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception? No. Although claims 7 and 14 recite a “machine learning algorithm,” an “ensemble,” “training,” and a “data filter,” these elements are recited at a high level of generality, such that they amount to no more than mere instructions to apply the exception using generic machines that gather data and feed it into readily available, off-the-shelf models, which is no more than extra-solution activity. See Alice Corp. Pty. Ltd. v. CLS Bank Int’l, 573 U.S. 208, 134 S. Ct. 2347, 2360 (2014).

For the reasons above, claims 7 and 14 are rejected as being directed to non-patentable subject matter under §101.

Regarding claims 8 and 15, taking claim 8 as exemplary:

Step 1: Is the claim to a process, machine, manufacture, or composition of matter? Yes: claim 8 recites a method, and claim 15 a device.

Step 2A, prong one: Does the claim recite an abstract idea, law of nature, or natural phenomenon? Yes.
The claim recites “A method for verifying a machine learning algorithm trained to solve a particular problem, the method comprising the following steps: providing a machine learning algorithm trained to solve the particular problem; providing an ensemble of further machine learning algorithms trained to solve the particular problem; providing first output data by processing provided input data using the machine learning algorithm and providing further output data by processing the provided input data using at least a part of the machine learning algorithms of the ensemble of further machine learning algorithms; and verifying the machine learning algorithm by comparing the first output data with the further output data.” This recites the mental process of organizing data and making judgments through testing, which can be performed in the human mind and/or with the aid of pen and paper, without significantly more.

Step 2A, prong two: Does the claim recite additional elements that integrate the judicial exception into a practical application? No. Although claims 8 and 15 recite a “machine learning algorithm,” an “ensemble,” and “trained” models, these elements are recited at a high level of generality, such that they amount to no more than mere instructions to apply the exception using generic machines that gather data and feed it into readily available, off-the-shelf models, which is no more than extra-solution activity.

Step 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception? No. Although claims 8 and 15 recite a “machine learning algorithm,” an “ensemble,” and “trained” models, these elements are recited at a high level of generality, such that they amount to no more than mere instructions to apply the exception using generic machines that gather data and feed it into readily available, off-the-shelf models, which is no more than extra-solution activity. See Alice Corp. Pty. Ltd. v. CLS Bank Int’l, 573 U.S. 208, 134 S. Ct. 2347, 2360 (2014).

For the reasons above, claims 8 and 15 are rejected as being directed to non-patentable subject matter under §101.

The additional limitations of the dependent claims are addressed briefly below:

Regarding dependent claim 9: “wherein the step of verifying the machine learning algorithm includes determining consistency of the first output data and the further output data.” This continues to recite the abstract idea of claim 8, a mental process of organizing data and making judgments through testing, which can be performed in the human mind and/or with the aid of pen and paper, without significantly more.

Regarding dependent claim 10: “wherein at least one machine learning algorithm of the ensemble of further machine learning algorithms is configured to perform a different task than other machine learning algorithms of the ensemble of further machine learning algorithms.” This continues to recite the abstract idea of claim 8, a mental process of organizing data and making judgments through testing, which can be performed in the human mind and/or with the aid of pen and paper, without significantly more. The task is still organizing data, which is a mental process. The “machine learning algorithm” and “ensemble” of the method are recited at a high level of generality, such that they amount to no more than mere instructions to apply the exception using generic machines that gather data and feed it into readily available, off-the-shelf models, which is no more than extra-solution activity.
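As a purely illustrative aside, the claim 8 scheme (run the model under test and an ensemble of further trained models on the same inputs, then verify by comparing the first outputs with the further outputs for consistency, per claim 9) can be sketched as below; the models, tolerance, and names are toy stand-ins, not taken from the application.

```python
# Illustrative sketch of the claim 8 verification scheme: compare the model
# under test against an ensemble of further trained models on shared inputs.
# The models here are toy stand-ins; "tolerance" is an assumed parameter.

from statistics import mean

def verify(model, ensemble, inputs, tolerance=0.1):
    """Return the inputs where the model under test disagrees with the
    ensemble mean by more than the tolerance (i.e., fails consistency)."""
    inconsistent = []
    for x in inputs:
        first_output = model(x)
        further_outputs = [m(x) for m in ensemble]
        if abs(first_output - mean(further_outputs)) > tolerance:
            inconsistent.append(x)
    return inconsistent

# Toy stand-ins for trained models solving the same problem.
model_under_test = lambda x: 2 * x
ensemble = [lambda x: 2 * x + 0.01, lambda x: 2 * x - 0.02]

print(verify(model_under_test, ensemble, [1.0, 2.0, 3.0]))  # [] (consistent)
```

An empty result would indicate the first output data are consistent with the further output data in the sense of claim 9.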
Regarding dependent claim 11: “wherein at least one machine learning algorithm of the ensemble of further machine learning algorithms has a different architecture than other machine learning algorithms of the ensemble of further machine learning algorithms.” This continues to recite the abstract idea of claim 8, a mental process of organizing data and making judgments through testing, which can be performed in the human mind and/or with the aid of pen and paper, without significantly more. The “machine learning algorithm” and “ensemble” of the method are recited at a high level of generality, such that they amount to no more than mere instructions to apply the exception using generic machines that gather data and feed it into readily available, off-the-shelf models, which is no more than extra-solution activity. The “different architecture” simply applies the abstract idea to readily available, off-the-shelf computer components and amounts to no more than extra-solution activity.

Taken alone, the additional elements of the dependent claims above do not amount to significantly more than the above-identified judicial exception (the abstract idea). Looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely provide conventional computer implementation.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and

(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier.
Such claim limitation(s) is/are:

“A control device configured to generate a data set for training…” in claim 12, which is interpreted to be a device using a training unit and configuration unit implemented on the basis of code that can be executed by a processor stored in memory, as described in page 27, line 30 through page 29, line 4 of the instant application’s specification.

“A control device configured to train a machine learning algorithm…” in claim 13, which is interpreted to be a device using a training unit and configuration unit implemented on the basis of code that can be executed by a processor stored in memory, as described in page 27, line 30 through page 29, line 4 of the instant application’s specification.

“A control device configured to classify image data, the control device configured to: provide a trained machine learning algorithm…” in claim 14, which is interpreted to be a device using a training unit and configuration unit implemented on the basis of code that can be executed by a processor stored in memory, as described in page 27, line 30 through page 29, line 4 of the instant application’s specification.

“A control device configured to verify a machine learning algorithm trained to solve a particular problem, the control device configured to: provide a machine learning algorithm trained to solve the particular problem…” in claim 15, which is interpreted to be a device using a training unit and configuration unit implemented on the basis of code that can be executed by a processor stored in memory, and verifying using memory, as described in page 27, line 30 through page 29, line 4 of the instant application’s specification.

Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.

If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claim(s) 1-6, 8, 10, 12-13, and 15 is/are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Kakosyan et al. (US 2023/0194278 A1, hereinafter Kakosyan).
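As context for the anticipation mapping that follows, the sequential filter of Kakosyan’s cited paragraph [0087] (Blocks 501-505: a handover check, a non-walkable distance check, and a geo-spatial road correlation check) can be sketched as a chain of predicates; the record field names and the distance threshold below are assumptions for illustration, not values given in the reference.

```python
# Sketch of the sequential filtering described in Kakosyan paragraph [0087].
# Field names and the 2.0 km threshold are assumed for illustration only.

def has_handover(record):           # Block 502: any handovers in the time flow?
    return record.get("handovers", 0) > 0

def non_walkable_distance(record):  # Block 503: cell distance not walkable?
    return record.get("cell_distance_km", 0.0) > 2.0

def correlates_with_road(record):   # Block 504: geo-spatially near a road?
    return record.get("near_road", False)

def filter_network_data(records):
    """Apply the three checks sequentially; survivors would feed the
    offline training process (Block 505)."""
    stages = [has_handover, non_walkable_distance, correlates_with_road]
    for stage in stages:
        records = [r for r in records if stage(r)]
    return records

data = [{"handovers": 3, "cell_distance_km": 12.0, "near_road": True},
        {"handovers": 0, "cell_distance_km": 12.0, "near_road": True},
        {"handovers": 2, "cell_distance_km": 0.4, "near_road": True}]
print(filter_network_data(data))  # only the first record survives
```

Each stage discards the records the reference describes as "discarded or ignored," which is why a simple chained comprehension captures the flow.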
Regarding claims 1 and 12, taking claim 1 as exemplary:

“A method for generating a data set for training and/or testing a machine learning algorithm, the method comprising the following steps: providing a first data set, wherein the first data set includes data potentially relevant to the machine learning algorithm;” (Paragraph [0007]: “The processor executes the route prediction block, the route prediction block to receive real-time mobile communication network data, aggregate the real-time mobile communication network data with historical data to form a training set, filter the training data set for relevance to servicing route predictions for connected vehicles, train a plurality of offline models using the filtered training data set, and select a best performing propagation model from the plurality of offline models.”)

“providing an ensemble of data filters; configuring each data filter of the ensemble of data filters based on requirements of the machine learning algorithm;” (Paragraph [0087]: “FIG. 5 is a flowchart of one example embodiment of a process of filtering the network data to be used by the offline process. The filtering process can be implemented by a focused mobility model in the offline process. The example filtering process is provided by way of example and not limitation, and one skilled in the art would understand that other similar filtering processes can be utilized in conjunction with the online and offline processes. The process provides a classification of the data to filter out the data that is not relevant for training offline models. The filtering process filters network data to identify network data associated with high mobility. The input data is obtained from a CC (Block 501) and can be filtered sequentially to get desired data for the offline process. In the example, the first filter validates whether within a short-continuous time flow any handovers have occurred (Block 502). If the data does not relate to handovers, then this data can be discarded or ignored. For data that is related to handovers, the function checks whether the distance between mobile communication network cells is not walkable (i.e., a long distance) for a given time period (Block 503). If the data is related to a walkable or short distance, then the data can be discarded or ignored. For data that is related to a longer distance (i.e., a non-walkable distance), the data is analyzed to determine whether it correlates geo-spatially with nearby roads (Block 504) to confirm the data is relevant to moving along public roads that may be navigated by the autonomous vehicle. If the data is not correlated with a navigable road, then it may be discarded or ignored. The data that is related to navigable roads can be stored for using in the offline process (Block 505). This filter function is provided by way of example and not limitation. The filtering of data can utilize any number or variety of classifications that can improve the relevance of the data to the route prediction process of the prediction system.”)

“and selecting the first data set by filtering the first data set using at least a part of the configured data filters of the ensemble of data filters in order to obtain data for training and/or testing the machine learning algorithm, wherein the data form the data set for training and/or testing the machine learning algorithm.” (Paragraph [0007]: “The processor executes the route prediction block, the route prediction block to receive real-time mobile communication network data, aggregate the real-time mobile communication network data with historical data to form a training set, filter the training data set for relevance to servicing route predictions for connected vehicles, train a plurality of offline models using the filtered training data set, and select a best performing propagation model from the plurality of offline models.”)

Regarding claim 2: Kakosyan shows the method of claim 1 as claimed and specified above. And Kakosyan shows “wherein the step of selecting the first data set by filtering the first data set using at least a part of the configured data filters of the ensemble of data filters includes: respectively filtering the first data set using at least a part of the configured data filters of the ensemble of data filters in order to obtain filtered data; classifying the filtered data based on the requirements of the machine learning algorithm in order to obtain classified data; and selecting data from the classified data based on the requirements of the machine learning algorithm, wherein the selected data form the data set for training and/or testing the machine learning algorithm.” (Paragraph [0087]: “FIG. 5 is a flowchart of one example embodiment of a process of filtering the network data to be used by the offline process. The filtering process can be implemented by a focused mobility model in the offline process. The example filtering process is provided by way of example and not limitation, and one skilled in the art would understand that other similar filtering processes can be utilized in conjunction with the online and offline processes. The process provides a classification of the data to filter out the data that is not relevant for training offline models. The filtering process filters network data to identify network data associated with high mobility. The input data is obtained from a CC (Block 501) and can be filtered sequentially to get desired data for the offline process. In the example, the first filter validates whether within a short-continuous time flow any handovers have occurred (Block 502). If the data does not relate to handovers, then this data can be discarded or ignored. For data that is related to handovers, the function checks whether the distance between mobile communication network cells is not walkable (i.e., a long distance) for a given time period (Block 503).
If the data is related to a walkable or short distance, then the data can be discarded or ignored. For data that is related to a longer distance (i.e., a non-walkable distance), the data is analyzed to determine whether it correlates geo-spatially with nearby roads (Block 504) to confirm the data is relevant to moving along public roads that may be navigated by the autonomous vehicle. If the data is not correlated with a navigable road, then it may be discarded or ignored. The data that is related to navigable roads can be stored for using in the offline process (Block 505). This filter function is provided by way of example and not limitation. The filtering of data can utilize any number or variety of classifications that can improve the relevance of the data to the route prediction process of the prediction system.”) Regarding claim 3: Kakosyan shows the method of claim 2 as claimed and specified above. And Kakosyan shows “wherein the step of selecting the first data set by filtering the first data set using at least a part of the configured data filters of the ensemble of data filters further includes fusing the filtered data of various data filters of the ensemble of data filters in order to obtain fused filtered data, and wherein the step of classifying the filtered data based on the requirements of the machine learning algorithm includes classifying the fused filtered data based on the requirements of the machine learning algorithm.” (Paragraph [0087]: “FIG. 5 is a flowchart of one example embodiment of a process of filtering the network data to be used by the offline process. The filtering process can be implemented by a focused mobility model in the offline process. The example filtering process is provided by way of example and not limitation, and one skilled in the art would understand that other similar filtering processes can be utilized in conjunction with the online and offline processes. 
The process provides a classification of the data to filter out the data that is not relevant for training offline models. The filtering process filters network data to identify network data associated with high mobility. The input data is obtained from a CC (Block 501) and can be filtered sequentially to get desired data for the offline process. In the example, the first filter validates whether within a short-continuous time flow any handovers have occurred (Block 502). If the data does not relate to handovers, then this data can be discarded or ignored. For data that is related to handovers, the function checks whether the distance between mobile communication network cells is not walkable (i.e., a long distance) for a given time period (Block 503). If the data is related to a walkable or short distance, then the data can be discarded or ignored. For data that is related to a longer distance (i.e., a non-walkable distance), the data is analyzed to determine whether it correlates geo-spatially with nearby roads (Block 504) to confirm the data is relevant to moving along public roads that may be navigated by the autonomous vehicle. If the data is not correlated with a navigable road, then it may be discarded or ignored. The data that is related to navigable roads can be stored for using in the offline process (Block 505). This filter function is provided by way of example and not limitation. 
The filtering of data can utilize any number or variety of classifications that can improve the relevance of the data to the route prediction process of the prediction system.” And in paragraph [0089]: “There are two groups of correlations involved in the mapping process, whether mobile communication network components (e.g., cells) are located geographically close enough to the input route (Block 604) and whether mobile communication network components (e.g., cells) have physically strong signal coverage on the input route (Block 605) considering cell characteristics such as frequency band, tilt angle, probability of being selected in that area, and similar consideration. If mobile communication network components are not geo-relevant or do not provide route coverage, then the data associated with these components can be ignored or discarded. Based on this mapping only relevant mobile communication network components (e.g., tower cells) are output for further data filtering or processing in the offline process (Block 606).” – The mapping process and the continuing filtering of data is the fusing of data.) Regarding claim 4: Kakosyan shows the method of claim 1 as claimed and specified above. And Kakosyan shows “wherein the data potentially relevant to the machine learning algorithm are sensor data.” (Paragraph [0087]: “FIG. 5 is a flowchart of one example embodiment of a process of filtering the network data to be used by the offline process. The filtering process can be implemented by a focused mobility model in the offline process. The example filtering process is provided by way of example and not limitation, and one skilled in the art would understand that other similar filtering processes can be utilized in conjunction with the online and offline processes. The process provides a classification of the data to filter out the data that is not relevant for training offline models. 
The filtering process filters network data to identify network data associated with high mobility. The input data is obtained from a CC (Block 501) and can be filtered sequentially to get desired data for the offline process. In the example, the first filter validates whether within a short-continuous time flow any handovers have occurred (Block 502). If the data does not relate to handovers, then this data can be discarded or ignored. For data that is related to handovers, the function checks whether the distance between mobile communication network cells is not walkable (i.e., a long distance) for a given time period (Block 503). If the data is related to a walkable or short distance, then the data can be discarded or ignored. For data that is related to a longer distance (i.e., a non-walkable distance), the data is analyzed to determine whether it correlates geo-spatially with nearby roads (Block 504) to confirm the data is relevant to moving along public roads that may be navigated by the autonomous vehicle. If the data is not correlated with a navigable road, then it may be discarded or ignored. The data that is related to navigable roads can be stored for using in the offline process (Block 505). This filter function is provided by way of example and not limitation. The filtering of data can utilize any number or variety of classifications that can improve the relevance of the data to the route prediction process of the prediction system.” And in paragraph [0089]: “There are two groups of correlations involved in the mapping process, whether mobile communication network components (e.g., cells) are located geographically close enough to the input route (Block 604) and whether mobile communication network components (e.g., cells) have physically strong signal coverage on the input route (Block 605) considering cell characteristics such as frequency band, tilt angle, probability of being selected in that area, and similar consideration. 
If mobile communication network components are not geo-relevant or do not provide route coverage, then the data associated with these components can be ignored or discarded. Based on this mapping only relevant mobile communication network components (e.g., tower cells) are output for further data filtering or processing in the offline process (Block 606).” – The network data is sensor data.) Regarding claim 5: Kakosyan shows the method of claim 1 as claimed and specified above. And Kakosyan shows “wherein the first data set includes metadata.” (Paragraph [0087]: “FIG. 5 is a flowchart of one example embodiment of a process of filtering the network data to be used by the offline process. The filtering process can be implemented by a focused mobility model in the offline process. The example filtering process is provided by way of example and not limitation, and one skilled in the art would understand that other similar filtering processes can be utilized in conjunction with the online and offline processes. The process provides a classification of the data to filter out the data that is not relevant for training offline models. The filtering process filters network data to identify network data associated with high mobility. The input data is obtained from a CC (Block 501) and can be filtered sequentially to get desired data for the offline process. In the example, the first filter validates whether within a short-continuous time flow any handovers have occurred (Block 502). If the data does not relate to handovers, then this data can be discarded or ignored. For data that is related to handovers, the function checks whether the distance between mobile communication network cells is not walkable (i.e., a long distance) for a given time period (Block 503). If the data is related to a walkable or short distance, then the data can be discarded or ignored.
For data that is related to a longer distance (i.e., a non-walkable distance), the data is analyzed to determine whether it correlates geo-spatially with nearby roads (Block 504) to confirm the data is relevant to moving along public roads that may be navigated by the autonomous vehicle. If the data is not correlated with a navigable road, then it may be discarded or ignored. The data that is related to navigable roads can be stored for using in the offline process (Block 505). This filter function is provided by way of example and not limitation. The filtering of data can utilize any number or variety of classifications that can improve the relevance of the data to the route prediction process of the prediction system.” And in paragraph [0089]: “There are two groups of correlations involved in the mapping process, whether mobile communication network components (e.g., cells) are located geographically close enough to the input route (Block 604) and whether mobile communication network components (e.g., cells) have physically strong signal coverage on the input route (Block 605) considering cell characteristics such as frequency band, tilt angle, probability of being selected in that area, and similar consideration. If mobile communication network components are not geo-relevant or do not provide route coverage, then the data associated with these components can be ignored or discarded. Based on this mapping only relevant mobile communication network components (e.g., tower cells) are output for further data filtering or processing in the offline process (Block 606).” – The data associated with specific components is metadata, which is filtered sequentially; data associated with specific non-changing characteristics is likewise metadata.)
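For orientation, the sequential filter chain that the quoted paragraph [0087] walks through (Blocks 501-505) can be sketched as follows. This is an illustrative reconstruction only, not code from Kakosyan; the record fields (`has_handover`, `cell_distance_km`, `correlates_with_road`) and the 1 km walkability threshold are hypothetical stand-ins for the checks the flowchart names.

```python
# Illustrative sketch of Kakosyan's FIG. 5 sequential filter (Blocks 501-505).
# Field names and the walkability threshold are hypothetical assumptions.

WALKABLE_KM = 1.0  # assumed cutoff for a "walkable" cell-to-cell distance

def filter_network_data(records):
    """Keep only high-mobility records relevant to navigable roads."""
    kept = []
    for rec in records:                        # Block 501: input data from CC
        if not rec["has_handover"]:            # Block 502: handover check
            continue                           # no handover -> discard/ignore
        if rec["cell_distance_km"] < WALKABLE_KM:  # Block 503: distance check
            continue                           # walkable/short -> discard/ignore
        if not rec["correlates_with_road"]:    # Block 504: geo-spatial road check
            continue                           # not a navigable road -> discard
        kept.append(rec)                       # Block 505: store for offline use
    return kept

sample = [
    {"has_handover": True,  "cell_distance_km": 5.0, "correlates_with_road": True},
    {"has_handover": False, "cell_distance_km": 5.0, "correlates_with_road": True},
    {"has_handover": True,  "cell_distance_km": 0.2, "correlates_with_road": True},
]
print(len(filter_network_data(sample)))  # prints 1
```

Each stage discards the records that fail its predicate, so only data passing all three checks reaches the offline training store, matching the "filtered sequentially" language of the reference.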
Regarding claims 6 and 13, taking claim 6 as exemplary: Kakosyan shows “A method for training a machine learning algorithm, comprising the following steps: generating a data set for training the machine learning algorithm by: providing a first data set, wherein the first data set includes data potentially relevant to the machine learning algorithm, providing an ensemble of data filters, configuring each data filter of the ensemble of data filters based on requirements of the machine learning algorithm, and selecting the first data set by filtering the first data set using at least a part of the configured data filters of the ensemble of data filters in order to obtain data for training the machine learning algorithm, wherein the data form the data set for training the machine learning algorithm; and training the machine learning algorithm based on the generated data set.” (Paragraph [0087]: “FIG. 5 is a flowchart of one example embodiment of a process of filtering the network data to be used by the offline process. The filtering process can be implemented by a focused mobility model in the offline process. The example filtering process is provided by way of example and not limitation, and one skilled in the art would understand that other similar filtering processes can be utilized in conjunction with the online and offline processes. The process provides a classification of the data to filter out the data that is not relevant for training offline models. The filtering process filters network data to identify network data associated with high mobility. The input data is obtained from a CC (Block 501) and can be filtered sequentially to get desired data for the offline process. In the example, the first filter validates whether within a short-continuous time flow any handovers have occurred (Block 502). If the data does not relate to handovers, then this data can be discarded or ignored. 
For data that is related to handovers, the function checks whether the distance between mobile communication network cells is not walkable (i.e., a long distance) for a given time period (Block 503). If the data is related to a walkable or short distance, then the data can be discarded or ignored. For data that is related to a longer distance (i.e., a non-walkable distance), the data is analyzed to determine whether it correlates geo-spatially with nearby roads (Block 504) to confirm the data is relevant to moving along public roads that may be navigated by the autonomous vehicle. If the data is not correlated with a navigable road, then it may be discarded or ignored. The data that is related to navigable roads can be stored for using in the offline process (Block 505). This filter function is provided by way of example and not limitation. The filtering of data can utilize any number or variety of classifications that can improve the relevance of the data to the route prediction process of the prediction system.” And in paragraph [0089]: “There are two groups of correlations involved in the mapping process, whether mobile communication network components (e.g., cells) are located geographically close enough to the input route (Block 604) and whether mobile communication network components (e.g., cells) have physically strong signal coverage on the input route (Block 605) considering cell characteristics such as frequency band, tilt angle, probability of being selected in that area, and similar consideration. If mobile communication network components are not geo-relevant or do not provide route coverage, then the data associated with these components can be ignored or discarded. 
Based on this mapping only relevant mobile communication network components (e.g., tower cells) are output for further data filtering or processing in the offline process (Block 606).”) Regarding claims 8 and 15, taking claim 8 as exemplary: Kakosyan shows “A method for verifying a machine learning algorithm trained to solve a particular problem, the method comprising the following steps: providing a machine learning algorithm trained to solve the particular problem; providing an ensemble of further machine learning algorithms trained to solve the particular problem; providing first output data by processing provided input data using the machine learning algorithm and providing further output data by processing the provided input data using at least a part of the machine learning algorithms of the ensemble of further machine learning algorithms;” (Paragraph [0087]: “FIG. 5 is a flowchart of one example embodiment of a process of filtering the network data to be used by the offline process. The filtering process can be implemented by a focused mobility model in the offline process. The example filtering process is provided by way of example and not limitation, and one skilled in the art would understand that other similar filtering processes can be utilized in conjunction with the online and offline processes. The process provides a classification of the data to filter out the data that is not relevant for training offline models. The filtering process filters network data to identify network data associated with high mobility. The input data is obtained from a CC (Block 501) and can be filtered sequentially to get desired data for the offline process. In the example, the first filter validates whether within a short-continuous time flow any handovers have occurred (Block 502). If the data does not relate to handovers, then this data can be discarded or ignored. 
For data that is related to handovers, the function checks whether the distance between mobile communication network cells is not walkable (i.e., a long distance) for a given time period (Block 503). If the data is related to a walkable or short distance, then the data can be discarded or ignored. For data that is related to a longer distance (i.e., a non-walkable distance), the data is analyzed to determine whether it correlates geo-spatially with nearby roads (Block 504) to confirm the data is relevant to moving along public roads that may be navigated by the autonomous vehicle. If the data is not correlated with a navigable road, then it may be discarded or ignored. The data that is related to navigable roads can be stored for using in the offline process (Block 505). This filter function is provided by way of example and not limitation. The filtering of data can utilize any number or variety of classifications that can improve the relevance of the data to the route prediction process of the prediction system.” And in paragraph [0089]: “There are two groups of correlations involved in the mapping process, whether mobile communication network components (e.g., cells) are located geographically close enough to the input route (Block 604) and whether mobile communication network components (e.g., cells) have physically strong signal coverage on the input route (Block 605) considering cell characteristics such as frequency band, tilt angle, probability of being selected in that area, and similar consideration. If mobile communication network components are not geo-relevant or do not provide route coverage, then the data associated with these components can be ignored or discarded. 
Based on this mapping only relevant mobile communication network components (e.g., tower cells) are output for further data filtering or processing in the offline process (Block 606).” – The “identify network data associated with high mobility” of Kakosyan is the solving of a particular problem.) “and verifying the machine learning algorithm by comparing the first output data with the further output data.” (Paragraph [0085]: “With the training data set prepared, the offline models can be trained against the updated information (Block 357). After the offline models have been trained and further modified, they may be validated by testing mechanisms to determine the accuracy of each offline model and identify errors (Block 361). The offline model with the best performance can be selected for use in the online process and can be referred to as the ‘current’ offline model while in use by the online process (Block 363).” And in paragraph [0090]: “An offline model corresponding to the most probable class is trained on TS data (Block 703) which can then be analyzed with cross-validation techniques (Block 704). All offline models, validation metrics, and the best model object will be stored (Block 705) for further use in the online process.”) Regarding claim 10: Kakosyan shows the method of claim 8 as claimed and specified above. And Kakosyan shows “wherein at least one machine learning algorithm of the ensemble of further machine learning algorithms is configured to perform a different task than other machine learning algorithms of the ensemble of further machine learning algorithms.” (Paragraph [0068]: “The offline process can be further divided into an offline service level indicator (SLI) modelling process and an offline retraining process. Multiple offline models can be created from combinations of these sources. The offline models can include an SLI model, mobility model, and/or a propagation model, which are referred to herein as offline models, generally.
The SLI model matches network information such as cell tower KPI values to an SLI value in a geographical area relevant to a route or segment. The SLI modelling involves the network data being fed to all the offline models, which are then scored based on performance. Any number and variety of offline models can be utilized by each RPB 104. Each offline model can be a machine learning (ML) model having been trained on different sets of input data to generate SLI predictions for areas, segments, routes, locations, or similar divisions of a mobile communication network.”) Claim Rejections - 35 USC § 103 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. 
Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness. This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention. Claim(s) 7 and 14 is/are rejected under 35 U.S.C. 103 as being unpatentable over Kakosyan in view of Roth et al., (US 2021/0334955 A1, hereinafter Roth). Regarding claims 7 and 14, taking claim 7 as exemplary: Kakosyan shows “training a machine learning algorithm, the training including: generating a data set for training the machine learning algorithm by: providing a first data set, wherein the first data set includes data potentially relevant to the machine learning algorithm, providing an ensemble of data filters, configuring each data filter of the ensemble of data filters based on requirements of the machine learning algorithm, and selecting the first data set by filtering the first data set using at least a part of the configured data filters of the ensemble of data filters in order to obtain data for training the machine learning algorithm, wherein the data form the data set for training the machine learning algorithm, and training the machine learning algorithm based on the generated data set;” (Paragraph [0087]: “FIG. 5 is a flowchart of one example embodiment of a process of filtering the network data to be used by the offline process. 
The filtering process can be implemented by a focused mobility model in the offline process. The example filtering process is provided by way of example and not limitation, and one skilled in the art would understand that other similar filtering processes can be utilized in conjunction with the online and offline processes. The process provides a classification of the data to filter out the data that is not relevant for training offline models. The filtering process filters network data to identify network data associated with high mobility. The input data is obtained from a CC (Block 501) and can be filtered sequentially to get desired data for the offline process. In the example, the first filter validates whether within a short-continuous time flow any handovers have occurred (Block 502). If the data does not relate to handovers, then this data can be discarded or ignored. For data that is related to handovers, the function checks whether the distance between mobile communication network cells is not walkable (i.e., a long distance) for a given time period (Block 503). If the data is related to a walkable or short distance, then the data can be discarded or ignored. For data that is related to a longer distance (i.e., a non-walkable distance), the data is analyzed to determine whether it correlates geo-spatially with nearby roads (Block 504) to confirm the data is relevant to moving along public roads that may be navigated by the autonomous vehicle. If the data is not correlated with a navigable road, then it may be discarded or ignored. The data that is related to navigable roads can be stored for using in the offline process (Block 505). This filter function is provided by way of example and not limitation. 
The filtering of data can utilize any number or variety of classifications that can improve the relevance of the data to the route prediction process of the prediction system.” And in paragraph [0089]: “There are two groups of correlations involved in the mapping process, whether mobile communication network components (e.g., cells) are located geographically close enough to the input route (Block 604) and whether mobile communication network components (e.g., cells) have physically strong signal coverage on the input route (Block 605) considering cell characteristics such as frequency band, tilt angle, probability of being selected in that area, and similar consideration. If mobile communication network components are not geo-relevant or do not provide route coverage, then the data associated with these components can be ignored or discarded. Based on this mapping only relevant mobile communication network components (e.g., tower cells) are output for further data filtering or processing in the offline process (Block 606).”) But Kakosyan does not appear to explicitly recite “A method for classifying image data, comprising: … and classifying image data using the trained machine learning algorithm.” However, Roth teaches “A method for classifying image data, comprising: … and classifying image data using the trained machine learning algorithm.” (Paragraph [0067]: “a process 450 illustrated in FIG. 4B can be used, at inference time, to infer segmentation of objects represented in image data. In at least one embodiment, one or more instances of image data can be received 452 and provided 454 as input to a trained segmentation model. In at least one embodiment, an inferred segmentation can be received 456 as output from this trained model that corresponds to an object of interest represented in this input image.” And in paragraph [0056]: “various deep learning networks can be utilized, such as a U-Net architecture. 
In at least one embodiment, residual blocks can be constructed per block of encoder and decoder structure of U-Net, as residual blocks can be beneficial for training and preventing over-fitting. In at least one embodiment, four encoding layers and three decoding layers can be used for this U-Net architecture. In at least one embodiment, initial filters are set to eight for all data sets. In at least one embodiment, batch normalization and “relu” activations can be used for each layer except a last layer, which can be activated using a “softmax” layer. In at least one embodiment, an ensemble of U-Net models can be used to form a committee”) Kakosyan and Roth are analogous in the arts because both Kakosyan and Roth describe training of multiple models and determining the right use of data. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Kakosyan and Roth before him or her, to modify the teachings of Kakosyan to include the teachings of Roth in order to expand the capabilities of Kakosyan to include image classification (see Roth paragraph [0067]) and thereby increase marketability of Kakosyan. Claim(s) 9 and 11 is/are rejected under 35 U.S.C. 103 as being unpatentable over Kakosyan in view of Nasr-Azadani et al., (US 2021/0224425 A1, hereinafter Nasr-Azadani). Regarding claim 9: Kakosyan shows the method of claim 8 as claimed and specified above. But Kakosyan does not appear to explicitly recite “wherein the step of verifying the machine learning algorithm includes determining consistency of the first output data and the further output data.” However, Nasr-Azadani teaches “wherein the step of verifying the machine learning algorithm includes determining consistency of the first output data and the further output data.” (Paragraph [0018]: “As further shown in FIG.
1, the consistency engine (CE) 112 may be optionally used in the online pipeline 140 as a secondary precautionary adversarial detection engine. Specifically, the prediction output 111 from the main machine learning model 110 in production may be passed to the CE 112. The CE 112 may compare the prediction output 111 against prediction results returned by an ensemble of proxy models of various architectures after processing the same live data 107. As shown in 113 of FIG. 1, if no significant difference between the prediction output 111 of the main model and the proxy models in the CE 112 is observed, the prediction output 111 by the main model 110 may be determined to be safe. Otherwise, as shown in 115 of FIG. 1, the CE 112 may flag the prediction output 111 by the main machine learning model 110 as unsafe and inform the escalator 108 for issuing an alert indicating the that prediction by the main machine learning model is unsafe.”) Kakosyan and Nasr-Azadani are analogous in the arts because both Kakosyan and Nasr-Azadani describe ensemble prediction models. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Kakosyan and Nasr-Azadani before him or her, to modify the teachings of Kakosyan to include the teachings of Nasr-Azadani in order to verify and check the models of Kakosyan to determine if they are safe or need an alert to indicate that they are unsafe (see Nasr-Azadani paragraph [0018]) in order to increase accuracy of Kakosyan. Regarding claim 11: Kakosyan shows the method of claim 8 as claimed and specified above.
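For orientation, the consistency-engine check that Nasr-Azadani's paragraph [0018] describes, comparing the main model's output against an ensemble of proxy models and flagging disagreement as unsafe, can be sketched as follows. This is an illustrative reconstruction only, not code from the reference; the model callables, the averaging of proxy outputs, and the tolerance value are hypothetical stand-ins for the "no significant difference" test.

```python
# Illustrative sketch of the consistency engine (CE) of Nasr-Azadani
# paragraph [0018]. The models, aggregation, and tolerance are assumptions.

def is_prediction_safe(main_model, proxy_models, live_data, tolerance=0.1):
    """Return True when the main model's prediction agrees with the proxies."""
    main_pred = main_model(live_data)                      # output 111
    proxy_preds = [proxy(live_data) for proxy in proxy_models]
    avg_proxy = sum(proxy_preds) / len(proxy_preds)        # ensemble consensus
    # "No significant difference" -> safe; otherwise escalate as unsafe.
    return abs(main_pred - avg_proxy) <= tolerance

main = lambda x: 0.92
proxies = [lambda x: 0.90, lambda x: 0.88, lambda x: 0.95]
print(is_prediction_safe(main, proxies, live_data=None))  # prints True
```

When the comparison exceeds the tolerance, a production system would, per the reference, flag the output as unsafe and notify an escalator component to issue an alert.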
But Kakosyan does not appear to explicitly recite “wherein at least one machine learning algorithm of the ensemble of further machine learning algorithms has a different architecture than other machine learning algorithms of the ensemble of further machine learning algorithms.” However, Nasr-Azadani teaches “wherein at least one machine learning algorithm of the ensemble of further machine learning algorithms has a different architecture than other machine learning algorithms of the ensemble of further machine learning algorithms.” (Paragraph [0018]: “As further shown in FIG. 1, the consistency engine (CE) 112 may be optionally used in the online pipeline 140 as a secondary precautionary adversarial detection engine. Specifically, the prediction output 111 from the main machine learning model 110 in production may be passed to the CE 112. The CE 112 may compare the prediction output 111 against prediction results returned by an ensemble of proxy models of various architectures after processing the same live data 107. As shown in 113 of FIG. 1, if no significant difference between the prediction output 111 of the main model and the proxy models in the CE 112 is observed, the prediction output 111 by the main model 110 may be determined to be safe. Otherwise, as shown in 115 of FIG. 1, the CE 112 may flag the prediction output 111 by the main machine learning model 110 as unsafe and inform the escalator 108 for issuing an alert indicating the that prediction by the main machine learning model is unsafe.”) Kakosyan and Nasr-Azadani are analogous in the arts because both Kakosyan and Nasr-Azadani describe ensemble prediction models.
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the instant application and having the teachings of Kakosyan and Nasr-Azadani before him or her, to modify the teachings of Kakosyan to include the teachings of Nasr-Azadani in order to verify the models of Kakosyan and determine whether they are safe or require an alert indicating that they are unsafe (see Nasr-Azadani paragraph [0018]), thereby increasing the accuracy of Kakosyan, supporting models of different architectures (see Nasr-Azadani paragraph [0018]), and increasing the marketability of Kakosyan.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Khanzada (US 2022/0037022 A1) describes the ensemble filtering of data of claims 1, 6-8, and 12-15 in paragraph [0016] through the use of preprocessing and filtering of data before ensemble model training.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHANE D WOOLWINE, whose telephone number is (571) 272-4138. The examiner can normally be reached M-F, 9:30 AM-6:00 PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, MIRANDA HUANG, can be reached at (571) 270-7092. The fax number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users.
To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

SHANE D. WOOLWINE
Primary Examiner
Art Unit 2124

/SHANE D WOOLWINE/
Primary Examiner, Art Unit 2124
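The "consistency engine" pattern that the rejection quotes from Nasr-Azadani (an ensemble of proxy models with different architectures vetting a main model's prediction on the same live data) can be sketched as follows. The callable-based model interface, the mean-absolute-difference agreement metric, and the 0.1 threshold are illustrative assumptions for this sketch, not details drawn from the cited reference:

```python
# Hedged sketch of the consistency-engine check quoted from
# Nasr-Azadani paragraph [0018]: proxy models of various
# architectures re-score the same input, and the main model's
# prediction is flagged unsafe if it diverges from the ensemble.
from statistics import mean

def consistency_check(main_predict, proxy_models, live_data, threshold=0.1):
    """Return (is_safe, divergence) for one input sample."""
    main_out = main_predict(live_data)
    proxy_outs = [proxy(live_data) for proxy in proxy_models]
    # One simple agreement metric: mean absolute difference between
    # the main model's score and each proxy model's score.
    divergence = mean(abs(main_out - p) for p in proxy_outs)
    return divergence <= threshold, divergence

# Toy stand-ins for a main model and proxies of "various architectures"
main = lambda x: 0.90 * x
proxies = [lambda x: 0.88 * x, lambda x: 0.91 * x, lambda x: 0.93 * x]

safe, div = consistency_check(main, proxies, live_data=1.0)
# safe is True here: the proxies agree closely with the main model
```

When the check fails, the quoted design routes the prediction to an escalator component that issues an "unsafe" alert; here that would simply be the `safe is False` branch.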

Prosecution Timeline

Sep 08, 2022
Application Filed
Feb 07, 2026
Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596741
SYSTEMS AND METHODS FOR SEMANTIC CONCEPT DEFINITION AND SEMANTIC CONCEPT RELATIONSHIP SYNTHESIS UTILIZING EXISTING DOMAIN DEFINITIONS
2y 5m to grant Granted Apr 07, 2026
Patent 12591764
FAIRNESS ASSESSMENT FOR DEEP GENERATIVE MODELS
2y 5m to grant Granted Mar 31, 2026
Patent 12567005
DETECTING ANOMALOUS DATA
2y 5m to grant Granted Mar 03, 2026
Patent 12561618
ANOMALY DETECTION SYSTEM USING MULTI-LAYER SUPPORT VECTOR MACHINES AND METHOD THEREOF
2y 5m to grant Granted Feb 24, 2026
Patent 12554985
OPERATIONAL NEURAL NETWORK PERFORMANCE VIA FEATURE SPACE ANALYSIS
2y 5m to grant Granted Feb 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
86%
Grant Probability
99%
With Interview (+21.0%)
2y 11m
Median Time to Grant
Low
PTA Risk
Based on 375 resolved cases by this examiner. Grant probability derived from career allow rate.
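The headline figures in this panel follow from simple arithmetic over the examiner's 375 resolved cases. A minimal sketch of that arithmetic; note that treating the +21.0-point interview lift as additive to the base rate and capping the display at 99% are assumptions about how the page combines these numbers, not a documented formula:

```python
# Career allow rate from the examiner's resolved caseload
granted, resolved = 324, 375
allow_rate = granted / resolved          # ~0.864, displayed as 86%

# Assumed model: the +21.0-point interview lift is added to the
# base rate and capped at 99% for display.
INTERVIEW_LIFT = 0.21
with_interview = min(allow_rate + INTERVIEW_LIFT, 0.99)

print(f"{allow_rate:.0%} base, {with_interview:.0%} with interview")
# prints "86% base, 99% with interview"
```

Under this assumed model the cap binds (86.4% + 21.0 points exceeds 100%), which would explain why the panel shows 99% rather than a higher figure.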
