Prosecution Insights
Last updated: April 19, 2026
Application No. 18/133,125

AUTOMATIC GENERATION OF EXEMPLAR QUANTITY FOR TRAINING MACHINE LEARNING MODELS

Non-Final OA: §101, §103
Filed: Apr 11, 2023
Examiner: YI, HYUNGJUN B
Art Unit: 2146
Tech Center: 2100 (Computer Architecture & Software)
Assignee: Oracle International Corporation
OA Round: 1 (Non-Final)
Grant Probability: 18% (At Risk)
OA Rounds: 1-2
To Grant: 4y 7m
With Interview: 49%

Examiner Intelligence

Career Allow Rate: 18% (3 granted / 17 resolved; -37.4% vs Tech Center average)
Interview Lift: +31.7% (strong; measured over resolved cases with interview)
Avg Prosecution: 4y 7m
Currently Pending: 39
Total Applications: 56 (across all art units)

Statute-Specific Performance

§101: 26.3% (-13.7% vs TC avg)
§103: 53.9% (+13.9% vs TC avg)
§102: 12.9% (-27.1% vs TC avg)
§112: 4.7% (-35.3% vs TC avg)
Tech Center averages are estimates; based on career data from 17 resolved cases.

Office Action

Rejections: §101, §103
DETAILED ACTION

This action is responsive to the claims filed on 04/11/2023. Claims 1-20 are pending for examination.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 05/06/2024 and 04/11/2023 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Allowable Subject Matter

Claims 6 and 14 would be allowable if rewritten to overcome the rejection(s) under 35 U.S.C. 101 set forth in this Office action and to include all of the limitations of the base claim and any intervening claims.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Statutory Categories

Claims 1-9 are directed to a method. Claims 10-14 are directed to a computer-readable medium. Claims 15-20 are directed to a system.

Independent Claims 1, 10, and 15

Step 2A Prong 1: Does the claim recite an abstract idea, law of nature, or natural phenomenon? Yes.
Independent claims 1, 10, and 15 recite limitations that are abstract ideas in the form of mental processes. Claim 1 recites:

A computer-implemented method, comprising: determining an available quantity of training vectors that are available in a set of time series signals, wherein the training vectors are designated for use in training a machine learning model; (this limitation recites determining a value based on predetermined values, stated at a high level with no further indication as to how the determination should be performed, which can reasonably be performed as a mental process or with aid of pen and paper)

selecting a boost function from a plurality of different boost functions, wherein the selected boost function is selected based on the available quantity of the training vectors falling within a quantity range associated with the selected boost function, (this limitation recites selecting a function based on predetermined values, which can reasonably be performed as a mental process or with aid of pen and paper. Further addressing the ‘automatically’ limitation, it should be noted that ‘The courts do not distinguish between mental processes that are performed entirely in the human mind and mental processes that require a human to use a physical aid… Nor do the courts distinguish between claims that recite mental processes performed by humans and claims that recite mental processes performed on a computer.’, as cited from MPEP 2106.04(iii))

generating a selection quantity of the exemplar vectors to select from the training vectors by applying the selected boost function to the training vectors; (this limitation recites generating a numerical value from a function based on predetermined values, which can reasonably be performed as a mental process or with aid of pen and paper.)
selecting a quantity of the exemplar vectors from the training vectors based on the selection quantity; (this limitation recites selecting a subset of vectors based on preselected vectors, which can reasonably be performed as a mental process or with aid of pen and paper.) This claim further recites the following additional elements for the purposes of Step 2A Prong Two analysis: automatically selecting…and wherein each boost function from the plurality of different boost functions is configured to determine a different selection quantity of exemplar vectors to be selected from the training vectors; (this limitation invokes boost functions merely as a tool to perform an existing process and is considered as mere instructions to apply an exception, see MPEP 2106.05(f)) and training the machine learning model to detect anomalies in the time series signals based on the exemplar vectors that were selected. (this limitation invokes machine learning models merely as a tool to perform an existing process and is considered as mere instructions to apply an exception, see MPEP 2106.05(f)) The additional limitations fail step 2A Prong 2 of the 101 analysis because they do not transform the claim into a practical application. These limitations are too abstract or lack technical improvement that would make the concept practically useful. Without clear utility or integration into a specific field, the claim does not relate to any particular application. It does not meet the requirements of Step 2A Prong 2, as it fails to make the concept meaningfully applicable in practice. Since the claim as a whole, looking at the additional elements individually and in combination, does not contain any other additional elements that are indicative of integration into a practical application, the claim is directed to an abstract idea. 
This claim recites the following additional elements for the purposes of Step 2B analysis: automatically selecting…and wherein each boost function from the plurality of different boost functions is configured to determine a different selection quantity of exemplar vectors to be selected from the training vectors; (this limitation invokes boost functions merely as a tool to perform an existing process and is considered as mere instructions to apply an exception, see MPEP 2106.05(f)) and training the machine learning model to detect anomalies in the time series signals based on the exemplar vectors that were selected. (this limitation invokes machine learning models merely as a tool to perform an existing process and is considered as mere instructions to apply an exception, see MPEP 2106.05(f)) The claim also fails Step 2B of the analysis because the additional limitations do not amount to significantly more than the abstract idea itself. The additional limitations do not enhance the claim in a way that would move it beyond its abstract ideas as they minimally elaborate on the core concept without adding any inventive or technical substance. Considering the additional elements individually and in combination, and the claim as a whole, the additional elements do not provide significantly more than the abstract idea. Therefore, the claim is not patent eligible. 
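For readers less familiar with the claimed technique, the claim 1 pipeline the rejection walks through (count the available training vectors, pick a boost function by quantity range, compute a selection quantity, then select that many exemplars) can be sketched in a few lines. Every function name, threshold, and coefficient below is an illustrative assumption, not drawn from the application or the record.

```python
import numpy as np

def select_boost_function(available_qty, boost_functions):
    """Pick the boost function whose quantity range contains available_qty."""
    for (low, high), fn in boost_functions:
        if low <= available_qty < high:
            return fn
    raise ValueError("no boost function covers this quantity")

def select_exemplars(training_vectors, boost_functions, rng=None):
    rng = rng or np.random.default_rng(0)
    n_available = len(training_vectors)            # available quantity
    boost = select_boost_function(n_available, boost_functions)
    # cap the selection quantity at the available quantity (cf. claim 8)
    selection_qty = min(int(boost(n_available)), n_available)
    idx = rng.choice(n_available, size=selection_qty, replace=False)
    return training_vectors[idx]

# Illustrative boost functions keyed to quantity ranges (thresholds assumed).
BOOSTS = [
    ((0, 1_000), lambda n: 2 * n),                          # small: linear
    ((1_000, 100_000), lambda n: 50 * np.sqrt(n)),          # medium: sqrt taper
    ((100_000, float("inf")), lambda n: 500 * np.cbrt(n)),  # large: cube-root taper
]

vectors = np.random.default_rng(1).normal(size=(5_000, 8))
exemplars = select_exemplars(vectors, BOOSTS)
print(exemplars.shape)
```

Each range maps to a different sizing rule, which is the structure the §101 analysis characterizes as selecting a function based on predetermined values.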
Claims 10 and 15 recite additional limitations for consideration:

A non-transitory computer-readable medium that includes stored thereon computer-executable instructions that when executed by at least a processor of a computer system cause the computer system to: (Under Step 2A Prong II and Step 2B, this limitation invokes computers and machinery merely as a tool to perform an existing process and is considered as mere instructions to apply an exception using a generic computer, see MPEP 2106.05(f))

A computing system, comprising: at least one processor; at least one memory connected to the at least one processor; a non-transitory computer readable medium including instructions stored thereon that when executed by at least the processor cause the computing system to: (Under Step 2A Prong II and Step 2B, this limitation invokes computers and machinery merely as a tool to perform an existing process and is considered as mere instructions to apply an exception using a generic computer, see MPEP 2106.05(f))

Dependents of Claims 1, 10, and 15

The remaining dependent claims corresponding to independent claims 1, 10, and 15 do not recite additional elements, whether considered individually or in combination, that are sufficient to integrate the judicial exception into a practical application or amount to significantly more than the judicial exception. The analysis is shown below. The claims below recite additional limitations which fail Step 2A Prong 2 of the 101 analysis because they do not transform the claim into a practical application. These limitations are too abstract or lack technical improvement that would make the concept practically useful. Without clear utility or integration into a specific field, the claim does not relate to any particular application. It does not meet the requirements of Step 2A Prong 2, as it fails to make the concept meaningfully applicable in practice.
The claims also fail Step 2B of the analysis because the additional limitations do not amount to significantly more than the abstract idea itself. The additional limitations do not enhance the claim in a way that would move it beyond its abstract ideas as they minimally elaborate on the core concept without adding any inventive or technical substance. The claims are unpatentable.

Claim 2 recites the further limitation of: The computer-implemented method of claim 1, wherein the selected boost function adjusts the selection quantity of the exemplar vectors by a different coefficient for each of a set of quantity ranges, wherein the set of quantity ranges includes the quantity range. (this limitation merely recites mathematics in the form of mathematical algorithms, functions, or calculation, see specification paragraphs [0085-0087] for the related mathematical disclosure) Since the claim does not recite additional elements that either integrate the judicial exception into a practical application, nor provide significantly more than the judicial exception, the claim is not patent eligible.

Claim 3 recites the further limitation of: The computer-implemented method of claim 1, further comprising applying a taper coefficient in the boost function, wherein the taper coefficient reduces the selection quantity by an extent that is based on the available quantity of the training vectors. (this limitation merely recites mathematics in the form of mathematical algorithms, functions, or calculation, see specification paragraphs [0087-0088] for the related mathematical disclosure) Since the claim does not recite additional elements that either integrate the judicial exception into a practical application, nor provide significantly more than the judicial exception, the claim is not patent eligible.
Claim 4 recites the further limitation of: The computer-implemented method of claim 1, further comprising, in response to the quantity range satisfying a threshold for being a memory-specific range, applying a square root taper coefficient in the boost function, wherein the square root taper coefficient is a square root function of the available quantity of the training vectors. (this limitation merely recites mathematics in the form of mathematical algorithms, functions, or calculation, see specification paragraphs [0087-0088] for the related mathematical disclosure) Since the claim does not recite additional elements that either integrate the judicial exception into a practical application, nor provide significantly more than the judicial exception, the claim is not patent eligible.

Claim 5 recites the further limitation of: The computer-implemented method of claim 1, further comprising, in response to the quantity range satisfying a threshold for being a processor-specific range, applying a cube root taper coefficient in the boost function, wherein the cube root taper coefficient is a cube root function of the available quantity of the training vectors. (this limitation merely recites mathematics in the form of mathematical algorithms, functions, or calculation, see specification paragraph [0088] for the related mathematical disclosure) Since the claim does not recite additional elements that either integrate the judicial exception into a practical application, nor provide significantly more than the judicial exception, the claim is not patent eligible.
Claim 6 recites the further limitation of: The computer-implemented method of claim 1, wherein automatically selecting a boost function further comprises: where the quantity range is less than a first threshold, selecting a first boost function that is a linear function of a signal quantity of the time series signals; (selecting a function in response to a threshold being satisfied is being considered a mental process of evaluation that would be reasonably performed in human mind or with aid of pen and paper) where the quantity range is between the first threshold and a second threshold that is higher than the first threshold, selecting a second boost function that is a function of a window quantity of windows that subdivide the training vectors and the signal quantity of the time series signals; (selecting a function in response to a threshold being satisfied is being considered a mental process of evaluation that would be reasonably performed in human mind or with aid of pen and paper) where the quantity range is between the second threshold and a third threshold that is higher than the second threshold, selecting a third boost function that is tapered by a square root function of the available quantity of the training vectors; (selecting a function in response to a threshold being satisfied is being considered a mental process of evaluation that would be reasonably performed in human mind or with aid of pen and paper) and where the quantity range is more than the third threshold, selecting a fourth boost function that is tapered by a cube root function of the available quantity of the training vectors. 
(selecting a function in response to a threshold being satisfied is being considered a mental process of evaluation that would be reasonably performed in human mind or with aid of pen and paper) Since the claim does not recite additional elements that either integrate the judicial exception into a practical application, nor provide significantly more than the judicial exception, the claim is not patent eligible. Claim 7 recites the further limitation of: The computer-implemented method of claim 1, further comprising subdividing the training vectors into a predetermined number of windows, wherein the quantity of the exemplar vectors are selected from the training vectors within more than one of the windows. (further dividing predetermined vectors into predetermined windows and selecting exemplars from more than one window is being considered a mental process of evaluation that would be reasonably performed in human mind or with aid of pen and paper) Since the claim does not recite additional elements that either integrate the judicial exception into a practical application, nor provide significantly more than the judicial exception, the claim is not patent eligible. Claim 8 recites the further limitation of: The computer-implemented method of claim 1, wherein the method further comprises, prior to selecting the exemplar vectors from the training vectors, constraining the selection quantity of the exemplar vectors to not exceed the available quantity of the training vectors. (further limiting the number of exemplars to be less than the amount of training vectors is being considered a mental process of evaluation that would be reasonably performed in human mind or with aid of pen and paper) Since the claim does not recite additional elements that either integrate the judicial exception into a practical application, nor provide significantly more than the judicial exception, the claim is not patent eligible. 
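The four-tier threshold structure recited in claim 6 (linear below a first threshold, window-and-signal based in the middle, square-root tapered above a second threshold, cube-root tapered above a third) can be made concrete with a short sketch. The thresholds and coefficients here are assumptions chosen only to illustrate the tier logic; the application's actual values are not in the record quoted above.

```python
import math

def choose_boost(n_train, n_signals, n_windows, t1=500, t2=5_000, t3=50_000):
    """Return a boost function per the four tiers of claim 6 (thresholds assumed)."""
    if n_train < t1:
        return lambda: 10 * n_signals                  # tier 1: linear in signal count
    if n_train < t2:
        return lambda: n_windows * n_signals           # tier 2: windows x signals
    if n_train < t3:
        return lambda: n_signals * math.sqrt(n_train)  # tier 3: sqrt taper
    return lambda: n_signals * n_train ** (1 / 3)      # tier 4: cube-root taper

qty = choose_boost(n_train=20_000, n_signals=4, n_windows=8)()
print(round(qty))  # sqrt-tapered tier: 4 * sqrt(20000) ≈ 566
```

The taper terms grow sublinearly in the available quantity, which is why the claim ties the sqrt tier to memory-bound ranges and the cube-root tier to processor-bound ranges: the larger the training pool, the more aggressively the selection quantity is damped.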
Claim 9 recites the further limitation of: The computer-implemented method of claim 1, further comprising: monitoring the time series signals with the trained machine learning model to detect an anomaly; (this limitation invokes machine learning models merely as a tool to perform an existing process and is considered as mere instructions to apply an exception, see MPEP 2106.05(f)) and in response to detecting a particular anomaly in the time series signals, generating an electronic alert that the particular anomaly has occurred. (Step 2A Prong II/Step 2B: this limitation merely recites receiving or transmitting data in the form of an alert when an anomaly has occurred and is being considered as well-understood, routine, and conventional insignificant extra-solution activity. It should be noted that the courts have recognized receiving/transmitting data as well-understood, routine, and conventional activity, see MPEP 2106.05(d)(ii); Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information)) Since the claim does not recite additional elements that either integrate the judicial exception into a practical application, nor provide significantly more than the judicial exception, the claim is not patent eligible.

Claim 11 recites limitations substantially similar to claim 2; as such, a similar analysis applies. Claim 12 recites limitations substantially similar to claim 4; as such, a similar analysis applies. Claim 13 recites limitations substantially similar to claim 5; as such, a similar analysis applies.
Claim 14 recites the further limitation of: The non-transitory computer-readable medium of claim 10, wherein the instructions for automatically selecting the boost function when executed by at least the processor further cause the computer system to: in response to the available quantity falling within a quantity range for which neither memory nor processor time are significant constraints of the resource constraints, selecting a first boost function that is a linear function of a signal quantity of the time series signals; (selecting a function in response to a threshold being satisfied is being considered a mental process of evaluation that would be reasonably performed in human mind or with aid of pen and paper) in response to the available quantity falling within a quantity range for which the available quantity is sufficiently large to allow short term activity to be missed, selecting a second boost function that is a function of a window quantity of windows that subdivide the training vectors and the signal quantity of the time series signals; (selecting a function in response to a threshold being satisfied is being considered a mental process of evaluation that would be reasonably performed in human mind or with aid of pen and paper) in response to the available quantity falling within a quantity range for which memory footprint drives resource consumption, selecting a third boost function that is tapered by a square root function of the available quantity of the training vectors; (selecting a function in response to a threshold being satisfied is being considered a mental process of evaluation that would be reasonably performed in human mind or with aid of pen and paper) and in response to the available quantity falling within a quantity range for which processor time drives resource consumption, selecting a fourth boost function that is tapered by a cube root function of the available quantity of the training vectors. 
(selecting a function in response to a threshold being satisfied is being considered a mental process of evaluation that would be reasonably performed in human mind or with aid of pen and paper) Claim 16 recites the further limitation of: The computing system of claim 15, wherein the instructions to generate the selection quantity of exemplar vectors further cause the computing system to: in response to selection of the first boost function, adjust the selection quantity of the exemplar vectors by a first coefficient, (this limitation merely recites mathematics in the form of mathematical algorithms, functions, or calculation, see specification paragraphs [0087-0088] for the related mathematical disclosure) and in response to selection of the second boost function, adjust the selection quantity of the exemplar vectors by a second coefficient. (this limitation merely recites mathematics in the form of mathematical algorithms, functions, or calculation, see specification paragraphs [0087-0088] for the related mathematical disclosure) Claim 17 recites the further limitation of: The computing system of claim 15, wherein the instructions to generate the selection quantity of exemplar vectors further cause the computing system to: in response to selection of the first boost function, lessen the selection quantity of the exemplar vectors by a square root taper coefficient that attenuates the selection quantity by a square root function, (this limitation merely recites mathematics in the form of mathematical algorithms, functions, or calculation, see specification paragraphs [0087-0088] for the related mathematical disclosure) and in response to selection of the second boost function, lessen the selection quantity of the exemplar vectors by a cube root taper coefficient that attenuates the selection quantity by a cube root function. 
(this limitation merely recites mathematics in the form of mathematical algorithms, functions, or calculation, see specification paragraphs [0088] for the related mathematical disclosure) Claim 18 recites the further limitation of: The computing system of claim 15, wherein the instructions further cause the computing system to: subdivide the training vectors into a plurality of windows; (a mental process of evaluation which can reasonably be performed in human mind or with aid of pen and paper) and increase the selection quantity of the exemplar vectors to accommodate selections of the training vectors from within the plurality of windows. (a mental process of evaluation which can reasonably be performed in human mind or with aid of pen and paper) Claim 19 recites the further limitation of: The computing system of claim 15, wherein the instructions further cause the computing system to reduce the selection quantity of the exemplar vectors to the available quantity of the training vectors in response to the selection quantity exceeding the available quantity. (a mental process of evaluation which can reasonably be performed in human mind or with aid of pen and paper) Claim 20 recites the further limitation of: The computing system of claim 15, wherein the instructions further cause the computing system to detect an anomaly in the time series signals using the trained machine learning model. (this limitation invokes machine learning models merely as a tool to perform an existing process and is considered as mere instructions to apply an exception, see MPEP 2106.05(f)) Claim Rejections - 35 USC § 103 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 
102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C.
102(a)(2) prior art against the later invention.

Claims 1-3, 8, 11-13, 16-17, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Jones et al. (US20150356421A1), hereafter referred to as Jones, in view of Numpy’s Histogram Documentation (numpy.histogram - NumPy v1.12 Manual. (2020, September 25). https://web.archive.org/web/20200925115942/https://docs.scipy.org/doc/numpy-1.12.0/reference/generated/numpy.histogram.html), hereafter referred to as Numpy.

Claim 1: Jones teaches the following limitations: A computer-implemented method, comprising: determining an available quantity of training vectors that are available in a set of time series signals, (Jones, paragraph 11, “The main idea of the invention is to model the training time series data as a set of exemplars. The exemplars represent a variety of different windows or subsequences in the time series data. A final set of exemplars is substantially smaller than a total set of overlapping windows in the training time series data.”, Jones, paragraph 28, “Determine SST feature vectors for each overlapping window in the training time series data and define the initial set of exemplars 201 equal to the set of all SST feature vectors.”, Jones teaches generating feature vectors for each overlapping window and defining an initial set based on those feature vectors. Under BRI, the SST feature vectors correspond to the claimed training vectors, and the set of SST feature vectors (for all overlapping windows) is the “available quantity” of training vectors.) wherein the training vectors are designated for use in training a machine learning model; (Jones, paragraph 18, “A set of exemplars 111 is learned by summarizing training time series data 101 using a divide-and-conquer procedure 200. An exemplar is a representation of a set of similar windows of the time series data.
”, Jones, paragraph 28, “Determine SST feature vectors for each overlapping window in the training time series data and define the initial set of exemplars 201 equal to the set of all SST feature vectors.”, Jones teaches that vectors derived from training time series (SST feature vectors) are used to learn/store the model (final exemplars). Under BRI, learning the exemplar set from the training vectors constitutes training a machine-learning/instance-based model for anomaly detection.) selecting a quantity of the exemplar vectors from the training vectors based on the selection quantity; (Jones, paragraph 11, “The main idea of the invention is to model the training time series data as a set of exemplars. The exemplars represent a variety of different windows or subsequences in the time series data. A final set of exemplars is substantially smaller than a total set of overlapping windows in the training time series data.”, Jones, paragraph 22, “A selection procedure selects a smaller set of exemplars from a given set of exemplars. The smaller set of exemplars is chosen to represent well the given set.” Jones, paragraph 28, “Determine SST feature vectors for each overlapping window in the training time series data and define the initial set of exemplars 201 equal to the set of all SST feature vectors.”, Under BRI, the “given set of exemplars” corresponds to the training-derived SST feature vectors (training vectors), and the “smaller set of exemplars” corresponds to the claimed selected exemplar vectors.) and training the machine learning model to detect anomalies in the time series signals based on the exemplar vectors that were selected. (Jones, paragraph 4, “To detect anomalies, each window of the testing time series is compared to every window of the training time series, and a distance to a nearest matching window is used as an anomaly score. 
If the anomaly score is above a threshold, then an anomaly is signaled,” Jones paragraph 19, “For each window of a testing time series data 102, a distance to a nearest exemplar is determined 120. The distance is used as an anomaly score 121. Then, an anomaly 131 is signaled 130 when the anomaly score for the window is greater than a threshold T.”, This explicitly teaches that anomaly detection is performed based on the final set of exemplars (i.e., the selected exemplar vectors).) the available quantity of the training vectors (Jones, paragraph 28, “Determine SST feature vectors for each overlapping window in the training time series data and define the initial set of exemplars 201 equal to the set of all SST feature vectors.” Jones teaches generating SST feature vectors for each overlapping window, i.e., a set of training-derived feature vectors; under BRI, the claimed “training vectors” read on Jones’s SST feature vectors, and the “available quantity” corresponds to the number/count of SST feature vectors that exist in the initial set (one per overlapping window) and are therefore available for selection.) quantity of exemplar vectors to be selected from the training vectors (Jones, paragraph 11, “The main idea of the invention is to model the training time series data as a set of exemplars… A final set of exemplars is substantially smaller than a total set of overlapping windows in the training time series data.” Jones teaches selecting/learning a final exemplar set from a larger candidate set derived from overlapping windows; under BRI, the exemplars correspond to “exemplar vectors” and the window-derived feature vectors correspond to “training vectors,” such that Jones teaches a quantity of exemplar vectors selected from the training vectors.) 
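Jones's anomaly-scoring step as quoted above (paragraph 19: distance to the nearest exemplar is the anomaly score, and an anomaly is signaled when the score exceeds a threshold T) is compact enough to sketch directly. The function name, the Euclidean metric, and the sample values are illustrative assumptions, not details taken from Jones.

```python
import numpy as np

def anomaly_scores(test_windows, exemplars):
    """Score each test window by its distance to the nearest exemplar."""
    # pairwise distances: shape (n_test, n_exemplars), then min over exemplars
    d = np.linalg.norm(test_windows[:, None, :] - exemplars[None, :, :], axis=-1)
    return d.min(axis=1)

exemplars = np.array([[0.0, 0.0], [10.0, 10.0]])  # learned exemplar set
test = np.array([[0.1, 0.0], [5.0, 5.0]])         # windows of a testing series
scores = anomaly_scores(test, exemplars)
T = 1.0
print(scores > T)  # [False  True]
```

The first test window sits near an exemplar and scores low; the second sits far from every exemplar and is flagged, mirroring the threshold comparison Jones describes.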
Numpy, in the same field of endeavor, teaches the following limitations which Jones fails to teach: automatically selecting a boost function from a plurality of different boost functions, wherein the selected boost function is selected based on the available quantity … within a quantity range associated with the selected boost function, (Numpy, notes, “The methods to estimate the optimal number of bins are well founded in literature, and are inspired by the choices R provides for histogram visualisation… ‘Auto’ (maximum of the ‘Sturges’ and ‘FD’ estimators) A compromise to get a good value. For small datasets the Sturges value will usually be chosen, while larger datasets will usually default to FD. Avoids the overly conservative behaviour of FD and Sturges for small and large datasets respectively. Switchover point is usually a.size = 1000”, Numpy page 2, parameters, “If bins is a string from the list below, histogram will use the method chosen to calculate the optimal bin width and consequently the number of bins (see Notes for more detail on the estimators)…”, Numpy teaches automatically selecting among a plurality of different sizing methods based on dataset size ranges, e.g., ‘Auto’ selects between ‘Sturges’ and ‘FD’ such that ‘For small datasets the Sturges value will usually be chosen, while larger datasets will usually default to FD… Switchover point is usually a.size = 1000.’ In the present combination, the claimed ‘training vectors’ and their ‘available quantity’ are taught by Jones (i.e., Jones’s window/subsequence-derived feature vectors/candidate exemplars). It is interpreted by the examiner that Numpy’s size-based method-selection mechanism would apply to Jones’s available quantity of training vectors by using the number/count of Jones’s available training vectors as the ‘a.size’ input that drives the small-vs-large range selection in Numpy.
NumPy is not relied upon to teach ‘training vectors’; it is relied upon only for the range-based function-selection logic.) and wherein each boost function from the plurality of different boost functions is configured to determine a different selection quantity… (Numpy, page 2, parameters, “histogram will use the method chosen to calculate the optimal bin width and consequently the number of bins (see Notes for more detail on the estimators)” Numpy, notes, “‘Auto’ (maximum of the ‘Sturges’ and ‘FD’ estimators) A compromise to get a good value. For small datasets the Sturges value will usually be chosen, while larger datasets will usually default to FD. Avoids the overly conservative behaviour of FD and Sturges for small and large datasets respectively. Switchover point is usually a.size = 1000”, Numpy teaches that a selected method (from a plurality of different methods, e.g., ‘auto’, ‘fd’, ‘sturges’, ‘rice’, ‘sqrt’, etc.) is used to calculate the number of bins, i.e., each method is configured to output a resulting count (a quantity) for the dataset. Because the listed methods are distinct estimators (e.g., Sturges vs FD vs Rice vs Sqrt), they necessarily yield different resulting counts when applied to the same dataset size. In the present combination, the claimed training vectors and exemplar vectors are provided by Jones (i.e., Jones’s window/subsequence-derived feature vectors as training vectors and Jones’s selected/learned exemplars as exemplar vectors). Numpy is not relied upon to teach training or exemplar vectors; rather, NumPy is relied upon for the plurality of different quantity-determining rules (boost functions). 
A POSITA would apply each Numpy estimator to the available quantity of Jones’s training vectors (the count of Jones’s window-derived vectors) to produce a target selection quantity (analogous to Numpy’s computed bin count), and then use that target selection quantity to determine how many of Jones’s exemplars are selected from Jones’s training vectors. Accordingly, each boost function (NumPy estimator) is configured to determine a different selection quantity of exemplar vectors to be selected from the training vectors (Jones).) generating a selection quantity of the exemplar vectors to select from the training vectors by applying the selected boost function to the training vectors; (Numpy, notes, “‘FD’ (Freedman Diaconis Estimator) [equation image] The binwidth is proportional to the interquartile range (IQR) and inversely proportional to cube root of a.size. Can be too conservative for small datasets, but is quite good for large datasets. The IQR is very robust to outliers.”, Numpy, page 2, parameters, “histogram will use the method chosen to calculate the optimal bin width and consequently the number of bins… The final bin count is obtained from np.round(np.ceil(range / h)).”, Numpy teaches applying the selected estimator to the input data size a.size to compute the output bin count. 
Under BRI, applying the selected estimator to the training data size (of Jones) generates the claimed “selection quantity.”) It would have been obvious to a person of ordinary skill in the art (POSITA) before the effective filing date of this invention to incorporate Numpy’s data-size-based estimator switching into Jones’s exemplar-learning system to automatically compute an appropriate target selection quantity (i.e., how many exemplars to retain) as the available training set grows, thereby avoiding under/over-selection; NumPy expressly teaches an automatic size-dependent compromise method—“‘Auto’… A compromise to get a good value… Avoids the overly conservative behavior… Switchover point is usually a.size ≈ 1000” (Numpy, notes)—so a POSITA would predictably apply the same “choose-a-sizing-rule based on dataset-size range” mechanism to Jones’s exemplar retention step by treating Jones’s count of candidate training windows/vectors as the a.size input and using NumPy’s selected rule output as the selection quantity that governs how many exemplars Jones selects. Claim 2: Jones and Numpy teach the limitations of claim 1, Jones further teaches: the selection quantity of the exemplar vectors (Jones, paragraph 11, “A final set of exemplars is substantially smaller than a total set of overlapping windows in the training time series data.” Jones teaches determining/producing a final exemplar set with a particular retained size relative to the available candidate set; under BRI, the retained size (count) of the final exemplar set corresponds to the claimed “selection quantity of the exemplar vectors.”) Numpy further teaches: The computer-implemented method of claim 1, wherein the selected boost function adjusts the selection quantity… by a different coefficient for each of a set of quantity ranges, wherein the set of quantity ranges includes the quantity range. (Numpy, notes, “‘Auto’ (maximum of the ‘Sturges’ and ‘FD’ estimators) A compromise to get a good value. 
For small datasets the Sturges value will usually be chosen, while larger datasets will usually default to FD.”, teaches selecting different estimators in different dataset-size ranges; and the estimator descriptions show different mathematical scaling rules (e.g., ‘sturges’ “only accounts for data size” via a log relationship; ‘fd’ uses data size with cube-root dependence), which necessarily apply different multiplicative/functional coefficients to a.size in producing the resulting count. Numpy further teaches that the ‘auto’ method selects between different estimators depending on dataset-size range (e.g., for small datasets Sturges is chosen, while for larger datasets FD is chosen), thereby causing the computed resulting count to be adjusted differently across size ranges.) Claim 3: Jones and Numpy teach the limitations of claim 1, Numpy further teaches: The computer-implemented method of claim 1, further comprising applying a taper coefficient in the boost function, wherein the taper coefficient reduces the selection quantity by an extent that is based on the available quantity of the training vectors. (Numpy, notes, “‘Rice’ [equation image] The number of bins is only proportional to cube root of a.size. It tends to overestimate the number of bins and it does not take into account data variability.”, Numpy teaches sublinear (cube-root) dependence on data size a.size and therefore a taper/attenuation of the resulting count growth as data size increases. It is interpreted by the examiner that the available quantity of training vectors, from which the selection quantity is derived, originates from the SST feature vectors of Jones used to train their model as disclosed above for claim 1. 
Under BRI, input data size a.size corresponds to the available quantity of training vectors of Jones, and the cube-root dependence constitutes the claimed taper coefficient reducing the selection quantity as a function of available quantity.) Claim 8: Jones and Numpy teach the limitations of claim 1, Jones further teaches: The computer-implemented method of claim 1, wherein the method further comprises, prior to selecting the exemplar vectors from the training vectors, constraining the selection quantity of the exemplar vectors to not exceed the available quantity of the training vectors. (Jones, paragraph 28, “Determine SST feature vectors for each overlapping window in the training time series data and define the initial set of exemplars 201 equal to the set of all SST feature vectors.”, Jones, paragraph 22, “A selection procedure selects a smaller set of exemplars from a given set of exemplars. The smaller set of exemplars is chosen to represent well the given set.”, Because Jones begins with an initial set equal to all window-derived feature vectors and then selects a smaller subset, the quantity selected cannot exceed the available quantity in the initial set. Under BRI, this teaches constraining the selection quantity to not exceed the available quantity.) Claim 11 recites limitations substantially similar to claim 2, as such a similar analysis applies. Claim 12: Jones and Numpy teach the limitations of claim 1, Numpy further teaches: The non-transitory computer-readable medium of claim 10, further comprising instructions that when executed by at least the processor cause the computer system to apply a square root taper coefficient in the boost function, wherein the square root taper coefficient attenuates the selection quantity by a square root function of the available quantity of the training vectors. (Numpy, notes, “‘Sqrt’ [equation image] The simplest and fastest estimator. 
Only takes into account the data size.”, Numpy provides an explicit taper function based on available quantity n; under BRI, a square-root taper coefficient based on the available quantity n is taught. It is interpreted by the examiner that when combined with Jones, the quantity of exemplars selected would be tapered by this quantity function using the available quantity of SST training vectors provided from the multiple overlapping windows.) Claim 13: Jones and Numpy teach the limitations of claim 1, Numpy further teaches: The non-transitory computer-readable medium of claim 10, further comprising instructions that when executed by at least the processor cause the computer system to apply a cube root taper coefficient in the boost function, wherein the cube root taper coefficient attenuates the selection quantity by a cube root function of the available quantity of the training vectors. (Numpy, notes, “‘FD’ (Freedman Diaconis Estimator) [equation image] The binwidth is proportional to the interquartile range (IQR) and inversely proportional to cube root of a.size. Can be too conservative for small datasets, but is quite good for large datasets. The IQR is very robust to outliers.”, Numpy defines a sizing rule that outputs a quantity using a cube root function of n. It is interpreted by the examiner that when combined with Jones, the quantity of exemplars selected would be tapered by this quantity function using the available quantity of SST training vectors provided from the multiple overlapping windows.) 
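The size-dependent estimator logic relied upon for claims 1-3 and 12-13 can be summarized in a short Python sketch. This is illustrative only: the function names are mine, a pure cube-root rule stands in for FD (whose bin width also depends on the data's IQR, not just its size), and treating the count n of Jones's training vectors as NumPy's a.size input reflects the examiner's proposed combination rather than anything either reference implements.

```python
import math

# Sizing rules analogous to NumPy's histogram bin estimators,
# applied to n = the available quantity of training vectors.

def sqrt_rule(n):
    # 'sqrt': simplest estimator, square-root taper of the data size.
    return math.ceil(math.sqrt(n))

def sturges_rule(n):
    # 'sturges': logarithmic in data size, suited to small datasets.
    return math.ceil(math.log2(n)) + 1

def rice_rule(n):
    # 'rice': cube-root taper of the data size (stand-in for FD,
    # which additionally depends on the data's IQR).
    return math.ceil(2 * n ** (1 / 3))

def auto_rule(n, switchover=1000):
    # 'auto'-style selection: pick a rule based on which quantity
    # range n falls into (the quoted switchover point is a.size = 1000).
    return sturges_rule(n) if n < switchover else rice_rule(n)

def selection_quantity(n):
    # Clamp so the selection quantity never exceeds the available
    # quantity of training vectors (cf. claims 8 and 19).
    return min(auto_rule(n), n)
```

Note that each rule yields a different count for the same n (e.g., for n = 100: sqrt gives 10, Sturges gives 8), which is the property the rejection maps to "each boost function … configured to determine a different selection quantity."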
Claim 16: Jones and Numpy teach the limitations of claim 1, Numpy further teaches: The computing system of claim 15, wherein the instructions to generate the selection quantity of exemplar vectors further cause the computing system to: in response to selection of the first boost function, adjust the selection quantity of the exemplar vectors by a first coefficient, (Numpy, notes, “‘Sqrt’ [equation image] The simplest and fastest estimator. Only takes into account the data size.”, Numpy provides an explicit taper function based on available quantity n; under BRI, a square-root taper coefficient (the first coefficient) based on the available quantity n is taught. It is interpreted by the examiner that when combined with Jones, the quantity of exemplars selected would be tapered by this quantity function using the available quantity of SST training vectors provided from the multiple overlapping windows.) and in response to selection of the second boost function, adjust the selection quantity of the exemplar vectors by a second coefficient. (Numpy, notes, “‘FD’ (Freedman Diaconis Estimator) [equation image] The binwidth is proportional to the interquartile range (IQR) and inversely proportional to cube root of a.size. Can be too conservative for small datasets, but is quite good for large datasets. The IQR is very robust to outliers.”, Numpy defines a sizing rule that outputs a quantity using a cube root function of n. It is interpreted by the examiner that when combined with Jones, the quantity of exemplars selected would be tapered by this quantity function using the available quantity of SST training vectors provided from the multiple overlapping windows.) 
Claim 17: Jones and Numpy teach the limitations of claim 1, Numpy further teaches: The computing system of claim 15, wherein the instructions to generate the selection quantity of exemplar vectors further cause the computing system to: in response to selection of the first boost function, lessen the selection quantity of the exemplar vectors by a square root taper coefficient that attenuates the selection quantity by a square root function, (Numpy, notes, “‘Sqrt’ [equation image] The simplest and fastest estimator. Only takes into account the data size.”, Numpy teaches a count computed as a square-root function of data size; under BRI, using a square-root-of-size rule yields a sublinear (attenuated) count relative to linear scaling and therefore “lessens” the selection quantity as a function of available quantity. When combined with Jones the selection quantity of exemplars would be tapered by the quantity computed by the function.) and in response to selection of the second boost function, lessen the selection quantity of the exemplar vectors by a cube root taper coefficient that attenuates the selection quantity by a cube root function. (Numpy, notes, “‘FD’ (Freedman Diaconis Estimator) [equation image] The binwidth is proportional to the interquartile range (IQR) and inversely proportional to cube root of a.size. Can be too conservative for small datasets, but is quite good for large datasets. The IQR is very robust to outliers.”, Numpy also explains the final count is obtained from range/h; thus the resulting count scales with cube-root dependence on data size a.size. When combined with Jones, the selection quantity of exemplars would be tapered by the quantity computed by the function, which attenuates the selection quantity by a cube root function over all available training vectors n. 
Under BRI, using cube-root dependence attenuates/lessens the resulting selection quantity as dataset size increases.) Claim 19: Jones and Numpy teach the limitations of claim 1, Jones further teaches: The computing system of claim 15, wherein the instructions further cause the computing system to reduce the selection quantity of the exemplar vectors to the available quantity of the training vectors in response to the selection quantity exceeding the available quantity. (Jones, paragraph 28, “Determine SST feature vectors for each overlapping window in the training time series data and define the initial set of exemplars 201 equal to the set of all SST feature vectors.”, Jones, paragraph 22, “A selection procedure selects a smaller set of exemplars from a given set of exemplars. The smaller set of exemplars is chosen to represent well the given set.”, Because Jones begins with an initial set equal to all window-derived feature vectors and then selects a smaller subset, the quantity selected cannot exceed the available quantity in the initial set. Under BRI, this teaches constraining the selection quantity to not exceed the available quantity.) Claim 20: Jones and Numpy teach the limitations of claim 1, Jones further teaches: The computing system of claim 15, wherein the instructions further cause the computing system to detect an anomaly in the time series signals using the trained machine learning model. (Jones, paragraph 3, “Therefore, it is desired to efficiently learn a model of one-dimensional time series data. Then, the model can be used to detect anomalies in future testing time series data from the same source. Typically, the model is learned from a training time series without anomalies.” paragraph 19, “For each window of a testing time series data 102, a distance to a nearest exemplar is determined 120. The distance is used as an anomaly score 121. 
Then, an anomaly 131 is signaled 130 when the anomaly score for the window is greater than a threshold T.”, Jones expressly teaches using a trained machine learned model to detect anomalies via thresholded anomaly scores.) Claims 4-5 are rejected under 35 U.S.C. 103 as being unpatentable over Jones in view of Numpy and further in view of Hogan et al., (US20100205617A1), hereafter referred to as Hogan. Claim 4: Jones and Numpy teach the limitations of claim 1, Numpy further teaches: The computer-implemented method of claim 1, further comprising, in response to the quantity range satisfying a threshold… applying a square root taper coefficient in the boost function, wherein the square root taper coefficient is a square root function of the available quantity of the training vectors. (Numpy, notes, “‘Sqrt’ [equation image] The simplest and fastest estimator. Only takes into account the data size.”, Numpy provides an explicit taper function based on available quantity n; under BRI, a square-root taper coefficient based on the available quantity n is taught. It is interpreted by the examiner that when combined with Jones, the quantity of exemplars selected would be tapered by this quantity function using the available quantity of SST training vectors provided from the multiple overlapping windows.) Hogan, in the same field of memory validation, teaches the following which Jones and Numpy fail to teach: a threshold for being a memory-specific range, (Hogan, paragraph 21, “Step 21: Maximal application executing while monitoring status of the maximal application (e.g., responsiveness, etc.) and status of computing system resources against a first status threshold.”, Hogan discloses monitoring “status of computing system resources” and comparing such status against a “first status threshold”. 
It is interpreted by the examiner that the computed quantity range, provided from the boost functions of Jones combined with Numpy, would be a status that is compared to a first status threshold. Hogan, paragraph 29, “Step 22: If the status of the computing resources is an acceptable level relative to the first status threshold (e.g., availability of memory, processor, etc.), then the process proceeds back to step 21, otherwise the process proceeds to step 23.”, Hogan identifies “availability of memory” as a resource status evaluated relative to the threshold.) It would have been obvious to a POSITA before the effective filing date of this invention to incorporate Hogan’s resource-threshold monitoring into Jones’s exemplar-learning and NumPy-sizing rules to adapt exemplar selection when memory becomes a limiting constraint, because Hogan explicitly teaches gating behavior based on memory availability—e.g., “availability of memory, processor, etc.” evaluated “relative to the… status threshold” (Hogan, paragraph 27)—and a POSITA would predictably respond to that “memory-specific range” by applying NumPy’s simple size-only square-root rule (“‘Sqrt’… Only takes into account the data size” (Numpy, notes)) as a taper coefficient on Jones’s exemplar-count determination, thereby reducing the target exemplar quantity as training-vector quantity increases to keep Jones’s exemplar set within the memory-constrained operating regime identified by Hogan. Claim 5: Jones and Numpy teach the limitations of claim 1, Numpy further teaches: The computer-implemented method of claim 1, further comprising, in response to the quantity range satisfying a threshold… applying a cube root taper coefficient in the boost function, wherein the cube root taper coefficient is a cube root function of the available quantity of the training vectors. 
(Numpy, notes, “‘FD’ (Freedman Diaconis Estimator) [equation image] The binwidth is proportional to the interquartile range (IQR) and inversely proportional to cube root of a.size. Can be too conservative for small datasets, but is quite good for large datasets. The IQR is very robust to outliers.”, Numpy defines a sizing rule that outputs a quantity using a cube root function of n. It is interpreted by the examiner that when combined with Jones, the quantity of exemplars selected would be tapered by this quantity function using the available quantity of SST training vectors provided from the multiple overlapping windows.) Hogan, in the same field of memory validation, teaches the following which Jones and Numpy fail to teach: a threshold for being a processor-specific range, (Hogan, paragraph 25, “Such circumstances may include: level of availability of one or more resources (e.g., response time delay of the application, amount of free memory, percentage of processor utilization,”, Hogan monitors “percentage of processor utilization,” i.e., a CPU/processor-specific resource metric. Hogan, paragraph 29, “Step 22: If the status of the computing resources is an acceptable level relative to the first status threshold (e.g., availability of memory, processor, etc.), then the process proceeds back to step 21, otherwise the process proceeds to step 23.”, Hogan compares the status of system resources (processor included) against a threshold.) A motivation for combining Jones and Numpy with Hogan is similar to that applied for claim 4 above. Claims 7 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Jones in view of Numpy and further in view of Bachorik et al., (US20220342790A1), hereafter referred to as Bachorik. 
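The threshold-gated taper selection proposed above for claims 4-5 (Hogan's resource-status check choosing between NumPy's square-root and cube-root sizing rules) could be sketched as follows. The threshold value, the free-memory parameter, and the pairing of memory pressure with the square-root rule are illustrative assumptions about the proposed combination, not teachings of any single reference.

```python
import math

def tapered_quantity(n, free_memory_mb, memory_threshold_mb=512):
    """Pick a taper rule for the selection quantity based on a
    resource-status threshold (cf. Hogan's memory-availability check).

    n: available quantity of training vectors (Jones's window-derived
    feature vectors). The threshold and units are hypothetical.
    """
    if free_memory_mb < memory_threshold_mb:
        # Resource threshold satisfied: apply the square-root
        # taper coefficient (NumPy's 'sqrt' rule, claim 4).
        quantity = math.ceil(math.sqrt(n))
    else:
        # Otherwise apply a cube-root taper coefficient
        # (FD/Rice-style cube-root scaling, claim 5).
        quantity = math.ceil(2 * n ** (1 / 3))
    # Never exceed the available quantity of training vectors.
    return min(quantity, n)
```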
Claim 7: Jones and Numpy teach the limitations of claim 1, Bachorik, in the same field of time window data recording, further teaches the following which Jones and Numpy fail to teach: The computer-implemented method of claim 1, further comprising subdividing the training vectors into a predetermined number of windows, wherein the quantity of the exemplar vectors are selected from the training vectors within more than one of the windows. (Bachorik, paragraph 69, “The sampling environment 200 includes a plurality of sampling windows 202 a-d. Each of the windows 202 a-d corresponds to a 500 millisecond period of time. Each window 202 a-d includes a set 204 a-d of one or more exceptions, a subset of which are sampled exceptions. The number of sampled exceptions depends on the sampling rate specified for the sampling window”, Bachorik’s “sampling windows” correspond to the claimed “windows,” the subset within a window corresponds to the claimed “training vectors” available in that window, and the “subset … sampled” corresponds to selecting exemplar vectors from the training vectors. Because Bachorik’s sampling occurs across the plurality of windows (not just one), the selected subset (exemplars) is selected from within more than one window. Further, because the sampling window duration/period is a configured parameter (e.g., fixed time period per window), the resulting windowing for a given dataset interval yields a determinable (i.e., predetermined) number of windows.) 
It would have been obvious to a person of ordinary skill in the art before the effective filing date to incorporate Bachorik’s window-based subset selection and adaptive sample-count control into Jones’s window-derived exemplar selection and Numpy’s quantity selection because Bachorik expressly teaches selecting a “subset” from each of a “plurality of sampling windows” (promoting representative coverage across multiple windows rather than a single segment), and further teaches increasing the sampling rate/number of selected samples in later windows when earlier windows yield too few samples (i.e., “increases the sampling rate for subsequent sampling windows” resulting in a higher sampled count, Bachorik, paragraph 70). A POSITA would have found it predictable to use Bachorik’s per-window sampling/coverage mechanism to ensure Jones’s exemplar set is drawn from more than one window and to raise the exemplar target count as needed to adequately represent multiple windows, while using Numpy’s size-based estimator logic to compute/adjust the overall selection quantity. Claim 18: Jones and Numpy teach the limitations of claim 1, Jones further teaches: The computing system of claim 15, wherein the instructions further cause the computing system to: subdivide the training vectors into a plurality of windows; (Jones, paragraph 11, “The exemplars represent a variety of different windows or subsequences in the time series data… [and] a total set of overlapping windows in the training time series data,” Jones explicitly teaches forming training candidates from a plurality of (overlapping) windows/subsequences of the training time series. Under BRI, the window/subsequence-derived vectors correspond to the claimed “training vectors,” and generating them from overlapping windows reads on subdividing the training vectors into a plurality of windows.) 
Bachorik, in the same field of time window data recording, further teaches the following which Jones and Numpy fail to teach: and increase the selection quantity of the exemplar vectors to accommodate selections of the training vectors from within the plurality of windows. (Bachorik, paragraph 70, “The monitoring system increases the sampling rate for subsequent sampling windows in response to a determination that the monitoring budget has not yet been met. The sampling rate is adjusted up to 50% of exceptions (every other exception). In the third sampling window 202 c, 16 exceptions occur and eight samples are obtained.”, Bachorik expressly teaches increasing the number of selected samples (selection quantity) across windows by increasing the sampling rate. Increasing the sampling rate/number of sampled items is increasing the “selection quantity” of exemplars, and it is done in the context of sampling across subsequent windows (i.e., to accommodate selections from within the plurality of windows).) Claims 9, 10, and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Jones in view of Numpy and further in view of Huu et al., (US20220335347A1), hereafter referred to as Huu. Claim 9: Jones and Numpy teach the limitations of claim 1, Jones further teaches: The computer-implemented method of claim 1, further comprising: monitoring the time series signals with the trained machine learning model to detect an anomaly; (Jones, paragraph 4, “To detect anomalies, each window of the testing time series is compared to every window of the training time series, and a distance to a nearest matching window is used as an anomaly score. If the anomaly score is above a threshold, then an anomaly is signaled,”, Jones discloses anomaly detection by signaling anomalies based on an anomaly score/threshold while processing windows of time series.) 
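Jones's detection step as cited for claims 9 and 20 (distance to the nearest exemplar used as the anomaly score, thresholded to signal an anomaly) reduces to a few lines. The Euclidean distance, the tuple representation of feature vectors, and the threshold value below are illustrative choices, since Jones's SST feature details are not reproduced here.

```python
import math

def anomaly_score(window_vec, exemplars):
    # The distance from a testing window's feature vector to its
    # nearest exemplar is used directly as the anomaly score
    # (Jones, paragraphs 4 and 19).
    return min(math.dist(window_vec, e) for e in exemplars)

def detect_anomalies(windows, exemplars, threshold):
    # Signal an anomaly for each window whose score exceeds
    # the threshold T.
    return [anomaly_score(w, exemplars) > threshold for w in windows]
```

For example, with exemplars at (0, 0) and (1, 1) and a threshold of 1.0, a window vector near an exemplar is scored low and not flagged, while a distant one is flagged.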
Huu, in the same field of machine learning, teaches the following which Jones and Numpy fail to teach: and in response to detecting a particular anomaly in the time series signals, generating an electronic alert that the particular anomaly has occurred. (Huu, paragraph 16, “The example embodiments are directed to a new system that is capable of determining when an anomaly is likely to occur in the future, and outputting a warning to a screen, application, etc., prior to the occurrence of the anomaly.”, Huu discloses generating an electronic warning output (screen/application) in response to an anomaly being detected, which maps directly to “generating an electronic alert.”) It would have been obvious to a POSITA before the effective filing date of the invention to add Huu’s electronic warning output to Jones’s anomaly detection pipeline, combined with Numpy’s sizing constraint, because Jones already produces an anomaly event/decision (signaling when a score exceeds a threshold), and Huu expressly teaches “outputting a warning to a screen, application, etc.” (Huu, paragraph 16) upon anomaly detection, so a POSITA would be motivated to implement the predictable step of generating an electronic alert when Jones detects a particular anomaly to improve operational usability and enable timely mitigation by users or downstream systems. Claims 10 and 15 recite limitations substantially similar to claim 1, as such a similar analysis applies. 
Claim 10 recites the following additional limitation for consideration which Huu further teaches: A non-transitory computer-readable medium that includes stored thereon computer-executable instructions that when executed by at least a processor of a computer system cause the computer system to (Huu, paragraph 55, “Any such resulting program, having computer-readable code, may be embodied or provided within one or more non-transitory computer-readable media, thereby making a computer program product, i.e., an article of manufacture, according to the discussed examples of the disclosure. For example, the non-transitory computer-readable media may be, but is not limited to, a fixed drive, diskette, optical disk, magnetic tape, flash memory, external drive, semiconductor memory such as read-only memory (ROM), random-access memory (RAM), and/or any other non-transitory transmitting and/or receiving medium such as the Internet, cloud storage, the Internet of Things (IoT),”) A motivation for combining Jones and Numpy with Huu is similar to that as applied for claim 9 above. Claim 15 recites the following additional limitation for consideration which Huu further teaches: A computing system, comprising: at least one processor; at least one memory connected to the at least one processor; a non-transitory computer readable medium including instructions stored thereon that when executed by at least the processor cause the computing system to (Huu, paragraph 55, “Any such resulting program, having computer-readable code, may be embodied or provided within one or more non-transitory computer-readable media, thereby making a computer program product, i.e., an article of manufacture, according to the discussed examples of the disclosure. 
For example, the non-transitory computer-readable media may be, but is not limited to, a fixed drive, diskette, optical disk, magnetic tape, flash memory, external drive, semiconductor memory such as read-only memory (ROM), random-access memory (RAM), and/or any other non-transitory transmitting and/or receiving medium such as the Internet, cloud storage, the Internet of Things (IoT),”) A motivation for combining Jones and Numpy with Huu is similar to that as applied for claim 9 above. Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Lane, T., & Brodley, C. E. (1999). Temporal sequence learning and data reduction for anomaly detection. ACM Transactions on Information and System Security (TISSEC), 2(3), 295-331. US20200097810A1 - Automated window based feature generation for time-series forecasting and anomaly detection Sener, O., & Savarese, S. (2017). Active learning for convolutional neural networks: A core-set approach. arXiv preprint arXiv:1708.00489. Wei, K., Iyer, R., & Bilmes, J. (2015, June). Submodularity in data subset selection and active learning. In International conference on machine learning (pp. 1954-1963). PMLR. Sivasubramanian, D., Iyer, R., Ramakrishnan, G., & De, A. (2021). Training data subset selection for regression with controlled generalization error. arXiv preprint arXiv:2106.12491. Any inquiry concerning this communication or earlier communications from the examiner should be directed to HYUNGJUN B YI whose telephone number is (703)756-4799. The examiner can normally be reached M-F 9-5. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. 
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Usmaan Saeed, can be reached at (571) 272-4046. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /H.B.Y./Examiner, Art Unit 2146 /USMAAN SAEED/Supervisory Patent Examiner, Art Unit 2146

Prosecution Timeline

Apr 11, 2023
Application Filed
Feb 11, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12536429
INTELLIGENTLY MODIFYING DIGITAL CALENDARS UTILIZING A GRAPH NEURAL NETWORK AND REINFORCEMENT LEARNING
2y 5m to grant Granted Jan 27, 2026
Study what changed to get past this examiner. Based on the 1 most recent grant.

Prosecution Projections

1-2
Expected OA Rounds
18%
Grant Probability
49%
With Interview (+31.7%)
4y 7m
Median Time to Grant
Low
PTA Risk
Based on 17 resolved cases by this examiner. Grant probability derived from career allow rate.
